Sample records for computational problems related

  1. Assessment of computer-related health problems among post-graduate nursing students.

    PubMed

    Khan, Shaheen Akhtar; Sharma, Veena

    2013-01-01

    The study was conducted to assess computer-related health problems among post-graduate nursing students and to develop a Self-Instructional Module for the prevention of computer-related health problems at a selected university in Delhi. A descriptive survey with a correlational design was adopted. A total of 97 subjects were selected from different faculties of Jamia Hamdard by multistage sampling with a systematic random sampling technique. Among the post-graduate students, the majority showed average compliance with computer-related ergonomics principles. Regarding computer-related health problems, the majority of post-graduate students had moderate problems. The Self-Instructional Module developed for the prevention of computer-related health problems was found to be acceptable to the post-graduate students.

  2. Perceived problems with computer gaming and internet use among adolescents: measurement tool for non-clinical survey studies

    PubMed Central

    2014-01-01

    Background Existing instruments for measuring problematic computer and console gaming and internet use are often lengthy and often based on a pathological perspective. The objective was to develop and present a new and short non-clinical measurement tool for perceived problems related to computer use and gaming among adolescents and to study the association between screen time and perceived problems. Methods Cross-sectional school survey of 11-, 13-, and 15-year-old students in thirteen schools in the City of Aarhus, Denmark; participation rate 89%, n = 2100. The main exposure was time spent on weekdays on computer and console gaming and internet use for communication and surfing. The outcome measures were three indexes of perceived problems related to computer and console gaming and internet use. Results The three new indexes showed high face validity and acceptable internal consistency. Most schoolchildren with high screen time did not experience problems related to computer use. Still, there was a strong and graded association between time use and perceived problems related to computer gaming, console gaming (boys only) and internet use, with odds ratios ranging from 6.90 to 10.23. Conclusion The three new measures of perceived problems related to computer and console gaming and internet use among adolescents are appropriate, reliable and valid for use in non-clinical surveys about young people’s everyday life and behaviour. These new measures do not assess Internet Gaming Disorder as it is listed in the DSM and therefore have no parity with DSM criteria. We found an increasing risk of perceived problems with increasing time spent on gaming and internet use. Nevertheless, most schoolchildren who spent much time on gaming and internet use did not experience problems. PMID:24731270
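    The odds ratios reported above can be illustrated with the standard 2x2-table calculation. The sketch below is a generic odds-ratio computation with hypothetical counts, not data from the study.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 exposure-outcome table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome.
    OR = (a/b) / (c/d)."""
    return (a / b) / (c / d)

# Hypothetical counts (not from the study): of 100 adolescents with high
# screen time, 40 report perceived problems; of 130 with low screen time, 10 do.
print(odds_ratio(40, 60, 10, 120))
```

    An odds ratio of 1 would indicate no association; values well above 1, as in the study's 6.90-10.23 range, indicate a strong graded association.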

  3. Perceived problems with computer gaming and Internet use are associated with poorer social relations in adolescence.

    PubMed

    Rasmussen, Mette; Meilstrup, Charlotte Riebeling; Bendtsen, Pernille; Pedersen, Trine Pagh; Nielsen, Line; Madsen, Katrine Rich; Holstein, Bjørn E

    2015-02-01

    Young people's engagement in electronic gaming and Internet communication has caused concerns about potential harmful effects on their social relations, but the literature is inconclusive. The aim of this paper was to examine whether perceived problems with computer gaming and Internet communication are associated with young people's social relations. Cross-sectional questionnaire survey in 13 schools in the city of Aarhus, Denmark, in 2009; response rate 89%, n = 2,100 students in grades 5, 7, and 9. Independent variables were perceived problems related to computer gaming and Internet use, respectively. Outcomes were measures of structural (number of days/week with friends, number of friends) and functional (confidence in others, being bullied, bullying others) dimensions of students' social relations. Perception of problems related to computer gaming was associated with almost all aspects of poor social relations among boys. Among girls, an association was only seen for bullying. For both boys and girls, perceived problems related to Internet use were associated with bullying only. Although the study is cross-sectional, the findings suggest that computer gaming and Internet use may be harmful to young people's social relations.

  4. Use of a Computer Simulation To Develop Mental Simulations for Understanding Relative Motion Concepts.

    ERIC Educational Resources Information Center

    Monaghan, James M.; Clement, John

    1999-01-01

    Presents evidence for students' qualitative and quantitative difficulties with apparently simple one-dimensional relative-motion problems, students' spontaneous visualization of relative-motion problems, the visualizations facilitating solution of these problems, and students' memories of the online computer simulation used as a framework for…

  5. An analysis of computer-related patient safety incidents to inform the development of a classification.

    PubMed

    Magrabi, Farah; Ong, Mei-Sing; Runciman, William; Coiera, Enrico

    2010-01-01

    To analyze patient safety incidents associated with computer use to develop the basis for a classification of problems reported by health professionals. Incidents submitted to a voluntary incident reporting database across one Australian state were retrieved and a subset (25%) was analyzed to identify 'natural categories' for classification. Two coders independently classified the remaining incidents into one or more categories. Free text descriptions were analyzed to identify contributing factors. Where available, medical specialty, time of day, and consequences were examined. Descriptive statistics; inter-rater reliability. A search of 42,616 incidents from 2003 to 2005 yielded 123 computer-related incidents. After removing duplicate and unrelated incidents, 99 incidents describing 117 problems remained. A classification with 32 types of computer use problems was developed. Problems were grouped into information input (31%), transfer (20%), output (20%) and general technical (24%). Overall, 55% of problems were machine-related and 45% were attributed to human-computer interaction. Delays in initiating and completing clinical tasks were a major consequence of machine-related problems (70%) whereas rework was a major consequence of human-computer interaction problems (78%). While 38% (n=26) of the incidents were reported to have a noticeable consequence but no harm, 34% (n=23) had no noticeable consequence. Only 0.2% of all incidents reported were computer-related. Further work is required to expand our classification using incident reports and other sources of information about healthcare IT problems. Evidence-based user interface design must focus on the safe entry and retrieval of clinical information and support users in detecting and correcting errors and malfunctions.

  6. An analysis of computer-related patient safety incidents to inform the development of a classification

    PubMed Central

    Ong, Mei-Sing; Runciman, William; Coiera, Enrico

    2010-01-01

    Objective To analyze patient safety incidents associated with computer use to develop the basis for a classification of problems reported by health professionals. Design Incidents submitted to a voluntary incident reporting database across one Australian state were retrieved and a subset (25%) was analyzed to identify ‘natural categories’ for classification. Two coders independently classified the remaining incidents into one or more categories. Free text descriptions were analyzed to identify contributing factors. Where available, medical specialty, time of day, and consequences were examined. Measurements Descriptive statistics; inter-rater reliability. Results A search of 42 616 incidents from 2003 to 2005 yielded 123 computer-related incidents. After removing duplicate and unrelated incidents, 99 incidents describing 117 problems remained. A classification with 32 types of computer use problems was developed. Problems were grouped into information input (31%), transfer (20%), output (20%) and general technical (24%). Overall, 55% of problems were machine-related and 45% were attributed to human–computer interaction. Delays in initiating and completing clinical tasks were a major consequence of machine-related problems (70%) whereas rework was a major consequence of human–computer interaction problems (78%). While 38% (n=26) of the incidents were reported to have a noticeable consequence but no harm, 34% (n=23) had no noticeable consequence. Conclusion Only 0.2% of all incidents reported were computer-related. Further work is required to expand our classification using incident reports and other sources of information about healthcare IT problems. Evidence-based user interface design must focus on the safe entry and retrieval of clinical information and support users in detecting and correcting errors and malfunctions. PMID:20962128

  7. Problems experienced by people with arthritis when using a computer.

    PubMed

    Baker, Nancy A; Rogers, Joan C; Rubinstein, Elaine N; Allaire, Saralynn H; Wasko, Mary Chester

    2009-05-15

    To describe the prevalence of computer use problems experienced by a sample of people with arthritis, and to determine differences in the magnitude of these problems among people with rheumatoid arthritis (RA), osteoarthritis (OA), and fibromyalgia (FM). Subjects were recruited from the Arthritis Network Disease Registry and asked to complete a survey, the Computer Problems Survey, which was developed for this study. Descriptive statistics were calculated for the total sample and the 3 diagnostic subgroups. Ordinal regressions were used to determine differences between the diagnostic subgroups with respect to each equipment item while controlling for confounding demographic variables. A total of 359 respondents completed a survey. Of the 315 respondents who reported using a computer, 84% reported a problem with computer use attributed to their underlying disorder, and approximately 77% reported some discomfort related to computer use. Equipment items most likely to account for problems and discomfort were the chair, keyboard, mouse, and monitor. Of the 3 subgroups, significantly more respondents with FM reported more severe discomfort, more problems, and greater limitations related to computer use than those with RA or OA for all 4 equipment items. Computer use is significantly affected by arthritis. This could limit the ability of a person with arthritis to participate in work and home activities. Further study is warranted to delineate disease-related limitations and develop interventions to reduce them.

  8. Third Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D. (Editor)

    2000-01-01

    The proceedings of the Third Computational Aeroacoustics (CAA) Workshop on Benchmark Problems cosponsored by the Ohio Aerospace Institute and the NASA Glenn Research Center are the subject of this report. Fan noise was the chosen theme for this workshop with representative problems encompassing four of the six benchmark problem categories. The other two categories were related to jet noise and cavity noise. For the first time in this series of workshops, the computational results for the cavity noise problem were compared to experimental data. All the other problems had exact solutions, which are included in this report. The Workshop included a panel discussion by representatives of industry. The participants gave their views on the status of applying computational aeroacoustics to solve practical industry related problems and what issues need to be addressed to make CAA a robust design tool.

  9. Computer Assisted Problem Solving in an Introductory Statistics Course. Technical Report No. 56.

    ERIC Educational Resources Information Center

    Anderson, Thomas H.; And Others

    The computer assisted problem solving system (CAPS) described in this booklet administered "homework" problem sets designed to develop students' computational, estimation, and procedural skills. These skills were related to important concepts in an introductory statistics course. CAPS generated unique data, judged student performance,…

  10. A POSTERIORI ERROR ANALYSIS OF TWO STAGE COMPUTATION METHODS WITH APPLICATION TO EFFICIENT DISCRETIZATION AND THE PARAREAL ALGORITHM.

    PubMed

    Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff

    2016-01-01

    We consider numerical methods for initial value problems that employ a two-stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two-stage computations, then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates variations in the two-stage computation and in the formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
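    The coarse/fine two-stage structure is easiest to see in the Parareal case. The toy implementation below is a minimal sketch (not the authors' code): explicit Euler serves as both the coarse propagator (one substep per slice) and the fine propagator (many substeps per slice) for dy/dt = -y.

```python
import math

def euler(f, y0, t0, t1, n):
    """Explicit Euler with n substeps from t0 to t1."""
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def parareal(f, y0, t0, T, slices, iters, coarse_n=1, fine_n=100):
    """Minimal Parareal iteration: serial coarse predictor, fine corrector.
    In a real implementation the fine solves run in parallel across slices."""
    ts = [t0 + (T - t0) * k / slices for k in range(slices + 1)]
    U = [y0]
    for k in range(slices):                      # initial coarse sweep
        U.append(euler(f, U[k], ts[k], ts[k + 1], coarse_n))
    for _ in range(iters):
        G_old = [euler(f, U[k], ts[k], ts[k + 1], coarse_n) for k in range(slices)]
        F = [euler(f, U[k], ts[k], ts[k + 1], fine_n) for k in range(slices)]
        V = [y0]
        for k in range(slices):                  # predictor-corrector update
            G_new = euler(f, V[k], ts[k], ts[k + 1], coarse_n)
            V.append(G_new + F[k] - G_old[k])
        U = V
    return U

# dy/dt = -y, y(0) = 1, exact solution exp(-t)
U = parareal(lambda t, y: -y, 1.0, 0.0, 1.0, slices=4, iters=4)
```

    After a number of iterations equal to the number of slices, the iteration reproduces the serial fine solution; in practice far fewer iterations suffice, which is where the parallel speedup comes from.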

  11. Application of computational aero-acoustics to real world problems

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    The application of computational aeroacoustics (CAA) to real-world problems is discussed, drawing on an analysis performed to assess the applicability of the various techniques. Applications are currently limited by the inability of computational resources to resolve the large range of scales involved in high Reynolds number flows; possible simplifications are discussed. Problems remain to be solved in the efficient use of the power of parallel computers and in the development of turbulence modeling schemes. The goal of CAA is stated as the implementation of acoustic design studies on a computer terminal with reasonable run times.

  12. Enhancing Digital Fluency through a Training Program for Creative Problem Solving Using Computer Programming

    ERIC Educational Resources Information Center

    Kim, SugHee; Chung, KwangSik; Yu, HeonChang

    2013-01-01

    The purpose of this paper is to propose a training program for creative problem solving based on computer programming. The proposed program will encourage students to solve real-life problems through a creative thinking spiral related to cognitive skills with computer programming. With the goal of enhancing digital fluency through this proposed…

  13. Vision-related problems among the workers engaged in jewellery manufacturing.

    PubMed

    Salve, Urmi Ravindra

    2015-01-01

    The American Optometric Association defines Computer Vision Syndrome (CVS) as a "complex of eye and vision problems related to near work which are experienced during or related to computer use." This happens when the visual demand of the task exceeds the visual ability of the user. Although such problems were initially attributed to computer-related activities, similar problems have subsequently been reported during any near-point task. Jewellery manufacturing involves precision design and the setting of tiny metals and stones, which require high visual attention and mental concentration and are often near-point tasks. It is therefore expected that workers engaged in jewellery manufacturing may also experience CVS-like symptoms. Keeping the above in mind, this study was taken up (1) to identify the prevalence of CVS-like symptoms among workers in jewellery manufacturing and compare them with workers at computer workstations, and (2) to ascertain whether such symptoms cause any permanent vision-related problems. Case-control study. The study was carried out in the Zaveri Bazaar region and at an IT-enabled organization in Mumbai. The study involved the identification of symptoms of CVS using the Eye Strain Journal questionnaire, ophthalmological check-ups, and measurement of spontaneous eye-blink rate. The data obtained from jewellery manufacturing were compared with the data of subjects engaged in computer work and with data available in the literature. Comparative inferential statistics were used. Results showed that the visual demands of the tasks carried out in jewellery manufacturing were much higher than those of computer-related work.

  14. Computer use problems and accommodation strategies at work and home for people with systemic sclerosis: a needs assessment.

    PubMed

    Baker, Nancy A; Aufman, Elyse L; Poole, Janet L

    2012-01-01

    We identified the extent of the need for interventions and assistive technology to prevent computer use problems in people with systemic sclerosis (SSc) and the accommodation strategies they use to alleviate such problems. Respondents were recruited through the Scleroderma Foundation. Twenty-seven people with SSc who used a computer and reported difficulty in working completed the Computer Problems Survey. All but 1 of the respondents reported one problem with at least one equipment type. The highest number of respondents reported problems with keyboards (88%) and chairs (85%). More than half reported discomfort in the past month associated with the chair, keyboard, and mouse. Respondents used a variety of accommodation strategies. Many respondents experienced problems and discomfort related to computer use. The characteristic symptoms of SSc may contribute to these problems. Occupational therapy interventions for computer use problems in clients with SSc need to be tested. Copyright © 2012 by the American Occupational Therapy Association, Inc.

  15. Computation of Transonic Nozzle Sound Transmission and Rotor Problems by the Dispersion-Relation-Preserving Scheme

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Aganin, Alexei

    2000-01-01

    The transonic nozzle transmission problem and the open rotor noise radiation problem are solved computationally. Both are multiple-length-scale problems. For efficient and accurate numerical simulation, the multiple-size-mesh multiple-time-step Dispersion-Relation-Preserving scheme is used to calculate the time periodic solution. To ensure an accurate solution, high quality numerical boundary conditions are also needed. For the nozzle problem, a set of nonhomogeneous outflow boundary conditions is required. The nonhomogeneous boundary conditions not only generate the incoming sound waves but also, at the same time, allow the reflected acoustic waves and entropy waves, if present, to exit the computation domain without reflection. For the open rotor problem, there is an apparent singularity at the axis of rotation. An analytic extension approach is developed to provide a high quality axis boundary treatment.

  16. Classical problems in computational aero-acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    In the early development of computational aeroacoustics (CAA), the preliminary applications were to classical problems for which known analytical solutions could be used to validate the numerical results. Such comparisons were used to overcome the numerical problems inherent in these calculations. Comparisons were made between the various numerical approaches to the problems, such as direct simulations, acoustic analogies, and acoustic/viscous splitting techniques. The aim was to demonstrate the applicability of CAA as a tool in the same class as computational fluid dynamics. The scattering problems that occur are considered and simple sources are discussed.

  17. Identification and Addressing Reduction-Related Misconceptions

    ERIC Educational Resources Information Center

    Gal-Ezer, Judith; Trakhtenbrot, Mark

    2016-01-01

    Reduction is one of the key techniques used for problem-solving in computer science. In particular, in the theory of computation and complexity (TCC), mapping and polynomial reductions are used for analysis of decidability and computational complexity of problems, including the core concept of NP-completeness. Reduction is a highly abstract…

  18. Analysis of Five Instructional Methods for Teaching Sketchpad to Junior High Students

    ERIC Educational Resources Information Center

    Wright, Geoffrey; Shumway, Steve; Terry, Ronald; Bartholomew, Scott

    2012-01-01

    This manuscript addresses a problem teachers of computer software applications face today: What is an effective method for teaching new computer software? Technology and engineering teachers, specifically those with communications and other related courses that involve computer software applications, face this problem when teaching computer…

  19. Amoeba-inspired nanoarchitectonic computing: solving intractable computational problems using nanoscale photoexcitation transfer dynamics.

    PubMed

    Aono, Masashi; Naruse, Makoto; Kim, Song-Ju; Wakabayashi, Masamitsu; Hori, Hirokazu; Ohtsu, Motoichi; Hara, Masahiko

    2013-06-18

    Biologically inspired computing devices and architectures are expected to overcome the limitations of conventional technologies in terms of solving computationally demanding problems, adapting to complex environments, reducing energy consumption, and so on. We previously demonstrated that a primitive single-celled amoeba (a plasmodial slime mold), which exhibits complex spatiotemporal oscillatory dynamics and sophisticated computing capabilities, can be used to search for a solution to a very hard combinatorial optimization problem. We successfully extracted the essential spatiotemporal dynamics by which the amoeba solves the problem. This amoeba-inspired computing paradigm can be implemented by various physical systems that exhibit suitable spatiotemporal dynamics resembling the amoeba's problem-solving process. In this Article, we demonstrate that photoexcitation transfer phenomena in certain quantum nanostructures mediated by optical near-field interactions generate the amoebalike spatiotemporal dynamics and can be used to solve the satisfiability problem (SAT), which is the problem of judging whether a given logical proposition (a Boolean formula) is self-consistent. SAT is related to diverse application problems in artificial intelligence, information security, and bioinformatics and is a crucially important nondeterministic polynomial time (NP)-complete problem, which is believed to become intractable for conventional digital computers when the problem size increases. We show that our amoeba-inspired computing paradigm dramatically outperforms a conventional stochastic search method. These results indicate the potential for developing highly versatile nanoarchitectonic computers that realize powerful solution searching with low energy consumption.
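    For reference, SAT in its standard CNF form can be decided by the exhaustive search that, as the abstract notes, becomes intractable as problem size grows. The minimal solver below is purely illustrative and unrelated to the nanophotonic device; it checks all 2^n assignments.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Exhaustive SAT search over all 2^n assignments.
    Each clause is a list of nonzero ints: k means variable k is true,
    -k means variable k is false. Returns a satisfying assignment
    (tuple of booleans, variable 1 first) or None if unsatisfiable."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(brute_force_sat([[1, 2], [-1, 3], [-2, -3]], 3))  # (False, True, False)
```

    The doubling of the search space with each added variable is exactly the exponential blow-up that motivates unconventional solvers like the amoeba-inspired one described above.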

  20. Cost Considerations in Nonlinear Finite-Element Computing

    NASA Technical Reports Server (NTRS)

    Utku, S.; Melosh, R. J.; Islam, M.; Salama, M.

    1985-01-01

    Conference paper discusses computational requirements for finite-element analysis using a quasi-linear approach to nonlinear problems. The paper evaluates the computational efficiency of different computer architectural types in terms of relative cost and computing time.

  1. Computational procedure for finite difference solution of one-dimensional heat conduction problems reduces computer time

    NASA Technical Reports Server (NTRS)

    Iida, H. T.

    1966-01-01

    Computational procedure reduces the numerical effort whenever the method of finite differences is used to solve ablation problems for which the surface recession is large relative to the initial slab thickness. The number of numerical operations required for a given maximum space mesh size is reduced.
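    As a point of reference for the kind of finite-difference computation this record describes, the sketch below implements the standard explicit FTCS scheme for 1-D heat conduction. It is a generic illustration under textbook assumptions, not the paper's specific ablation procedure.

```python
def heat_ftcs(u, alpha, dx, dt, steps):
    """Explicit FTCS finite-difference march for the 1-D heat equation
    u_t = alpha * u_xx with fixed (Dirichlet) boundary values.
    Stable only when r = alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable for this step size"
    for _ in range(steps):
        u = ([u[0]]
             + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                for i in range(1, len(u) - 1)]
             + [u[-1]])
    return u

# One step with r = 0.25 diffuses a unit spike into its neighbours:
print(heat_ftcs([0, 0, 1, 0, 0], 1.0, 1.0, 0.25, 1))  # [0, 0.25, 0.5, 0.25, 0]
```

    The per-step cost scales with the number of mesh points, which is why procedures that coarsen or shrink the mesh as the slab recedes, as in the record above, reduce total computer time.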

  2. A multidisciplinary approach to solving computer related vision problems.

    PubMed

    Long, Jennifer; Helland, Magne

    2012-09-01

    This paper proposes a multidisciplinary approach to solving computer-related vision issues by including optometry as a part of the problem-solving team. Computer workstation design is increasing in complexity. There are at least ten different professions who contribute to workstation design or who provide advice to improve worker comfort, safety and efficiency. Optometrists have a role in identifying and solving computer-related vision issues and in prescribing appropriate optical devices. However, it is possible that advice given by optometrists to improve visual comfort may conflict with other requirements and demands within the workplace. A multidisciplinary approach has been advocated for solving computer-related vision issues. There are opportunities for optometrists to collaborate with ergonomists, who coordinate information from physical, cognitive and organisational disciplines to enact holistic solutions to problems. This paper proposes a model of collaboration and examples of successful partnerships at a number of professional levels, including individual relationships between optometrists and ergonomists when they have mutual clients/patients, in undergraduate and postgraduate education, and in research. There is also scope for dialogue between optometry and ergonomics professional associations. A multidisciplinary approach offers the opportunity to solve vision-related computer issues in a cohesive, rather than fragmented, way. Further exploration is required to understand the barriers to these professional relationships. © 2012 The College of Optometrists.

  3. Ergonomics Considerations in Microcomputing.

    ERIC Educational Resources Information Center

    Torok, Andrew G.

    1984-01-01

    Discusses evolution of ergonomics and development of computer ergonomics with its sub-fields of hardware ergonomics (user-equipment-related problems including workstation design); software ergonomics (problems in communication with computers); and peopleware ergonomics (psychological impact). Ergonomic features of VDTs, keyboards, and printers are…

  4. Parallel computation with molecular-motor-propelled agents in nanofabricated networks.

    PubMed

    Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V

    2016-03-08

    The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
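    The {2, 5, 9} subset sum instance mentioned above is small enough to enumerate exhaustively, which illustrates both the problem and why brute force scales as 2^n. This is an illustrative sketch, not a model of the motor-propelled device.

```python
from itertools import combinations

def subset_sums(nums):
    """Enumerate every nonempty subset of nums and group them by sum.
    Exhaustive: examines 2^n - 1 subsets."""
    sums = {}
    for r in range(1, len(nums) + 1):
        for combo in combinations(nums, r):
            sums.setdefault(sum(combo), []).append(combo)
    return sums

# The {2, 5, 9} benchmark instance solved by the nanofabricated network:
print(sorted(subset_sums([2, 5, 9])))  # achievable targets: [2, 5, 7, 9, 11, 14, 16]
```

    The device explores these subsets in parallel, one agent per path through the network, rather than serially as this loop does.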

  5. Hiding in Plain Sight: Identifying Computational Thinking in the Ontario Elementary School Curriculum

    ERIC Educational Resources Information Center

    Hennessey, Eden J. V.; Mueller, Julie; Beckett, Danielle; Fisher, Peter A.

    2017-01-01

    Given a growing digital economy with complex problems, demands are being made for education to address computational thinking (CT)--an approach to problem solving that draws on the tenets of computer science. We conducted a comprehensive content analysis of the Ontario elementary school curriculum documents for 44 CT-related terms to examine the…

  6. Impact of computer use on children's vision.

    PubMed

    Kozeis, N

    2009-10-01

    Today, millions of children use computers on a daily basis. Extensive viewing of the computer screen can lead to eye discomfort, fatigue, blurred vision and headaches, dry eyes and other symptoms of eyestrain. These symptoms may be caused by poor lighting, glare, an improper work station set-up, vision problems of which the person was not previously aware, or a combination of these factors. Children can experience many of the same symptoms related to computer use as adults. However, some unique aspects of how children use computers may make them more susceptible than adults to the development of these problems. In this study, the most common eye symptoms related to computer use in childhood, the possible causes and ways to avoid them are reviewed.

  7. Math and numeracy in young adults with spina bifida and hydrocephalus.

    PubMed

    Dennis, Maureen; Barnes, Marcia

    2002-01-01

    The developmental stability of poor math skill was studied in 31 young adults with spina bifida and hydrocephalus (SBH), a neurodevelopmental disorder involving malformations of the brain and spinal cord. Longitudinally, individuals with poor math problem solving as children grew into adults with poor problem solving and limited functional numeracy. As a group, young adults with SBH had poor computation accuracy, computation speed, problem solving, and functional numeracy. Computation accuracy was related to a supporting cognitive system (working memory for numbers), and functional numeracy was related to one medical history variable (number of lifetime shunt revisions). Adult functional numeracy, but not functional literacy, was predictive of higher levels of social, personal, and community independence.

  8. Research related to improved computer aided design software package. [comparative efficiency of finite, boundary, and hybrid element methods in elastostatics

    NASA Technical Reports Server (NTRS)

    Walston, W. H., Jr.

    1986-01-01

    The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.

  9. Regressive Imagery in Creative Problem-Solving: Comparing Verbal Protocols of Expert and Novice Visual Artists and Computer Programmers

    ERIC Educational Resources Information Center

    Kozbelt, Aaron; Dexter, Scott; Dolese, Melissa; Meredith, Daniel; Ostrofsky, Justin

    2015-01-01

    We applied computer-based text analyses of regressive imagery to verbal protocols of individuals engaged in creative problem-solving in two domains: visual art (23 experts, 23 novices) and computer programming (14 experts, 14 novices). Percentages of words involving primary process and secondary process thought, plus emotion-related words, were…

  10. Automation of multi-agent control for complex dynamic systems in heterogeneous computational network

    NASA Astrophysics Data System (ADS)

    Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan

    2017-01-01

    The rapid progress of high-performance computing entails new challenges related to solving large scientific problems for various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue, and control systems that operate in networks relate especially to this issue. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automating problem solving. Advantages of the proposed approach are demonstrated on an example: the parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automated multi-agent control of the systems in parallel mode with various degrees of detailed elaboration.

  11. Examining the Relationship between Technological, Organizational, and Environmental Factors and Cloud Computing Adoption

    ERIC Educational Resources Information Center

    Tweel, Abdeneaser

    2012-01-01

    High uncertainties related to cloud computing adoption may hinder IT managers from making solid decisions about adopting cloud computing. The problem addressed in this study was the lack of understanding of the relationship between factors related to the adoption of cloud computing and IT managers' interest in adopting this technology. In…

  12. [Problem list in computer-based patient records].

    PubMed

    Ludwig, C A

    1997-01-14

    Computer-based clinical information systems are capable of effectively processing even large amounts of patient-related data. However, physicians depend on rapid access to summarized, clearly laid out data on the computer screen to inform themselves about a patient's current clinical situation. In introducing a clinical workplace system, we therefore transformed the problem list-which for decades has been successfully used in clinical information management-into an electronic equivalent and integrated it into the medical record. The table contains a concise overview of diagnoses and problems as well as related findings. Graphical information can also be integrated into the table, and an additional space is provided for a summary of planned examinations or interventions. The digital form of the problem list makes it possible to use the entire list or selected text elements for generating medical documents. Diagnostic terms for medical reports are transferred automatically to corresponding documents. Computer technology has an immense potential for the further development of problem list concepts. With multimedia applications, sound and images will be included in the problem list. For hyperlink purposes, the problem list could become a central information board and table of contents of the medical record, thus serving as the starting point for database searches and supporting the user in navigating through the medical record.

  13. [Computer-assisted education in problem-solving in neurology; a randomized educational study].

    PubMed

    Weverling, G J; Stam, J; ten Cate, T J; van Crevel, H

    1996-02-24

    To determine the effect of computer-based medical teaching (CBMT) as a supplementary method to teach clinical problem-solving during the clerkship in neurology. Randomized controlled blinded study. Academic Medical Centre, Amsterdam, the Netherlands. 103 Students were assigned at random to a group with access to CBMT and a control group. CBMT consisted of 20 computer-simulated patients with neurological diseases, and was permanently available during five weeks to students in the CBMT group. The ability to recognize and solve neurological problems was assessed with two free-response tests, scored by two blinded observers. The CBMT students scored significantly better on the test related to the CBMT cases (mean score 7.5 on a zero to 10 point scale; control group 6.2; p < 0.001). There was no significant difference on the control test not related to the problems practised with CBMT. CBMT can be an effective method for teaching clinical problem-solving, when used as a supplementary teaching facility during a clinical clerkship. The increased ability to solve problems learned by CBMT had no demonstrable effect on the performance with other neurological problems.

  14. Problems and accommodation strategies reported by computer users with rheumatoid arthritis or fibromyalgia.

    PubMed

    Baker, Nancy A; Rubinstein, Elaine N; Rogers, Joan C

    2012-09-01

    Little is known about the problems experienced by and the accommodation strategies used by computer users with rheumatoid arthritis (RA) or fibromyalgia (FM). This study (1) describes specific problems and accommodation strategies used by people with RA and FM during computer use; and (2) examines if there were significant differences in the problems and accommodation strategies between the different equipment items for each diagnosis. Subjects were recruited from the Arthritis Network Disease Registry. Respondents completed a self-report survey, the Computer Problems Survey. Data were analyzed descriptively (percentages; 95% confidence intervals). Differences in the number of problems and accommodation strategies were calculated using nonparametric tests (Friedman's test and Wilcoxon Signed Rank Test). Eighty-four percent of respondents reported at least one problem with at least one equipment item (RA = 81.5%; FM = 88.9%), with most respondents reporting problems with their chair. Respondents most commonly used timing accommodation strategies to cope with mouse and keyboard problems, personal accommodation strategies to cope with chair problems and environmental accommodation strategies to cope with monitor problems. The number of problems during computer use was substantial in our sample, and our respondents with RA and FM may not implement the most effective strategies to deal with their chair, keyboard, or mouse problems. This study suggests that workers with RA and FM might potentially benefit from education and interventions to assist with the development of accommodation strategies to reduce problems related to computer use.

  15. Probabilistic data integration and computational complexity

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.; Mosegaard, K.

    2016-12-01

    Inverse problems in Earth Sciences typically refer to the problem of inferring information about properties of the Earth from observations of geophysical data (the result of nature's solution to the `forward' problem). This problem can be formulated more generally as a problem of `integration of information'. A probabilistic formulation of data integration is in principle simple: if all available information (from e.g. geology, geophysics, remote sensing, chemistry…) can be quantified probabilistically, then different algorithms exist that allow solving the data integration problem, either through an analytical description of the combined probability function or by sampling the probability function. In practice, however, probabilistically based data integration may not be easy to apply successfully. This may be related to the use of sampling methods, which are known to be computationally costly. But another source of computational complexity is related to how the individual types of information are quantified. In one case, a data integration problem is demonstrated where the goal is to determine the existence of buried channels in Denmark, based on multiple sources of geo-information. Because one type of information is too informative (and hence conflicting), this leads to a difficult sampling problem with unrealistic uncertainty. Resolving this conflict prior to data integration leads to an easy data integration problem with no biases. In another case, it is demonstrated how imperfections in the description of the geophysical forward model (related to solving the wave equation) can lead to a difficult data integration problem with severe bias in the results. If the modeling error is accounted for, the data integration problem becomes relatively easy, with no apparent biases.
Both examples demonstrate that biased information can have a dramatic effect on the computational efficiency of solving a data integration problem and can lead to biased results and under-estimation of uncertainty. However, in both examples, one can also analyze the performance of the sampling methods used to solve the data integration problem to indicate the existence of biased information. This can be used actively to avoid biases in the available information and subsequently in the final uncertainty evaluation.

  16. The year 2000 threat: preparing radiology for nine realms of risk.

    PubMed

    Berland, L L

    1999-01-01

    The year 2000 computer problem arises from a long-standing and often-duplicated computer programming error. Affected programs use only two digits to represent years, which may lead to a variety of computer malfunctions and data errors related to crossing from 1999 (99) to 2000 (00), at which point computers may interpret 00 as 1900 or other incorrect dates. Radiology and medicine may be seriously affected by this problem as it relates to the function of its equipment; business functions such as scheduling, billing and purchasing; the reliability of infrastructure such as power and telecommunications; the availability of supplies; and many other issues. It is crucial that radiologists, as practitioners of one of the most computer-oriented medical specialties, help lead the effort to ensure continuity of operations as the year 2000 boundary approaches and passes. This article provides suggestions for a structured approach, as well as tools and checklists, to guide project leaders attempting to identify and remediate year 2000-associated problems within radiology facilities.
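    The failure mode described above can be sketched in a few lines. This is a hypothetical minimal illustration of the two-digit-year error, not code from any radiology or medical system:

```python
def parse_two_digit_year(yy: int) -> int:
    """Naive legacy expansion of a two-digit year: the century is a
    hard-coded 1900, so '00' comes out as 1900 rather than 2000."""
    return 1900 + yy

assert parse_two_digit_year(99) == 1999
assert parse_two_digit_year(0) == 1900   # the rollover bug: '00' read as 1900

# An elapsed-time calculation across the 1999/2000 boundary goes negative:
elapsed = parse_two_digit_year(0) - parse_two_digit_year(99)
assert elapsed == -99                    # instead of the correct +1 year
```

Any downstream logic that sorts, schedules, or bills by such dates inherits the error, which is why the article stresses auditing business functions as well as equipment.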

  17. Evaluating Preclinical Medical Students by Using Computer-Based Problem-Solving Examinations.

    ERIC Educational Resources Information Center

    Stevens, Ronald H.; And Others

    1989-01-01

    A study to determine the feasibility of creating and administering computer-based problem-solving examinations for evaluating second-year medical students in immunology and to determine how students would perform on these tests relative to their performances on concurrently administered objective and essay examinations is described. (Author/MLW)

  18. RESEARCH STRATEGIES FOR THE APPLICATION OF THE TECHNIQUES OF COMPUTATIONAL BIOLOGICAL CHEMISTRY TO ENVIRONMENTAL PROBLEMS

    EPA Science Inventory

    On October 25 and 26, 1984, the U.S. EPA sponsored a workshop to consider the potential applications of the techniques of computational biological chemistry to problems in environmental health. Eleven extramural scientists from the various related disciplines and a similar number...

  19. Detection of medication-related problems in hospital practice: a review

    PubMed Central

    Manias, Elizabeth

    2013-01-01

    This review examines the effectiveness of detection methods in terms of their ability to identify and accurately determine medication-related problems in hospitals. A search was conducted of databases from inception to June 2012. The following keywords were used in combination: medication error or adverse drug event or adverse drug reaction, comparison, detection, hospital and method. Seven detection methods were considered: chart review, claims data review, computer monitoring, direct care observation, interviews, prospective data collection and incident reporting. Forty relevant studies were located. Detection methods that were better able to identify medication-related problems compared with other methods tested in the same study included chart review, computer monitoring, direct care observation and prospective data collection. However, only small numbers of studies were involved in comparisons with direct care observation (n = 5) and prospective data collection (n = 6). There was little focus on detecting medication-related problems during various stages of the medication process, and comparisons associated with the seriousness of medication-related problems were examined in 19 studies. Only 17 studies involved appropriate comparisons with a gold standard, which provided details about sensitivities and specificities. In view of the relatively low identification of medication-related problems with incident reporting, use of this method in tracking trends over time should be met with some scepticism. Greater attention should be placed on combining methods, such as chart review and computer monitoring in examining trends. More research is needed on the use of claims data, direct care observation, interviews and prospective data collection as detection methods. PMID:23194349

  20. Relative motion using analytical differential gravity

    NASA Technical Reports Server (NTRS)

    Gottlieb, Robert G.

    1988-01-01

    This paper presents a new approach to the computation of the motion of one satellite relative to another. The trajectory of the reference satellite is computed accurately subject to geopotential perturbations. This precise trajectory is used as a reference in computing the position of a nearby body, or bodies. The problem that arises in this approach is differencing nearly equal terms in the geopotential model, especially as the separation of the reference and nearby bodies approaches zero. By developing closed form expressions for differences in higher order and degree geopotential terms, the numerical problem inherent in the differencing approach is eliminated.
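    The numerical difficulty the abstract describes, differencing nearly equal terms, is the classic catastrophic-cancellation problem. A minimal illustration in Python (not the paper's geopotential formulation, just the same arithmetic phenomenon and the same cure: a closed-form rewrite of the difference):

```python
import math

x = 1.0e-8

# Direct subtraction of two nearly equal quantities loses all
# significant digits in double precision: cos(x) rounds to exactly 1.0 here.
naive = 1.0 - math.cos(x)

# An algebraically equivalent closed form avoids the subtraction entirely:
# 1 - cos(x) = 2 sin^2(x/2)
stable = 2.0 * math.sin(x / 2.0) ** 2

print(naive)   # 0.0 -- all information cancelled away
print(stable)  # ~5.0e-17, the correct value
```

Developing closed-form expressions for the difference, as the paper does for higher-order geopotential terms, plays the role of the `stable` branch above.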

  1. Correlation between Academic and Skills-Based Tests in Computer Networks

    ERIC Educational Resources Information Center

    Buchanan, William

    2006-01-01

    Computing-related programmes and modules have many problems, especially related to large class sizes, large-scale plagiarism, module franchising, and an increased requirement from students for hands-on, practical work. This paper presents a practical computer networks module which uses a mixture of online examinations and a…

  2. Specialized computer architectures for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general purpose computers, in terms of both dollar expenditure and elapsed time. Today's computing technology will support a program designed to create specialized computing facilities to be dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  3. Safety in the Automated Office.

    ERIC Educational Resources Information Center

    Graves, Pat R.; Greathouse, Lillian R.

    1990-01-01

    Office automation has introduced new hazards to the workplace: electrical hazards related to computer wiring, musculoskeletal problems resulting from use of computer terminals and design of work stations, and environmental concerns related to ventilation, noise levels, and office machine chemicals. (SK)

  4. Analysis of Computer Teachers' Online Discussion Forum Messages about Their Occupational Problems

    ERIC Educational Resources Information Center

    Deryakulu, Deniz; Olkun, Sinan

    2007-01-01

    This study, using content analysis technique, examined the types of job-related problems that the Turkish computer teachers experienced and the types of social support provided by reciprocal discussions in an online forum. Results indicated that role conflict, inadequate teacher induction policies, lack of required technological infrastructure and…

  5. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical aspects of the development of a robot computer problem solving system were investigated. The distinctive characteristics were formulated of the approach taken in relation to various studies of cognition and robotics. Vehicle and eye control systems were structured, and the information to be generated by the visual system was defined.

  6. Computer use and vision-related problems among university students in Ajman, United Arab Emirates.

    PubMed

    Shantakumari, N; Eldeeb, R; Sreedharan, J; Gopal, K

    2014-03-01

    The extensive use of computers as a medium of teaching and learning in universities necessitates introspection into the extent of computer-related health disorders among the student population. This study was undertaken to assess the pattern of computer usage and related visual problems among university students in Ajman, United Arab Emirates. A total of 500 students studying in Gulf Medical University, Ajman and Ajman University of Science and Technology were recruited into this study. Demographic characteristics, pattern of usage of computers and associated visual symptoms were recorded in a validated self-administered questionnaire. The chi-square test was used to determine the significance of the observed differences between the variables. The level of statistical significance was set at P < 0.05. The crude odds ratio (OR) was determined using simple binary logistic regression and the adjusted OR was calculated using multiple logistic regression. The mean age of participants was 20.4 (3.2) years. Analysis of the racial data revealed that 50% (236/471) of students were from the Middle East, 32% (151/471) from other parts of Asia, 11% (52/471) from Africa, 4% (19/471) from America and 3% (14/471) from Europe. The most common visual problems reported among computer users were headache - 53.3% (251/471), burning sensation in the eyes - 54.8% (258/471) and tired eyes - 48% (226/471). Female students were found to be at a higher risk. Nearly 72% of students reported frequent interruption of computer work. Headache caused interruption of work in 43.85% (110/168) of the students, while tired eyes caused interruption of work in 43.5% (98/168) of the students. When the screen was viewed at a distance of more than 50 cm, the prevalence of headaches decreased by 38% (50-100 cm - OR: 0.62, 95% confidence interval [CI]: 0.42-0.92). The prevalence of tired eyes increased by 89% when screen filters were not used (OR: 1.894, 95% CI: 1.065-3.368).
A high prevalence of vision-related problems was noted among university students. Sustained periods of close screen work without screen filters were found to be associated with the occurrence of the symptoms and increased interruptions of the students' work. There is a need to increase ergonomic awareness among students, and corrective measures need to be implemented to reduce the impact of computer-related vision problems.

  7. Individualized Math Problems in Percent. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. This volume includes problems concerned with computing percents.…

  8. Individualized Math Problems in Fractions. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. This package contains problems involving computation with common…

  9. Individualized Math Problems in Measurement and Conversion. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. This volume includes problems involving measurement, computation of…

  10. Computing relative plate velocities: a primer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevis, M.

    1987-08-01

    Standard models of present-day plate motions are framed in terms of rates and poles of rotation, in accordance with the well-known theorem due to Euler. This article shows how computation of relative plate velocities from such models can be viewed as a simple problem in spherical trigonometry. A FORTRAN subroutine is provided to perform the necessary computations.
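    The computation the primer describes can be sketched compactly: a site's linear velocity on a rigid plate is the cross product v = ω × r of the plate's angular velocity vector (from the Euler pole and rotation rate) with the site's position vector. A Python sketch of this standard calculation (not the article's FORTRAN subroutine; the constant and names are illustrative):

```python
import math

EARTH_RADIUS_MM = 6.371e9  # assumed mean Earth radius, in millimetres

def unit_vector(lat_deg, lon_deg):
    """Cartesian unit vector of a point given in geographic coordinates."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def plate_velocity(pole_lat, pole_lon, rate_deg_per_myr, site_lat, site_lon):
    """Linear velocity v = omega x r (mm/yr) of a site on a rigid plate
    rotating about an Euler pole at the given rate."""
    w = math.radians(rate_deg_per_myr) / 1.0e6  # angular rate in rad/yr
    wx, wy, wz = (w * c for c in unit_vector(pole_lat, pole_lon))
    rx, ry, rz = (EARTH_RADIUS_MM * c for c in unit_vector(site_lat, site_lon))
    return (wy * rz - wz * ry,   # cross product omega x r
            wz * rx - wx * rz,
            wx * ry - wy * rx)

# Pole at the north pole, site on the equator, 1 deg/Myr:
vx, vy, vz = plate_velocity(90.0, 0.0, 1.0, 0.0, 0.0)
speed = math.hypot(vx, vy, vz)  # ~111 mm/yr, i.e. omega * R
```

The speed falls off as the sine of the angular distance from the pole, which is the spherical-trigonometry view the article takes.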

  11. Neurally and Ocularly Informed Graph-Based Models for Searching 3D Environments

    DTIC Science & Technology

    2014-06-03

    hBCI = hybrid brain–computer interface, TAG = transductive annotation by graph, CV = computer vision, TSP = traveling salesman problem. ...are navigated...environment that are most likely to contain objects that the subject would like to visit. 2.9. Route planning A traveling salesman problem (TSP) solver...fixations in a visual search task using fixation-related potentials J. Vis. 13 Croes G 1958 A method for solving traveling-salesman problems Oper. Res

  12. On the 'principle of the quantumness', the quantumness of Relativity, and the computational grand-unification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Ariano, Giacomo Mauro

    2010-05-04

    I will argue that the proposal of establishing operational foundations of Quantum Theory should have top priority, and that Lucien Hardy's program on Quantum Gravity should be paralleled by an analogous program on Quantum Field Theory (QFT), which needs to be reformulated, notwithstanding its experimental success. In this paper, after reviewing recently suggested operational 'principles of the quantumness', I address the problem of whether Quantum Theory and Special Relativity are unrelated theories, or instead, whether the one implies the other. I show how Special Relativity can indeed be derived from the causality of Quantum Theory, within the computational paradigm 'the universe is a huge quantum computer', reformulating QFT as a Quantum-Computational Field Theory (QCFT). In QCFT Special Relativity emerges from the fabric of the computational network, which also naturally embeds gauge invariance. In this scheme even the quantization rule and the Planck constant can in principle be derived as emergent from the underlying causal tapestry of space-time. In this way Quantum Theory remains the only theory operating the huge computer of the universe. Is the computational paradigm only a speculative tautology (theory as simulation of reality), or does it have a scientific value? The answer will come from Occam's razor, depending on the mathematical simplicity of QCFT. Here I will just start scratching the surface of QCFT, analyzing simple field theories, including Dirac's. The number of problems and unmotivated recipes that plague QFT strongly motivates us to undertake the QCFT project, since QCFT makes all such problems manifest, and forces a re-foundation of QFT.

  13. Vectorization on the star computer of several numerical methods for a fluid flow problem

    NASA Technical Reports Server (NTRS)

    Lambiotte, J. J., Jr.; Howser, L. M.

    1974-01-01

    A reexamination of some numerical methods is considered in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.

  14. Individualized Math Problems in Whole Numbers. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this set require computations involving whole numbers.…

  15. Individualized Math Problems in Volume. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this booklet require the computation of volumes of solids,…

  16. Evaluation of Mathematical Self-Explanations with LSA in a Counterintuitive Problem of Probabilities

    ERIC Educational Resources Information Center

    Guiu, Jordi Maja

    2012-01-01

    In this paper, different types of mathematical explanations are presented in relation to the Monty Hall probability problem (card version), and the computational tool Latent Semantic Analysis (LSA) is used. At the moment, the results in the literature about this computational tool to study texts show that this technique is…
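    The Monty Hall problem underlying this study has a famously counterintuitive answer: switching wins with probability 2/3. A short Monte Carlo sketch (a generic illustration of the standard three-door/three-card game, not the paper's LSA analysis):

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round: a prize hides behind one of three doors; the host then
    opens a losing door the player did not pick. Returns True on a win."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

random.seed(0)
n = 100_000
stay = sum(monty_hall_trial(False) for _ in range(n)) / n
swap = sum(monty_hall_trial(True) for _ in range(n)) / n
# stay converges to about 1/3, swap to about 2/3
```

The simulation makes the counterintuitive result concrete: the host's constrained reveal transfers information, so the unchosen, unopened door carries 2/3 of the probability mass.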

  17. On Stable Marriages and Greedy Matchings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manne, Fredrik; Naim, Md; Lerring, Hakon

    2016-12-11

    Research on stable marriage problems has a long and mathematically rigorous history, while that of exploiting greedy matchings in combinatorial scientific computing is a younger and less developed research field. In this paper we consider the relationships between these two areas. In particular, we show that several problems related to computing greedy matchings can be formulated as stable marriage problems, and as a consequence several recently proposed algorithms for computing greedy matchings are in fact special cases of well-known algorithms for the stable marriage problem. However, in terms of implementations and practical scalable solutions on modern hardware, the greedy matching community has made considerable progress. We show that due to the strong relationship between these two fields many of these results are also applicable for solving stable marriage problems.
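    The "well known algorithms for the stable marriage problem" the abstract alludes to include the classic Gale-Shapley deferred-acceptance procedure. A compact generic sketch (an illustration of the standard algorithm, not the authors' greedy-matching formulations; the tiny instance is made up):

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance: free proposers propose down their preference
    lists; each receiver holds her best offer so far. Returns a stable,
    proposer-optimal matching as a dict proposer -> receiver."""
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}  # index of next woman to try
    engaged = {}                             # woman -> man currently held
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m                   # first offer is held
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])          # w trades up; old partner freed
            engaged[w] = m
        else:
            free.append(m)                   # offer rejected, m stays free
    return {m: w for w, m in engaged.items()}

# Both men prefer 'x'; 'x' prefers 'b', so 'a' ends up with 'y':
matching = gale_shapley({'a': ['x', 'y'], 'b': ['x', 'y']},
                        {'x': ['b', 'a'], 'y': ['a', 'b']})
```

Greedy matching algorithms that repeatedly pair locally dominant edges can be read as special cases of this propose/hold/reject loop, which is the correspondence the paper develops.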

  18. Reconfigurability in MDO Problem Synthesis. Part 1

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2004-01-01

    Integrating autonomous disciplines into a problem amenable to solution presents a major challenge in realistic multidisciplinary design optimization (MDO). We propose a linguistic approach to MDO problem description, formulation, and solution we call reconfigurable multidisciplinary synthesis (REMS). With assistance from computer science techniques, REMS comprises an abstract language and a collection of processes that provide a means for dynamic reasoning about MDO problems in a range of contexts. The approach may be summarized as follows. Description of disciplinary data according to the rules of a grammar, followed by lexical analysis and compilation, yields basic computational components that can be assembled into various MDO problem formulations and solution algorithms, including hybrid strategies, with relative ease. The ability to re-use the computational components is due to the special structure of the MDO problem. The range of contexts for reasoning about MDO spans tasks from error checking and derivative computation to formulation and reformulation of optimization problem statements. In highly structured contexts, reconfigurability can mean a straightforward transformation among problem formulations with a single operation. We hope that REMS will enable experimentation with a variety of problem formulations in research environments, assist in the assembly of MDO test problems, and serve as a pre-processor in computational frameworks in production environments. This paper, Part 1 of two companion papers, discusses the fundamentals of REMS. Part 2 illustrates the methodology in more detail.

  19. Reconfigurability in MDO Problem Synthesis. Part 2

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2004-01-01

    Integrating autonomous disciplines into a problem amenable to solution presents a major challenge in realistic multidisciplinary design optimization (MDO). We propose a linguistic approach to MDO problem description, formulation, and solution we call reconfigurable multidisciplinary synthesis (REMS). With assistance from computer science techniques, REMS comprises an abstract language and a collection of processes that provide a means for dynamic reasoning about MDO problems in a range of contexts. The approach may be summarized as follows. Description of disciplinary data according to the rules of a grammar, followed by lexical analysis and compilation, yields basic computational components that can be assembled into various MDO problem formulations and solution algorithms, including hybrid strategies, with relative ease. The ability to re-use the computational components is due to the special structure of the MDO problem. The range of contexts for reasoning about MDO spans tasks from error checking and derivative computation to formulation and reformulation of optimization problem statements. In highly structured contexts, reconfigurability can mean a straightforward transformation among problem formulations with a single operation. We hope that REMS will enable experimentation with a variety of problem formulations in research environments, assist in the assembly of MDO test problems, and serve as a pre-processor in computational frameworks in production environments. Part 1 of these two companion papers discusses the fundamentals of REMS. This paper, Part 2, illustrates the methodology in more detail.

  20. Applied Computational Electromagnetics Society Journal. Volume 7, Number 1, Summer 1992

    DTIC Science & Technology

    1992-01-01

    previously-solved computational problem in electrical engineering, physics, or related fields of study. The technical activities promoted by this...in solution technique or in data input/output; identification of new applica- tions for electromagnetics modeling codes and techniques; integration of...papers will represent the computational electromagnetics aspects of research in electrical engineering, physics, or related disciplines. However, papers

  1. Numerical Boundary Conditions for Computational Aeroacoustics Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Kurbatskii, Konstantin A.; Fang, Jun

    1997-01-01

    Category 1, Problems 1 and 2, Category 2, Problem 2, and Category 3, Problem 2 are solved computationally using the Dispersion-Relation-Preserving (DRP) scheme. All these problems are governed by the linearized Euler equations. The resolution requirements of the DRP scheme for maintaining low numerical dispersion and dissipation as well as accurate wave speeds in solving the linearized Euler equations are now well understood. As long as 8 or more mesh points per wavelength are employed in the numerical computation, high quality results are assured. For the first three categories of benchmark problems, therefore, the real challenge is to develop high quality numerical boundary conditions. For Category 1, Problems 1 and 2, it is the curved wall boundary conditions. For Category 2, Problem 2, it is the internal radiation boundary conditions inside the duct. For Category 3, Problem 2, they are the inflow and outflow boundary conditions upstream and downstream of the blade row. These are the foci of the present investigation. Special nonhomogeneous radiation boundary conditions that generate the incoming disturbances and at the same time allow the outgoing reflected or scattered acoustic disturbances to leave the computation domain without significant reflection are developed. Numerical results based on these boundary conditions are provided.
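
    The "8 or more mesh points per wavelength" rule of thumb cited in this abstract is easy to check for a given grid. The sketch below is illustrative only; the wave speed, frequency, grid spacing, and function names are invented for this example and are not from the paper:

```python
def points_per_wavelength(wave_speed, frequency, dx):
    """Mesh points per wavelength on a uniform grid with spacing dx."""
    wavelength = wave_speed / frequency
    return wavelength / dx

def resolution_ok(wave_speed, frequency, dx, minimum=8):
    # Rule of thumb quoted in the abstract: 8 or more points per wavelength.
    return points_per_wavelength(wave_speed, frequency, dx) >= minimum

# Illustrative numbers: c = 340 m/s, f = 1 kHz, grid spacing 4 cm
print(points_per_wavelength(340.0, 1000.0, 0.04))  # about 8.5
print(resolution_ok(340.0, 1000.0, 0.04))
```

    Coarsening the grid to 10 cm drops the count to about 3.4 points per wavelength, which the rule rejects.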

  2. Linear solver performance in elastoplastic problem solution on GPU cluster

    NASA Astrophysics Data System (ADS)

    Khalevitsky, Yu. V.; Konovalov, A. V.; Burmasheva, N. V.; Partin, A. S.

    2017-12-01

    Applying the finite element method to severe plastic deformation problems involves solving linear equation systems. While the solution procedure is relatively hard to parallelize and computationally intensive by itself, a long series of large-scale systems needs to be solved for each problem. When dealing with fine computational meshes, such as in the simulations of three-dimensional metal matrix composite microvolume deformation, tens and hundreds of hours may be needed to complete the whole solution procedure, even using modern supercomputers. In general, one of the preconditioned Krylov subspace methods is used in a linear solver for such problems. The method convergence highly depends on the operator spectrum of a problem stiffness matrix. In order to choose the appropriate method, a series of computational experiments is used. Different methods may be preferable for different computational systems for the same problem. In this paper we present experimental data obtained by solving linear equation systems from an elastoplastic problem on a GPU cluster. The data can be used to substantiate the choice of the appropriate method for a linear solver to use in severe plastic deformation simulations.
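
    The preconditioned Krylov subspace methods mentioned above generalize the conjugate gradient idea. As a minimal, unpreconditioned sketch (not the authors' GPU solver), plain conjugate gradient for a small symmetric positive-definite system can be written as:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive-definite system Ax = b."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Tiny SPD test system (invented numbers)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))
```

    In exact arithmetic CG converges in at most n steps; in practice, as the abstract notes, convergence on large stiffness matrices hinges on the operator spectrum, which is what preconditioning addresses.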

  3. A clinical study on "Computer vision syndrome" and its management with Triphala eye drops and Saptamrita Lauha.

    PubMed

    Gangamma, M P; Poonam; Rajagopala, Manjusha

    2010-04-01

    American Optometric Association (AOA) defines computer vision syndrome (CVS) as "Complex of eye and vision problems related to near work, which are experienced during or related to computer use". Most studies indicate that Video Display Terminal (VDT) operators report more eye related problems than non-VDT office workers. The causes for the inefficiencies and the visual symptoms are a combination of individual visual problems and poor office ergonomics. In this clinical study on "CVS", 151 patients were registered, out of whom 141 completed the treatment. In Group A, 45 patients had been prescribed Triphala eye drops; in Group B, 53 patients had been prescribed the Triphala eye drops and Saptamrita Lauha tablets internally, and in Group C, 43 patients had been prescribed the placebo eye drops and placebo tablets. In total, marked improvement was observed in 48.89, 54.71 and 6.98% patients in groups A, B and C, respectively.

  4. Computer-Assisted Problem Solving in School Mathematics

    ERIC Educational Resources Information Center

    Hatfield, Larry L.; Kieren, Thomas E.

    1972-01-01

    This study tested the hypothesis that writing and using computer programs related to selected mathematical content positively affects performance on those topics. Results partially support the hypothesis. (MM)

  5. Perceptions and performance using computer-based testing: One institution's experience.

    PubMed

    Bloom, Timothy J; Rich, Wesley D; Olson, Stephanie M; Adams, Michael L

    2018-02-01

    The purpose of this study was to evaluate student and faculty perceptions of the transition to a required computer-based testing format and to identify any impact of this transition on student exam performance. Separate questionnaires sent to students and faculty asked about perceptions of and problems with computer-based testing. Exam results from program-required courses for two years prior to and two years following the adoption of computer-based testing were compared to determine if this testing format impacted student performance. Responses to Likert-type questions about perceived ease of use showed no difference between students with one and three semesters experience with computer-based testing. Of 223 student-reported problems, 23% related to faculty training with the testing software. Students most commonly reported improved feedback (46% of responses) and ease of exam-taking (17% of responses) as benefits to computer-based testing. Faculty-reported difficulties were most commonly related to problems with student computers during an exam (38% of responses) while the most commonly identified benefit was collecting assessment data (32% of responses). Neither faculty nor students perceived an impact on exam performance due to computer-based testing. An analysis of exam grades confirmed there was no consistent performance difference between the paper and computer-based formats. Both faculty and students rapidly adapted to using computer-based testing. There was no evidence that switching to computer-based testing had any impact on student exam performance. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Summary of the Tandem Cylinder Solutions from the Benchmark Problems for Airframe Noise Computations-I Workshop

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2011-01-01

    Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark Problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured, and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and against experimental data from two facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details that would be necessary to compute the noise remains challenging. In particular, how best to simulate the effects of the experimental transition strip, and the associated high Reynolds number effects, was unclear. Furthermore, capturing the spanwise variation proved difficult.

  7. Mathematical Problem Solving: A Review of the Literature.

    ERIC Educational Resources Information Center

    Funkhouser, Charles

    The major perspectives on problem solving of the twentieth century are reviewed--associationism, Gestalt psychology, and cognitive science. The results of the review on teaching problem solving and the uses of computers to teach problem solving are included. Four major issues related to the teaching of problem solving are discussed: (1)…

  8. Bridging Social and Semantic Computing - Design and Evaluation of User Interfaces for Hybrid Systems

    ERIC Educational Resources Information Center

    Bostandjiev, Svetlin Alex I.

    2012-01-01

    The evolution of the Web brought new interesting problems to computer scientists that we loosely classify in the fields of social and semantic computing. Social computing is related to two major paradigms: computations carried out by a large amount of people in a collective intelligence fashion (i.e. wikis), and performing computations on social…

  9. Examining the Relationship between Psychosocial Work Factors and Musculoskeletal Discomfort among Computer Users in Malaysia

    PubMed Central

    Zakerian, SA; Subramaniam, ID

    2011-01-01

    Background: With computers rapidly carving a niche in virtually every nook and crevice of today’s fast-paced society, musculoskeletal disorders are becoming more prevalent among computer users, which comprise a wide spectrum of the Malaysian population, including office workers. While extant literature depicts extensive research on musculoskeletal disorders in general, the five dimensions of psychosocial work factors (job demands, job contentment, job control, computer-related problems and social interaction) attributed to work-related musculoskeletal disorders have been neglected. This study examines the aforementioned elements in detail, pertaining to their relationship with musculoskeletal disorders, focusing in particular, on 120 office workers at Malaysian public sector organizations, whose jobs require intensive computer usage. Methods: Research was conducted between March and July 2009 in public service organizations in Malaysia. This study was conducted via a survey utilizing self-complete questionnaires and diary. The relationship between psychosocial work factors and musculoskeletal discomfort was ascertained through regression analyses, which revealed that some factors were more important than others were. Results: The results indicate a significant relationship among psychosocial work factors and musculoskeletal discomfort among computer users. Several of these factors such as job control, computer-related problem and social interaction of psychosocial work factors are found to be more important than others in musculoskeletal discomfort. Conclusion: With computer usage on the rise among users, the prevalence of musculoskeletal discomfort could lead to unnecessary disabilities, hence, the vital need for greater attention to be given on this aspect in the work place, to alleviate to some extent, potential problems in future. PMID:23113058

  10. Quantum Heterogeneous Computing for Satellite Positioning Optimization

    NASA Astrophysics Data System (ADS)

    Bass, G.; Kumar, V.; Dulny, J., III

    2016-12-01

    Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.
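
    Annealers of the kind described above minimize quadratic objectives over binary variables (QUBO form). A classical stand-in such as simulated annealing illustrates the same search problem; the three-variable QUBO and the cooling schedule below are invented for illustration and are not the authors' hybrid stack:

```python
import math
import random

def qubo_energy(x, Q):
    """Energy sum_ij Q[i][j] * x[i] * x[j] of a binary vector x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def anneal(Q, steps=5000, t0=2.0, seed=1):
    """Single-bit-flip simulated annealing; returns the best state seen."""
    rng = random.Random(seed)
    n = len(Q)
    x = [rng.randint(0, 1) for _ in range(n)]
    e = qubo_energy(x, Q)
    best_x, best_e = x[:], e
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9  # linear cooling schedule
        i = rng.randrange(n)
        x[i] ^= 1                             # propose flipping one bit
        e_new = qubo_energy(x, Q)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new                         # accept the move
            if e < best_e:
                best_x, best_e = x[:], e
        else:
            x[i] ^= 1                         # reject: undo the flip
    return best_x, best_e

# Toy QUBO (upper triangular); exhaustive minimum is x = [1, 1, 0], energy -3
Q = [[-1, -1,  0],
     [ 0, -1,  2],
     [ 0,  0, -1]]
best_x, best_e = anneal(Q)
print(best_x, best_e)
```

    On hardware annealers the same objective is encoded in qubit couplings, subject to the size and connectivity limits the abstract mentions.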

  11. GPU-based High-Performance Computing for Radiation Therapy

    PubMed Central

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B.

    2014-01-01

    Recent developments in radiation therapy demand high computation power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous number of studies have been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this article, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented. PMID:24486639

  12. Institute for Computational Mechanics in Propulsion (ICOMP)

    NASA Technical Reports Server (NTRS)

    Feiler, Charles E. (Compiler)

    1993-01-01

    The Institute for Computational Mechanics in Propulsion (ICOMP) was established at the NASA Lewis Research Center in Cleveland, Ohio to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. The activities at ICOMP during 1992 are described.

  13. Government regulations and other influences on the medical use of computers.

    PubMed

    Mishelevich, D J; Grams, R R; Mize, S G; Smith, J P

    1979-01-01

    This paper presents points brought out in a panel discussion held at the 12th Hawaiian International Conference on System Sciences, January 1979. The session was attended by approximately two dozen interested parties from various segments of the academic, government, and health care communities. The broad categories covered include the specific problems of government regulations and their impact on specific clinical information systems installed at The University of Texas Health Science Center at Dallas, opportunities in a regulated environment, problems in a regulated environment, vendor-related issues in the marketing and manufacture of computer-based information systems, rational approaches to government control, and specific issues related to medical computer science.

  14. Computer Use and Vision-Related Problems Among University Students In Ajman, United Arab Emirate

    PubMed Central

    Shantakumari, N; Eldeeb, R; Sreedharan, J; Gopal, K

    2014-01-01

    Background: The extensive use of computers as medium of teaching and learning in universities necessitates introspection into the extent of computer related health disorders among student population. Aim: This study was undertaken to assess the pattern of computer usage and related visual problems, among University students in Ajman, United Arab Emirates. Materials and Methods: A total of 500 Students studying in Gulf Medical University, Ajman and Ajman University of Science and Technology were recruited into this study. Demographic characteristics, pattern of usage of computers and associated visual symptoms were recorded in a validated self-administered questionnaire. Chi-square test was used to determine the significance of the observed differences between the variables. The level of statistical significance was at P < 0.05. The crude odds ratio (OR) was determined using simple binary logistic regression and adjusted OR was calculated using the multiple logistic regression. Results: The mean age of participants was 20.4 (3.2) years. The analysis of racial data reveals that 50% (236/471) students were from Middle East, 32% (151/471) from other parts of Asia, 11% (52/471) from Africa, 4% (19/471) from America and 3% (14/471) from Europe. The most common visual problems reported among computer users were headache - 53.3% (251/471), burning sensation in the eyes - 54.8% (258/471) and tired eyes - 48% (226/471). Female students were found to be at a higher risk. Nearly 72% of students reported frequent interruption of computer work. Headache caused interruption of work in 43.85% (110/168) of the students while tired eyes caused interruption of work in 43.5% (98/168) of the students. When the screen was viewed at distance more than 50 cm, the prevalence of headaches decreased by 38% (50-100 cm - OR: 0.62, 95% confidence interval [CI]: 0.42-0.92). Prevalence of tired eyes increased by 89% when screen filters were not used (OR: 1.894, 95% CI: 1.065-3.368). Conclusion: High prevalence of vision related problems was noted among university students. Sustained periods of close screen work without screen filters were found to be associated with occurrence of the symptoms and increased interruptions of work of the students. There is a need to increase the ergonomic awareness among students and corrective measures need to be implemented to reduce the impact of computer related vision problems. PMID:24761249
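
    The crude odds ratios and 95% confidence intervals reported in abstracts like the one above come from 2x2 tables. A minimal sketch using the standard Woolf (logit) interval; the counts below are invented for illustration and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Woolf (logit) 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 40/60 symptomatic among non-users of screen filters,
# 25/75 among users (numbers invented, not from the study)
or_, lo, hi = odds_ratio_ci(40, 60, 25, 75)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

    An interval that excludes 1 (as here) corresponds to a statistically significant association at P < 0.05, the threshold used in the study.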

  15. Redesigning the Quantum Mechanics Curriculum to Incorporate Problem Solving Using a Computer Algebra System

    NASA Astrophysics Data System (ADS)

    Roussel, Marc R.

    1999-10-01

    One of the traditional obstacles to learning quantum mechanics is the relatively high level of mathematical proficiency required to solve even routine problems. Modern computer algebra systems are now sufficiently reliable that they can be used as mathematical assistants to alleviate this difficulty. In the quantum mechanics course at the University of Lethbridge, the traditional three lecture hours per week have been replaced by two lecture hours and a one-hour computer-aided problem solving session using a computer algebra system (Maple). While this somewhat reduces the number of topics that can be tackled during the term, students have a better opportunity to familiarize themselves with the underlying theory with this course design. Maple is also available to students during examinations. The use of a computer algebra system expands the class of feasible problems during a time-limited exercise such as a midterm or final examination. A modern computer algebra system is a complex piece of software, so some time needs to be devoted to teaching the students its proper use. However, the advantages to the teaching of quantum mechanics appear to outweigh the disadvantages.

  16. The Evolution of Computer Based Learning Software Design: Computer Assisted Teaching Unit Experience.

    ERIC Educational Resources Information Center

    Blandford, A. E.; Smith, P. R.

    1986-01-01

    Describes the style of design of computer simulations developed by Computer Assisted Teaching Unit at Queen Mary College with reference to user interface, input and initialization, input data vetting, effective display screen use, graphical results presentation, and need for hard copy. Procedures and problems relating to academic involvement are…

  17. Assessment regarding the use of the computer aided analytical models in the calculus of the general strength of a ship hull

    NASA Astrophysics Data System (ADS)

    Hreniuc, V.; Hreniuc, A.; Pescaru, A.

    2017-08-01

    Solving a general strength problem of a ship hull may be done using analytical approaches, which are useful to deduce the distribution of buoyancy forces, the distribution of weight forces along the hull, and the geometrical characteristics of the sections. These data are used to draw the free body diagrams and to compute the stresses. General strength problems require a large amount of calculation, therefore it is interesting how a computer may be used to solve such problems. Using computer programming, an engineer may conceive software instruments based on analytical approaches. However, before developing the computer code the research topic must be thoroughly analysed, in this way reaching a meta-level of understanding of the problem. The following stage is to conceive an appropriate development strategy for the original software instruments useful for the rapid development of computer aided analytical models. The geometrical characteristics of the sections may be computed using a Boolean algebra that operates with ‘simple’ geometrical shapes. By ‘simple’ we mean that for these shapes we have direct calculation relations. The set of ‘simple’ shapes also includes geometrical entities bounded by curves approximated as spline functions or as polygons. To conclude, computer programming offers the necessary support to solve general strength ship hull problems using analytical methods.
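
    The Boolean-algebra-of-simple-shapes idea for section properties can be sketched with signed areas: holes enter with negative sign, and the composite centroid is the area-weighted average of the shape centroids. The shapes and numbers below are illustrative, not from the paper:

```python
def rect(width, height, x0, y0, sign=1):
    """A 'simple' shape: axis-aligned rectangle with lower-left corner (x0, y0).
    sign = -1 subtracts the shape (a hole), mimicking a Boolean difference."""
    area = sign * width * height
    cx, cy = x0 + width / 2, y0 + height / 2
    return area, cx, cy

def composite_properties(shapes):
    """Total area and centroid of a composite section of signed simple shapes."""
    area = sum(a for a, _, _ in shapes)
    cx = sum(a * x for a, x, _ in shapes) / area
    cy = sum(a * y for a, _, y in shapes) / area
    return area, cx, cy

# A 10 x 6 plate with a centered 4 x 2 hole (invented dimensions)
section = [rect(10, 6, 0, 0), rect(4, 2, 3, 2, sign=-1)]
print(composite_properties(section))
```

    Second moments of area follow the same signed-sum pattern, with a parallel-axis shift per shape.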

  18. Low-Budget Computer Programming in Your School (An Alternative to the Cost of Large Computers). Illinois Series on Educational Applications of Computers. No. 14.

    ERIC Educational Resources Information Center

    Dennis, J. Richard; Thomson, David

    This paper is concerned with a low cost alternative for providing computer experience to secondary school students. The brief discussion covers the programmable calculator and its relevance for teaching the concepts and the rudiments of computer programming and for computer problem solving. A list of twenty-five programming activities related to…

  19. Searching for memories, Sudoku, implicit check bits, and the iterative use of not-always-correct rapid neural computation.

    PubMed

    Hopfield, J J

    2008-05-01

    The algorithms that simple feedback neural circuits representing a brain area can rapidly carry out are often adequate to solve easy problems but for more difficult problems can return incorrect answers. A new excitatory-inhibitory circuit model of associative memory displays the common human problem of failing to rapidly find a memory when only a small clue is present. The memory model and a related computational network for solving Sudoku puzzles produce answers that contain implicit check bits in the representation of information across neurons, allowing a rapid evaluation of whether the putative answer is correct or incorrect through a computation related to visual pop-out. This fact may account for our strong psychological feeling of right or wrong when we retrieve a nominal memory from a minimal clue. This information allows more difficult computations or memory retrievals to be done in a serial fashion by using the fast but limited capabilities of a computational module multiple times. The mathematics of the excitatory-inhibitory circuits for associative memory and for Sudoku, both of which are understood in terms of energy or Lyapunov functions, is described in detail.
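
    The energy-function view of associative memory described above can be illustrated with a minimal Hopfield network: Hebbian weights store ±1 patterns, asynchronous updates never increase the Lyapunov energy, and a corrupted clue relaxes to the nearest stored memory. This toy sketch is not the authors' excitatory-inhibitory model:

```python
import numpy as np

def train_hebbian(patterns):
    """Hebbian weight matrix for ±1 patterns, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, s):
    """Lyapunov energy E = -1/2 s^T W s; it never increases under updates."""
    return -0.5 * s @ W @ s

def recall(W, s, sweeps=5):
    """Asynchronous threshold updates from a partial clue toward a memory."""
    s = s.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

patterns = np.array([[1, 1, 1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1]], dtype=float)
W = train_hebbian(patterns)
clue = np.array([1, 1, 1, -1, -1, 1], dtype=float)  # memory 0, one bit wrong
out = recall(W, clue)
print(out, energy(W, out) <= energy(W, clue))
```

    Comparing the recalled state against the stored pattern is the kind of cheap correctness check the abstract's "implicit check bits" idea generalizes.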

  20. Relate@IU>>>Share@IU: A New and Different Computer-Based Communications Paradigm.

    ERIC Educational Resources Information Center

    Frick, Theodore W.; Roberto, Joseph; Korkmaz, Ali; Oh, Jeong-En; Twal, Riad

    The purpose of this study was to examine problems with the current computer-based electronic communication systems and to initially test and revise a new and different paradigm for e-collaboration, Relate@IU. Understanding the concept of sending links to resources, rather than sending the resource itself, is at the core of how Relate@IU differs…

  1. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows

    PubMed Central

    Wang, Di; Kleinberg, Robert D.

    2009-01-01

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596
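
    For small instances, the persistencies the paper extracts from flow problems can be read off directly by enumerating all minimizing assignments. The brute-force sketch below (illustrative only; the flow-based algorithms are what make detection tractable for large n) finds pairs of variables that agree, or disagree, in every minimizer:

```python
from itertools import product

def qubo_value(x, Q):
    """Value of the quadratic polynomial sum_ij Q[i][j] x[i] x[j], x in {0,1}^n."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def minimizers(Q):
    """All minimizing assignments of a small QUBO, by brute force."""
    n = len(Q)
    values = {x: qubo_value(x, Q) for x in product((0, 1), repeat=n)}
    best = min(values.values())
    return [x for x, v in values.items() if v == best], best

def persistencies(mins):
    """Pairs (i, j) whose values agree in every minimizing assignment
    ('same') or differ in every one ('different')."""
    n = len(mins[0])
    same = [(i, j) for i in range(n) for j in range(i + 1, n)
            if all(x[i] == x[j] for x in mins)]
    diff = [(i, j) for i in range(n) for j in range(i + 1, n)
            if all(x[i] != x[j] for x in mins)]
    return same, diff

# Invented 3-variable instance: the x0*x1 penalty forces x0 != x1 at the minimum
Q = [[-1,  2,  0],
     [ 0, -1,  0],
     [ 0,  0, -1]]
mins, best = minimizers(Q)
print(mins, best, persistencies(mins))
```

    Here the minimizers are (0, 1, 1) and (1, 0, 1), so variables 0 and 1 are persistently different, which is exactly the kind of relational persistency the multicommodity flow analysis detects without enumeration.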

  2. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows.

    PubMed

    Wang, Di; Kleinberg, Robert D

    2009-11-28

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C(2), C(3), C(4),…. It is known that C(2) can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing C(k) (k > 2) require solving a linear program. In this paper we prove that C(3) can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}(n), this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network.

  3. Problems Related to Computer Ethics: Origins of the Problems and Suggested Solutions

    ERIC Educational Resources Information Center

    Kuzu, Abdullah

    2009-01-01

    Increasing use of information and communication technologies (ICTs) help individuals to solve several everyday problems, which used to be harder, more complicated and time consuming. Even though ICTs provide individuals with many advantages, they might also serve as grounds for several societal and ethical problems which vary in accordance with…

  4. Educational NASA Computational and Scientific Studies (enCOMPASS)

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2013-01-01

    Educational NASA Computational and Scientific Studies (enCOMPASS) is an educational project of NASA Goddard Space Flight Center aimed at bridging the gap between computational objectives and needs of NASA's scientific research, missions, and projects, and academia's latest advances in applied mathematics and computer science. enCOMPASS achieves this goal via bidirectional collaboration and communication between NASA and academia. Using developed NASA Computational Case Studies in university computer science/engineering and applied mathematics classes is a way of addressing NASA's goals of contributing to the Science, Technology, Engineering, and Math (STEM) National Objective. The enCOMPASS Web site at http://encompass.gsfc.nasa.gov provides additional information. There are currently nine enCOMPASS case studies developed in areas of earth sciences, planetary sciences, and astrophysics. Some of these case studies have been published in AIP and IEEE's Computing in Science and Engineering magazines. A few university professors have used enCOMPASS case studies in their computational classes and contributed their findings to NASA scientists. In these case studies, after introducing the science area, the specific problem, and related NASA missions, students are first asked to solve a known problem using NASA data and past approaches used and often published in a scientific/research paper. Then, after learning about the NASA application and related computational tools and approaches for solving the proposed problem, students are given a harder problem as a challenge for them to research and develop solutions for. This project provides a model for NASA scientists and engineers on one side, and university students, faculty, and researchers in computer science and applied mathematics on the other side, to learn from each other's areas of work, computational needs and solutions, and the latest advances in research and development. This innovation takes NASA science and engineering applications to computer science and applied mathematics university classes, and makes NASA objectives part of the university curricula. There is great potential for growth and return on investment of this program to the point where every major university in the U.S. would use at least one of these case studies in one of their computational courses, and where every NASA scientist and engineer facing a computational challenge (without having resources or expertise to solve it) would use enCOMPASS to formulate the problem as a case study, provide it to a university, and get back their solutions and ideas.

  5. Problem-Solving Rules for Genetics.

    ERIC Educational Resources Information Center

    Collins, Angelo

    The categories and applications of strategic knowledge as these relate to problem solving in the area of transmission genetics are examined in this research study. The role of computer simulations in helping students acquire the strategic knowledge necessary to solve realistic transmission genetics problems was emphasized. The Genetics…

  6. Present status of computational tools for maglev development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Z.; Chen, S.S.; Rote, D.M.

    1991-10-01

    High-speed vehicles that employ magnetic levitation (maglev) have received great attention worldwide as a means of relieving both highway and air-traffic congestion. At this time, Japan and Germany are leading the development of maglev. After fifteen years of inactivity that is attributed to technical policy decisions, the federal government of the United States has reconsidered the possibility of using maglev in the United States. The National Maglev Initiative (NMI) was established in May 1990 to assess the potential of maglev in the United States. One of the tasks of the NMI, which is also the objective of this report, is to determine the status of existing computer software that can be applied to maglev-related problems. The computational problems involved in maglev assessment, research, and development can be classified into two categories: electromagnetic and mechanical. Because most maglev problems are complicated and difficult to solve analytically, proper numerical methods are needed to find solutions. To determine the status of maglev-related software, developers and users of computer codes were surveyed. The results of the survey are described in this report. 25 refs.

  7. Computer Graphics-aided systems analysis: application to well completion design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detamore, J.E.; Sarma, M.P.

    1985-03-01

The development of an engineering tool (in the form of a computer model) for solving design and analysis problems related to oil and gas well production operations is discussed. The method is based on integrating the concepts of "systems analysis" with the techniques of "computer graphics". The concepts behind the method are very general in nature. This paper, however, illustrates the application of the method in solving gas well completion design problems. The use of the method will save time and improve the efficiency of such design and analysis work. The method can be extended to other design and analysis aspects of oil and gas wells.

  8. Providing Practical Applications of Computer Technology for Fifth Grade Students in Career Awareness Laboratories.

    ERIC Educational Resources Information Center

    Pereno, Joan S.

    This practicum addressed the problem of providing practical computer application experiences to fifth grade students as they relate to real life work situations. The primary goal was to have students become cognizant of computer functions within the work setting as contrasted with viewing computer activities as instruments used for games or…

  9. Factors Influencing Computer Anxiety and Its Impact on E-Learning Effectiveness: A Review of Literature

    ERIC Educational Resources Information Center

    Chien, Tien-Chen

    2008-01-01

    Computer is not only a powerful technology for managing information and enhancing productivity, but also an efficient tool for education and training. Computer anxiety can be one of the major problems that affect the effectiveness of learning. Through analyzing related literature, this study describes the phenomenon of computer anxiety,…

  10. Elders, Students & Computers--Background Information. Illinois Series on Educational Technology of Computers. Number 8.

    ERIC Educational Resources Information Center

    Jaycox, Kathy; Hicks, Bruce

    This report reviews the literature relating to computer uses for elders. Topics include: (1) variables affecting computer use by elders; (2) organizations and programs serving elders in Champaign County, Illinois; (3) University of Illinois workshops on problems of older people; (4) The Senior Citizens Project of Volunteer Illini Projects; (5)…

  11. Fostering Recursive Thinking in Combinatorics through the Use of Manipulatives and Computing Technology.

    ERIC Educational Resources Information Center

    Abramovich, Sergei; Pieper, Anne

    1996-01-01

    Describes the use of manipulatives for solving simple combinatorial problems which can lead to the discovery of recurrence relations for permutations and combinations. Numerical evidence and visual imagery generated by a computer spreadsheet through modeling these relations can enable students to experience the ease and power of combinatorial…
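As a hedged illustration (not from the cited study), the kind of recurrence relation students can discover for combinations, C(n, k) = C(n-1, k-1) + C(n-1, k), can be built row by row the way a spreadsheet fills cells:

```python
# Pascal's recurrence C(n, k) = C(n-1, k-1) + C(n-1, k),
# computed row by row, spreadsheet-style.
def pascal_rows(n_max):
    rows = [[1]]
    for n in range(1, n_max + 1):
        prev = rows[-1]
        # each interior cell is the sum of the two cells above it
        row = [1] + [prev[k - 1] + prev[k] for k in range(1, n)] + [1]
        rows.append(row)
    return rows

print(pascal_rows(5)[5])  # row n=5: [1, 5, 10, 10, 5, 1]
```

The same cell-by-cell pattern transfers directly to a spreadsheet, where each cell references the two cells above it.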

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koniges, A.E.; Craddock, G.G.; Schnack, D.D.

The purpose of the workshop was to assemble workers, both within and outside of the fusion-related computation areas, for discussion of the issues of dynamically adaptive gridding. There were three invited talks on adaptive-gridding experience in related fields of computational fluid dynamics (CFD), and nine short talks reporting on the progress of adaptive techniques in the specific areas of scrape-off-layer (SOL) modeling and magnetohydrodynamic (MHD) stability. Adaptive mesh methods have been successful in a number of diverse fields of CFD for over a decade. The method involves dynamic refinement of computed field profiles in a way that disperses uniformly the numerical errors associated with discrete approximations. Because the process optimizes computational effort, adaptive mesh methods can be used to study otherwise intractable physical problems that involve complex boundary shapes or multiple spatial/temporal scales. Recent results indicate that these adaptive techniques will be required for tokamak fluid-based simulations involving diverted-tokamak SOL modeling and MHD simulation problems related to the highest-priority ITER-relevant issues. Individual papers are indexed separately on the energy data bases.

  13. Technology, attributions, and emotions in post-secondary education: An application of Weiner’s attribution theory to academic computing problems

    PubMed Central

    Hall, Nathan C.; Goetz, Thomas; Chiarella, Andrew; Rahimi, Sonia

    2018-01-01

    As technology becomes increasingly integrated with education, research on the relationships between students’ computing-related emotions and motivation following technological difficulties is critical to improving learning experiences. Following from Weiner’s (2010) attribution theory of achievement motivation, the present research examined relationships between causal attributions and emotions concerning academic computing difficulties in two studies. Study samples consisted of North American university students enrolled in both traditional and online universities (total N = 559) who responded to either hypothetical scenarios or experimental manipulations involving technological challenges experienced in academic settings. Findings from Study 1 showed stable and external attributions to be emotionally maladaptive (more helplessness, boredom, guilt), particularly in response to unexpected computing problems. Additionally, Study 2 found stable attributions for unexpected problems to predict more anxiety for traditional students, with both external and personally controllable attributions for minor problems proving emotionally beneficial for students in online degree programs (more hope, less anxiety). Overall, hypothesized negative effects of stable attributions were observed across both studies, with mixed results for personally controllable attributions and unanticipated emotional benefits of external attributions for academic computing problems warranting further study. PMID:29529039

  14. Technology, attributions, and emotions in post-secondary education: An application of Weiner's attribution theory to academic computing problems.

    PubMed

    Maymon, Rebecca; Hall, Nathan C; Goetz, Thomas; Chiarella, Andrew; Rahimi, Sonia

    2018-01-01

    As technology becomes increasingly integrated with education, research on the relationships between students' computing-related emotions and motivation following technological difficulties is critical to improving learning experiences. Following from Weiner's (2010) attribution theory of achievement motivation, the present research examined relationships between causal attributions and emotions concerning academic computing difficulties in two studies. Study samples consisted of North American university students enrolled in both traditional and online universities (total N = 559) who responded to either hypothetical scenarios or experimental manipulations involving technological challenges experienced in academic settings. Findings from Study 1 showed stable and external attributions to be emotionally maladaptive (more helplessness, boredom, guilt), particularly in response to unexpected computing problems. Additionally, Study 2 found stable attributions for unexpected problems to predict more anxiety for traditional students, with both external and personally controllable attributions for minor problems proving emotionally beneficial for students in online degree programs (more hope, less anxiety). Overall, hypothesized negative effects of stable attributions were observed across both studies, with mixed results for personally controllable attributions and unanticipated emotional benefits of external attributions for academic computing problems warranting further study.

  15. Computer memory: the LLL experience. [Octopus computer network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fletcher, J.G.

    1976-02-01

    Those aspects of Octopus computer network design are reviewed that relate to memory and storage. Emphasis is placed on the difficulties and problems that arise because of the limitations of present storage devices, and indications are made of the directions in which technological advance could be of most value. (auth)

  16. Developing a Computer Literate Faculty at College of DuPage.

    ERIC Educational Resources Information Center

    Carlson, Bart

    Until 1978, academic and administrative departments at College of DuPage, an Illinois community college, bought computer related equipment and software without an overall plan or coordination. The development of a coordination plan focused on finding an internal mechanism to solve two problems: individual departments buying computer-related…

  17. Computers in Science: Thinking Outside the Discipline.

    ERIC Educational Resources Information Center

    Hamilton, Todd M.

    2003-01-01

    Describes the Computers in Science course which integrates computer-related techniques into the science disciplines of chemistry, physics, biology, and Earth science. Uses a team teaching approach and teaches students how to solve chemistry problems with spreadsheets, identify minerals with X-rays, and chemical and force analysis. (Contains 14…

  18. THE COMPUTER AND THE ARCHITECTURAL PROFESSION.

    ERIC Educational Resources Information Center

    HAVILAND, DAVID S.

    THE ROLE OF ADVANCING TECHNOLOGY IN THE FIELD OF ARCHITECTURE IS DISCUSSED IN THIS REPORT. PROBLEMS IN COMMUNICATION AND THE DESIGN PROCESS ARE IDENTIFIED. ADVANTAGES AND DISADVANTAGES OF COMPUTERS ARE MENTIONED IN RELATION TO MAN AND MACHINE INTERACTION. PRESENT AND FUTURE IMPLICATIONS OF COMPUTER USAGE ARE IDENTIFIED AND DISCUSSED WITH RESPECT…

  19. Relations between work and upper extremity musculoskeletal problems (UEMSP) and the moderating role of psychosocial work factors on the relation between computer work and UEMSP.

    PubMed

    Nicolakakis, Nektaria; Stock, Susan R; Abrahamowicz, Michal; Kline, Rex; Messing, Karen

    2017-11-01

Computer work has been identified as a risk factor for upper extremity musculoskeletal problems (UEMSP). But few studies have investigated how psychosocial and organizational work factors affect this relation. Nor have gender differences in the relation between UEMSP and these work factors been studied. We sought to estimate: (1) the association between UEMSP and a range of physical, psychosocial and organizational work exposures, including the duration of computer work, and (2) the moderating effect of psychosocial work exposures on the relation between computer work and UEMSP. Using 2007-2008 Québec survey data on 2478 workers, we carried out gender-stratified multivariable logistic regression modeling and two-way interaction analyses. In both genders, odds of UEMSP were higher with exposure to high physical work demands and emotionally demanding work. Additionally among women, UEMSP were associated with duration of occupational computer exposure, sexual harassment, tense situations when dealing with clients, high quantitative demands and lack of prospects for promotion, and among men, with low coworker support, episodes of unemployment, low job security and contradictory work demands. Among women, the effect of computer work on UEMSP was considerably increased in the presence of emotionally demanding work, and may also be moderated by low recognition at work, contradictory work demands, and low supervisor support. These results suggest that the relations between UEMSP and computer work are moderated by psychosocial work exposures and that the relations between working conditions and UEMSP are somewhat different for each gender, highlighting the complexity of these relations and the importance of considering gender.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.; Riensche, Roderick M.; Haack, Jereme N.

“Gamification”, the application of gameplay to real-world problems, enables the development of human computation systems that support decision-making through the integration of social and machine intelligence. One of gamification’s major benefits is the creation of a problem-solving environment where the influence of cognitive and cultural biases on human judgment can be curtailed through collaborative and competitive reasoning. By reducing biases on human judgment, gamification allows human computation systems to exploit human creativity relatively unhindered by human error. Operationally, gamification uses simulation to harvest human behavioral data that provide valuable insights for the solution of real-world problems.

  1. Numerical methods on some structured matrix algebra problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1996-06-01

This proposal concerned the design, analysis, and implementation of serial and parallel algorithms for certain structured matrix algebra problems. It emphasized large order problems and so focused on methods that can be implemented efficiently on distributed-memory MIMD multiprocessors. Such machines supply the computing power and extensive memory demanded by the large order problems. We proposed to examine three classes of matrix algebra problems: the symmetric and nonsymmetric eigenvalue problems (especially the tridiagonal cases) and the solution of linear systems with specially structured coefficient matrices. As all of these are of practical interest, a major goal of this work was to translate our research in linear algebra into useful tools for use by the computational scientists interested in these and related applications. Thus, in addition to software specific to the linear algebra problems, we proposed to produce a programming paradigm and library to aid in the design and implementation of programs for distributed-memory MIMD computers. We now report on our progress on each of the problems and on the programming tools.

  2. Effects of the Multiple Solutions and Question Prompts on Generalization and Justification for Non-Routine Mathematical Problem Solving in a Computer Game Context

    ERIC Educational Resources Information Center

    Lee, Chun-Yi; Chen, Ming-Jang; Chang, Wen-Long

    2014-01-01

    The aim of this study is to investigate the effects of solution methods and question prompts on generalization and justification of non-routine problem solving for Grade 9 students. The learning activities are based on the context of the frog jumping game. In addition, related computer tools were used to support generalization and justification of…

We introduce an algorithm for the simultaneous reconstruction of faults and slip fields

    NASA Astrophysics Data System (ADS)

    Volkov, D.

    2017-12-01

    We introduce an algorithm for the simultaneous reconstruction of faults and slip fields on those faults. We define a regularized functional to be minimized for the reconstruction. We prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that we obtain a way of quantifying uncertainties as part of our final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation we developed an algorithm for the numerical solution to the stochastic minimization problem which can be easily implemented on a parallel multi-core platform and we discuss techniques to save on computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data. The data was recorded during a slow slip event in Guerrero, Mexico.
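The generic Bayesian sampling machinery behind such an approach can be illustrated with a toy Metropolis sampler. This is an illustrative sketch only: the authors' parallel stochastic minimization algorithm is far more elaborate, and the one-dimensional Gaussian target here is hypothetical.

```python
import math
import random

# Toy Metropolis sampler for a log-posterior log_post(x).
# Proposes Gaussian steps and accepts with the Metropolis rule.
def metropolis(log_post, x0, steps=20000, scale=0.5, seed=1):
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(steps):
        y = x + rng.gauss(0.0, scale)
        lp_y = log_post(y)
        # accept with probability min(1, exp(lp_y - lp))
        if math.log(rng.random()) < lp_y - lp:
            x, lp = y, lp_y
        samples.append(x)
    return samples

# Hypothetical unit-variance Gaussian posterior centered at 2.0
# (standing in for, say, a single slip parameter).
samples = metropolis(lambda x: -0.5 * (x - 2.0) ** 2, x0=0.0)
mean = sum(samples[5000:]) / len(samples[5000:])
```

The posterior mean and spread of the retained samples are what give the uncertainty quantification that a purely deterministic inversion cannot.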

  4. Automatically Generated Algorithms for the Vertex Coloring Problem

    PubMed Central

    Contreras Bolton, Carlos; Gatica, Gustavo; Parada, Víctor

    2013-01-01

The vertex coloring problem is a classical problem in combinatorial optimization that consists of assigning a color to each vertex of a graph such that no adjacent vertices share the same color, minimizing the number of colors used. Despite the various practical applications that exist for this problem, its NP-hardness still represents a computational challenge. Some of the best computational results obtained for this problem are consequences of hybridizing the various known heuristics. Automatically searching the space of combinations of these techniques for the most adequate combination has received less attention. In this paper, we propose exploring the heuristics space for the vertex coloring problem using evolutionary algorithms. We automatically generate three new algorithms by combining elementary heuristics. To evaluate the new algorithms, a computational experiment was performed that allowed comparing them numerically with existing heuristics. The obtained algorithms present an average 29.97% relative error, while four other heuristics selected from the literature present a 59.73% error, considering 29 of the more difficult instances in the DIMACS benchmark. PMID:23516506
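One of the elementary heuristics such evolutionary combinations build on is greedy (first-fit) coloring: visit vertices in some order and give each the smallest color unused by its neighbors. A minimal sketch (illustrative, not one of the paper's generated algorithms):

```python
# Greedy (first-fit) vertex coloring.
# adj maps each vertex to a list of its neighbors.
def greedy_coloring(adj, order=None):
    # default order: largest degree first, a common heuristic
    if order is None:
        order = sorted(adj, key=lambda v: -len(adj[v]))
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:   # smallest color not used by a neighbor
            c += 1
        color[v] = c
    return color

# 5-cycle: chromatic number 3
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = greedy_coloring(adj)
assert all(coloring[u] != coloring[v] for u in adj for v in adj[u])
```

The number of colors greedy uses depends heavily on the vertex order, which is exactly the kind of degree of freedom that hybrid and evolutionary approaches exploit.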

  5. Assessing collaborative computing: development of the Collaborative-Computing Observation Instrument (C-COI)

    NASA Astrophysics Data System (ADS)

    Israel, Maya; Wherfel, Quentin M.; Shehab, Saadeddine; Ramos, Evan A.; Metzger, Adam; Reese, George C.

    2016-07-01

    This paper describes the development, validation, and uses of the Collaborative Computing Observation Instrument (C-COI), a web-based analysis instrument that classifies individual and/or collaborative behaviors of students during computing problem-solving (e.g. coding, programming). The C-COI analyzes data gathered through video and audio screen recording software that captures students' computer screens as they program, and their conversations with their peers or adults. The instrument allows researchers to organize and quantify these data to track behavioral patterns that could be further analyzed for deeper understanding of persistence and/or collaborative interactions. The article provides a rationale for the C-COI including the development of a theoretical framework for measuring collaborative interactions in computer-mediated environments. This theoretical framework relied on the computer-supported collaborative learning literature related to adaptive help seeking, the joint problem-solving space in which collaborative computing occurs, and conversations related to outcomes and products of computational activities. Instrument development and validation also included ongoing advisory board feedback from experts in computer science, collaborative learning, and K-12 computing as well as classroom observations to test out the constructs in the C-COI. These processes resulted in an instrument with rigorous validation procedures and a high inter-rater reliability.

  6. Workshop report on large-scale matrix diagonalization methods in chemistry theory institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C.H.; Shepard, R.L.; Huss-Lederman, S.

The Large-Scale Matrix Diagonalization Methods in Chemistry theory institute brought together 41 computational chemists and numerical analysts. The goal was to understand the needs of the computational chemistry community in problems that utilize matrix diagonalization techniques. This was accomplished by reviewing the current state of the art and looking toward future directions in matrix diagonalization techniques. This institute occurred about 20 years after a related meeting of similar size. During those 20 years the Davidson method continued to dominate the problem of finding a few extremal eigenvalues for many computational chemistry problems. Work on non-diagonally-dominant and non-Hermitian problems as well as parallel computing has also brought new methods to bear. The changes and similarities in problems and methods over the past two decades offered an interesting viewpoint for the success in this area. One important area covered by the talks was overviews of the source and nature of the chemistry problems. The numerical analysts were uniformly grateful for the efforts to convey a better understanding of the problems and issues faced in computational chemistry. An important outcome was an understanding of the wide range of eigenproblems encountered in computational chemistry. The workshop covered problems involving self-consistent-field (SCF), configuration interaction (CI), intramolecular vibrational relaxation (IVR), and scattering problems. In atomic structure calculations using the Hartree-Fock method (SCF), the symmetric matrices can range from order hundreds to thousands. These matrices often include large clusters of eigenvalues which can be as much as 25% of the spectrum. However, if CI methods are also used, the matrix size can be between 10^4 and 10^9 where only one or a few extremal eigenvalues and eigenvectors are needed. Working with very large matrices has led to the development of …
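For readers unfamiliar with extremal-eigenvalue iterations: plain power iteration is the simplest scheme for one extremal eigenpair. The Davidson method discussed above is a far more refined subspace method; this sketch only conveys the basic idea of iterating the matrix on a vector.

```python
# Plain power iteration: repeatedly apply A and renormalize;
# the iterate converges to the dominant eigenvector, and the
# normalization factor to |lambda_max|. A is a dense list of rows.
def power_iteration(A, iters=200):
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)  # infinity-norm normalization
        v = [x / norm for x in w]
        lam = norm
    return lam, v

A = [[2.0, 1.0], [1.0, 2.0]]  # symmetric, eigenvalues 1 and 3
lam, v = power_iteration(A)
```

Davidson-type methods improve on this by building a growing subspace and using the (near-)diagonal dominance typical of CI matrices, which is why they dominate in that setting.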

  7. Integrating Micro-computers with a Centralized DBMS: ORACLE, SEED AND INGRES

    NASA Technical Reports Server (NTRS)

    Hoerger, J.

    1984-01-01

Users of ADABAS, a relational-like data base management system, and its data base programming language (NATURAL) are acquiring microcomputers with hopes of solving their individual word processing, office automation, decision support, and simple data processing problems. As processor speeds, memory sizes, and disk storage capacities increase, individual departments begin to maintain "their own" data base on "their own" micro-computer. This situation can adversely affect several of the primary goals set for implementing a centralized DBMS. In order to avoid this potential problem, these micro-computers must be integrated with the centralized DBMS. An easy to use and flexible means for transferring logical data base files between the central data base machine and micro-computers must be provided. Some of the problems encountered in an effort to accomplish this integration, and possible solutions, are discussed.

  8. The benefits of computer-generated feedback for mathematics problem solving.

    PubMed

    Fyfe, Emily R; Rittle-Johnson, Bethany

    2016-07-01

    The goal of the current research was to better understand when and why feedback has positive effects on learning and to identify features of feedback that may improve its efficacy. In a randomized experiment, second-grade children received instruction on a correct problem-solving strategy and then solved a set of relevant problems. Children were assigned to receive no feedback, immediate feedback, or summative feedback from the computer. On a posttest the following day, feedback resulted in higher scores relative to no feedback for children who started with low prior knowledge. Immediate feedback was particularly effective, facilitating mastery of the material for children with both low and high prior knowledge. Results suggest that minimal computer-generated feedback can be a powerful form of guidance during problem solving. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Toward Theory-Based Instruction in Scientific Problem Solving.

    ERIC Educational Resources Information Center

    Heller, Joan I.; And Others

    Several empirical and theoretical analyses related to scientific problem-solving are reviewed, including: detailed studies of individuals at different levels of expertise, and computer models simulating some aspects of human information processing during problem solving. Analysis of these studies has revealed many facets about the nature of the…

  10. Relationship of Selected Abilities to Problem Solving Performance.

    ERIC Educational Resources Information Center

    Harmel, Sarah Jane

    This study investigated five ability tests related to the water-jug problem. Previous analyses identified two processes used during solution: means-ends analysis and memory of visited states. Subjects were 240 undergraduate psychology students. A real-time computer system presented the problem and recorded responses. Ability tests were paper and…

  11. Problem Solving Software for Math Classes.

    ERIC Educational Resources Information Center

    Troutner, Joanne

    1987-01-01

    Described are 10 computer software programs for problem solving related to mathematics. Programs described are: (1) Box Solves Story Problems; (2) Safari Search; (3) Puzzle Tanks; (4) The King's Rule; (5) The Factory; (6) The Royal Rules; (7) The Enchanted Forest; (8) Gears; (9) The Super Factory; and (10) Creativity Unlimited. (RH)

  12. Computer-Mediated Assessment of Higher-Order Thinking Development

    ERIC Educational Resources Information Center

    Tilchin, Oleg; Raiyn, Jamal

    2015-01-01

    Solving complicated problems in a contemporary knowledge-based society requires higher-order thinking (HOT). The most productive way to encourage development of HOT in students is through use of the Problem-based Learning (PBL) model. This model organizes learning by solving corresponding problems relative to study courses. Students are directed…

  13. Amoeba-inspired nanoarchitectonic computing implemented using electrical Brownian ratchets.

    PubMed

    Aono, M; Kasai, S; Kim, S-J; Wakabayashi, M; Miwa, H; Naruse, M

    2015-06-12

    In this study, we extracted the essential spatiotemporal dynamics that allow an amoeboid organism to solve a computationally demanding problem and adapt to its environment, thereby proposing a nature-inspired nanoarchitectonic computing system, which we implemented using a network of nanowire devices called 'electrical Brownian ratchets (EBRs)'. By utilizing the fluctuations generated from thermal energy in nanowire devices, we used our system to solve the satisfiability problem, which is a highly complex combinatorial problem related to a wide variety of practical applications. We evaluated the dependency of the solution search speed on its exploration parameter, which characterizes the fluctuation intensity of EBRs, using a simulation model of our system called 'AmoebaSAT-Brownian'. We found that AmoebaSAT-Brownian enhanced the solution searching speed dramatically when we imposed some constraints on the fluctuations in its time series and it outperformed a well-known stochastic local search method. These results suggest a new computing paradigm, which may allow high-speed problem solving to be implemented by interacting nanoscale devices with low power consumption.
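For context, the family of "well-known stochastic local search methods" that AmoebaSAT-Brownian is compared against can be sketched with a minimal WalkSAT-style solver. This is an illustrative sketch of stochastic local search in general, not the authors' algorithm; the tiny formula at the end is hypothetical.

```python
import random

# Minimal WalkSAT-style stochastic local search for SAT.
# Clauses are lists of nonzero ints: literal k means variable k,
# literal -k means its negation (variables are 1-indexed).
def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars)]
    lit = lambda l: assign[abs(l) - 1] if l > 0 else not assign[abs(l) - 1]
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(lit(l) for l in c)]
        if not unsat:
            return assign                 # all clauses satisfied
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause)) - 1   # random-walk move
        else:
            def broken(v):                # greedy move: flip the variable
                assign[v] = not assign[v] # that breaks the fewest clauses
                b = sum(1 for c in clauses if not any(lit(l) for l in c))
                assign[v] = not assign[v]
                return b
            var = min((abs(l) - 1 for l in clause), key=broken)
        assign[var] = not assign[var]
    return None                           # gave up

# Hypothetical instance: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
model = walksat(clauses, 3)
```

The `p` parameter plays a role loosely analogous to the exploration parameter of the EBR fluctuations: it trades off random exploration against greedy descent.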

  14. Brief exposure to a self-paced computer-based reading programme and how it impacts reading ability and behaviour problems.

    PubMed

    Hughes, J Antony; Phillips, Gordon; Reed, Phil

    2013-01-01

Basic literacy skills underlie much future adult functioning, and are targeted in children through a variety of means. Children with reading problems were exposed either to a self-paced computer programme that focused on improving phonetic ability, or underwent a classroom-based reading intervention. Exposure was limited to three 40-minute sessions a week, for six weeks. The children were assessed in terms of their reading, spelling, and mathematics abilities, as well as for their externalising and internalising behaviour problems, before the programme commenced, and immediately after the programme terminated. Relative to the control group, the computer programme improved reading by about seven months in boys (but not in girls), but had no impact on either spelling or mathematics. Children on the programme also demonstrated fewer externalising and internalising behaviour problems than the control group. The results suggest that brief exposure to a self-paced phonetic computer-teaching programme had some benefits for the sample.

  15. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey

    2018-02-01

At the present stage of computer technology development it is possible to study the properties and processes in complex systems at molecular and even atomic levels, for example, by means of molecular dynamics methods. The most interesting problems are those related to the study of complex processes under real physical conditions. Solving such problems requires the use of high performance computing systems of various types, for example, GRID systems and HPC clusters. Given such time-consuming computational tasks, software is needed for automatic and unified monitoring of the computations. A complex computational task can be performed over different HPC systems. It requires output data synchronization between the storage chosen by a scientist and the HPC system used for computations. The design of the computational domain is also quite a problem. It requires complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes a prototype of a cloud service intended for the design of large atomistic systems for further detailed molecular dynamics calculations and for computational management of these calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.

  16. A Purposeful MOOC to Alleviate Insufficient CS Education in Finnish Schools

    ERIC Educational Resources Information Center

    Kurhila, Jaakko; Vihavainen, Arto

    2015-01-01

    The Finnish national school curriculum, effective from 2004, does not include any topics related to Computer Science (CS). To alleviate the problem that school students are not able to study CS-related topics, the Department of Computer Science at the University of Helsinki prepared a completely online course that is open to pupils and students in…

  17. On Riemann solvers and kinetic relations for isothermal two-phase flows with surface tension

    NASA Astrophysics Data System (ADS)

    Rohde, Christian; Zeiler, Christoph

    2018-06-01

    We consider a sharp interface approach for the inviscid isothermal dynamics of compressible two-phase flow that accounts for phase transition and surface tension effects. Kinetic relations are frequently used to fix the mass exchange and entropy dissipation rate across the interface. The complete unidirectional dynamics can then be understood by solving generalized two-phase Riemann problems. We present new well-posedness theorems for the Riemann problem and corresponding computable Riemann solvers that cover quite general equations of state, metastable input data and curvature effects. The new Riemann solver is used to validate different kinetic relations on physically relevant problems including a comparison with experimental data. Riemann solvers are building blocks for many numerical schemes that are used to track interfaces in two-phase flow. It is shown that the new Riemann solver enables reliable and efficient computations for physical situations that could not be treated before.

  18. Computational problems and signal processing in SETI

    NASA Technical Reports Server (NTRS)

    Deans, Stanley R.; Cullers, D. K.; Stauduhar, Richard

    1991-01-01

    The Search for Extraterrestrial Intelligence (SETI), currently being planned at NASA, will require that an enormous amount of data (on the order of 10 exp 11 distinct signal paths for a typical observation) be analyzed in real time by special-purpose hardware. Even though the SETI system design is not based on maximum entropy and Bayesian methods (partly due to the real-time processing constraint), it is expected that enough data will be saved to be able to apply these and other methods off line, where computational complexity is not an overriding issue. Interesting computational problems that relate directly to the system design for processing such an enormous amount of data have emerged. Some of these problems are discussed, along with the current status of their solution.

  19. Institute for Computational Mechanics in Propulsion (ICOMP)

    NASA Technical Reports Server (NTRS)

    Feiler, Charles E. (Editor)

    1991-01-01

    The Institute for Computational Mechanics in Propulsion (ICOMP) is operated jointly by Case Western Reserve University and the NASA Lewis Research Center in Cleveland, Ohio. The purpose of ICOMP is to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. The activities at ICOMP during 1990 are described.

  20. Institute for Computational Mechanics in Propulsion (ICOMP)

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The Institute for Computational Mechanics in Propulsion (ICOMP) is operated jointly by Case Western Reserve University and the NASA Lewis Research Center in Cleveland, Ohio. The purpose of ICOMP is to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. This report describes the activities at ICOMP during 1988.

  1. Institute for Computational Mechanics in Propulsion (ICOMP)

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The Institute for Computational Mechanics in Propulsion (ICOMP) is operated jointly by Case Western Reserve University and the NASA Lewis Research Center in Cleveland, Ohio. The purpose of ICOMP is to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. Described are the activities of ICOMP during 1987.

  2. The Role of Context-Related Parameters in Adults' Mental Computational Acts

    ERIC Educational Resources Information Center

    Naresh, Nirmala; Presmeg, Norma

    2012-01-01

    Researchers who have carried out studies pertaining to mental computation and everyday mathematics point out that adults and children reason intuitively based upon experiences within specific contexts; they use invented strategies of their own to solve real-life problems. We draw upon research areas of mental computation and everyday mathematics…

  3. Institute for Computational Mechanics in Propulsion (ICOMP) fourth annual review, 1989

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Institute for Computational Mechanics in Propulsion (ICOMP) is operated jointly by Case Western Reserve University and the NASA Lewis Research Center. The purpose of ICOMP is to develop techniques to improve problem solving capabilities in all aspects of computational mechanics related to propulsion. The activities at ICOMP during 1989 are described.

  4. Institute for Computational Mechanics in Propulsion (ICOMP)

    NASA Technical Reports Server (NTRS)

    Feiler, Charles E. (Editor)

    1992-01-01

    The Institute for Computational Mechanics in Propulsion (ICOMP) is a combined activity of Case Western Reserve University, Ohio Aerospace Institute (OAI) and NASA Lewis. The purpose of ICOMP is to develop techniques to improve problem solving capabilities in all aspects of computational mechanics related to propulsion. The activities at ICOMP during 1991 are described.

  5. a Novel Discrete Optimal Transport Method for Bayesian Inverse Problems

    NASA Astrophysics Data System (ADS)

    Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.

    2017-12-01

    We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.
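    The abstract credits much of the method's efficiency to matrix-scaling solutions of the underlying linear programming problem. As a rough illustration of that idea only (not the authors' AET implementation; all names and parameter values below are invented), the following sketch computes an entropy-regularized discrete transport plan between two weighted sample sets by Sinkhorn matrix scaling:

```python
import numpy as np

def sinkhorn_plan(cost, a, b, reg=0.5, n_iter=500):
    """Entropy-regularized discrete optimal transport via Sinkhorn matrix
    scaling: alternately rescale rows and columns of the Gibbs kernel
    until both marginals match."""
    K = np.exp(-cost / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                   # match column marginal b
        u = a / (K @ v)                     # match row marginal a
    return u[:, None] * K * v[None, :]      # plan P = diag(u) K diag(v)

# Two "prior" samples at x and two "posterior" samples at y, uniform
# weights, squared distance as the transport cost (illustrative values).
x, y = np.array([0.0, 1.0]), np.array([0.5, 1.5])
cost = (x[:, None] - y[None, :]) ** 2
P = sinkhorn_plan(cost, np.array([0.5, 0.5]), np.array([0.5, 0.5]))
```

    The resulting plan P moves prior mass to posterior locations; in sample-based methods of this family, posterior moments are then computed from the transported weights.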

  6. Attentional bias and disinhibition toward gaming cues are related to problem gaming in male adolescents.

    PubMed

    van Holst, Ruth J; Lemmens, Jeroen S; Valkenburg, Patti M; Peter, Jochen; Veltman, Dick J; Goudriaan, Anna E

    2012-06-01

    The aim of this study was to examine whether behavioral tendencies commonly related to addictive behaviors are also related to problematic computer and video game playing in adolescents. The study of attentional bias and response inhibition, characteristic for addictive disorders, is relevant to the ongoing discussion on whether problematic gaming should be classified as an addictive disorder. We tested the relation between self-reported levels of problem gaming and two behavioral domains: attentional bias and response inhibition. Ninety-two male adolescents performed two attentional bias tasks (addiction-Stroop, dot-probe) and a behavioral inhibition task (go/no-go). Self-reported problem gaming was measured by the game addiction scale, based on the Diagnostic and Statistical Manual of Mental Disorders-fourth edition criteria for pathological gambling and time spent on computer and/or video games. Male adolescents with higher levels of self-reported problem gaming displayed signs of error-related attentional bias to game cues. Higher levels of problem gaming were also related to more errors on response inhibition, but only when game cues were presented. These findings are in line with the findings of attentional bias reported in clinically recognized addictive disorders, such as substance dependence and pathological gambling, and contribute to the discussion on the proposed concept of "Addiction and Related Disorders" (which may include non-substance-related addictive behaviors) in the Diagnostic and Statistical Manual of Mental Disorders-fourth edition. Copyright © 2012 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  7. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    NASA Astrophysics Data System (ADS)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of hardware-limited resources. In systems with non-deterministic behaviour, resource management is a key point that guarantees their acceptable performance and proper working. This is a very broad problem that poses many challenges in the financial, transport, water and food, health, and other areas. We focus on computer systems, with attention paid to cache memory, and propose an analytical model that connects the non-extensive entropy formalism, long-range dependencies, management of system resources and queuing theory. The analytical results obtained are related to a practical experiment showing interesting and valuable results.

  8. A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path.

    PubMed

    Xie, Zhiqiang; Shao, Xia; Xin, Yu

    2016-01-01

    To solve the problem of task scheduling in the cloud computing system, this paper proposes a scheduling algorithm for cloud computing based on the driver of dynamic essential path (DDEP). This algorithm applies a predecessor-task layer priority strategy to solve the problem of constraint relations among task nodes. The strategy assigns a different priority value to every task node based on the scheduling order of the task nodes as affected by the constraint relations among them, and the task node list is generated according to these priority values. To address the scheduling-order problem in which task nodes have the same priority value, the dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduling task nodes based on the actual computation cost and communication cost of each task node in the scheduling process. The task node that has the longest dynamic essential path is scheduled first, as the completion time of the task graph is indirectly influenced by the finishing time of the task nodes in the longest dynamic essential path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task Makespan in most cases and meet a high-quality performance objective.
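    The tie-breaking idea described above can be sketched as follows (a hedged illustration, not the authors' DDEP implementation; the task graph and all costs are invented): for each task, compute the longest downstream path counting node computation costs and edge communication costs, then schedule longer paths first.

```python
from functools import lru_cache

# Hypothetical task graph: node -> list of (successor, communication cost).
succ = {'A': [('B', 2), ('C', 1)], 'B': [('D', 3)], 'C': [('D', 1)], 'D': []}
comp = {'A': 4, 'B': 2, 'C': 5, 'D': 1}   # computation cost of each task

@lru_cache(maxsize=None)
def essential_path(node):
    """Length of the longest path from `node` to an exit task, counting
    computation costs of nodes and communication costs of edges."""
    tails = [c + essential_path(s) for s, c in succ[node]]
    return comp[node] + (max(tails) if tails else 0)

# Among tasks of equal layer priority, the one with the longest essential
# path is scheduled first, since it bounds the task graph's finish time.
order = sorted(succ, key=essential_path, reverse=True)
```

    On this toy graph, A's essential path (A → B or C → D) dominates, and among B and C the longer branch through C is preferred.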

  9. A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path

    PubMed Central

    Xie, Zhiqiang; Shao, Xia; Xin, Yu

    2016-01-01

    To solve the problem of task scheduling in the cloud computing system, this paper proposes a scheduling algorithm for cloud computing based on the driver of dynamic essential path (DDEP). This algorithm applies a predecessor-task layer priority strategy to solve the problem of constraint relations among task nodes. The strategy assigns a different priority value to every task node based on the scheduling order of the task nodes as affected by the constraint relations among them, and the task node list is generated according to these priority values. To address the scheduling-order problem in which task nodes have the same priority value, the dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduling task nodes based on the actual computation cost and communication cost of each task node in the scheduling process. The task node that has the longest dynamic essential path is scheduled first, as the completion time of the task graph is indirectly influenced by the finishing time of the task nodes in the longest dynamic essential path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task Makespan in most cases and meet a high-quality performance objective. PMID:27490901

  10. Discrete-time neural network for fast solving large linear L1 estimation problems and its application to image restoration.

    PubMed

    Xia, Youshen; Sun, Changyin; Zheng, Wei Xing

    2012-05-01

    There is growing interest in solving linear L1 estimation problems for sparsity of the solution and robustness against non-Gaussian noise. This paper proposes a discrete-time neural network which can quickly solve large linear L1 estimation problems. The proposed neural network has a fixed computational step length and is proved to be globally convergent to an optimal solution. The proposed neural network is then efficiently applied to image restoration. Numerical results show that the proposed neural network is not only efficient in solving degenerate problems resulting from the nonunique solutions of the linear L1 estimation problems but also needs much less computational time than the related algorithms in solving both linear L1 estimation and image restoration problems.
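    The paper's neural network is not reproduced here, but the underlying linear L1 (least-absolute-deviations) problem can be illustrated with a standard alternative solver, iteratively reweighted least squares; the data below are invented to show the robustness to a gross outlier that motivates the L1 criterion.

```python
import numpy as np

def l1_estimate(A, b, n_iter=200, eps=1e-8):
    """Solve argmin_x ||A x - b||_1 by iteratively reweighted least
    squares (IRLS): each pass solves a weighted least-squares problem
    that downweights rows with large residuals."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]           # L2 starting point
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(A @ x - b), eps)   # residual weights
        AtWA = A.T @ (A * w[:, None])                  # A^T W A
        AtWb = A.T @ (w * b)                           # A^T W b
        x = np.linalg.solve(AtWA, AtWb)
    return x

# Line fit y = x through four clean points plus one gross outlier;
# the L1 fit ignores the outlier where ordinary least squares would not.
A = np.array([[1.0, t] for t in range(5)])
b = np.array([0.0, 1.0, 2.0, 3.0, 100.0])
x = l1_estimate(A, b)    # close to intercept 0, slope 1
```

    Ordinary least squares on the same data would tilt the line heavily toward the outlier, which is exactly the sensitivity the L1 criterion avoids.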

  11. Quantum speedup in solving the maximal-clique problem

    NASA Astrophysics Data System (ADS)

    Chang, Weng-Long; Yu, Qi; Li, Zhaokai; Chen, Jiahui; Peng, Xinhua; Feng, Mang

    2018-03-01

    The maximal-clique problem, to find the maximally sized clique in a given graph, is classically an NP-complete computational problem, which has potential applications ranging from electrical engineering, computational chemistry, and bioinformatics to social networks. Here we develop a quantum algorithm to solve the maximal-clique problem for any graph G with n vertices with quadratic speedup over its classical counterparts, where the time and spatial complexities are reduced to, respectively, O (√{2n}) and O (n2) . With respect to oracle-related quantum algorithms for the NP-complete problems, we identify our algorithm as optimal. To justify the feasibility of the proposed quantum algorithm, we successfully solve a typical clique problem for a graph G with two vertices and one edge by carrying out a nuclear magnetic resonance experiment involving four qubits.
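    For scale, the classical baseline that the quantum algorithm quadratically accelerates is an exhaustive search over the 2^n vertex subsets; a minimal sketch (illustrative only, with the paper's two-vertex, one-edge instance as the example) is:

```python
from itertools import combinations

def max_clique(n, edges):
    """Exhaustive classical search over all vertex subsets, largest
    first; a Grover-style search over the same 2^n space is what yields
    the quadratic O(sqrt(2^n)) quantum speedup."""
    edge_set = {frozenset(e) for e in edges}
    for k in range(n, 0, -1):
        for s in combinations(range(n), k):
            if all(frozenset(p) in edge_set for p in combinations(s, 2)):
                return s          # first hit at size k is a maximum clique
    return ()

# The demonstration instance from the abstract: two vertices, one edge.
clique = max_clique(2, [(0, 1)])
```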

  12. Hypercluster Parallel Processor

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Cole, Gary L.; Milner, Edward J.; Quealy, Angela

    1992-01-01

    Hypercluster computer system includes multiple digital processors, operation of which coordinated through specialized software. Configurable according to various parallel-computing architectures of shared-memory or distributed-memory class, including scalar computer, vector computer, reduced-instruction-set computer, and complex-instruction-set computer. Designed as flexible, relatively inexpensive system that provides single programming and operating environment within which one can investigate effects of various parallel-computing architectures and combinations on performance in solution of complicated problems like those of three-dimensional flows in turbomachines. Hypercluster software and architectural concepts are in public domain.

  13. Traveling front solutions to directed diffusion-limited aggregation, digital search trees, and the Lempel-Ziv data compression algorithm.

    PubMed

    Majumdar, Satya N

    2003-08-01

    We use the traveling front approach to derive exact asymptotic results for the statistics of the number of particles in a class of directed diffusion-limited aggregation models on a Cayley tree. We point out that some aspects of these models are closely connected to two different problems in computer science, namely, the digital search tree problem in data structures and the Lempel-Ziv algorithm for data compression. The statistics of the number of particles studied here is related to the statistics of height in digital search trees which, in turn, is related to the statistics of the length of the longest word formed by the Lempel-Ziv algorithm. Implications of our results to these computer science problems are pointed out.

  14. Traveling front solutions to directed diffusion-limited aggregation, digital search trees, and the Lempel-Ziv data compression algorithm

    NASA Astrophysics Data System (ADS)

    Majumdar, Satya N.

    2003-08-01

    We use the traveling front approach to derive exact asymptotic results for the statistics of the number of particles in a class of directed diffusion-limited aggregation models on a Cayley tree. We point out that some aspects of these models are closely connected to two different problems in computer science, namely, the digital search tree problem in data structures and the Lempel-Ziv algorithm for data compression. The statistics of the number of particles studied here is related to the statistics of height in digital search trees which, in turn, is related to the statistics of the length of the longest word formed by the Lempel-Ziv algorithm. Implications of our results to these computer science problems are pointed out.
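    The digital-search-tree connection can be made concrete with a minimal sketch (illustrative only, not the papers' analysis): each key branches on its successive bits until an empty slot is found, and the insertion depths are exactly the height statistics in question.

```python
class Node:
    """A digital search tree node holding one key and up to two children."""
    def __init__(self, key):
        self.key, self.child = key, {}

def dst_insert(root, key):
    """Insert a bit string: branch on bit 0, bit 1, ... of the key until
    an empty child slot is found; return the depth where it was stored."""
    node, depth = root, 0
    while True:
        bit = key[depth]                     # keys assumed longer than depth
        if bit not in node.child:
            node.child[bit] = Node(key)
            return depth + 1
        node, depth = node.child[bit], depth + 1

root = Node('')                              # root holds the first (empty) key
depths = [dst_insert(root, k) for k in ['0110', '1010', '0011', '0101']]
height = max(depths)
```

    The distribution of `height` over random key sets is the quantity tied, via the traveling-front analysis, to the longest word produced by the Lempel-Ziv parsing.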

  15. A new approach to impulsive rendezvous near circular orbit

    NASA Astrophysics Data System (ADS)

    Carter, Thomas; Humi, Mayer

    2012-04-01

    A new approach is presented for the problem of planar optimal impulsive rendezvous of a spacecraft in an inertial frame near a circular orbit in a Newtonian gravitational field. The total characteristic velocity to be minimized is replaced by a related characteristic-value function, and this related optimization problem can be solved in closed form. The solution of this problem is shown to approach the solution of the original problem in the limit as the boundary conditions approach those of a circular orbit. Using a form of primer-vector theory, the problem is formulated in a way that leads to relatively easy calculation of the optimal velocity increments. A certain vector that can easily be calculated from the boundary conditions determines the number of impulses required for solution of the optimization problem and is also useful in the computation of these velocity increments. Necessary and sufficient conditions for boundary conditions to require exactly three nonsingular non-degenerate impulses for solution of the related optimal rendezvous problem, and a means of calculating these velocity increments, are presented. A simple example of a three-impulse rendezvous problem is solved and the resulting trajectory is depicted. Optimal non-degenerate nonsingular two-impulse rendezvous for the related problem is found to consist of four categories of solutions, depending on the four ways the primer-vector locus intersects the unit circle. Necessary and sufficient conditions for each category of solutions are presented. The regions of boundary values that admit each category of solutions of the related problem are found, and in each case a closed-form solution for the optimal velocity increments is presented. Similar results are presented for the simpler optimal rendezvous problems that require only one impulse. For brevity, degenerate and singular solutions are not discussed in detail, but should be presented in a following study. Although this approach is thought to provide simpler computations than existing methods, its main contribution may be in establishing a new approach to the more general problem.

  16. ELM Meets Urban Big Data Analysis: Case Studies

    PubMed Central

    Chen, Huajun; Chen, Jiaoyan

    2016-01-01

    In recent years, the rapid progress of urban computing has engendered big issues, which create both opportunities and challenges. The heterogeneous and large volume of data and the big difference between the physical and virtual worlds have resulted in many problems in quickly solving practical problems in urban computing. In this paper, we propose a general application framework of ELM for urban computing. We present several real case studies of the framework, such as smog-related health-hazard prediction and optimal retail store placement. Experiments involving urban data in China show the efficiency, accuracy, and flexibility of our proposed framework. PMID:27656203

  17. Approximate Bayesian computation for spatial SEIR(S) epidemic models.

    PubMed

    Brown, Grant D; Porter, Aaron T; Oleson, Jacob J; Hinman, Jessica A

    2018-02-01

    Approximate Bayesian Computation (ABC) provides an attractive approach to estimation in complex Bayesian inferential problems for which evaluation of the kernel of the posterior distribution is impossible or computationally expensive. These highly parallelizable techniques have been successfully applied to many fields, particularly in cases where more traditional approaches such as Markov chain Monte Carlo (MCMC) are impractical. In this work, we demonstrate the application of approximate Bayesian inference to spatially heterogeneous Susceptible-Exposed-Infectious-Removed (SEIR) stochastic epidemic models. These models have a tractable posterior distribution; nevertheless, MCMC techniques become computationally infeasible for moderately sized problems. We discuss the practical implementation of these techniques via the open source ABSEIR package for R. The performance of ABC relative to traditional MCMC methods in a small problem is explored under simulation, as well as in the spatially heterogeneous context of the 2014 epidemic of Chikungunya in the Americas. Copyright © 2017 Elsevier Ltd. All rights reserved.
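    The ABSEIR package itself is not shown here; instead, a generic rejection-ABC loop illustrates the technique on an invented toy problem (inferring a binomial rate from a final infection count, not the spatial SEIR model): draw from the prior, simulate, and keep draws whose simulated data land close to the observation.

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, eps, n=2000):
    """Rejection ABC: accept prior draws whose simulated data fall
    within `eps` of the observation under `distance`."""
    accepted = []
    for _ in range(n):
        theta = prior_sample()
        if distance(simulate(theta), observed) <= eps:
            accepted.append(theta)
    return accepted

random.seed(0)
observed = 35   # e.g. 35 of 100 individuals ever infected (invented)
post = abc_rejection(
    observed,
    simulate=lambda p: sum(random.random() < p for _ in range(100)),
    prior_sample=random.random,              # Uniform(0, 1) prior
    distance=lambda a, b: abs(a - b),
    eps=3,
)
posterior_mean = sum(post) / len(post)
```

    Shrinking `eps` tightens the approximation to the true posterior at the cost of a lower acceptance rate, which is where the parallelizability noted in the abstract pays off.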

  18. Fast and Efficient Discrimination of Traveling Salesperson Problem Stimulus Difficulty

    ERIC Educational Resources Information Center

    Dry, Matthew J.; Fontaine, Elizabeth L.

    2014-01-01

    The Traveling Salesperson Problem (TSP) is a computationally difficult combinatorial optimization problem. In spite of its relative difficulty, human solvers are able to generate close-to-optimal solutions in a close-to-linear time frame, and it has been suggested that this is due to the visual system's inherent sensitivity to certain geometric…

  19. How Readability and Topic Incidence Relate to Performance on Mathematics Story Problems in Computer-Based Curricula

    ERIC Educational Resources Information Center

    Walkington, Candace; Clinton, Virginia; Ritter, Steven N.; Nathan, Mitchell J.

    2015-01-01

    Solving mathematics story problems requires text comprehension skills. However, previous studies have found few connections between traditional measures of text readability and performance on story problems. We hypothesized that recently developed measures of readability and topic incidence measured by text-mining tools may illuminate associations…

  20. Fundamental organometallic reactions: Applications on the CYBER 205

    NASA Technical Reports Server (NTRS)

    Rappe, A. K.

    1984-01-01

    Two of the most challenging problems of organometallic chemistry (loosely defined) are pollution control, with the large space velocities needed, and nitrogen fixation, a process so capably done by nature and so relatively poorly done by man (industry). For a computational chemist these problems are on the fringe of what is possible with conventional computers (large models needed and accurate energetics required). A summary of the algorithmic modifications needed to address these problems on a vector processor such as the CYBER 205, and a sketch of findings to date on deNOx catalysis and nitrogen fixation, are presented.

  1. Identification and addressing reduction-related misconceptions

    NASA Astrophysics Data System (ADS)

    Gal-Ezer, Judith; Trakhtenbrot, Mark

    2016-07-01

    Reduction is one of the key techniques used for problem-solving in computer science. In particular, in the theory of computation and complexity (TCC), mapping and polynomial reductions are used for analysis of decidability and computational complexity of problems, including the core concept of NP-completeness. Reduction is a highly abstract technique that involves revealing close non-trivial connections between problems that often seem to have nothing in common. As a result, proper understanding and application of reduction is a serious challenge for students and a source of numerous misconceptions. The main contribution of this paper is detection of such misconceptions, analysis of their roots, and proposing a way to address them in an undergraduate TCC course. Our observations suggest that the main source of the misconceptions is the false intuitive rule "the bigger is a set/problem, the harder it is to solve". Accordingly, we developed a series of exercises for proactive prevention of these misconceptions.
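    A minimal concrete example of the kind of mapping reduction discussed (chosen here for illustration, not taken from the paper) is CLIQUE ≤p INDEPENDENT-SET via graph complementation: the reduction itself is trivially polynomial even though both problems are NP-complete.

```python
from itertools import combinations

def complement(n, edges):
    """Polynomial-time reduction CLIQUE -> INDEPENDENT-SET: the instance
    (G, k) maps to (complement of G, k), since a clique in G is exactly
    an independent set in the complement."""
    all_pairs = {frozenset(p) for p in combinations(range(n), 2)}
    return all_pairs - {frozenset(e) for e in edges}

def has_independent_set(n, edges, k):
    """Brute-force decider, exponential in n; for illustration only."""
    return any(
        all(frozenset(p) not in edges for p in combinations(s, 2))
        for s in combinations(range(n), k)
    )

# A triangle on {0, 1, 2} plus an isolated vertex 3: G has a 3-clique,
# so the complement has an independent set of size 3 (and none of size 4).
n, edges = 4, [(0, 1), (1, 2), (0, 2)]
co_edges = complement(n, edges)
has3 = has_independent_set(n, co_edges, 3)
has4 = has_independent_set(n, co_edges, 4)
```

    Note that the reduced instance is neither "bigger" nor "harder" in any intuitive sense, which is precisely the point against the misconception the paper identifies.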

  2. THE COMPUTER AND SMALL BUSINESS.

    DTIC Science & Technology

    The place of the computer in small business is investigated with respect to what type of problems it can solve for small business and how the small...firm can acquire time on one. The decision-making process and the importance of information is discussed in relation to small business . Several...applications of computers are examined to show how the firm can use the computer in day-to-day business operations. The capabilities of a digital computer

  3. Computers and Management Structure: Some Empirical Findings Re-examined

    ERIC Educational Resources Information Center

    Robey, Daniel

    1977-01-01

    Studies that relate computerization to either centralization or decentralization of organizational decision making are reviewed. Four issues are addressed that relate to conceptual or methodological problems. (Author/MLF)

  4. Dynamic optimization of chemical processes using ant colony framework.

    PubMed

    Rajesh, J; Gupta, K; Kusumakar, H S; Jayaraman, V K; Kulkarni, B D

    2001-11-01

    The ant colony framework is illustrated by considering dynamic optimization of six important benchmark examples. This new computational tool is simple to implement and can tackle problems with state as well as terminal constraints in a straightforward fashion. It requires fewer grid points to reach the global optimum at relatively low computational effort. The examples, with varying degrees of complexity, analyzed here illustrate its potential for solving a large class of process optimization problems in chemical engineering.

  5. Dynamically Reconfigurable Approach to Multidisciplinary Problems

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalie M.; Lewis, Robert Michael

    2003-01-01

    The complexity and autonomy of the constituent disciplines and the diversity of the disciplinary data formats make the task of integrating simulations into a multidisciplinary design optimization problem extremely time-consuming and difficult. We propose a dynamically reconfigurable approach to MDO problem formulation wherein an appropriate implementation of the disciplinary information results in basic computational components that can be combined into different MDO problem formulations and solution algorithms, including hybrid strategies, with relative ease. The ability to re-use the computational components is due to the special structure of the MDO problem. We believe that this structure can and should be used to formulate and solve optimization problems in the multidisciplinary context. The present work identifies the basic computational components in several MDO problem formulations and examines the dynamically reconfigurable approach in the context of a popular class of optimization methods. We show that if the disciplinary sensitivity information is implemented in a modular fashion, the transfer of sensitivity information among the formulations under study is straightforward. This enables not only experimentation with a variety of problem formulations in a research environment, but also the flexible use of formulations in a production design environment.

  6. Applications of hybrid and digital computation methods in aerospace-related sciences and engineering. [problem solving methods at the University of Houston

    NASA Technical Reports Server (NTRS)

    Huang, C. J.; Motard, R. L.

    1978-01-01

    The computing equipment in the engineering systems simulation laboratory of the Houston University Cullen College of Engineering is described and its advantages are summarized. The application of computer techniques in aerospace-related research in psychology and in chemical, civil, electrical, industrial, and mechanical engineering is described in abstracts of 84 individual projects and in reprints of published reports. The research supports programs in acoustics, energy technology, systems engineering, and environment management as well as aerospace engineering.

  7. Institute for Computational Mechanics in Propulsion (ICOMP)

    NASA Technical Reports Server (NTRS)

    Feiler, Charles E. (Editor)

    1994-01-01

    The Institute for Computational Mechanics in Propulsion (ICOMP) is operated by the Ohio Aerospace Institute (OAI) and the NASA Lewis Research Center in Cleveland, Ohio. The purpose of ICOMP is to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. This report describes the accomplishments and activities at ICOMP during 1993.

  8. DEP: a computer program for evaluating lumber drying costs and investments

    Treesearch

    Stewart Holmes; George B. Harpole; Edward Bilek

    1983-01-01

    The DEP computer program is a modified discounted cash flow program designed for economic analysis of wood-drying processes. Wood-drying processes differ from other processes because of the large amounts of working capital required to finance inventories, and because of the relatively large shares of costs charged to inventory...

  9. 29 CFR 778.313 - Computing overtime pay under the Act for employees compensated on task basis.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Computing overtime pay under the Act for employees compensated on task basis. 778.313 Section 778.313 Labor Regulations Relating to Labor (Continued) WAGE AND... TO REGULATIONS OVERTIME COMPENSATION Special Problems "task" Basis of Payment § 778.313 Computing...

  10. Computers, Information and Communication Technology within Society--Educational-Political and Pedagogical Reactions to New Demands.

    ERIC Educational Resources Information Center

    Kell, Adolf; Schmidt, Anne

    1989-01-01

    Discusses the pedagogical and didactic problems of education relative to the use of computers, the application of long-distance transfer of data, and the combination of computers, machines, instruments, and media in integrated systems. Sets forth seven pedagogical postulates in order to analyze developments worthy of support and those to be…

  11. Reports of alcohol-related problems and alcohol dependence for demographic subgroups using interactive voice response versus telephone surveys: the 2005 US National Alcohol Survey.

    PubMed

    Midanik, Lorraine T; Greenfield, Thomas K

    2010-07-01

    Interactive voice response (IVR), a computer-based interviewing technique, can be used within a computer-assisted telephone interview (CATI) survey to increase privacy and the accuracy of reports of sensitive attitudes and behaviours. Previous research using the 2005 National Alcohol Survey indicated no overall significant differences between IVR and CATI responses to alcohol-related problems and alcohol dependence. To determine if this result holds for demographic subgroups that could respond differently to modes of data collection, this study compares the prevalence rates of lifetime and last-year alcohol-related problems by gender, ethnicity, age and income subgroups obtained by IVR versus continuous CATI interviewing. As part of the 2005 National Alcohol Survey, subsamples of English-speaking respondents were randomly assigned to an IVR group that received an embedded IVR module on alcohol-related problems (n = 450 lifetime drinkers) and a control group that were asked identical alcohol-related problem items using continuous CATI (n = 432 lifetime drinkers). Overall, there were few significant associations. Among lifetime drinkers, higher rates of legal problems were found for white and higher income respondents in the IVR group. For last-year drinkers, a higher percentage of indicators of alcohol dependence was found for Hispanic respondents and women respondents in the CATI group. Data on alcohol problems collected by CATI provide largely comparable results to those from an embedded IVR module. Thus, incorporation of IVR technology in a CATI interview does not appear strongly indicated even for several key subgroups.

  12. Reduction of community alcohol problems: computer simulation experiments in three counties.

    PubMed

    Holder, H D; Blose, J O

    1987-03-01

    A series of alcohol abuse prevention strategies was evaluated using computer simulation for three counties in the United States: Wake County, North Carolina, Washington County, Vermont and Alameda County, California. A system dynamics model composed of a network of interacting variables was developed for the pattern of alcoholic beverage consumption in a community. The relationship of community drinking patterns to various stimulus factors was specified in the model based on available empirical research. Stimulus factors included disposable income, alcoholic beverage prices, advertising exposure, minimum drinking age and changes in cultural norms. After a generic model was developed and validated on the national level, a computer-based system dynamics model was developed for each county, and a series of experiments was conducted to project the potential impact of specific prevention strategies. The project concluded that prevention efforts can both lower current levels of alcohol abuse and reduce projected increases in alcohol-related problems. Without such efforts, already high levels of alcohol-related family disruptions in the three counties could be expected to rise an additional 6% and drinking-related work problems 1-5%, over the next 10 years after controlling for population growth. Of the strategies tested, indexing the price of alcoholic beverages to the consumer price index in conjunction with the implementation of a community educational program with well-defined target audiences has the best potential for significant problem reduction in all three counties.

  13. A Spectral Algorithm for Envelope Reduction of Sparse Matrices

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.

    1993-01-01

    The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
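The core idea, associating a Laplacian with the matrix's nonzero pattern and sorting by an eigenvector, can be sketched in a few lines. The following is a minimal illustration of that spectral-reordering idea (not the authors' implementation, which works with sparse matrices and a Lanczos eigensolver); it scrambles a path-graph matrix and shows that sorting by the Fiedler vector restores a small envelope.

```python
import numpy as np

def envelope_size(A):
    """Envelope of a symmetric matrix: for each row i, the distance from
    the leftmost nonzero in row i to the diagonal, summed over rows."""
    total = 0
    for i in range(A.shape[0]):
        nz = np.nonzero(A[i, :i + 1])[0]
        if nz.size:
            total += i - nz[0]
    return total

def spectral_ordering(A):
    """Sort vertices by the Fiedler vector (eigenvector of the second-
    smallest eigenvalue) of the graph Laplacian of A's nonzero pattern."""
    adj = (A != 0).astype(float)
    np.fill_diagonal(adj, 0.0)
    lap = np.diag(adj.sum(axis=1)) - adj
    _, vecs = np.linalg.eigh(lap)        # eigenvalues in ascending order
    return np.argsort(vecs[:, 1])

# A path graph relabelled in a scrambled order has a needlessly wide
# envelope; the spectral ordering recovers the path (up to reversal).
n = 8
path = np.eye(n)
for i in range(n - 1):
    path[i, i + 1] = path[i + 1, i] = 1.0
perm = np.array([3, 7, 0, 5, 1, 6, 2, 4])
A = path[np.ix_(perm, perm)]             # scrambled path matrix

p = spectral_ordering(A)
B = A[np.ix_(p, p)]
print(envelope_size(A), envelope_size(B))  # reordered envelope is n - 1 = 7
```

Because the Fiedler vector of a path graph is strictly monotone in the vertex labels, the argsort recovers the path ordering exactly here; on general matrices the result is only a heuristic, as the record above notes.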

  14. Conjugate gradient based projection - A new explicit methodology for frictional contact

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Li, Maocheng; Sha, Desong

    1993-01-01

    With special attention towards the applicability to parallel computation or vectorization, a new and effective explicit approach for linear complementary formulations involving a conjugate gradient based projection methodology is proposed in this study for contact problems with Coulomb friction. The overall objectives are focussed towards providing an explicit methodology of computation for the complete contact problem with friction. In this regard, the primary idea for solving the linear complementary formulations stems from an established search direction which is projected to a feasible region determined by the non-negative constraint condition; this direction is then applied to the Fletcher-Reeves conjugate gradient method resulting in a powerful explicit methodology which possesses high accuracy, excellent convergence characteristics, fast computational speed and is relatively simple to implement for contact problems involving Coulomb friction.

  15. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

In this paper, a novel nature-inspired optimization algorithm is presented: the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm creates a surface with smell trails and then iterates the agents to resolve a path. It can be applied in computational settings that involve path-based problems, and its implementation can be treated as a shortest-path search over a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for solving NP-hard problems related to path discovery, as well as many practical optimization problems, and extensions of it can be applied to shortest-path problems generally.
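The record does not give enough detail to reproduce the smell-agent dynamics themselves, but the shortest-path formulation it targets can be stated concretely. As a generic point of reference (a standard Dijkstra search, not the smell-detection-agent algorithm), finding the shortest path between two nodes in a weighted graph looks like this:

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path between two nodes in a weighted directed graph
    given as {node: [(neighbor, weight), ...]}; returns (distance, path)."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == target:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return float("inf"), []
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[target], path[::-1]

g = {"a": [("b", 1), ("c", 4)],
     "b": [("c", 2), ("d", 6)],
     "c": [("d", 3)],
     "d": []}
print(dijkstra(g, "a", "d"))   # (6.0, ['a', 'b', 'c', 'd'])
```

Any agent-based path heuristic of the kind the record describes can be checked against such an exact baseline on small graphs.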

  16. Partially annotated bibliography for computer protection and related topics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huskamp, J.C.

    1976-07-20

References for the commonly cited technical papers in the area of computer protection are given. Great care is taken to exclude papers with no technical content or merit. For the purposes of this bibliography, computer protection is broadly defined to encompass all facets of the protection problem. The papers cover, but are not limited to, the topics of protection features in operating systems (e.g., MULTICS and HYDRA), hardware implementations of protection facilities (e.g., Honeywell 6180, System 250, BCC 5000, B6500), data base protection controls, confinement and protection models. Since computer protection is related to many other areas in computer science and electrical engineering, a bibliography of related areas is included after the protection bibliography. These sections also include articles of general interest in the named areas which are not necessarily related to protection.

  17. Solving search problems by strongly simulating quantum circuits

    PubMed Central

    Johnson, T. H.; Biamonte, J. D.; Clark, S. R.; Jaksch, D.

    2013-01-01

    Simulating quantum circuits using classical computers lets us analyse the inner workings of quantum algorithms. The most complete type of simulation, strong simulation, is believed to be generally inefficient. Nevertheless, several efficient strong simulation techniques are known for restricted families of quantum circuits and we develop an additional technique in this article. Further, we show that strong simulation algorithms perform another fundamental task: solving search problems. Efficient strong simulation techniques allow solutions to a class of search problems to be counted and found efficiently. This enhances the utility of strong simulation methods, known or yet to be discovered, and extends the class of search problems known to be efficiently simulable. Relating strong simulation to search problems also bounds the computational power of efficiently strongly simulable circuits; if they could solve all problems in P this would imply that all problems in NP and #P could be solved in polynomial time. PMID:23390585

  18. Distinct patterns of Internet and smartphone-related problems among adolescents by gender: Latent class analysis.

    PubMed

    Lee, Seung-Yup; Lee, Donghwan; Nam, Cho Rong; Kim, Da Yea; Park, Sera; Kwon, Jun-Gun; Kweon, Yong-Sil; Lee, Youngjo; Kim, Dai Jin; Choi, Jung-Seok

    2018-05-23

Background and objectives: The ubiquitous Internet connections of smartphones have weakened the traditional boundaries between computers and mobile phones. We sought to explore whether smartphone-related problems differ from those of computer use according to gender, using latent class analysis (LCA). Methods: After giving informed consent, 555 Korean middle-school students completed surveys on gaming, Internet use, and smartphone usage patterns. They also completed various psychosocial instruments. LCA was performed for the whole group and by gender. In addition to ANOVA and χ² tests, post-hoc tests were conducted to examine differences among the LCA subgroups. Results: In the whole group (n = 555), four subtypes were identified: dual-problem users (49.5%), problematic Internet users (7.7%), problematic smartphone users (32.1%), and "healthy" users (10.6%). Dual-problem users scored highest for addictive behaviors and other psychopathologies. The gender-stratified LCA revealed three subtypes for each gender: with the dual-problem and healthy subgroups common to both, a problematic Internet subgroup emerged among males, whereas a problematic smartphone subgroup emerged among females. Thus, distinct patterns were observed by gender, with a higher proportion of dual-problem users among males. While gaming was associated with problematic Internet use in males, aggression and impulsivity were associated with problematic smartphone use in females. Conclusions: A greater number of digital media-related problems was associated with worse outcomes on various psychosocial scales. Gaming may play a crucial role in males who display only Internet-related problems. The heightened impulsivity and aggression seen in our female problematic smartphone users requires further research.

  19. Parceling the Power.

    ERIC Educational Resources Information Center

    Hiatt, Blanchard; Gwynne, Peter

    1984-01-01

    To make computing power broadly available and truly friendly, both soft and hard meshing and synchronization problems will have to be solved. Possible solutions and research related to these problems are discussed. Topics considered include compilers, parallelism, networks, distributed sensors, dataflow, CEDAR system (using dataflow principles),…

  20. The influence of cardiorespiratory fitness on strategic, behavioral, and electrophysiological indices of arithmetic cognition in preadolescent children

    PubMed Central

    Moore, R. Davis; Drollette, Eric S.; Scudder, Mark R.; Bharij, Aashiv; Hillman, Charles H.

    2014-01-01

    The current study investigated the influence of cardiorespiratory fitness on arithmetic cognition in forty 9–10 year old children. Measures included a standardized mathematics achievement test to assess conceptual and computational knowledge, self-reported strategy selection, and an experimental arithmetic verification task (including small and large addition problems), which afforded the measurement of event-related brain potentials (ERPs). No differences in math achievement were observed as a function of fitness level, but all children performed better on math concepts relative to math computation. Higher fit children reported using retrieval more often to solve large arithmetic problems, relative to lower fit children. During the arithmetic verification task, higher fit children exhibited superior performance for large problems, as evidenced by greater d' scores, while all children exhibited decreased accuracy and longer reaction time for large relative to small problems, and incorrect relative to correct solutions. On the electrophysiological level, modulations of early (P1, N170) and late ERP components (P3, N400) were observed as a function of problem size and solution correctness. Higher fit children exhibited selective modulations for N170, P3, and N400 amplitude relative to lower fit children, suggesting that fitness influences symbolic encoding, attentional resource allocation and semantic processing during arithmetic tasks. The current study contributes to the fitness-cognition literature by demonstrating that the benefits of cardiorespiratory fitness extend to arithmetic cognition, which has important implications for the educational environment and the context of learning. PMID:24829556

  1. Artificial intelligence issues related to automated computing operations

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1989-01-01

    Large data processing installations represent target systems for effective applications of artificial intelligence (AI) constructs. The system organization of a large data processing facility at the NASA Marshall Space Flight Center is presented. The methodology and the issues which are related to AI application to automated operations within a large-scale computing facility are described. Problems to be addressed and initial goals are outlined.

  2. Teaching Differential Diagnosis by Computer: A Pathophysiological Approach

    ERIC Educational Resources Information Center

    Goroll, Allan H.; And Others

    1977-01-01

    An interactive, computer-based teaching exercise in diagnosis that emphasizes pathophysiology in the analysis of clinical data is described. Called the Jaundice Program, its objective is to simplify the pattern recognition problem by relating clinical findings to diagnosis via reference to disease mechanisms. (LBH)

  3. Three computational mise-en-scènes of red- and blue-shifted hydrogen bonding motifs: Concept of negative intramolecular coupling-What else?

    NASA Astrophysics Data System (ADS)

    Kryachko, Eugene S.

This work is an attempt to rethink some problems related to the blue-shifted "hydrogen bonds" that were left not fully resolved in the past decade. The impetus for this rethinking comes from three computational mise-en-scènes of red- and blue-shifted hydrogen bonding motifs, which are studied thoroughly in this work, thereby resolving the above problems.

  4. Computational modeling of the cell-autonomous mammalian circadian oscillator.

    PubMed

    Podkolodnaya, Olga A; Tverdokhleb, Natalya N; Podkolodnyy, Nikolay L

    2017-02-24

This review summarizes various mathematical models of the cell-autonomous mammalian circadian clock. We present the basics necessary for understanding the cell-autonomous mammalian circadian oscillator, modern experimental data essential for its reconstruction, and some special problems related to the validation of mathematical circadian oscillator models. This work compares existing mathematical models of the circadian oscillator and the results of computational studies of these oscillating systems. Finally, we discuss applications of mathematical models of the mammalian circadian oscillator for solving specific problems in circadian rhythm biology.

  5. "Cloud" functions and templates of engineering calculations for nuclear power plants

    NASA Astrophysics Data System (ADS)

    Ochkov, V. F.; Orlov, K. A.; Ko, Chzho Ko

    2014-10-01

    The article deals with an important problem of setting up computer-aided design calculations of various circuit configurations and power equipment carried out using the templates and standard computer programs available in the Internet. Information about the developed Internet-based technology for carrying out such calculations using the templates accessible in the Mathcad Prime software package is given. The technology is considered taking as an example the solution of two problems relating to the field of nuclear power engineering.

  6. High-Performance Computing and Four-Dimensional Data Assimilation: The Impact on Future and Current Problems

    NASA Technical Reports Server (NTRS)

    Makivic, Miloje S.

    1996-01-01

    This is the final technical report for the project entitled: "High-Performance Computing and Four-Dimensional Data Assimilation: The Impact on Future and Current Problems", funded at NPAC by the DAO at NASA/GSFC. First, the motivation for the project is given in the introductory section, followed by the executive summary of major accomplishments and the list of project-related publications. Detailed analysis and description of research results is given in subsequent chapters and in the Appendix.

  7. Dual-scan technique for the customization of zirconia computer-aided design/computer-aided manufacturing frameworks.

    PubMed

    Andreiuolo, Rafael Ferrone; Sabrosa, Carlos Eduardo; Dias, Katia Regina H Cervantes

    2013-09-01

    The use of bi-layered all-ceramic crowns has continuously grown since the introduction of computer-aided design/computer-aided manufacturing (CAD/CAM) zirconia cores. Unfortunately, despite the outstanding mechanical properties of zirconia, problems related to porcelain cracking or chipping remain. One of the reasons for this is that ceramic copings are usually milled to uniform thicknesses of 0.3-0.6 mm around the whole tooth preparation. This may not provide uniform thickness or appropriate support for the veneering porcelain. To prevent these problems, the dual-scan technique demonstrates an alternative that allows the restorative team to customize zirconia CAD/CAM frameworks with adequate porcelain thickness and support in a simple manner.

  8. Application of Information and Communication Technologies by the Future Primary School Teachers in the Context of Inclusive Education in the Republic of Kazakhstan

    ERIC Educational Resources Information Center

    Oralbekova, Aliya K.; Arzymbetova, Sholpan Zh.; Begalieva, Saule B.; Ospanbekova, Meirgul N.; Mussabekova, Gulvira A.; Dauletova, Ainash S.

    2016-01-01

    Many children with disabilities in the Republic of Kazakhstan face up to physiological difficulties in moving, communicating, learning, along with problems related to learning various computer programs. Computer technologies are of particular importance for children with disabilities. By using information and computer technologies, these children…

  9. Institute for Computational Mechanics in Propulsion (ICOMP)

    NASA Technical Reports Server (NTRS)

    Feiler, Charles E. (Editor)

    1995-01-01

    The Institute for Computational Mechanics in Propulsion (ICOMP) is operated by the Ohio Aerospace Institute (OAI) and funded under a cooperative agreement by the NASA Lewis Research Center in Cleveland, Ohio. The purpose of ICOMP is to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. This report describes the activities at ICOMP during 1994.

  10. Institute for Computational Mechanics in Propulsion (ICOMP). 10

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr. (Editor); Balog, Karen (Editor); Povinelli, Louis A. (Editor)

    1996-01-01

The Institute for Computational Mechanics in Propulsion (ICOMP) is operated by the Ohio Aerospace Institute (OAI) and funded under a cooperative agreement by the NASA Lewis Research Center in Cleveland, Ohio. The purpose of ICOMP is to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. This report describes the activities at ICOMP during 1995.

  11. Institute for Computational Mechanics in Propulsion (ICOMP)

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr. (Editor); Balog, Karen (Editor); Povinelli, Louis A. (Editor)

    1997-01-01

The Institute for Computational Mechanics in Propulsion (ICOMP) is operated by the Ohio Aerospace Institute (OAI) and funded under a cooperative agreement by the NASA Lewis Research Center in Cleveland, Ohio. The purpose of ICOMP is to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. This report describes the activities at ICOMP during 1996.

  12. Visuospatial referents facilitate the learning and transfer of mathematical operations: extending the role of the angular gyrus.

    PubMed

    Pyke, Aryn; Betts, Shawn; Fincham, Jon M; Anderson, John R

    2015-03-01

    Different external representations for learning and solving mathematical operations may affect learning and transfer. To explore the effects of learning representations, learners were each introduced to two new operations (b↑n and b↓n) via either formulas or graphical representations. Both groups became adept at solving regular (trained) problems. During transfer, no external formulas or graphs were present; however, graph learners' knowledge could allow them to mentally associate problem expressions with visuospatial referents. The angular gyrus (AG) has recently been hypothesized to map problems to mental referents (e.g., symbolic answers; Grabner, Ansari, Koschutnig, Reishofer, & Ebner Human Brain Mapping, 34, 1013-1024, 2013), and we sought to test this hypothesis for visuospatial referents. To determine whether the AG and other math (horizontal intraparietal sulcus) and visuospatial (fusiform and posterior superior parietal lobule [PSPL]) regions were implicated in processing visuospatial mental referents, we included two types of transfer problems, computational and relational, which differed in referential load (one graph vs. two). During solving, the activations in AG, PSPL, and fusiform reflected the referential load manipulation among graph but not formula learners. Furthermore, the AG was more active among graph learners overall, which is consistent with its hypothesized referential role. Behavioral performance was comparable across the groups on computational transfer problems, which could be solved in a way that incorporated learners' respective procedures for regular problems. However, graph learners were more successful on relational transfer problems, which assessed their understanding of the relations between pairs of similar problems within and across operations. On such problems, their behavioral performance correlated with activation in the AG, fusiform, and a relational processing region (BA 10).

  13. Algorithms in nature: the convergence of systems biology and computational thinking

    PubMed Central

    Navlakha, Saket; Bar-Joseph, Ziv

    2011-01-01

    Computer science and biology have enjoyed a long and fruitful relationship for decades. Biologists rely on computational methods to analyze and integrate large data sets, while several computational methods were inspired by the high-level design principles of biological systems. Recently, these two directions have been converging. In this review, we argue that thinking computationally about biological processes may lead to more accurate models, which in turn can be used to improve the design of algorithms. We discuss the similar mechanisms and requirements shared by computational and biological processes and then present several recent studies that apply this joint analysis strategy to problems related to coordination, network analysis, and tracking and vision. We also discuss additional biological processes that can be studied in a similar manner and link them to potential computational problems. With the rapid accumulation of data detailing the inner workings of biological systems, we expect this direction of coupling biological and computational studies to greatly expand in the future. PMID:22068329

  14. Global computing for bioinformatics.

    PubMed

    Loewe, Laurence

    2002-12-01

    Global computing, the collaboration of idle PCs via the Internet in a SETI@home style, emerges as a new way of massive parallel multiprocessing with potentially enormous CPU power. Its relations to the broader, fast-moving field of Grid computing are discussed without attempting a review of the latter. This review (i) includes a short table of milestones in global computing history, (ii) lists opportunities global computing offers for bioinformatics, (iii) describes the structure of problems well suited for such an approach, (iv) analyses the anatomy of successful projects and (v) points to existing software frameworks. Finally, an evaluation of the various costs shows that global computing indeed has merit, if the problem to be solved is already coded appropriately and a suitable global computing framework can be found. Then, either significant amounts of computing power can be recruited from the general public, or--if employed in an enterprise-wide Intranet for security reasons--idle desktop PCs can substitute for an expensive dedicated cluster.

  15. Complex network problems in physics, computer science and biology

    NASA Astrophysics Data System (ADS)

    Cojocaru, Radu Ionut

There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. Until a few years ago, however, there was no comparably close relation between physics and computer science, and only recently have biologists started to use methods and tools from statistical physics to study the behavior of complex systems. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science in biology, and we also use methods from statistical physics to solve hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although a system may be complex, it is still possible to obtain predictions for its behavior, though these predictions are of a probabilistic nature. Spin glasses, lattice gases and the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that some problems are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well-known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set, and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground-state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses.
We extend this model to the study of a specific case of a spin glass on the Bethe lattice at zero temperature and then apply this formalism to the K-SAT problem defined in Chapter 1. The phase transition that physicists study often corresponds to a change in the computational complexity of the corresponding computer science problem. Chapter 3 presents phase transitions specific to the problems discussed in Chapter 1, along with known results for the K-SAT problem. We discuss the replica method and experimental evidence of replica symmetry breaking. The physics approach to hard problems is based on replica methods, which are difficult to understand. In Chapter 4 we develop novel methods for studying hard problems, using techniques similar to the message-passing techniques discussed in Chapter 2. Although we concentrated on the symmetric case, cavity methods show promise for generalizing our methods to the asymmetric case. As John Hopfield has highlighted, several key features of biological systems are not shared by physical systems. Although living entities follow the laws of physics and chemistry, the fact that organisms adapt and reproduce introduces an essential ingredient that is missing in the physical sciences. Many algorithms have been developed to extract information from networks. In Chapter 5 we apply polynomial-time algorithms such as minimum spanning tree to study and construct gene regulatory networks from experimental data. As future work we propose the use of algorithms such as min-cut/max-flow and Dijkstra for understanding key properties of these networks.
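The minimum-spanning-tree step mentioned for Chapter 5 can be illustrated with Kruskal's algorithm. The toy "gene network" and its edge weights below are placeholders for illustration, not the thesis's actual data or distance measure:

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm: edges are (weight, u, v) tuples over nodes
    0..n-1. Greedily adds the cheapest edge that joins two components,
    using union-find with path halving for cycle detection."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

# Toy network: nodes stand for genes; a weight stands in for a
# dissimilarity (e.g. 1 - |correlation|) between expression profiles.
edges = [(0.1, 0, 1), (0.4, 0, 2), (0.2, 1, 2), (0.5, 1, 3), (0.3, 2, 3)]
mst = kruskal_mst(4, edges)
print(mst)   # [(0.1, 0, 1), (0.2, 1, 2), (0.3, 2, 3)]
```

The MST keeps, for each gene, only its strongest links under the chosen dissimilarity, which is why it is a common backbone for reconstructing regulatory networks.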

  16. Developing the Fundamental Theorem of Calculus. Applications of Calculus to Work, Area, and Distance Problems. [and] Atmospheric Pressure in Relation to Height and Temperature. Applications of Calculus to Atmospheric Pressure. [and] The Gradient and Some of Its Applications. Applications of Multivariate Calculus to Physics. [and] Kepler's Laws and the Inverse Square Law. Applications of Calculus to Physics. UMAP Units 323, 426, 431, 473.

    ERIC Educational Resources Information Center

    Lindstrom, Peter A.; And Others

    This document consists of four units. The first of these views calculus applications to work, area, and distance problems. It is designed to help students gain experience in: 1) computing limits of Riemann sums; 2) computing definite integrals; and 3) solving elementary area, distance, and work problems by integration. The second module views…

  17. Differential geometric treewidth estimation in adiabatic quantum computation

    NASA Astrophysics Data System (ADS)

    Wang, Chi; Jonckheere, Edmond; Brun, Todd

    2016-10-01

    The D-Wave adiabatic quantum computing platform is designed to solve a particular class of problems—the Quadratic Unconstrained Binary Optimization (QUBO) problems. Due to the particular "Chimera" physical architecture of the D-Wave chip, the logical problem graph at hand needs an extra process called minor embedding in order to be solvable on the D-Wave architecture. The latter problem is itself NP-hard. In this paper, we propose a novel polynomial-time approximation to the closely related treewidth based on the differential geometric concept of Ollivier-Ricci curvature. The latter runs in polynomial time and thus could significantly reduce the overall complexity of determining whether a QUBO problem is minor embeddable, and thus solvable on the D-Wave architecture.
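For readers unfamiliar with the problem class: a QUBO instance asks for a binary vector x minimizing x^T Q x. A brute-force solver for tiny instances (purely illustrative, and unrelated to the D-Wave toolchain or the minor-embedding step) is:

```python
import itertools

import numpy as np

def solve_qubo_bruteforce(Q):
    """Minimize x^T Q x over x in {0,1}^n by exhaustive search.
    Exponential in n, so usable only on tiny illustrative instances."""
    n = Q.shape[0]
    best_x, best_val = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        val = x @ Q @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Tiny instance: negative diagonal terms reward setting a bit, positive
# off-diagonal terms penalize setting both ends of an "edge".
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])
x, val = solve_qubo_bruteforce(Q)
print(x, val)   # optimum sets bits 0 and 2 but not 1, with value -2.0
```

The exponential cost of this enumeration is exactly why hardware such as the D-Wave chip, and with it the minor-embedding and treewidth questions discussed above, is of interest.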

  18. Automation of the aircraft design process

    NASA Technical Reports Server (NTRS)

    Heldenfels, R. R.

    1974-01-01

    The increasing use of the computer to automate the aerospace product development and engineering process is examined with emphasis on structural analysis and design. Examples of systems of computer programs in aerospace and other industries are reviewed and related to the characteristics of aircraft design in its conceptual, preliminary, and detailed phases. Problems with current procedures are identified, and potential improvements from optimum utilization of integrated disciplinary computer programs by a man/computer team are indicated.

  19. Improving Multi-Objective Management of Water Quality Tipping Points: Revisiting the Classical Shallow Lake Problem

    NASA Astrophysics Data System (ADS)

    Quinn, J. D.; Reed, P. M.; Keller, K.

    2015-12-01

    Recent multi-objective extensions of the classical shallow lake problem are useful for exploring the conceptual and computational challenges that emerge when managing irreversible water quality tipping points. Building on this work, we explore a four objective version of the lake problem where a hypothetical town derives economic benefits from polluting a nearby lake, but at the risk of irreversibly tipping the lake into a permanently polluted state. The trophic state of the lake exhibits non-linear threshold dynamics; below some critical phosphorus (P) threshold it is healthy and oligotrophic, but above this threshold it is irreversibly eutrophic. The town must decide how much P to discharge each year, a decision complicated by uncertainty in the natural P inflow to the lake. The shallow lake problem provides a conceptually rich set of dynamics, low computational demands, and a high level of mathematical difficulty. These properties maximize its value for benchmarking the relative merits and limitations of emerging decision support frameworks, such as Direct Policy Search (DPS). Here, we explore the use of DPS as a formal means of developing robust environmental pollution control rules that effectively account for deeply uncertain system states and conflicting objectives. The DPS reformulation of the shallow lake problem shows promise in formalizing pollution control triggers and signposts, while dramatically reducing the computational complexity of the multi-objective pollution control problem. More broadly, the insights from the DPS variant of the shallow lake problem formulated in this study bridge emerging work related to socio-ecological systems management, tipping points, robust decision making, and robust control.
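The threshold dynamics described above can be made concrete with the discrete-time phosphorus map commonly used in shallow-lake studies, P <- P + a + P^q/(1+P^q) - bP, where a is the town's discharge, b the natural removal rate, and q the steepness of internal recycling. The sketch below uses assumed parameter values for illustration; it is not the paper's calibrated model or its DPS controller:

```python
def simulate_lake(p0, inflow, b=0.42, q=2.0, steps=200):
    """Iterate the lake phosphorus map P <- P + a + P^q/(1+P^q) - b*P.
    With the assumed values b=0.42, q=2 the map has a stable oligotrophic
    state at P=0, an unstable threshold near P=0.55, and a stable
    eutrophic state near P=1.84."""
    p = p0
    for _ in range(steps):
        p = p + inflow + p**q / (1.0 + p**q) - b * p
    return p

low = simulate_lake(p0=0.3, inflow=0.0)   # below threshold: decays to ~0
high = simulate_lake(p0=1.0, inflow=0.0)  # above threshold: tips to ~1.84
print(round(low, 3), round(high, 3))
```

The two trajectories show the irreversibility at the heart of the problem: once the state crosses the unstable threshold, recycling outpaces removal and the lake settles into the eutrophic equilibrium even with zero further discharge.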

  20. Are Computers Hazardous to Your Child's Health?

    ERIC Educational Resources Information Center

    Personal Computing, 1981

    1981-01-01

    Two potential health hazards have been suggested in relation to long-term use of computer video monitors: radiation and vision problems (fatigue, eyestrain, eye damage). This article examines some available evidence on these issues. Journal availability: Hayden Publishing Company, 50 Essex Street, Rochelle Park, NJ 07662. (SJL)

  1. Asymptotic analysis of the narrow escape problem in dendritic spine shaped domain: three dimensions

    NASA Astrophysics Data System (ADS)

    Li, Xiaofei; Lee, Hyundae; Wang, Yuliang

    2017-08-01

    This paper deals with the three-dimensional narrow escape problem in a dendritic spine shaped domain, which is composed of a relatively big head and a thin neck. The narrow escape problem is to compute the mean first passage time of Brownian particles traveling from inside the head to the end of the neck. The original model is to solve a mixed Dirichlet-Neumann boundary value problem for the Poisson equation in the composite domain, and is computationally challenging. In this paper we seek to transfer the original problem to a mixed Robin-Neumann boundary value problem by dropping the thin neck part, and rigorously derive the asymptotic expansion of the mean first passage time with high order terms. This study is a nontrivial three-dimensional generalization of the work in Li (2014 J. Phys. A: Math. Theor. 47 505202), where a two-dimensional analogue domain is considered.

  2. ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics (CAA)

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C. (Editor); Ristorcelli, J. Ray (Editor); Tam, Christopher K. W. (Editor)

    1995-01-01

    The proceedings of the Benchmark Problems in Computational Aeroacoustics Workshop held at NASA Langley Research Center are the subject of this report. The purpose of the Workshop was to assess the utility of a number of numerical schemes in the context of the unusual requirements of aeroacoustical calculations. The schemes were assessed from the viewpoint of dispersion and dissipation -- issues important to long time integration and long distance propagation in aeroacoustics. Also investigated were the effects of implementing different boundary conditions. The Workshop included a forum in which practical engineering problems related to computational aeroacoustics were discussed. This discussion took the form of a dialogue between an industrial panel and the workshop participants and was an effort to suggest the direction of evolution of this field in the context of current engineering needs.

  3. Fast parallel algorithms that compute transitive closure of a fuzzy relation

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik YA.

    1993-01-01

    The notion of a transitive closure of a fuzzy relation is very useful for clustering in pattern recognition, for fuzzy databases, etc. The original algorithm proposed by L. Zadeh (1971) requires computation time O(n^4), where n is the number of elements in the relation. In 1974, J. C. Dunn proposed an O(n^2) algorithm. Since we must compute n(n-1)/2 different values s(a, b) (a not equal to b) that represent the fuzzy relation, and we need at least one computational step to compute each of these values, we cannot compute all of them in less than O(n^2) steps. So, Dunn's algorithm is in this sense optimal. For small n, this is acceptable. However, for big n (e.g., for big databases), it is still too slow, so it would be desirable to decrease the computation time (this problem was formulated by J. Bezdek). Since this decrease cannot be achieved on a sequential computer, the only way to do it is to use a computer with several processors working in parallel. We show that on a parallel computer, transitive closure can be computed in time O((log_2 n)^2).
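    A minimal sketch of the max-min transitive closure itself, via naive repeated composition. This is neither Dunn's O(n^2) algorithm nor the parallel scheme of the paper, and the example relation is invented; it only illustrates the definition being computed:

```python
import numpy as np

def maxmin_compose(r, s):
    """Max-min composition of two fuzzy relations given as square matrices:
    (r o s)[i, j] = max_k min(r[i, k], s[k, j])."""
    n = r.shape[0]
    out = np.zeros_like(r)
    for i in range(n):
        for j in range(n):
            out[i, j] = np.max(np.minimum(r[i, :], s[:, j]))
    return out

def transitive_closure(r):
    """Max-min transitive closure by repeated squaring (naive approach)."""
    closure = r.copy()
    while True:
        nxt = np.maximum(closure, maxmin_compose(closure, closure))
        if np.array_equal(nxt, closure):
            return closure
        closure = nxt

# A small similarity relation on three elements (invented example).
r = np.array([
    [1.0, 0.8, 0.0],
    [0.8, 1.0, 0.4],
    [0.0, 0.4, 1.0],
])
t = transitive_closure(r)
# Transitivity lifts s(0, 2) from 0.0 to min(0.8, 0.4) = 0.4.
```
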

  4. Quantum computational complexity, Einstein's equations and accelerated expansion of the Universe

    NASA Astrophysics Data System (ADS)

    Ge, Xian-Hui; Wang, Bin

    2018-02-01

    We study the relation between quantum computational complexity and general relativity. The quantum computational complexity is proposed to be quantified by the shortest length of geodesic quantum curves. We examine the complexity/volume duality in a geodesic causal ball in the framework of Fermi normal coordinates and derive the full non-linear Einstein equation. Using insights from the complexity/action duality, we argue that the accelerated expansion of the universe could be driven by the quantum complexity and free from the coincidence and fine-tuning problems.

  5. Learning Relative Motion Concepts in Immersive and Non-immersive Virtual Environments

    NASA Astrophysics Data System (ADS)

    Kozhevnikov, Michael; Gurlitt, Johannes; Kozhevnikov, Maria

    2013-12-01

    The focus of the current study is to understand which unique features of an immersive virtual reality environment have the potential to improve learning of relative motion concepts. Thirty-seven undergraduate students learned relative motion concepts using computer simulation either in immersive virtual environment (IVE) or non-immersive desktop virtual environment (DVE) conditions. Our results show that after the simulation activities, both IVE and DVE groups exhibited a significant shift toward a scientific understanding in their conceptual models and epistemological beliefs about the nature of relative motion, and also a significant improvement on relative motion problem-solving tests. In addition, we analyzed students' performance on one-dimensional and two-dimensional questions in the relative motion problem-solving test separately and found that after training in the simulation, the IVE group performed significantly better than the DVE group on solving two-dimensional relative motion problems. We suggest that egocentric encoding of the scene in IVE (where the learner constitutes a part of the scene they are immersed in), as compared to allocentric encoding on a computer screen in DVE (where the learner is looking at the scene from "outside"), is more beneficial for studying more complex (two-dimensional) relative motion problems. Overall, our findings suggest that such aspects of virtual realities as immersivity, first-hand experience, and the possibility of changing between different frames of reference can facilitate understanding of abstract scientific phenomena and help in displacing intuitive misconceptions with more accurate mental models.

  6. A hybrid genetic-simulated annealing algorithm for the location-inventory-routing problem considering returns under e-supply chain environment.

    PubMed

    Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA in computing time, solution quality, and computing stability. The proposed model is very useful to help managers make the right decisions in an e-supply chain environment.

  7. Fast Katz and Commuters: Efficient Estimation of Social Relatedness in Large Networks

    NASA Astrophysics Data System (ADS)

    Esfandiar, Pooya; Bonchi, Francesco; Gleich, David F.; Greif, Chen; Lakshmanan, Laks V. S.; On, Byung-Won

    Motivated by social network data mining problems such as link prediction and collaborative filtering, significant research effort has been devoted to computing topological measures including the Katz score and the commute time. Existing approaches typically approximate all pairwise relationships simultaneously. In this paper, we are interested in computing two quantities: the score for a single pair of nodes, and the top-k nodes with the best scores from a given source node. For the pairwise problem, we apply an iterative algorithm that computes upper and lower bounds for the measures we seek. This algorithm exploits a relationship between the Lanczos process and a quadrature rule. For the top-k problem, we propose an algorithm that only accesses a small portion of the graph and is related to techniques used in personalized PageRank computing. To test the scalability and accuracy of our algorithms we experiment with three real-world networks and find that these algorithms run in milliseconds to seconds without any preprocessing.
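    For reference, the Katz score for a single pair can be evaluated directly by truncating its defining series, K_ij = sum over k >= 1 of alpha^k (A^k)_ij. This sketch is far less efficient than the Lanczos/quadrature bounds of the paper; the example graph and damping factor are invented:

```python
import numpy as np

def katz_pair(adj, i, j, alpha=0.1, n_terms=50):
    """Katz score between nodes i and j: a damped count of walks of
    every length.  Direct truncated-series evaluation, shown only for
    illustration; alpha must be below 1/||A||_2 for convergence.
    """
    score = 0.0
    power = np.eye(adj.shape[0])
    for k in range(1, n_terms + 1):
        power = power @ adj       # power now holds A^k
        score += alpha**k * power[i, j]
    return score

# Path graph 0 - 1 - 2: nodes 0 and 2 are not adjacent but share a neighbour.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
direct = katz_pair(A, 0, 1)    # adjacent pair: higher score
two_hop = katz_pair(A, 0, 2)   # two-hop pair: lower but nonzero score
```

    Because the full series sums to (I - alpha*A)^{-1} - I, a dense solve gives the same pairwise value, which is what makes single-entry bounds (rather than the whole inverse) attractive on large graphs.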

  8. Fast katz and commuters : efficient estimation of social relatedness in large networks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    On, Byung-Won; Lakshmanan, Laks V. S.; Greif, Chen

    Motivated by social network data mining problems such as link prediction and collaborative filtering, significant research effort has been devoted to computing topological measures including the Katz score and the commute time. Existing approaches typically approximate all pairwise relationships simultaneously. In this paper, we are interested in computing two quantities: the score for a single pair of nodes, and the top-k nodes with the best scores from a given source node. For the pairwise problem, we apply an iterative algorithm that computes upper and lower bounds for the measures we seek. This algorithm exploits a relationship between the Lanczos process and a quadrature rule. For the top-k problem, we propose an algorithm that only accesses a small portion of the graph and is related to techniques used in personalized PageRank computing. To test the scalability and accuracy of our algorithms we experiment with three real-world networks and find that these algorithms run in milliseconds to seconds without any preprocessing.

  9. Diffuse-Interface Methods in Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Anderson, D. M.; McFadden, G. B.; Wheeler, A. A.

    1997-01-01

    The authors review the development of diffuse-interface models of hydrodynamics and their application to a wide variety of interfacial phenomena. The authors discuss the issues involved in formulating diffuse-interface models for single-component and binary fluids. Recent applications and computations using these models are discussed in each case. Further, the authors address issues including sharp-interface analyses that relate these models to the classical free-boundary problem, related computational approaches to describe interfacial phenomena, and related approaches describing fully-miscible fluids.

  10. Robust computation of dipole electromagnetic fields in arbitrarily anisotropic, planar-stratified environments.

    PubMed

    Sainath, Kamalesh; Teixeira, Fernando L; Donderici, Burkay

    2014-01-01

    We develop a general-purpose formulation, based on two-dimensional spectral integrals, for computing electromagnetic fields produced by arbitrarily oriented dipoles in planar-stratified environments, where each layer may exhibit arbitrary and independent anisotropy in both its (complex) permittivity and permeability tensors. Among the salient features of our formulation are (i) computation of eigenmodes (characteristic plane waves) supported in arbitrarily anisotropic media in a numerically robust fashion, (ii) implementation of an hp-adaptive refinement for the numerical integration to evaluate the radiation and weakly evanescent spectra contributions, and (iii) development of an adaptive extension of an integral convergence acceleration technique to compute the strongly evanescent spectrum contribution. While other semianalytic techniques exist to solve this problem, none have full applicability to media exhibiting arbitrary double anisotropies in each layer, where one must account for the whole range of possible phenomena (e.g., mode coupling at interfaces and nonreciprocal mode propagation). Brute-force numerical methods can tackle this problem but only at a much higher computational cost. The present formulation provides an efficient and robust technique for field computation in arbitrary planar-stratified environments. We demonstrate the formulation for a number of problems related to geophysical exploration.

  11. Fourth-order convergence of a compact scheme for the one-dimensional biharmonic equation

    NASA Astrophysics Data System (ADS)

    Fishelov, D.; Ben-Artzi, M.; Croisille, J.-P.

    2012-09-01

    The convergence of a fourth-order compact scheme for the one-dimensional biharmonic problem is established in the case of general Dirichlet boundary conditions. The compact scheme invokes values of the unknown function as well as Padé approximations of its first-order derivative. Using the Padé approximation allows us to approximate the first-order derivative to fourth-order accuracy. However, although the truncation error of the discrete biharmonic scheme is of fourth order at interior points, it drops to first order at near-boundary points. Nonetheless, we prove that the scheme retains its fourth-order (optimal) accuracy. This is done by a careful inspection of the matrix elements of the discrete biharmonic operator. A number of numerical examples corroborate this effect. We also present a study of the eigenvalue problem uxxxx = νu. We compute and display the eigenvalues and the eigenfunctions related to the continuous and the discrete problems. From the positivity of the eigenvalues, one can deduce the stability of the related time-dependent problem ut = -uxxxx. In addition, we study the eigenvalue problem uxxxx = νuxx, which is related to the stability of the linear time-dependent equation uxxt = νuxxxx. Its continuous and discrete eigenvalues and eigenfunctions (or eigenvectors) are computed and displayed graphically.

  12. Repetitive Domain-Referenced Testing Using Computers: the TITA System.

    ERIC Educational Resources Information Center

    Olympia, P. L., Jr.

    The TITA (Totally Interactive Testing and Analysis) System algorithm for the repetitive construction of domain-referenced tests utilizes a compact data bank, is highly portable, is useful in any discipline, requires modest computer hardware, and does not present a security problem. Clusters of related keyphrases, statement phrases, and distractors…

  13. Mapping University Students' Epistemic Framing of Computational Physics Using Network Analysis

    ERIC Educational Resources Information Center

    Bodin, Madelen

    2012-01-01

    Solving physics problem in university physics education using a computational approach requires knowledge and skills in several domains, for example, physics, mathematics, programming, and modeling. These competences are in turn related to students' beliefs about the domains as well as about learning. These knowledge and beliefs components are…

  14. NACCIS Working Papers.

    ERIC Educational Resources Information Center

    National Advisory Council for Computer Implementation in Schools, Tucson, AZ.

    Problems and issues related to the financing and use of educational technologies in Arizona schools are addressed from both short- and long-term considerations. A brief review of the spread of computers in schools includes projections for future use of computers in education and a discussion of factors affecting the market, the convergence of…

  15. Practical Problem-Based Learning in Computing Education

    ERIC Educational Resources Information Center

    O'Grady, Michael J.

    2012-01-01

    Computer Science (CS) is a relatively new discipline and how best to introduce it to new students remains an open question. Likewise, the identification of appropriate instructional strategies for the diverse topics that constitute the average curriculum remains open to debate. One approach considered by a number of practitioners in CS education…

  16. Computational Thinking for All: Pedagogical Approaches to Embedding 21st Century Problem Solving in K-12 Classrooms

    ERIC Educational Resources Information Center

    Yadav, Aman; Hong, Hai; Stephenson, Chris

    2016-01-01

    The recent focus on computational thinking as a key 21st century skill for all students has led to a number of curriculum initiatives to embed it in K-12 classrooms. In this paper, we discuss the key computational thinking constructs, including algorithms, abstraction, and automation. We further discuss how these ideas are related to current…

  17. Institute for Computational Mechanics in Propulsion (ICOMP)

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr. (Editor); Balog, Karen (Editor); Povinelli, Louis A. (Editor)

    1999-01-01

    The Institute for Computational Mechanics in Propulsion (ICOMP) was formed to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. ICOMP is operated by the Ohio Aerospace Institute (OAI) and funded via numerous cooperative agreements by the NASA Glenn Research Center in Cleveland, Ohio. This report describes the activities at ICOMP during 1998, the Institute's thirteenth year of operation.

  18. PETSc Users Manual Revision 3.7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balay, Satish; Abhyankar, S.; Adams, M.

    This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication.

  19. Institute for Computational Mechanics in Propulsion (ICOMP)

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr. (Editor); Balog, Karen (Editor); Povinelli, Louis A. (Editor)

    2001-01-01

    The Institute for Computational Mechanics in Propulsion (ICOMP) was formed to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. ICOMP is operated by the Ohio Aerospace Institute (OAI) and funded via numerous cooperative agreements by the NASA Glenn Research Center in Cleveland, Ohio. This report describes the activities at ICOMP during 1999, the Institute's fourteenth year of operation.

  20. Institute for Computational Mechanics in Propulsion (ICOMP)

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr. (Editor); Balog, Karen (Editor); Povinelli, Louis A. (Editor)

    1998-01-01

    The Institute for Computational Mechanics in Propulsion (ICOMP) was formed to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. ICOMP is operated by the Ohio Aerospace Institute (OAI) and funded via numerous cooperative agreements by the NASA Lewis Research Center in Cleveland, Ohio. This report describes the activities at ICOMP during 1997, the Institute's twelfth year of operation.

  1. PETSc Users Manual Revision 3.8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balay, S.; Abhyankar, S.; Adams, M.

    This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication.

  2. Hyperbolic Harmonic Mapping for Surface Registration

    PubMed Central

    Shi, Rui; Zeng, Wei; Su, Zhengyu; Jiang, Jian; Damasio, Hanna; Lu, Zhonglin; Wang, Yalin; Yau, Shing-Tung; Gu, Xianfeng

    2016-01-01

    Automatic computation of surface correspondence via harmonic map is an active research field in computer vision, computer graphics and computational geometry. It may help document and understand physical and biological phenomena and also has broad applications in the biometrics, medical imaging and motion capture industries. Although numerous studies have been devoted to harmonic map research, limited progress has been made in computing a diffeomorphic harmonic map on general topology surfaces with landmark constraints. This work addresses this problem by changing the Riemannian metric on the target surface to a hyperbolic metric so that the harmonic mapping is guaranteed to be a diffeomorphism under landmark constraints. The computational algorithms are based on Ricci flow and nonlinear heat diffusion methods. The approach is general and robust. We employ our algorithm to study the constrained surface registration problem which applies to both computer vision and medical imaging applications. Experimental results demonstrate that, by changing the Riemannian metric, the registrations are always diffeomorphic and achieve relatively high performance when evaluated with some popular surface registration evaluation standards. PMID:27187948

  3. Search Path Mapping: A Versatile Approach for Visualizing Problem-Solving Behavior.

    ERIC Educational Resources Information Center

    Stevens, Ronald H.

    1991-01-01

    Computer-based problem-solving examinations in immunology generate graphic representations of students' search paths, allowing evaluation of how organized and focused their knowledge is, how well their organization relates to critical concepts in immunology, where major misconceptions exist, and whether proper knowledge links exist between content…

  4. BASIC Simulation Programs; Volumes I and II. Biology, Earth Science, Chemistry.

    ERIC Educational Resources Information Center

    Digital Equipment Corp., Maynard, MA.

    Computer programs which teach concepts and processes related to biology, earth science, and chemistry are presented. The seven biology problems deal with aspects of genetics, evolution and natural selection, gametogenesis, enzymes, photosynthesis, and the transport of material across a membrane. Four earth science problems concern climates, the…

  5. Scheduling language and algorithm development study. Volume 1, phase 2: Design considerations for a scheduling and resource allocation system

    NASA Technical Reports Server (NTRS)

    Morrell, R. A.; Odoherty, R. J.; Ramsey, H. R.; Reynolds, C. C.; Willoughby, J. K.; Working, R. D.

    1975-01-01

    Data and analyses related to a variety of algorithms for solving typical large-scale scheduling and resource allocation problems are presented. The capabilities and deficiencies of various alternative problem solving strategies are discussed from the viewpoint of computer system design.

  6. Probability, Problem Solving, and "The Price is Right."

    ERIC Educational Resources Information Center

    Wood, Eric

    1992-01-01

    This article discusses the analysis of a decision-making process faced by contestants on the television game show "The Price is Right". The included analyses of the original and related problems concern pattern searching, inductive reasoning, quadratic functions, and graphing. Computer simulation programs in BASIC and tables of…

  7. Thermodynamic free energy methods to investigate shape transitions in bilayer membranes.

    PubMed

    Ramakrishnan, N; Tourdot, Richard W; Radhakrishnan, Ravi

    2016-06-01

    The conformational free energy landscape of a system is a fundamental thermodynamic quantity of particular importance in the study of soft matter and biological systems, in which the entropic contributions play a dominant role. While computational methods to delineate the free energy landscape are routinely used to analyze the relative stability of conformational states, to determine phase boundaries, and to compute ligand-receptor binding energies, their use in problems involving the cell membrane is limited. Here, we present an overview of four different free energy methods to study morphological transitions in bilayer membranes, induced either by the action of curvature remodeling proteins or by the application of external forces. Using a triangulated surface as a model for the cell membrane and using the framework of dynamical triangulation Monte Carlo, we have focused on the methods of Widom insertion, thermodynamic integration, the Bennett acceptance scheme, and umbrella sampling with weighted histogram analysis. We have demonstrated how these methods can be employed in a variety of problems involving the cell membrane. Specifically, we have shown that the chemical potential, computed using Widom insertion, and the relative free energies, computed using thermodynamic integration and the Bennett acceptance method, are excellent measures to study the transition from curvature-sensing to curvature-inducing behavior of membrane-associated proteins. Umbrella sampling and WHAM analysis have been used to study the thermodynamics of tether formation in cell membranes, and the quantitative predictions of the computational model are in excellent agreement with experimental measurements. Furthermore, we also present a method based on WHAM and thermodynamic integration to handle problems related to the end-point catastrophe that are common in most free energy methods.
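    Thermodynamic integration can be illustrated on a deliberately tiny stand-in for the membrane calculations above: the free-energy difference between two harmonic potentials U(x; k) = k x^2 / 2, where dF = integral over lambda of the ensemble average of dU/dlambda, and the exact answer is ln(k1/k0) / (2*beta). All parameter values here are invented for the toy:

```python
import numpy as np

def thermo_integration(k0=1.0, k1=4.0, beta=1.0, n_lambda=21,
                       n_samples=200_000, seed=1):
    """Thermodynamic integration for dF between U0 = k0*x^2/2 and
    U1 = k1*x^2/2, with k(lambda) = k0 + lambda*(k1 - k0) and
    dU/dlambda = (k1 - k0)*x^2/2.  Toy model; exact dF = ln(k1/k0)/(2*beta).
    """
    rng = np.random.default_rng(seed)
    lambdas = np.linspace(0.0, 1.0, n_lambda)
    means = []
    for lam in lambdas:
        k = k0 + lam * (k1 - k0)
        # Boltzmann samples of a harmonic state are Gaussian, var = 1/(beta*k).
        x = rng.normal(0.0, np.sqrt(1.0 / (beta * k)), size=n_samples)
        means.append(0.5 * (k1 - k0) * np.mean(x**2))
    # Trapezoidal quadrature over the lambda grid.
    means = np.array(means)
    h = lambdas[1] - lambdas[0]
    return h * (means.sum() - 0.5 * (means[0] + means[-1]))

dF = thermo_integration()
exact = np.log(4.0) / 2.0  # ln(k1/k0) / (2*beta)
```

    The same integrate-the-average structure carries over to the membrane setting, with the Monte Carlo averages taken over triangulated-surface configurations instead of Gaussian samples.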

  8. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    PubMed Central

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-01-01

    Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. 
This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289

  9. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.

    PubMed

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-11-02

    We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.

  10. An algorithmic framework for multiobjective optimization.

    PubMed

    Ganesan, T; Elamvazuthi, I; Shaari, Ku Zilati Ku; Vasant, P

    2013-01-01

    Multiobjective (MO) optimization is an emerging area that is increasingly encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with multiple objectives (particularly more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization.
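    The weighted-sum scalarization mentioned above reduces an MO problem to a family of single-objective ones: sweeping the weight vector traces out (the convex part of) the Pareto front. A minimal sketch on an invented bi-objective toy problem, not the hybrid algorithms of the paper:

```python
import numpy as np

def weighted_sum_front(f1, f2, candidates, weights_list):
    """For each weight pair (w1, w2), pick the candidate x minimizing
    the scalarized objective w1*f1(x) + w2*f2(x), and collect the
    resulting objective pairs as an approximate Pareto front.
    """
    front = []
    for w1, w2 in weights_list:
        scores = [w1 * f1(x) + w2 * f2(x) for x in candidates]
        best = candidates[int(np.argmin(scores))]
        front.append((f1(best), f2(best)))
    return front

# Classic bi-objective toy: minimize x^2 and (x - 2)^2 over x in [0, 2].
xs = np.linspace(0.0, 2.0, 201)
f1 = lambda x: x**2
f2 = lambda x: (x - 2.0)**2
weights = [(w, 1.0 - w) for w in np.linspace(0.05, 0.95, 10)]
front = weighted_sum_front(f1, f2, xs, weights)
```

    Each weight pair is one scalar solve; the computational-overhead concerns in the abstract arise because many such solves (or an NBI subproblem per front point) are needed, each potentially driven by an expensive metaheuristic.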

  11. An Algorithmic Framework for Multiobjective Optimization

    PubMed Central

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2013-01-01

    Multiobjective (MO) optimization is an emerging area that is increasingly encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with multiple objectives (particularly more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795

  12. Discrete square root filtering - A survey of current techniques.

    NASA Technical Reports Server (NTRS)

    Kaminskii, P. G.; Bryson, A. E., Jr.; Schmidt, S. F.

    1971-01-01

    Current techniques in square root filtering are surveyed and related by applying a duality association. Four efficient square root implementations are suggested, and compared with three common conventional implementations in terms of computational complexity and precision. It is shown that the square root computational burden should not exceed the conventional by more than 50% in most practical problems. An examination of numerical conditioning predicts that the square root approach can yield twice the effective precision of the conventional filter in ill-conditioned problems. This prediction is verified in two examples.
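The conditioning argument behind the "twice the effective precision" claim can be illustrated with a toy example. The 2x2 covariance below is an invented instance, not one of the paper's filter implementations; the point is only that factoring P = S*S^T halves the dynamic range the filter must carry:

```python
import math

def cholesky2(p11, p12, p22):
    # Lower-triangular Cholesky factor S of a 2x2 covariance P = S * S^T.
    s11 = math.sqrt(p11)
    s21 = p12 / s11
    s22 = math.sqrt(p22 - s21 * s21)
    return s11, s21, s22

# Ill-conditioned covariance: eigenvalue spread of 1e8.
p11, p12, p22 = 1.0, 0.0, 1e-8
s11, _, s22 = cholesky2(p11, p12, p22)

cond_P = p11 / p22   # condition number of P (diagonal case)
cond_S = s11 / s22   # condition number of its square root factor
# cond_S == sqrt(cond_P): the square root filter works with numbers
# spanning half as many digits as the conventional covariance filter.
```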

  13. Application of computational fluid mechanics to atmospheric pollution problems

    NASA Technical Reports Server (NTRS)

    Hung, R. J.; Liaw, G. S.; Smith, R. E.

    1986-01-01

    One of the most noticeable effects of air pollution on the properties of the atmosphere is the reduction in visibility. This paper reports the results of investigations of the fluid dynamical and microphysical processes involved in the formation of advection fog on aerosols from combustion-related pollutants acting as condensation nuclei. The effects of a polydisperse aerosol distribution on the condensation/nucleation processes which cause the reduction in visibility are studied. This study demonstrates how computational fluid mechanics and heat transfer modeling can be applied to simulate the life cycle of atmospheric pollution problems.

  14. Computer generated animation and movie production at LARC: A case study

    NASA Technical Reports Server (NTRS)

    Gates, R. L.; Matthews, C. G.; Vonofenheim, W. H.; Randall, D. P.; Jones, K. H.

    1984-01-01

    The process of producing computer generated 16mm movies using the MOVIE.BYU software package developed by Brigham Young University and the currently available hardware technology at the Langley Research Center is described. A general overview relates the procedures to a specific application. Details are provided which describe the data used, preparation of a storyboard, key frame generation, the actual animation, title generation, filming, and processing/developing the final product. Problems encountered in each of these areas are identified. Both hardware and software problems are discussed along with proposed solutions and recommendations.

  15. PowerPlay: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem

    PubMed Central

    Schmidhuber, Jürgen

    2013-01-01

    Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. Given a general problem-solving architecture, at any given time, the novel algorithmic framework PowerPlay (Schmidhuber, 2011) searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Newly invented tasks may require achieving a wow-effect by making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. The greedy search of typical PowerPlay variants uses time-optimal program search to order candidate pairs of tasks and solver modifications by their conditional computational (time and space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. This biases the search toward pairs that can be described compactly and validated quickly. The computational costs of validating new tasks need not grow with task repertoire size. Standard problem solver architectures of personal computers or neural networks tend to generalize by solving numerous tasks outside the self-invented training set; PowerPlay’s ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Gödel’s sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems.
The continually increasing repertoire of problem-solving procedures can be exploited by a parallel search for solutions to additional externally posed tasks. PowerPlay may be viewed as a greedy but practical implementation of basic principles of creativity (Schmidhuber, 2006a, 2010). A first experimental analysis can be found in separate papers (Srivastava et al., 2012a,b, 2013). PMID:23761771

  16. Integer Linear Programming in Computational Biology

    NASA Astrophysics Data System (ADS)

    Althaus, Ernst; Klau, Gunnar W.; Kohlbacher, Oliver; Lenhof, Hans-Peter; Reinert, Knut

    Computational molecular biology (bioinformatics) is a young research field that is rich in NP-hard optimization problems. The problem instances encountered are often huge and comprise thousands of variables. Since their introduction into the field of bioinformatics in 1997, integer linear programming (ILP) techniques have been successfully applied to many optimization problems. These approaches have added much momentum to development and progress in related areas. In particular, ILP-based approaches have become a standard optimization technique in bioinformatics. In this review, we present applications of ILP-based techniques developed by members and former members of Kurt Mehlhorn’s group. These techniques were introduced to bioinformatics in a series of papers and popularized by demonstration of their effectiveness and potential.
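As a toy illustration of the kind of model ILP formulates, consider a 0/1 selection problem. The objective, constraints, and "probe" framing here are invented for illustration; real bioinformatics instances with thousands of variables are handed to dedicated solvers rather than enumerated:

```python
from itertools import product

# Toy 0/1 ILP: maximize 3*x0 + 2*x1 + 4*x2
# subject to  x0 + x1 + x2 <= 2  (budget of two probes)
# and         x0 + x2      <= 1  (two probes that conflict).
costs = [3, 2, 4]

def feasible(x):
    return sum(x) <= 2 and x[0] + x[2] <= 1

# Brute-force enumeration over all 0/1 assignments (fine for 3 variables).
best = max((x for x in product((0, 1), repeat=3) if feasible(x)),
           key=lambda x: sum(c * xi for c, xi in zip(costs, x)))
```

The optimum selects probes 1 and 2 for an objective value of 6; the conflict constraint rules out taking the two most valuable probes together.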

  17. Higher-order adaptive finite-element methods for Kohn–Sham density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motamarri, P.; Nowak, M.R.; Leiter, K.

    2013-11-15

    We present an efficient computational approach to perform real-space electronic structure calculations using an adaptive higher-order finite-element discretization of Kohn–Sham density-functional theory (DFT). To this end, we develop an a priori mesh-adaption technique to construct a close to optimal finite-element discretization of the problem. We further propose an efficient solution strategy for solving the discrete eigenvalue problem by using spectral finite-elements in conjunction with Gauss–Lobatto quadrature, and a Chebyshev acceleration technique for computing the occupied eigenspace. The proposed approach has been observed to provide a staggering 100–200-fold computational advantage over the solution of a generalized eigenvalue problem. Using the proposed solution procedure, we investigate the computational efficiency afforded by higher-order finite-element discretizations of the Kohn–Sham DFT problem. Our studies suggest that staggering computational savings, of the order of 1000-fold relative to linear finite-elements, can be realized for both all-electron and local pseudopotential calculations by using higher-order finite-element discretizations. On all the benchmark systems studied, we observe diminishing returns in computational savings beyond the sixth order for accuracies commensurate with chemical accuracy, suggesting that the hexic spectral-element may be an optimal choice for the finite-element discretization of the Kohn–Sham DFT problem. A comparative study of the computational efficiency of the proposed higher-order finite-element discretizations suggests that the performance of the finite-element basis is competitive with the plane-wave discretization for non-periodic local pseudopotential calculations, and comes within an order of magnitude of the Gaussian basis for all-electron calculations. 
    Further, we demonstrate the capability of the proposed approach to compute the electronic structure of a metallic system containing 1688 atoms using modest computational resources, and good scalability of the present implementation up to 192 processors.

  18. Nicholas Metropolis Award Talk for Outstanding Doctoral Thesis Work in Computational Physics: Computational biophysics and multiscale modeling of blood cells and blood flow in health and disease

    NASA Astrophysics Data System (ADS)

    Fedosov, Dmitry

    2011-03-01

    Computational biophysics is a large and rapidly growing area of computational physics. In this talk, we will focus on a number of biophysical problems related to blood cells and blood flow in health and disease. Blood flow plays a fundamental role in a wide range of physiological processes and pathologies in the organism. To understand and, if necessary, manipulate the course of these processes it is essential to investigate blood flow under realistic conditions including deformability of blood cells, their interactions, and behavior in the complex microvascular network. Using a multiscale cell model we are able to accurately capture red blood cell mechanics, rheology, and dynamics in agreement with a number of single cell experiments. Further, this validated model yields accurate predictions of the blood rheological properties, cell migration, cell-free layer, and hemodynamic resistance in microvessels. In addition, we investigate blood related changes in malaria, which include a considerable stiffening of red blood cells and their cytoadherence to endothelium. For these biophysical problems computational modeling is able to provide new physical insights and capabilities for quantitative predictions of blood flow in health and disease.

  19. An Optimization Code for Nonlinear Transient Problems of a Large Scale Multidisciplinary Mathematical Model

    NASA Astrophysics Data System (ADS)

    Takasaki, Koichi

    This paper presents a program for the multidisciplinary optimization and identification problem of the nonlinear model of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix similarly to the static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost, and is suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System) which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).

  20. Soreness during non-music activities is associated with playing-related musculoskeletal problems: an observational study of 731 child and adolescent instrumentalists.

    PubMed

    Ranelli, Sonia; Straker, Leon; Smith, Anne

    2014-06-01

    Is exposure to non-music-related activities associated with playing-related musculoskeletal problems in young instrumentalists? Is non-music-activity-related soreness associated with playing-related musculoskeletal problems in this group of instrumentalists? Observational study using a questionnaire and physical measures. 859 instrumentalists aged 7 to 17 years from the School of Instrumental Music program. Of the 731 respondents who completed the questionnaire adequately, 412 (56%) experienced instrument-playing problems; 219 (30%) had symptoms severe enough to interfere with normal playing. Children commonly reported moderate exposure to non-music-related activities, such as watching television (61%), vigorous physical activity (57%), writing (51%) and computer use (45%). Greater exposure to any non-music activity was not associated with playing problems, with odds ratios ranging from 1.01 (95% CI 0.7 to 1.5) for watching television to 2.08 (95% CI 0.5 to 3.3) for intensive hand activities. Four hundred and seventy eight (65%) children reported soreness related to non-music activities, such as vigorous physical activity (52%), writing (40%), computer use (28%), intensive hand activities (22%), electronic game use (17%) and watching television (15%). Non-music-activity-related soreness was significantly associated with instrument playing problems, adjusting for gender and age, with odds ratios ranging from 2.6 (95% CI 1.7 to 3.9) for soreness whilst watching television, to 4.3 (95% CI 2.6 to 7.1) for soreness during intensive hand activities. Non-music-activity-related soreness co-occurs significantly with playing problems in young instrumentalists. The finding of significant co-occurrence of music and non-music-related soreness in respondents in this study suggests that intervention targets for young instrumentalists could include risk factors previously identified in the general child and adolescent population, as well as music-specific risk factors. 
This is an important consideration for the assessment and management of the musculoskeletal health of young musicians. Copyright © 2014. Published by Elsevier B.V.

  1. One-dimensional Euclidean matching problem: exact solutions, correlation functions, and universality.

    PubMed

    Caracciolo, Sergio; Sicuro, Gabriele

    2014-10-01

    We discuss the equivalence relation between the Euclidean bipartite matching problem on the line and on the circumference and the Brownian bridge process on the same domains. The equivalence allows us to compute the correlation function and the optimal cost of the original combinatorial problem in the thermodynamic limit; moreover, we also solve the minimax problem on the line and on the circumference. The properties of the average cost and correlation functions are discussed.
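The structure that makes the one-dimensional problem tractable is a standard fact: for a convex cost |r - b|^p on the line, the optimal bipartite matching pairs the k-th smallest point of one set with the k-th smallest of the other. A minimal sketch with arbitrary sample points:

```python
def matching_cost(reds, blues, p=2):
    # On the line, with convex cost |r - b|**p, the optimal bipartite
    # matching pairs points in sorted order, so the cost is computable
    # in O(n log n) rather than by searching over all n! matchings.
    assert len(reds) == len(blues)
    return sum(abs(r - b) ** p for r, b in zip(sorted(reds), sorted(blues)))

cost = matching_cost([0.1, 0.9, 0.4], [0.2, 0.5, 0.8])
```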

  2. Advanced Computer Aids in the Planning and Execution of Air Warfare and Ground Strike Operations: Conference Proceedings, Meeting of the Avionics Panels of AGARD (51st) Held in Kongsberg, Norway on 12-16 May 1986

    DTIC Science & Technology

    1986-02-01

    the area of Artificial Intelligence (AI). DARPA’s Strategic Computing Program is developing an AI technology base upon which several applications...technologies with the Strategic Computing Program. In late 1983 the Strategic Computing Program (SCP) was announced. The program was organized to develop...solving a resource allocation problem. The remainder of this paper will discuss the TEMPLAR program as it relates to the Strategic Computing Program

  3. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    PubMed Central

    Guo, Hao; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional retail. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on these problems in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA in computing time, solution quality, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment. PMID:24489489
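The simulated-annealing component of such a hybrid can be sketched as follows. The one-dimensional toy cost, neighborhood move, and cooling schedule are illustrative stand-ins for the paper's location-inventory-routing objective:

```python
import math
import random

def anneal(f, x0, neighbor, t0=1.0, cooling=0.95, steps=200, seed=1):
    # Core simulated-annealing loop of the kind hybridized with a GA:
    # downhill moves are always taken, uphill moves are accepted with
    # probability exp(-delta / T), and T decays geometrically.
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x0, f(x0)
    for _ in range(steps):
        cand = neighbor(x, rng)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x

# Toy 1-D cost standing in for the location-inventory-routing objective.
best = anneal(lambda x: (x - 3.0) ** 2,
              x0=0.0,
              neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5))
```

Starting from x0 = 0, the walk drifts toward the minimum at x = 3; in the hybrid, this local refinement is interleaved with the GA's population-level crossover and selection.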

  4. Fundamental differences between optimization code test problems in engineering applications

    NASA Technical Reports Server (NTRS)

    Eason, E. D.

    1984-01-01

    The purpose here is to suggest that there is at least one fundamental difference between the problems used for testing optimization codes and the problems that engineers often need to solve; in particular, the level of precision that can be practically achieved in the numerical evaluation of the objective function, derivatives, and constraints. This difference affects the performance of optimization codes, as illustrated by two examples. Two classes of optimization problem were defined. Class One functions and constraints can be evaluated to a high precision that depends primarily on the word length of the computer. Class Two functions and/or constraints can only be evaluated to a moderate or a low level of precision for economic or modeling reasons, regardless of the computer word length. Optimization codes have not been adequately tested on Class Two problems. There are very few Class Two test problems in the literature, while there are literally hundreds of Class One test problems. The relative performance of two codes may be markedly different for Class One and Class Two problems. Less sophisticated direct search type codes may be less likely to be confused or to waste many function evaluations on Class Two problems. The analysis accuracy and minimization performance are related in a complex way that probably varies from code to code. On a problem where the analysis precision was varied over a range, the simple Hooke and Jeeves code was more efficient at low precision while the Powell code was more efficient at high precision.

  5. Portraits of PBL: Course Objectives and Students' Study Strategies in Computer Engineering, Psychology and Physiotherapy.

    ERIC Educational Resources Information Center

    Dahlgren, Madeleine Abrandt

    2000-01-01

    Compares the role of course objectives in relation to students' study strategies in problem-based learning (PBL). Results comprise data from three PBL programs at Linkopings University (Sweden), in physiotherapy, psychology, and computer engineering. Faculty provided course objectives to function as supportive structures and guides for students'…

  6. Teachers' Use of Computational Tools to Construct and Explore Dynamic Mathematical Models

    ERIC Educational Resources Information Center

    Santos-Trigo, Manuel; Reyes-Rodriguez, Aaron

    2011-01-01

    To what extent does the use of computational tools offer teachers the possibility of constructing dynamic models to identify and explore diverse mathematical relations? What ways of reasoning or thinking about the problems emerge during the model construction process that involves the use of the tools? These research questions guided the…

  7. How learning one category influences the learning of another: intercategory generalization based on analogy and specific stimulus information.

    PubMed

    Nahinsky, Irwin D; Lucas, Barbara A; Edgell, Stephen E; Overfelt, Joseph; Loeb, Richard

    2004-01-01

    We investigated the effect of learning one category structure on the learning of a related category structure. Photograph-name combinations, called identifiers, were associated with values of four demographic attributes. Two problems were related by analogous demographic attributes, common identifiers, or both to examine the impact of common identifier, related general characteristics, and the interaction of the two variables in mediating learning transfer from one category structure to another. Problems sharing the same identifier information prompted greater positive transfer than those not sharing the same identifier information. In contrast, analogous defining characteristics in the two problems did not facilitate transfer. We computed correlations between responses to first-problem stimuli and responses to analogous second-problem stimuli for each participant. The analogous characteristics produced a tendency to respond in the same way to corresponding stimuli in the two problems. The results support an alignment between category structures related by analogous defining characteristics, which is facilitated by specific identifier information shared by two category structures.

  8. A Comparison of Computation Span and Reading Span Working Memory Measures' Relations With Problem-Solving Criteria.

    PubMed

    Perlow, Richard; Jattuso, Mia

    2018-06-01

    Researchers have operationalized working memory in different ways and although working memory-performance relationships are well documented, there has been relatively less attention devoted to determining whether seemingly similar measures yield comparable relations with performance outcomes. Our objective is to assess whether two working memory measures deploying the same processes but different item content yield different relations with two problem-solving criteria. Participants completed a computation-based working memory measure and a reading-based measure prior to performing a computerized simulation. Results reveal differential relations with one of the two criteria and support the notion that the two working memory measures tap working memory capacity and other cognitive abilities. One implication for theory development is that researchers should consider incorporating other cognitive abilities in their working memory models and that the selection of those abilities should correspond to the criterion of interest. One practical implication is that researchers and practitioners should not automatically assume that different phonological loop-based working memory scales are interchangeable.

  9. An iterative truncation method for unbounded electromagnetic problems using varying order finite elements

    NASA Astrophysics Data System (ADS)

    Paul, Prakash

    2009-12-01

    The finite element method (FEM) is used to solve three-dimensional electromagnetic scattering and radiation problems. Finite element (FE) solutions of this kind contain two main types of error: discretization error and boundary error. Discretization error depends on the number of free parameters used to model the problem, and on how effectively these parameters are distributed throughout the problem space. To reduce the discretization error, the polynomial order of the finite elements is increased, either uniformly over the problem domain or selectively in those areas with the poorest solution quality. Boundary error arises from the condition applied to the boundary that is used to truncate the computational domain. To reduce the boundary error, an iterative absorbing boundary condition (IABC) is implemented. The IABC starts with an inexpensive boundary condition and gradually improves the quality of the boundary condition as the iteration continues. An automatic error control (AEC) is implemented to balance the two types of error. With the AEC, the boundary condition is improved when the discretization error has fallen to a low enough level to make this worth doing. The AEC has these characteristics: (i) it uses a very inexpensive truncation method initially; (ii) it allows the truncation boundary to be very close to the scatterer/radiator; (iii) it puts more computational effort on the parts of the problem domain where it is most needed; and (iv) it can provide as accurate a solution as needed depending on the computational price one is willing to pay. To further reduce the computational cost, disjoint scatterers and radiators that are relatively far from each other are bounded separately and solved using a multi-region method (MRM), which leads to savings in computational cost. A simple analytical way to decide whether the MRM or the single region method will be computationally cheaper is also described. 
To validate the accuracy and savings in computation time, different shaped metallic and dielectric obstacles (spheres, ogives, cube, flat plate, multi-layer slab etc.) are used for the scattering problems. For the radiation problems, waveguide excited antennas (horn antenna, waveguide with flange, microstrip patch antenna) are used. Using the AEC the peak reduction in computation time during the iteration is typically a factor of 2, compared to the IABC using the same element orders throughout. In some cases, it can be as high as a factor of 4.

  10. Fault tolerance in computational grids: perspectives, challenges, and issues.

    PubMed

    Haider, Sajjad; Nazir, Babar

    2016-01-01

    Computational grids are established with the intention of providing shared access to hardware- and software-based resources, with special reference to increased computational capabilities. Fault tolerance is one of the most important issues faced by computational grids. The main contribution of this survey is the creation of an extended classification of problems that occur in computational grid environments. The proposed classification will help researchers, developers, and maintainers of grids understand the types of issues to be anticipated. Moreover, different types of problems, such as omission, interaction, and timing-related problems, have been identified that need to be handled on various layers of the computational grid. In this survey, an analysis and examination is also performed pertaining to fault tolerance and fault detection mechanisms. Our conclusion is that a dependable and reliable grid can only be established when more emphasis is placed on fault identification. Moreover, our survey reveals that adaptive and intelligent fault identification and tolerance techniques can improve the dependability of grid working environments.

  11. Computational complexities and storage requirements of some Riccati equation solvers

    NASA Technical Reports Server (NTRS)

    Utku, Senol; Garba, John A.; Ramesh, A. V.

    1989-01-01

    The linear optimal control problem of an nth-order time-invariant dynamic system with a quadratic performance functional is usually solved by the Hamilton-Jacobi approach. This leads to the solution of the differential matrix Riccati equation with a terminal condition. The bulk of the computation for the optimal control problem is related to the solution of this equation. There are various algorithms in the literature for solving the matrix Riccati equation. However, computational complexities and storage requirements as a function of numbers of state variables, control variables, and sensors are not available for all these algorithms. In this work, the computational complexities and storage requirements for some of these algorithms are given. These expressions show the immensity of the computational requirements of the algorithms in solving the Riccati equation for large-order systems such as the control of highly flexible space structures. The expressions are also needed to compute the speedup and efficiency of any implementation of these algorithms on concurrent machines.
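A scalar sketch conveys the computation involved: the differential Riccati equation is integrated backward in time from its terminal condition. The coefficients and the explicit-Euler scheme here are illustrative choices, not one of the surveyed algorithms:

```python
def riccati_backward(a, b, q, r, p_T, T, steps=10000):
    # Scalar differential Riccati equation -dP/dt = 2aP - (bP)^2/r + Q,
    # integrated backward from the terminal condition P(T) = p_T
    # with an explicit Euler step (sketch only).
    dt = T / steps
    p = p_T
    for _ in range(steps):
        dp = 2 * a * p - (b * p) ** 2 / r + q
        p += dt * dp   # stepping backward: P(t - dt) = P(t) + dt * (-dP/dt)
    return p

# Over a long horizon the solution approaches the algebraic Riccati root,
# here -1 + sqrt(2) for a = -1, b = q = r = 1.
p0 = riccati_backward(a=-1.0, b=1.0, q=1.0, r=1.0, p_T=0.0, T=20.0)
```

For an n-state system, P is an n x n symmetric matrix, so each step costs O(n^3) matrix products; this is the per-step burden whose growth the paper's complexity expressions quantify.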

  12. The effectiveness of interactive computer simulations on college engineering student conceptual understanding and problem-solving ability related to circular motion

    NASA Astrophysics Data System (ADS)

    Chien, Cheng-Chih

    Over the past thirty years, individual studies have reported varying findings on the effectiveness of computer-assisted learning. Today, with drastic technical improvement, computers have become widespread in schools and are used in a variety of ways. In this study, a design model involving educational technology, pedagogy, and content domain is proposed for effective use of computers in learning. Computer simulation, constructivist and Vygotskian perspectives, and circular motion are the three elements of the specific Chain Model for instructional design. The goal of the physics course is to help students remove ideas that are not consistent with those of the physics community and rebuild new knowledge. To achieve the learning goal, the strategies of using conceptual conflicts and using language to internalize specific tasks into mental functions were included. Computer simulations and accompanying worksheets were used to help students explore their own ideas and to generate questions for discussion. Using animated images to describe the dynamic processes involved in circular motion may reduce the complexity and possible miscommunication resulting from verbal explanations. The effectiveness of the instructional material on student learning is evaluated. The results of the problem-solving activities show that students using computer simulations had significantly higher scores than students not using computer simulations. For conceptual understanding, students in the non-simulation group had a significantly higher score on the pretest; no significant difference was observed between the two groups on the posttest. The relations of gender, prior physics experience, and frequency of computer use outside the course to student achievement were also studied. There were fewer female students than male students, and fewer students using computer simulations than students not using them. 
    These characteristics affect the statistical power for detecting differences. For future research, more simulation interventions could be introduced to explore the potential of computer simulation in helping students learn. A test of conceptual understanding with more problems and an appropriate difficulty level may also be needed.

  13. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to solve problems involving large, flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with Gradient Tabu Search and other algorithms for global optimization, such as Gravitational Search, Cuckoo Search, and Backtracking Search. Moreover, the GGS approach has also been applied to computational chemistry problems, namely finding the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models with efficient computational cost. © 2015 Wiley Periodicals, Inc.
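The gradient-based local refinement that distinguishes GGS from plain gravitational search can be sketched in one dimension. The test function, step size, and stopping rule below are illustrative assumptions, not the paper's protein-model objective:

```python
def gradient_descent(grad, x0, lr=0.01, tol=1e-8, max_iter=10000):
    # Local refinement of the kind GGS interleaves with the
    # population-based gravitational search (1-D sketch): follow the
    # analytical gradient until it vanishes at the nearest minimum.
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            break
        x -= lr * g
    return x

# Analytical gradient of f(x) = (x - 1)**2 * (x + 2)**2, which has
# two local minima, at x = 1 and x = -2.
grad = lambda x: 2 * (x - 1) * (x + 2) ** 2 + 2 * (x + 2) * (x - 1) ** 2

x_local = gradient_descent(grad, x0=2.0)   # descends to the nearer minimum
```

Starting from x0 = 2.0 the descent lands on the basin at x = 1; in GGS the population-level search supplies diverse starting points so that such fast local minimizations cover multiple basins.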

  14. Virtual Reality: An Experiential Tool for Clinical Psychology

    ERIC Educational Resources Information Center

    Riva, Giuseppe

    2009-01-01

    Several Virtual Reality (VR) applications for the understanding, assessment and treatment of mental health problems have been developed in the last 15 years. Typically, in VR the patient learns to manipulate problematic situations related to his/her problem. In fact, VR can be described as an advanced form of human-computer interface that is able…

  15. Artificial Intelligence: Themes in the Second Decade. Memo Number 67.

    ERIC Educational Resources Information Center

    Feigenbaum, Edward A.

    The text of an invited address on artificial intelligence (AI) research over the 1963-1968 period is presented. A survey of recent studies on the computer simulation of intellective processes emphasizes developments in heuristic programming, problem-solving and closely related learning models. Progress and problems in these areas are indicated by…

  16. Using Computer-Generated Random Numbers to Calculate the Lifetime of a Comet.

    ERIC Educational Resources Information Center

    Danesh, Iraj

    1991-01-01

    An educational technique to calculate the lifetime of a comet using software-generated random numbers is introduced to undergraduate physics and astronomy students. Discussed are the generation and eligibility of the required random numbers, background literature related to the problem, and the solution to the problem using random numbers.…
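The kind of exercise described can be sketched as a short Monte Carlo simulation: treat each perihelion passage as a random trial and average the survival counts. The per-passage destruction probability below is a hypothetical stand-in, not a value from the article:

```python
import random

def simulate_comet_lifetime(p_destroy, rng):
    """Count perihelion passages until the comet is destroyed (a geometric trial)."""
    passages = 0
    while True:
        passages += 1
        if rng.random() < p_destroy:
            return passages

rng = random.Random(42)
p = 0.1  # hypothetical per-passage destruction probability
lifetimes = [simulate_comet_lifetime(p, rng) for _ in range(20000)]
mean_passages = sum(lifetimes) / len(lifetimes)
# The theoretical mean of a geometric distribution is 1/p = 10 passages,
# so the simulated mean should land close to 10.
```

Students can then compare the simulated mean against the closed-form expectation, which is the pedagogical point of such random-number exercises.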

  17. The Effective-One-Body Approach to the General Relativistic Two Body Problem

    NASA Astrophysics Data System (ADS)

    Damour, Thibault; Nagar, Alessandro

    The two-body problem in General Relativity has been the subject of many analytical investigations. After reviewing some of the methods used to tackle this problem (and, more generally, the N-body problem), we focus on a new, recently introduced approach to the motion and radiation of (comparable mass) binary systems: the Effective One Body (EOB) formalism. We review the basic elements of this formalism, and discuss some of its recent developments. Several recent comparisons between EOB predictions and Numerical Relativity (NR) simulations have shown the aptitude of the EOB formalism to provide accurate descriptions of the dynamics and radiation of various binary systems (comprising black holes or neutron stars) in regimes that are inaccessible to other analytical approaches (such as the last orbits and the merger of comparable mass black holes). In synergy with NR simulations, post-Newtonian (PN) theory and Gravitational Self-Force (GSF) computations, the EOB formalism is likely to provide an efficient way of computing the very many accurate template waveforms that are needed for Gravitational Wave (GW) data analysis purposes.

  18. The Quantum Measurement Problem and Physical reality: A Computation Theoretic Perspective

    NASA Astrophysics Data System (ADS)

    Srikanth, R.

    2006-11-01

    Is the universe computable? If yes, is it computationally a polynomial place? In standard quantum mechanics, which permits infinite parallelism and the infinitely precise specification of states, a negative answer to both questions is not ruled out. On the other hand, empirical evidence suggests that NP-complete problems are intractable in the physical world. Likewise, computational problems known to be algorithmically uncomputable do not seem to be computable by any physical means. We suggest that this close correspondence between the efficiency and power of abstract algorithms on the one hand, and physical computers on the other, finds a natural explanation if the universe is assumed to be algorithmic; that is, that physical reality is the product of discrete sub-physical information processing equivalent to the actions of a probabilistic Turing machine. This assumption can be reconciled with the observed exponentiality of quantum systems at microscopic scales, and the consequent possibility of implementing Shor's quantum polynomial time algorithm at that scale, provided the degree of superposition is intrinsically, finitely upper-bounded. If this bound is associated with the quantum-classical divide (the Heisenberg cut), a natural resolution to the quantum measurement problem arises. From this viewpoint, macroscopic classicality is evidence that the universe is in BPP, and both questions raised above receive affirmative answers. A recently proposed computational model of quantum measurement, which relates the Heisenberg cut to the discreteness of Hilbert space, is briefly discussed. A connection to quantum gravity is noted. Our results are compatible with the philosophy that mathematical truths are independent of the laws of physics.

  19. Linear Programming and Its Application to Pattern Recognition Problems

    NASA Technical Reports Server (NTRS)

    Omalley, M. J.

    1973-01-01

    Linear programming and linear-programming-like techniques as applied to pattern recognition problems are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of the author's comments is presented with regard to the applicability or non-applicability of his methods to particular problems, including computational results wherever given.

  20. Reliability and concurrent validity of the computer workstation checklist.

    PubMed

    Baker, Nancy A; Livengood, Heather; Jacobs, Karen

    2013-01-01

    Self-report checklists are used to assess computer workstation setup, typically by workers not trained in ergonomic assessment or checklist interpretation. Though many checklists exist, few have been evaluated for reliability and validity. This study examined the reliability and validity of the Computer Workstation Checklist (CWC) in identifying mismatches between workers' self-reported workstation problems and those identified by an expert evaluation. The CWC was completed at baseline and at 1 month to establish reliability. Validity was determined by comparing CWC baseline data to an onsite workstation evaluation conducted by an expert in computer workstation assessment. Reliability ranged from fair to near perfect (prevalence-adjusted bias-adjusted kappa, 0.38-0.93); items with the strongest agreement were related to the input device, monitor, computer table, and document holder. The CWC had greater specificity (11 of 16 items) than sensitivity (3 of 16 items). The positive predictive value was greater than the negative predictive value for all questions. The CWC has strong reliability. Sensitivity and specificity suggested workers often indicated no problems with workstation setup when problems existed. The evidence suggests that while the CWC may not be valid when used alone, it may be a suitable adjunct to an ergonomic assessment completed by professionals.
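The prevalence-adjusted bias-adjusted kappa reported above has a simple closed form, PABAK = 2·p_o − 1, where p_o is the observed proportion of agreement. A minimal sketch for binary checklist items (the example ratings are invented):

```python
def pabak(ratings_a, ratings_b):
    """Prevalence-adjusted bias-adjusted kappa for two sets of binary ratings:
    PABAK = 2 * p_o - 1, with p_o the observed proportion of agreement."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    agree = sum(a == b for a, b in zip(ratings_a, ratings_b))
    p_o = agree / len(ratings_a)
    return 2.0 * p_o - 1.0

# 9 of 10 item ratings agree between the two administrations:
k = pabak([1, 1, 0, 0, 1, 0, 1, 1, 0, 1],
          [1, 1, 0, 0, 1, 0, 1, 1, 0, 0])
# k = 2 * 0.9 - 1 = 0.8, in the "near perfect" region of the 0.38-0.93
# range reported in the study.
```

Unlike Cohen's kappa, PABAK does not depend on the marginal prevalence of each response, which is why it is favored for checklist items where most responses fall in one category.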

  1. Evolutionary inference via the Poisson Indel Process.

    PubMed

    Bouchard-Côté, Alexandre; Jordan, Michael I

    2013-01-22

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.

  2. Evolutionary inference via the Poisson Indel Process

    PubMed Central

    Bouchard-Côté, Alexandre; Jordan, Michael I.

    2013-01-01

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114–124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments. PMID:23275296

  3. An approach for heterogeneous and loosely coupled geospatial data distributed computing

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui

    2010-07-01

    Most GIS (Geographic Information System) applications tend to have heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic under a distributed computing environment. In order to make use of these local resources together to solve larger geospatial information processing problems that are related to an overall situation, in this paper, with the support of peer-to-peer computing technologies, we propose a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a term named as Equivalent Distributed Program of global geospatial queries to solve geospatial distributed computing problems under heterogeneous GIS environments. First, a geospatial query process schema for distributed computing as well as a method for equivalent transformation from a global geospatial query to distributed local queries at SQL (Structured Query Language) level to solve the coordinating problem among heterogeneous resources are presented. Second, peer-to-peer technologies are used to maintain a loosely coupled network environment that consists of autonomous geospatial information resources, thus to achieve decentralized and consistent synchronization among global geospatial resource directories, and to carry out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries are presented to illustrate the procedure of global geospatial information processing.
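The "equivalent transformation from a global geospatial query to distributed local queries at SQL level" can be sketched with in-memory SQLite databases standing in for the autonomous resources. The schema, data, and aggregate are hypothetical; a real deployment would involve spatial predicates and the peer-to-peer resource directories described above:

```python
import sqlite3

# Two autonomous "local" resources, each holding part of a hypothetical
# global 'features' table (id, name, area).
local_rows = {
    "node_a": [(1, "lake", 40.0), (2, "park", 12.5)],
    "node_b": [(3, "lake", 7.0), (4, "forest", 90.0)],
}

def open_local(rows):
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE features (id INTEGER, name TEXT, area REAL)")
    con.executemany("INSERT INTO features VALUES (?, ?, ?)", rows)
    return con

nodes = {name: open_local(rows) for name, rows in local_rows.items()}

# Global query: total area of all 'lake' features. Its "equivalent
# distributed program" runs the same aggregable SELECT on each node and
# combines the partial results at the coordinator.
local_sql = "SELECT COALESCE(SUM(area), 0) FROM features WHERE name = 'lake'"
partials = [con.execute(local_sql).fetchone()[0] for con in nodes.values()]
total_lake_area = sum(partials)  # 40.0 + 7.0 = 47.0
```

The decomposition is only this simple for algebraically decomposable aggregates (SUM, COUNT, MIN, MAX); medians or joins across nodes need the fuller query-process schema the paper describes.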

  4. Application of a distributed network in computational fluid dynamic simulations

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.; Deshpande, Ashish

    1994-01-01

    A general-purpose 3-D, incompressible Navier-Stokes algorithm is implemented on a network of concurrently operating workstations using parallel virtual machine (PVM) and compared with its performance on a CRAY Y-MP and on an Intel iPSC/860. The problem is relatively computationally intensive, and has a communication structure based primarily on nearest-neighbor communication, making it ideally suited to message passing. Such problems are frequently encountered in computational fluid dynamics (CFD), and their solution is increasingly in demand. The communication structure is explicitly coded in the implementation to fully exploit the regularity in message passing in order to produce a near-optimal solution. Results are presented for various grid sizes using up to eight processors.
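The nearest-neighbor communication structure can be illustrated with a 1-D Jacobi relaxation partitioned across two notional workers: at each iteration a worker needs only its neighbors' edge values, the "halo" that PVM messages would carry. Everything below runs sequentially in one process and is a toy assumption, not the paper's 3-D Navier-Stokes solver:

```python
# 1-D Laplace solve split across two "workers"; the halo snapshot stands
# in for the nearest-neighbor message exchange described above.
N = 9                      # global grid points, u(0)=0 and u(N-1)=1 fixed
u = [0.0] * N
u[-1] = 1.0
chunks = [(1, 4), (4, 8)]  # two "workers", interior point ranges [lo, hi)

for _ in range(2000):
    halo = list(u)         # exchanged boundary copies (the "messages")
    for lo, hi in chunks:
        for i in range(lo, hi):
            u[i] = 0.5 * (halo[i - 1] + halo[i + 1])

# Jacobi iteration with fixed ends converges to the linear profile
# u[i] = i / (N - 1), regardless of how the interior is partitioned.
```

Because each worker touches only `halo[lo - 1]` and `halo[hi]` from outside its own chunk, the communication volume per step is constant per worker, which is what makes such problems "ideally suited to message passing."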

  5. Computer Based Collaborative Problem Solving for Introductory Courses in Physics

    NASA Astrophysics Data System (ADS)

    Ilie, Carolina; Lee, Kevin

    2010-03-01

    We discuss computer-based collaborative problem solving in a recitation style. The course was designed by Lee [1], and the idea was proposed earlier by Christian, Belloni, and Titus [2,3]. The students find the problems on a web page containing simulations (physlets) and write the solutions on an accompanying worksheet after discussing them with a classmate. Physlets have the advantage of being much more like real-world problems than textbook problems. We also compare two protocols for web-based instruction using simulations in an introductory physics class [1]. The inquiry protocol allowed students to control input parameters while the worked-example protocol did not. We will discuss which of the two methods is more efficient in relation to Scientific Discovery Learning and Cognitive Load Theory. 1. Lee, Kevin M., Nicoll, Gayle and Brooks, Dave W. (2004). "A Comparison of Inquiry and Worked Example Web-Based Instruction Using Physlets", Journal of Science Education and Technology 13, No. 1: 81-88. 2. Christian, W., and Belloni, M. (2001). Physlets: Teaching Physics With Interactive Curricular Material, Prentice Hall, Englewood Cliffs, NJ. 3. Christian, W., and Titus, A. (1998). "Developing web-based curricula using Java Physlets." Computers in Physics 12: 227-232.

  6. A mediational model of racial discrimination and alcohol-related problems among african american college students.

    PubMed

    Boynton, Marcella H; O'Hara, Ross E; Covault, Jonathan; Scott, Denise; Tennen, Howard

    2014-03-01

    Racial discrimination has been identified as an important predictor of alcohol-related outcomes for African Americans. The goal of the current study was to extend previously found links between lifetime discrimination, alcohol use, and alcohol problems as well as to elucidate the affective mechanisms underlying these associations, as moderated by gender. A multiple-groups structural equation model was computed using survey data collected from 619 students from a historically Black college/university. The final model provided excellent fit to the data, explaining 6% of the variance in alcohol consumption and 37% of the variance in alcohol problems. Discrimination was a significant predictor of alcohol-related problems but not, by and large, level of use. For men, anger (but not discrimination-specific anger) was a significant partial mediator of the link between discrimination and both alcohol use and alcohol problems. Depression partially mediated the link between discrimination and alcohol problems for both men and women. The results suggest that, for African Americans whose drinking leads to drinking-related problems, discrimination and poor affective self-regulation are highly relevant and predictive factors, especially for men.

  7. Issues about home computer workstations and primary school children in Hong Kong: a pilot study.

    PubMed

    Py Szeto, Grace; Tsui, Macy Mei Sze; Sze, Winky Wing Yu; Chan, Irene Sin Ting; Chung, Cyrus Chak Fai; Lee, Felix Wai Kit

    2014-01-01

    All around the world, there is a rising trend of computer use among young children especially at home; yet the computer furniture is usually not designed specifically for children's use. In Hong Kong, this creates an even greater problem as most people live in very small apartments in high-rise buildings. Most of the past research literature is focused on computer use in children in the school environment and not about the home setting. The present pilot study aimed to examine ergonomic issues in children's use of computers at home in Hong Kong, which has some unique home environmental issues. Fifteen children (six male, nine female) aged 8-11 years and their parents were recruited by convenience sampling. Participants were asked to provide information on their computer use habits and related musculoskeletal symptoms. Participants were photographed when sitting at the computer workstation in their usual postures and joint angles were measured. The participants used computers frequently for less than two hours daily and the majority shared their workstations with other family members. Computer furniture was designed more for adult use and a mismatch of furniture and body size was found. Ergonomic issues included inappropriate positioning of the display screen, keyboard, and mouse, as well as lack of forearm support and suitable backrest. These led to awkward or constrained postures while some postural problems may be habitual. Three participants reported neck and shoulder discomfort in the past 12 months and 4 reported computer-related discomfort. Inappropriate computer workstation settings may have adverse effects on children's postures. More research on workstation setup at home, where children may use their computers the most, is needed.

  8. Autonomic Cluster Management System (ACMS): A Demonstration of Autonomic Principles at Work

    NASA Technical Reports Server (NTRS)

    Baldassari, James D.; Kopec, Christopher L.; Leshay, Eric S.; Truszkowski, Walt; Finkel, David

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of achieving significant computational capabilities for high-performance computing applications, while simultaneously affording the ability to increase that capability simply by adding more (inexpensive) processors. However, the task of manually managing and configuring a cluster quickly becomes impossible as the cluster grows in size. Autonomic computing is a relatively new approach to managing complex systems that can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management.

  9. An Annotated Bibliography of Current Literature Dealing with the Effective Teaching of Computer Programming in High Schools.

    ERIC Educational Resources Information Center

    Taylor, Karen A.

    This review of the literature and annotated bibliography summarizes the available research relating to teaching programming to high school students. It is noted that, while the process of programming a computer could be broken down into five steps--problem definition, algorithm design, code writing, debugging, and documentation--current research…

  10. Computer Solution of the Two-Dimensional Tether Ball: Problem to Illustrate Newton's Second Law.

    ERIC Educational Resources Information Center

    Zimmerman, W. Bruce

    Force diagrams involving angular velocity, linear velocity, centripetal force, work, and kinetic energy are given with related equations of motion expressed in polar coordinates. The computer is used to solve differential equations, thus reducing the mathematical requirements of the students. An experiment is conducted using an air table to check…

  11. Neural-Network Computer Transforms Coordinates

    NASA Technical Reports Server (NTRS)

    Josin, Gary M.

    1990-01-01

    Numerical simulation demonstrated ability of conceptual neural-network computer to generalize what it has "learned" from few examples. Ability to generalize achieved with even simple neural network (relatively few neurons) and after exposure of network to only few "training" examples. Ability to obtain fairly accurate mappings after only few training examples used to provide solutions to otherwise intractable mapping problems.

  12. Svetovid--Interactive Development and Submission System with Prevention of Academic Collusion in Computer Programming

    ERIC Educational Resources Information Center

    Pribela, Ivan; Ivanovic, Mirjana; Budimac, Zoran

    2009-01-01

    This paper discusses Svetovid, cross-platform software that helps instructors to assess the amount of effort put into practical exercises and exams in courses related to computer programming. The software was developed as an attempt at solving problems associated with practical exercises and exams. This paper discusses the design and use of…

  13. Sequential Test Strategies for Multiple Fault Isolation

    NASA Technical Reports Server (NTRS)

    Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.

    1997-01-01

    In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
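The information-theoretic idea behind test sequencing can be sketched in its simplest, single-fault form: greedily run the test whose pass/fail outcome yields the largest expected drop in entropy over the remaining fault hypotheses. The diagnostic matrix below is invented, and the paper's multiple-fault and Lagrangian-relaxation machinery is deliberately not reproduced:

```python
import math

# Hypothetical diagnostic matrix: tests[t] is the set of fault hypotheses
# that cause test t to fail. Here: 8 equally likely faults, 3 "bit" tests.
faults = set(range(8))
tests = {f"t{b}": {f for f in faults if (f >> b) & 1} for b in range(3)}

def entropy(n):
    """Entropy of a uniform distribution over n hypotheses."""
    return math.log2(n) if n > 1 else 0.0

def isolate(observed_fault, candidates, available):
    """Greedy sequencing: at each step run the test whose pass/fail split
    of the remaining candidate set has the lowest expected entropy."""
    sequence = []
    while len(candidates) > 1 and available:
        def expected_entropy(t):
            fail = candidates & tests[t]
            ok = candidates - tests[t]
            n = len(candidates)
            return (len(fail) / n) * entropy(len(fail)) + \
                   (len(ok) / n) * entropy(len(ok))
        t = min(available, key=expected_entropy)
        available = available - {t}
        sequence.append(t)
        outcome_fail = observed_fault in tests[t]  # simulated test outcome
        candidates = candidates & tests[t] if outcome_fail else candidates - tests[t]
    return candidates, sequence

remaining, used = isolate(5, set(faults), set(tests))
# Each bit test bisects the candidate set, so fault 5 is isolated after
# exactly 3 tests (log2 of 8 hypotheses).
```

The multiple-fault problem the paper addresses is far harder because the hypothesis space is the power set of failure sources, which is what motivates its suboptimality/complexity trade-offs and digraph strategies.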

  14. Automated problem scheduling and reduction of synchronization delay effects

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.

    1987-01-01

    It is anticipated that in order to make effective use of many future high performance architectures, programs will have to exhibit at least a medium grained parallelism. A framework is presented for partitioning very sparse triangular systems of linear equations that is designed to produce favorable performance results in a wide variety of parallel architectures. Efficient methods for solving these systems are of interest because: (1) they provide a useful model problem for use in exploring heuristics for the aggregation, mapping and scheduling of relatively fine grained computations whose data dependencies are specified by directed acyclic graphs, and (2) because such efficient methods can find direct application in the development of parallel algorithms for scientific computation. Simple expressions are derived that describe how to schedule computational work with varying degrees of granularity. The Encore Multimax was used as a hardware simulator to investigate the performance effects of using the partitioning techniques presented in shared memory architectures with varying relative synchronization costs.

  15. Basic metabolic panel

    MedlinePlus

    SMAC7; Sequential multi-channel analysis with computer-7; SMA7; Metabolic panel 7; CHEM-7 ... breathing problems, diabetes or diabetes-related complications, and medicine side effects. Talk to your provider about the ...

  16. Singular perturbation techniques for real time aircraft trajectory optimization and control

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Moerder, D. D.

    1982-01-01

    The usefulness of singular perturbation methods for developing real time computer algorithms to control and optimize aircraft flight trajectories is examined. A minimum time intercept problem using F-8 aerodynamic and propulsion data is used as a baseline. This provides a framework within which issues relating to problem formulation, solution methodology and real time implementation are examined. Theoretical questions relating to separability of dynamics are addressed. With respect to implementation, situations leading to numerical singularities are identified, and procedures for dealing with them are outlined. Also, particular attention is given to identifying quantities that can be precomputed and stored, thus greatly reducing the on-board computational load. Numerical results are given to illustrate the minimum time algorithm, and the resulting flight paths. An estimate is given for execution time and storage requirements.

  17. A novel quantum scheme for secure two-party distance computation

    NASA Astrophysics Data System (ADS)

    Peng, Zhen-wan; Shi, Run-hua; Zhong, Hong; Cui, Jie; Zhang, Shun

    2017-12-01

    Secure multiparty computational geometry is an essential field of secure multiparty computation, which solves computational geometry problems without revealing any private information of each party. Secure two-party distance computation is a primitive of secure multiparty computational geometry, which computes the distance between two points without revealing each point's location information (i.e., coordinates). Secure two-party distance computation has potential applications with high security requirements in military, business, engineering and so on. In this paper, we present a quantum solution to secure two-party distance computation by subtly using quantum private query. Compared to the related classical protocols, our quantum protocol can ensure higher security and better privacy protection because of the physical principles of quantum mechanics.

  18. Ontology-based vector space model and fuzzy query expansion to retrieve knowledge on medical computational problem solutions.

    PubMed

    Bratsas, Charalampos; Koutkias, Vassilis; Kaimakamis, Evangelos; Bamidis, Panagiotis; Maglaveras, Nicos

    2007-01-01

    Medical Computational Problem (MCP) solving is related to medical problems and their computerized algorithmic solutions. In this paper, an extension of an ontology-based model to fuzzy logic is presented, as a means to enhance the information retrieval (IR) procedure in semantic management of MCPs. We present herein the methodology followed for the fuzzy expansion of the ontology model, the fuzzy query expansion procedure, as well as an appropriate ontology-based Vector Space Model (VSM) that was constructed for efficient mapping of user-defined MCP search criteria and MCP acquired knowledge. The relevant fuzzy thesaurus is constructed by calculating the simultaneous occurrences of terms and the term-to-term similarities derived from the ontology that utilizes UMLS (Unified Medical Language System) concepts by using Concept Unique Identifiers (CUI), synonyms, semantic types, and broader-narrower relationships for fuzzy query expansion. The current approach constitutes a sophisticated advance for effective, semantics-based MCP-related IR.
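A toy version of an ontology-based VSM with fuzzy query expansion: the query vector is expanded through a term-to-term similarity matrix (the "fuzzy thesaurus") before cosine ranking, so documents using related terms still match. The terms and similarity values below are invented for illustration and are not drawn from UMLS:

```python
import math

vocab = ["heart", "cardiac", "lung"]
sim = {  # symmetric term-term similarity; the diagonal is implicitly 1
    ("heart", "cardiac"): 0.8,
}

def similarity(a, b):
    if a == b:
        return 1.0
    return sim.get((a, b), sim.get((b, a), 0.0))

def expand(query_vec):
    """Fuzzy max-product expansion of the query through the thesaurus."""
    return {t: max(query_vec.get(u, 0.0) * similarity(t, u) for u in vocab)
            for t in vocab}

def cosine(u, v):
    dot = sum(u.get(t, 0.0) * v.get(t, 0.0) for t in vocab)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = {
    "doc_cardiac": {"cardiac": 1.0},  # uses the related term, not the query term
    "doc_lung": {"lung": 1.0},
}
query = {"heart": 1.0}
plain = {d: cosine(query, v) for d, v in docs.items()}          # doc_cardiac scores 0
expanded = {d: cosine(expand(query), v) for d, v in docs.items()}  # doc_cardiac now matches
```

Without expansion the cardiac document is invisible to a "heart" query; after expansion it ranks above the unrelated document, which is the retrieval gain the paper's fuzzy thesaurus aims at.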

  19. Advanced Numerical Methods for Computing Statistical Quantities of Interest from Solutions of SPDES

    DTIC Science & Technology

    2012-01-19

    and related optimization problems; developing numerical methods for option pricing problems in the presence of random arbitrage return. 1. Novel...equations (BSDEs) are connected to nonlinear partial differential equations and non-linear semigroups, to the theory of hedging and pricing of contingent...the presence of random arbitrage return [3] We consider option pricing problems when we relax the condition of no arbitrage in the Black-Scholes

  20. Hyperactivity, Impulsivity, Inattention (HIA) and Conduct Problems among African American Youth: The Roles of Neighborhood and Gender

    ERIC Educational Resources Information Center

    Zalot, Alecia; Jones, Deborah J.; Kincaid, Carlye; Smith, Tasia

    2009-01-01

    This study replicated and extended prior research by examining neighborhood context as a moderator of the relation between the constellation of hyperactivity, impulsivity, and attention (HIA) difficulties and conduct problems among African American youth (11-16 years old; 55% girls) from single mother homes (N = 193). Using audio computer-assisted…

  1. Extending Strong Scaling of Quantum Monte Carlo to the Exascale

    NASA Astrophysics Data System (ADS)

    Shulenburger, Luke; Baczewski, Andrew; Luo, Ye; Romero, Nichols; Kent, Paul

    Quantum Monte Carlo is one of the most accurate and most computationally expensive methods for solving the electronic structure problem. In spite of its significant computational expense, its massively parallel nature is ideally suited to petascale computers, which have enabled a wide range of applications to relatively large molecular and extended systems. Exascale capabilities have the potential to enable the application of QMC to significantly larger systems, capturing much of the complexity of real materials such as defects and impurities. However, both memory and computational demands will require significant changes to current algorithms to realize this possibility. This talk will detail both the causes of the problem and potential solutions. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corp., a wholly owned subsidiary of Lockheed Martin Corp., for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  2. Extended Krylov subspaces approximations of matrix functions. Application to computational electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Druskin, V.; Lee, Ping; Knizhnerman, L.

    There is now a growing interest in the area of using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems obtained after discretization of partial differential equations by the method of lines. In the event that the cost of computing the matrix inverse is relatively low, it is sometimes attractive to solve the ODE using extended Krylov subspaces, generated by actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.

  3. Evaluation of Enthalpy Diagrams for NH3-H2O Absorption Refrigerator

    NASA Astrophysics Data System (ADS)

    Takei, Toshitaka; Saito, Kiyoshi; Kawai, Sunao

    The protection of the environment is becoming a grave problem nowadays, and the absorption refrigerator, which does not use Freon as a refrigerant, is attracting close attention. Among absorption refrigerators, a number of ammonia-water absorption refrigerators are being used in areas such as refrigeration and ice accumulation, since this type of refrigerator can produce below-zero-degree products. It is essential to investigate the characteristics of the ammonia-water absorption refrigerator in detail by means of computer simulation in order to realize low-cost, highly efficient operation. Unfortunately, there have been a number of problems in conducting such computer simulations. First, Merkel's enthalpy diagram does not give the relational equations. Second, although relational equations have been proposed by Ziegler, simpler equations that can be applied to computer simulation are yet to be proposed. In this research, simpler equations based on Ziegler's equations have been derived to make computer simulation of the performance of the ammonia-water absorption refrigerator possible. Results of computer simulations using the simple equations and Merkel's enthalpy diagram, respectively, have been compared with actual experimental data from a single-stage ammonia-water absorption refrigerator. Consequently, it is clarified that the results from Ziegler's equations agree with the experimental data better than those from Merkel's enthalpy diagram.

  4. Specialized computer system to diagnose critical lined equipment

    NASA Astrophysics Data System (ADS)

    Yemelyanov, V. A.; Yemelyanova, N. Y.; Morozova, O. A.; Nedelkin, A. A.

    2018-05-01

    The paper presents data on the problem of diagnosing the lining condition at iron and steel works. The authors propose and describe the structure of a specialized computer system to diagnose critical lined equipment. Comparative results of diagnosing the lining condition with the basic system and with the proposed specialized computer system are presented. To automate the evaluation of lining condition and to support decision making regarding the operation mode of the lined equipment, specialized software has been developed.

  5. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% of those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
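The batch execution described above works because Monte Carlo realizations are mutually independent, making the workload embarrassingly parallel. A minimal sketch of the pattern (in Python with the standard library rather than the Java Parallel Processing Framework used in the paper; the per-realization "model" here is a stand-in for a MODFLOW run, not the authors' code):

```python
from concurrent.futures import ProcessPoolExecutor
import random

def run_realization(seed):
    # Stand-in for one stochastic groundwater model run: draw a random
    # lognormal conductivity field and return a summary statistic.
    rng = random.Random(seed)
    field = [rng.lognormvariate(0.0, 1.0) for _ in range(100)]
    return sum(field) / len(field)

def run_batch(n_realizations, workers=4):
    # One seed per realization makes runs independent and reproducible, so
    # they can be farmed out to processes (or, as in the paper, cluster nodes).
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_realization, range(n_realizations)))

if __name__ == "__main__":
    print(len(run_batch(500)))
```

Because no state is shared between realizations, speedup is limited mainly by scheduling overhead and the slowest worker, which is why near-linear scaling (serial time cut to a few percent on tens of threads) is attainable.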

  6. Adults' age-related differences in strategy perseveration are modulated by response-stimulus intervals and problem features.

    PubMed

    Lemaire, Patrick; Brun, Fleur

    2014-10-01

    Ageing results in the tendency of older adults to repeat the same strategy across consecutive problems more often than young adults, even when such strategy perseveration is not appropriate. Here, we examined how these age-related differences in strategy perseveration are modulated by response-stimulus intervals and problem characteristics. We asked participants to select the best strategy while accomplishing a computational estimation task (i.e., providing approximate sums to two-digit addition problems like 38 + 74). We found that participants repeated the same strategy across consecutive problems more often when the duration between their response and the next problem display was short (300 ms) than when it was long (1300 ms). We also found more strategy perseverations in older than in young adults under short response-stimulus intervals, but not under long response-stimulus intervals. Finally, age-related differences in strategy perseveration decreased when problem features helped participants to select the best strategy. These modulations of age-related differences in strategy perseveration by response-stimulus intervals and characteristics of target problems are important for furthering our understanding of mechanisms underlying strategy perseveration and, more generally, ageing effects on strategy selection.
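The computational estimation task can be made concrete. A minimal sketch (the strategy names and selection rule below are illustrative, not the authors' exact protocol) of two rounding strategies for two-digit addition and of picking whichever is "best", i.e., closest to the exact sum:

```python
def round_down_estimate(a, b):
    # Round both operands down to the nearest ten: 38 + 74 -> 30 + 70 = 100.
    return (a // 10) * 10 + (b // 10) * 10

def round_up_estimate(a, b):
    # Round both operands up to the nearest ten: 38 + 74 -> 40 + 80 = 120.
    return -((-a) // 10) * 10 + -((-b) // 10) * 10

def best_strategy(a, b):
    # The "best" strategy yields the estimate closest to the exact sum; the
    # unit digits of the operands (a problem feature) determine which wins.
    exact = a + b
    down, up = round_down_estimate(a, b), round_up_estimate(a, b)
    return "round-down" if abs(down - exact) <= abs(up - exact) else "round-up"

print(best_strategy(38, 74))  # large unit digits (8 and 4) favor rounding up
```

This illustrates why problem features can cue strategy selection: operands with large unit digits favor rounding up, operands with small unit digits favor rounding down.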

  7. A family of position- and orientation-independent embedded boundary methods for viscous flow and fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Huang, Daniel Z.; De Santis, Dante; Farhat, Charbel

    2018-07-01

    The Finite Volume method with Exact two-material Riemann Problems (FIVER) is both a computational framework for multi-material flows characterized by large density jumps, and an Embedded Boundary Method (EBM) for computational fluid dynamics and highly nonlinear Fluid-Structure Interaction (FSI) problems. This paper deals with the EBM aspect of FIVER. For FSI problems, this EBM has already demonstrated the ability to address viscous effects along wall boundaries, and large deformations and topological changes of such boundaries. However, as for most EBMs (also known as immersed boundary methods), the performance of FIVER in the vicinity of a wall boundary can be sensitive to the position and orientation of this boundary relative to the embedding mesh. This is mainly due to ill-conditioning issues that arise when an embedded interface becomes too close to a node of the embedding mesh, which may lead to spurious oscillations in the computed solution gradients at the wall boundary. This paper resolves these issues by introducing an alternative definition of the active/inactive status of a mesh node that leads to the removal of all sources of potential ill-conditioning from all spatial approximations performed by FIVER in the vicinity of a fluid-structure interface. It also makes two additional contributions. The first is a new procedure for constructing the fluid-structure half Riemann problem underlying the semi-discretization by FIVER of the convective fluxes. This procedure eliminates one extrapolation from the conventional treatment of the wall boundary conditions and replaces it by an interpolation, which improves robustness. The second contribution is a post-processing algorithm for computing quantities of interest at the wall that achieves smoothness in the computed solution and its gradients. Lessons learned from these enhancements and contributions that are triggered by the new definition of the status of a mesh node are then generalized and exploited to eliminate from the original version of the FIVER method its sensitivities with respect to both the position and orientation of the wall boundary relative to the embedding mesh, while maintaining the original definition of the status of a mesh node. This leads to a family of second-generation FIVER methods whose performance is illustrated in this paper for several flow and FSI problems. These include a challenging flow problem over a bird wing characterized by a feather-induced surface roughness, and a complex flexible flapping wing problem for which experimental data are available.

  8. Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.

    1981-01-01

    Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.

  9. News Focus: NSF Director Erich Bloch Discusses Foundation's Problems, Outlook.

    ERIC Educational Resources Information Center

    Chemical and Engineering News, 1987

    1987-01-01

    Relates the comments offered in an interview with Erich Bloch, the National Science Foundation (NSF) Director. Discusses issues related to NSF and its funding, engineering research centers, involvement with industry, concern for science education, computer centers, and its affiliation with the social sciences. (ML)

  10. What is biomedical informatics?

    PubMed Central

    Bernstam, Elmer V.; Smith, Jack W.; Johnson, Todd R.

    2009-01-01

    Biomedical informatics lacks a clear and theoretically grounded definition. Many proposed definitions focus on data, information, and knowledge, but do not provide an adequate definition of these terms. Leveraging insights from the philosophy of information, we define informatics as the science of information, where information is data plus meaning. Biomedical informatics is the science of information as applied to or studied in the context of biomedicine. Defining the object of study of informatics as data plus meaning clearly distinguishes the field from related fields, such as computer science, statistics and biomedicine, which have different objects of study. The emphasis on data plus meaning also suggests that biomedical informatics problems tend to be difficult when they deal with concepts that are hard to capture using formal, computational definitions. In other words, problems where meaning must be considered are more difficult than problems where manipulating data without regard for meaning is sufficient. Furthermore, the definition implies that informatics research, teaching, and service should focus on biomedical information as data plus meaning rather than only computer applications in biomedicine. PMID:19683067

  11. Applicability of mathematical modeling to problems of environmental physiology

    NASA Technical Reports Server (NTRS)

    White, Ronald J.; Lujan, Barbara F.; Leonard, Joel I.; Srinivasan, R. Srini

    1988-01-01

    The paper traces the evolution of mathematical modeling and systems analysis from terrestrial research to research related to space biomedicine and back again to terrestrial research. Topics covered include: power spectral analysis of physiological signals; pattern recognition models for detection of disease processes; and computer-aided diagnosis programs used in conjunction with a special on-line biomedical computer library.

  12. Knowledge-based geographic information systems on the Macintosh computer: a component of the GypsES project

    Treesearch

    Gregory Elmes; Thomas Millette; Charles B. Yuill

    1991-01-01

    GypsES, a decision-support and expert system for the management of the gypsy moth, addresses five related research problems in a modular, computer-based project. The modules are hazard rating, monitoring, prediction, treatment decision and treatment implementation. One common component is a geographic information system designed to function intelligently. We refer to this...

  13. Self-calibration of robot-sensor system

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1990-01-01

    The process of finding the coordinate transformation between a robot and an external sensor system has been addressed. This calibration is equivalent to solving a nonlinear optimization problem for the parameters that characterize the transformation. A two-step procedure is herein proposed for solving the problem. The first step involves finding a nominal solution that is a good approximation of the final solution. A variational problem is then generated to replace the original problem in the next step. With the assumption that the variational parameters are small compared to unity, the problem can then be solved more readily and with relatively little computational effort.

  14. Security and privacy qualities of medical devices: an analysis of FDA postmarket surveillance.

    PubMed

    Kramer, Daniel B; Baker, Matthew; Ransford, Benjamin; Molina-Markham, Andres; Stewart, Quinn; Fu, Kevin; Reynolds, Matthew R

    2012-01-01

    Medical devices increasingly depend on computing functions such as wireless communication and Internet connectivity for software-based control of therapies and network-based transmission of patients' stored medical information. These computing capabilities introduce security and privacy risks, yet little is known about the prevalence of such risks within the clinical setting. We used three comprehensive, publicly available databases maintained by the Food and Drug Administration (FDA) to evaluate recalls and adverse events related to security and privacy risks of medical devices. Review of weekly enforcement reports identified 1,845 recalls; 605 (32.8%) of these included computers, 35 (1.9%) stored patient data, and 31 (1.7%) were capable of wireless communication. Searches of databases specific to recalls and adverse events identified only one event with a specific connection to security or privacy. Software-related recalls were relatively common, and most (81.8%) mentioned the possibility of upgrades, though only half of these provided specific instructions for the update mechanism. Our review of recalls and adverse events from federal government databases reveals sharp inconsistencies with databases at individual providers with respect to security and privacy risks. Recalls related to software may increase security risks because of unprotected update and correction mechanisms. To detect signals of security and privacy problems that adversely affect public health, federal postmarket surveillance strategies should rethink how to effectively and efficiently collect data on security and privacy problems in devices that increasingly depend on computing systems susceptible to malware.

  15. Security and Privacy Qualities of Medical Devices: An Analysis of FDA Postmarket Surveillance

    PubMed Central

    Kramer, Daniel B.; Baker, Matthew; Ransford, Benjamin; Molina-Markham, Andres; Stewart, Quinn; Fu, Kevin; Reynolds, Matthew R.

    2012-01-01

    Background Medical devices increasingly depend on computing functions such as wireless communication and Internet connectivity for software-based control of therapies and network-based transmission of patients’ stored medical information. These computing capabilities introduce security and privacy risks, yet little is known about the prevalence of such risks within the clinical setting. Methods We used three comprehensive, publicly available databases maintained by the Food and Drug Administration (FDA) to evaluate recalls and adverse events related to security and privacy risks of medical devices. Results Review of weekly enforcement reports identified 1,845 recalls; 605 (32.8%) of these included computers, 35 (1.9%) stored patient data, and 31 (1.7%) were capable of wireless communication. Searches of databases specific to recalls and adverse events identified only one event with a specific connection to security or privacy. Software-related recalls were relatively common, and most (81.8%) mentioned the possibility of upgrades, though only half of these provided specific instructions for the update mechanism. Conclusions Our review of recalls and adverse events from federal government databases reveals sharp inconsistencies with databases at individual providers with respect to security and privacy risks. Recalls related to software may increase security risks because of unprotected update and correction mechanisms. To detect signals of security and privacy problems that adversely affect public health, federal postmarket surveillance strategies should rethink how to effectively and efficiently collect data on security and privacy problems in devices that increasingly depend on computing systems susceptible to malware. PMID:22829874

  16. Efficient Preconditioning for the p-Version Finite Element Method in Two Dimensions

    DTIC Science & Technology

    1989-10-01

    paper, we study fast parallel preconditioners for systems of equations arising from the p-version finite element method. The p-version finite element...computations and the solution of a relatively small global auxiliary problem. We study two different methods. In the first (Section 3), the global...20], will be studied in the next section. Problem (3.12) is obviously much more easily solved than the original problem and the procedure is highly

  17. A border-ownership model based on computational electromagnetism.

    PubMed

    Zainal, Zaem Arif; Satoh, Shunji

    2018-03-01

    The mathematical relation between a vector electric field and its corresponding scalar potential field is useful to formulate computational problems of lower/middle-order visual processing, specifically related to the assignment of borders to the side of the object: so-called border ownership (BO). BO coding is a key process for extracting objects from the background, allowing one to organize a cluttered scene. We propose that the problem is solvable simultaneously by application of a theorem of electromagnetism, i.e., that conservative vector fields have zero rotation, or "curl". We hypothesize that (i) the BO signal is definable as a vector electric field with arrowheads pointing to the inner side of perceived objects, and (ii) its corresponding scalar field carries information related to the perceived order in depth of occluding/occluded objects. A simple model was developed based on this computational theory. Model results qualitatively agree with the object-side selectivity of BO-coding neurons, and with perceptions of object order. The model update rule can be reproduced as a plausible neural network that presents new interpretations of existing physiological results. Results of this study also suggest that T-junction detectors are unnecessary to calculate depth order. Copyright © 2017 Elsevier Ltd. All rights reserved.
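The theorem the model rests on, that conservative vector fields have zero curl, is easy to verify numerically. A minimal NumPy sketch (the grid, the Gaussian potential, and the code are illustrative stand-ins, unrelated to the authors' implementation): take the gradient of a scalar potential and check that the 2D curl of the resulting field vanishes away from the grid boundary.

```python
import numpy as np

# Scalar potential phi on a grid; its gradient is by construction conservative.
n = 64
y, x = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
phi = np.exp(-(x**2 + y**2))  # stand-in for a perceived-depth potential field

# Conservative vector field E = grad(phi); axis 0 is y, axis 1 is x.
Ey, Ex = np.gradient(phi)

# 2D (scalar) curl: dEx/dy - dEy/dx.
curl = np.gradient(Ex, axis=0) - np.gradient(Ey, axis=1)

# Away from the boundary (where one-sided differences are used), the curl is
# zero to floating-point round-off, because central differences commute.
print(float(np.abs(curl[2:-2, 2:-2]).max()))
```

In the model's terms, a BO vector field that is the gradient of a depth-order potential automatically satisfies the zero-curl constraint, which is what makes the two representations mutually consistent.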

  18. Structural factoring approach for analyzing stochastic networks

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.; Shier, Douglas R.

    1991-01-01

    The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
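The complete-enumeration baseline that the factoring algorithm improves on can be sketched directly: enumerate every joint realization of the discrete edge-length distributions, run a deterministic shortest-path computation on each, and accumulate probabilities. The four-edge network and the distributions below are illustrative, not from the paper:

```python
import heapq
from itertools import product

def dijkstra(n_nodes, weighted_edges, source, sink):
    # Shortest path for one concrete realization of the edge lengths.
    adj = [[] for _ in range(n_nodes)]
    for (u, v), w in weighted_edges:
        adj[u].append((v, w))
    best = [float("inf")] * n_nodes
    best[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > best[u]:
            continue
        for v, w in adj[u]:
            if d + w < best[v]:
                best[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return best[sink]

def shortest_path_distribution(n_nodes, edges, edge_dists, source, sink):
    # Complete enumeration: one shortest-path run per combination of outcomes.
    result = {}
    for combo in product(*edge_dists):
        prob, weighted = 1.0, []
        for (u, v), (length, p) in zip(edges, combo):
            prob *= p
            weighted.append(((u, v), length))
        sp = dijkstra(n_nodes, weighted, source, sink)
        result[sp] = result.get(sp, 0.0) + prob
    return result

# Two independent routes 0->1->3 and 0->2->3; each edge has length 1 or 2
# with probability 0.5, so each route length is 2, 3, or 4.
edges = [(0, 1), (1, 3), (0, 2), (2, 3)]
edge_dists = [[(1, 0.5), (2, 0.5)]] * 4
dist = shortest_path_distribution(4, edges, edge_dists, 0, 3)
print(dist)
```

Enumeration scales exponentially in the number of edges, which is exactly the cost that the conditional-factoring decomposition is designed to reduce by solving smaller subnetworks instead.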

  19. Assessing Cognitive Learning of Analytical Problem Solving

    NASA Astrophysics Data System (ADS)

    Billionniere, Elodie V.

    Introductory programming courses, also known as CS1, have a specific set of expected outcomes related to the learning of the most basic and essential computational concepts in computer science (CS). However, two of the most often heard complaints in such courses are that (1) they are divorced from the reality of application and (2) they make the learning of the basic concepts tedious. The concepts introduced in CS1 courses are highly abstract and not easily comprehensible. In general, the difficulty is intrinsic to the field of computing, often described as "too mathematical or too abstract." This dissertation presents a small-scale mixed method study conducted during the fall 2009 semester of CS1 courses at Arizona State University. This study explored and assessed students' comprehension of three core computational concepts---abstraction, arrays of objects, and inheritance---in both algorithm design and problem solving. Through this investigation, students' profiles were categorized based on their scores, and their mistakes were classified into instances of five computational thinking concepts: abstraction, algorithm, scalability, linguistics, and reasoning. It was shown that even though the notion of computational thinking is not explicit in the curriculum, participants possessed and/or developed this skill through the learning and application of the CS1 core concepts. Furthermore, problem-solving experiences had a direct impact on participants' knowledge skills, explanation skills, and confidence. Implications for teaching CS1 and for future research are also considered.

  20. Derivation of Einstein-Cartan theory from general relativity

    NASA Astrophysics Data System (ADS)

    Petti, Richard

    2015-04-01

    General relativity cannot describe exchange of classical intrinsic angular momentum and orbital angular momentum. Einstein-Cartan theory fixes this problem in the least invasive way. In the late 20th century, the consensus view was that Einstein-Cartan theory requires inclusion of torsion without adequate justification, it has no empirical support (though it doesn't conflict with any known evidence), it solves no important problem, and it complicates gravitational theory with no compensating benefit. In 1986 the author published a derivation of Einstein-Cartan theory from general relativity, with no additional assumptions or parameters. Starting without torsion, Poincaré symmetry, classical or quantum spin, or spinors, it derives torsion and its relation to spin from a continuum limit of general relativistic solutions. The present work makes the case that this computation, combined with supporting arguments, constitutes a derivation of Einstein-Cartan theory from general relativity, not just a plausibility argument. This paper adds more and simpler explanations, more computational details, correction of a factor of 2, discussion of limitations of the derivation, and discussion of some areas of gravitational research where Einstein-Cartan theory is relevant.

  1. The development and application of CFD technology in mechanical engineering

    NASA Astrophysics Data System (ADS)

    Wei, Yufeng

    2017-12-01

    Computational Fluid Dynamics (CFD) is the analysis of physical phenomena involving fluid flow and heat conduction by means of computer-based numerical calculation and graphical display. The fidelity with which a numerical method captures the complexity of the physical problem, and the precision of the numerical solution, are directly related to the computer's processing speed and to hardware resources such as memory. With the continuous improvement of computer performance and CFD technology, CFD has been widely applied in fields such as water conservancy engineering, environmental engineering and industrial engineering. This paper summarizes the development of CFD and its theoretical basis, including the governing equations of fluid mechanics, and introduces the main methods of numerical calculation and related developments in CFD technology. Finally, applications of CFD technology in mechanical engineering are summarized. It is hoped that this review will help researchers in the field of mechanical engineering.
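The governing equations of fluid mechanics referred to above can be written out explicitly. A standard form for an incompressible Newtonian fluid (textbook material, not equations taken from the paper):

```latex
% Continuity (conservation of mass), incompressible flow
\nabla \cdot \mathbf{u} = 0
% Momentum (Navier-Stokes): rho = density, p = pressure, mu = dynamic viscosity
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
  = -\nabla p + \mu \nabla^{2}\mathbf{u} + \rho\,\mathbf{f}
% Energy (convection and heat conduction): c_p = specific heat, k = conductivity
\rho c_p\left(\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T\right)
  = k\,\nabla^{2} T
```

CFD discretizes these coupled PDEs (by finite volume, finite difference, or finite element methods) and solves the resulting large algebraic systems numerically, which is why solution accuracy and turnaround time track hardware speed and memory.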

  2. Scheduling Earth Observing Fleets Using Evolutionary Algorithms: Problem Description and Approach

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Morris, Robert; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We describe work in progress concerning multi-instrument, multi-satellite scheduling. Most, although not all, Earth observing instruments currently in orbit are unique. In the relatively near future, however, we expect to see fleets of Earth observing spacecraft, many carrying nearly identical instruments. This presents a substantially new scheduling challenge. Inspired by successful commercial applications of evolutionary algorithms in scheduling domains, this paper presents work in progress regarding the use of evolutionary algorithms to solve a set of Earth observing related model problems. Both the model problems and the software are described. Since the larger problems will require substantial computation and evolutionary algorithms are embarrassingly parallel, we discuss our parallelization techniques using dedicated and cycle-scavenged workstations.
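The evolutionary-algorithm approach sketched above can be illustrated on a toy model problem (this formulation and all parameters are ours, not the paper's): assign each observation request to one of several satellites carrying identical instruments, maximizing total science priority under a per-satellite capacity limit.

```python
import random

# Toy model problem: N requests, S satellites, at most CAP requests each.
N, S, CAP = 20, 3, 5
rng = random.Random(0)
PRIORITY = [rng.randint(1, 10) for _ in range(N)]

def fitness(schedule):
    # Total priority of honored requests; each satellite drops overflow
    # beyond its capacity, shedding the least valuable requests first.
    total, load = 0, [0] * S
    for i in sorted(range(N), key=lambda i: -PRIORITY[i]):
        if load[schedule[i]] < CAP:
            load[schedule[i]] += 1
            total += PRIORITY[i]
    return total

def evolve(generations=200, pop_size=30, mutation_rate=0.05):
    # Plain generational GA: truncation selection, one-point crossover,
    # point mutation, with the better half carried over unchanged (elitism).
    pop = [[rng.randrange(S) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N)
            child = a[:cut] + b[cut:]
            for i in range(N):
                if rng.random() < mutation_rate:
                    child[i] = rng.randrange(S)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Since each candidate schedule is evaluated independently, fitness evaluation parallelizes trivially, which is the "embarrassingly parallel" property that makes cycle-scavenged workstations attractive for the larger problems.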

  3. The Management of Cognitive Load During Complex Cognitive Skill Acquisition by Means of Computer-Simulated Problem Solving

    ERIC Educational Resources Information Center

    Kester, Liesbeth; Kirschner, Paul A.; van Merrienboer, Jeroen J.G.

    2005-01-01

    This study compared the effects of two information presentation formats on learning to solve problems in electrical circuits. In one condition, the split-source format, information relating to procedural aspects of the functioning of an electrical circuit was not integrated in a circuit diagram, while information in the integrated format condition…

  4. Combinatorial solutions to integrable hierarchies

    NASA Astrophysics Data System (ADS)

    Kazarian, M. E.; Lando, S. K.

    2015-06-01

    This paper reviews modern approaches to the construction of formal solutions to integrable hierarchies of mathematical physics whose coefficients are answers to various enumerative problems. The relationship between these approaches and the combinatorics of symmetric groups and their representations is explained. Applications of the results to the construction of efficient computations in problems related to models of quantum field theories are described. Bibliography: 34 titles.

  5. Online Health-Related Fitness Courses: A Wolf in Sheep's Clothing or a Solution to Some Common Problems?

    ERIC Educational Resources Information Center

    Ransdell, Lynda B.; Rice, Kerry; Snelson, Chareen; DeCola, Josh

    2008-01-01

    Distance education is growing rapidly at the collegiate and secondary levels. Online courses, which deliver information via a computer, are a form of distance education that has been both praised and condemned. Those skeptical of online courses maintain that learners have to deal with technology problems, low motivation, isolation, and lack of…

  6. Young Adolescents' Metacognition and Domain Knowledge as Predictors of Hypothesis-Development Performance in a Computer-Supported Context

    ERIC Educational Resources Information Center

    Kim, Hye Jeong; Pedersen, Susan

    2010-01-01

    Recently, the importance of ill-structured problem-solving in real-world contexts has become a focus of educational research. Particularly, the hypothesis-development process has been examined as one of the keys to developing a high-quality solution in a problem context. The authors of this study examined predictive relations between young…

  7. The role of metadata in managing large environmental science datasets. Proceedings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melton, R.B.; DeVaney, D.M.; French, J. C.

    1995-06-01

    The purpose of this workshop was to bring together computer science researchers and environmental sciences data management practitioners to consider the role of metadata in managing large environmental sciences datasets. The objectives included: establishing a common definition of metadata; identifying categories of metadata; defining problems in managing metadata; and defining problems related to linking metadata with primary data.

  8. Quantum simulation of the integer factorization problem: Bell states in a Penning trap

    NASA Astrophysics Data System (ADS)

    Rosales, Jose Luis; Martin, Vicente

    2018-03-01

    The arithmetic problem of factoring an integer N can be translated into the physics of a quantum device, a result that supports Pólya's and Hilbert's conjecture to demonstrate Riemann's hypothesis. The energies of this system, being univocally related to the factors of N , are the eigenvalues of a bounded Hamiltonian. Here we solve the quantum conditions and show that the histogram of the discrete energies, provided by the spectrum of the system, should be interpreted in number theory as the relative probability for a prime to be a factor candidate of N . This is equivalent to a quantum sieve that is shown to require only o((ln √N)^3) energy measurements to solve the problem, recovering Shor's complexity result. Hence the outcome can be seen as a probability map that a pair of primes solve the given factorization problem. Furthermore, we show that a possible embodiment of this quantum simulator corresponds to two entangled particles in a Penning trap. The possibility to build the simulator experimentally is studied in detail. The results show that factoring numbers, many orders of magnitude larger than those computed with experimentally available quantum computers, is achievable using typical parameters in Penning traps.

  9. Children's strategies to solving additive inverse problems: a preliminary analysis

    NASA Astrophysics Data System (ADS)

    Ding, Meixia; Auxter, Abbey E.

    2017-03-01

    Prior studies show that elementary school children generally "lack" formal understanding of inverse relations. This study goes beyond lack to explore what children might "have" in their existing conception. A total of 281 students, kindergarten to third grade, were recruited to respond to a questionnaire that involved both contextual and non-contextual tasks on inverse relations, requiring both computational and explanatory skills. Results showed that children demonstrated better performance in computation than explanation. However, many students' explanations indicated that they did not necessarily utilize inverse relations for computation. Rather, they appeared to possess partial understanding, as evidenced by their use of part-whole structure, which is a key to understanding inverse relations. A close inspection of children's solution strategies further revealed that the sophistication of children's conception of part-whole structure varied in representation use and unknown quantity recognition, which suggests rich opportunities to develop students' understanding of inverse relations in lower elementary classrooms.

  10. Human/computer control of undersea teleoperators

    NASA Technical Reports Server (NTRS)

    Sheridan, T. B.; Verplank, W. L.; Brooks, T. L.

    1978-01-01

    The potential of supervisory controlled teleoperators for accomplishment of manipulation and sensory tasks in deep ocean environments is discussed. Teleoperators and supervisory control are defined, the current problems of human divers are reviewed, and some assertions are made about why supervisory control has potential use to replace and extend human diver capabilities. The relative roles of man and computer and the variables involved in man-computer interaction are next discussed. Finally, a detailed description of a supervisory controlled teleoperator system, SUPERMAN, is presented.

  11. Simple Logic for Big Problems: An Inside Look at Relational Databases.

    ERIC Educational Resources Information Center

    Seba, Douglas B.; Smith, Pat

    1982-01-01

    Discusses database design concept termed "normalization" (process replacing associations between data with associations in two-dimensional tabular form) which results in formation of relational databases (they are to computers what dictionaries are to spoken languages). Applications of the database in serials control and complex systems…

  12. Job-Related Basic Skills. ERIC Digest No. 94.

    ERIC Educational Resources Information Center

    Kerka, Sandra

    Seven job-related basic skills identified as skills employers want are as follows: (1) learning to learn; (2) reading, writing, and computation; (3) oral communication and listening; (4) creative thinking and problem solving; (5) personal management, including self-esteem, goal setting, motivation, and personal and career development; (6) group…

  13. Relating dynamic brain states to dynamic machine states: Human and machine solutions to the speech recognition problem

    PubMed Central

    Liu, Xunying; Zhang, Chao; Woodland, Phil; Fonteneau, Elisabeth

    2017-01-01

    There is widespread interest in the relationship between the neurobiological systems supporting human cognition and emerging computational systems capable of emulating these capacities. Human speech comprehension, poorly understood as a neurobiological process, is an important case in point. Automatic Speech Recognition (ASR) systems with near-human levels of performance are now available, which provide a computationally explicit solution for the recognition of words in continuous speech. This research aims to bridge the gap between speech recognition processes in humans and machines, using novel multivariate techniques to compare incremental ‘machine states’, generated as the ASR analysis progresses over time, to the incremental ‘brain states’, measured using combined electro- and magneto-encephalography (EMEG), generated as the same inputs are heard by human listeners. This direct comparison of dynamic human and machine internal states, as they respond to the same incrementally delivered sensory input, revealed a significant correspondence between neural response patterns in human superior temporal cortex and the structural properties of ASR-derived phonetic models. Spatially coherent patches in human temporal cortex responded selectively to individual phonetic features defined on the basis of machine-extracted regularities in the speech to lexicon mapping process. These results demonstrate the feasibility of relating human and ASR solutions to the problem of speech recognition, and suggest the potential for further studies relating complex neural computations in human speech comprehension to the rapidly evolving ASR systems that address the same problem domain. PMID:28945744

  14. A roadmap for optimal control: the right way to commute.

    PubMed

    Ross, I Michael

    2005-12-01

    Optimal control theory is the foundation for many problems in astrodynamics. Typical examples are trajectory design and optimization, relative motion control of distributed space systems and attitude steering. Many such problems in astrodynamics are solved by an alternative route of mathematical analysis and deep physical insight, in part because of the perception that an optimal control framework generates hard problems. Although this is indeed true of the Bellman and Pontryagin frameworks, the covector mapping principle provides a neoclassical approach that renders hard problems easy. That is, although the origins of this philosophy can be traced back to Bernoulli and Euler, it is essentially modern as a result of the strong linkage between approximation theory, set-valued analysis and computing technology. Motivated by the broad success of this approach, mission planners are now conceiving and demanding higher performance from space systems. This has resulted in a new set of theoretical and computational problems. Recently, under the leadership of NASA-GRC, several workshops were held to address some of these problems. This paper outlines the theoretical issues stemming from practical problems in astrodynamics. Emphasis is placed on how they pertain to advanced mission design problems.

  15. Control and instanton trajectories for random transitions in turbulent flows

    NASA Astrophysics Data System (ADS)

    Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg

    2011-12-01

    Many turbulent systems exhibit random switches between qualitatively different attractors. The transition between these bistable states is often an extremely rare event that cannot be computed through direct numerical simulation (DNS) due to complexity limitations. We present results for the calculation of instanton trajectories (a control problem) between non-equilibrium stationary states (attractors) in the 2D stochastic Navier-Stokes equations. By representing the transition probability between two states using a path integral formulation, we can compute the most probable trajectory (instanton) joining two non-equilibrium stationary states. Technically, this is equivalent to the minimization of an action, which can be related to a fluid mechanics control problem.
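    The action minimization the abstract refers to can be illustrated on a toy system (our sketch, not the 2D Navier-Stokes setting of the paper): find the most probable transition path between the two attractors of a double-well potential by descending a discretized Freidlin-Wentzell action with pinned endpoints.

```python
# Toy instanton calculation: minimize the discretized action
# S = 0.5 * sum (dx/dt + V'(x))**2 * dt over paths from x = -1 to x = +1
# for the double-well potential V(x) = (x**2 - 1)**2 / 4.

def dV(x):
    """V'(x) for V(x) = (x**2 - 1)**2 / 4."""
    return x * (x * x - 1.0)

def action(path, dt):
    """Discretized Freidlin-Wentzell action of a path."""
    s = 0.0
    for i in range(len(path) - 1):
        mid = 0.5 * (path[i] + path[i + 1])      # midpoint rule for V'
        v = (path[i + 1] - path[i]) / dt         # discrete velocity
        s += 0.5 * (v + dV(mid)) ** 2 * dt
    return s

def minimize_action(n=20, dt=0.2, iters=800, lr=0.02, h=1e-6):
    """Finite-difference gradient descent over the interior points;
    the endpoints stay pinned at the attractors -1 and +1."""
    path = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]   # straight-line guess
    for _ in range(iters):
        grad = [0.0] * n
        for i in range(1, n - 1):
            path[i] += h
            s_plus = action(path, dt)
            path[i] -= 2.0 * h
            s_minus = action(path, dt)
            path[i] += h
            grad[i] = (s_plus - s_minus) / (2.0 * h)
        for i in range(1, n - 1):
            path[i] -= lr * grad[i]
    return path
```

    In the paper the same minimization is carried out over space-time fields of the stochastic Navier-Stokes equations rather than a scalar coordinate, but the structure (fixed endpoints, minimize an action) is the same.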

  16. On computational experiments in some inverse problems of heat and mass transfer

    NASA Astrophysics Data System (ADS)

    Bilchenko, G. G.; Bilchenko, N. G.

    2016-11-01

    The results of mathematical modeling of effective heat and mass transfer on hypersonic aircraft permeable surfaces are considered. The physico-chemical processes (dissociation and ionization) in the laminar boundary layer of a compressible gas are taken into account. Algorithms of control restoration are suggested for the interpolation and approximation statements of heat and mass transfer inverse problems, and the differences between the solution-search methods applied to these two statements are discussed. Both algorithms have been implemented as programs, and many computational experiments were performed with them. The boundary layer parameters obtained from solving the direct problems by means of A. A. Dorodnicyn's generalized integral relations method were used to obtain the inverse problem solutions. Two types of blowing-law restoration for the inverse problem in the interpolation statement are presented as examples. The influence of the temperature factor on the blowing restoration is investigated, and the different sensitivity of the controllable parameters (the local heat flow and the local tangent friction) to stepwise (discrete) changes of the control (the blowing) and to the position of the switching point is studied.

  17. A study of computer graphics technology in application of communication resource management

    NASA Astrophysics Data System (ADS)

    Li, Jing; Zhou, Liang; Yang, Fei

    2017-08-01

    With the development of computer technology, computer graphics technology has been widely used. In particular, the success of object-oriented technology and multimedia technology has promoted the development of graphics technology in computer software systems. Computer graphics theory and application technology have therefore become an important topic in the field of computing, and computer graphics technology is applied in more and more fields. In recent years, with the development of the social economy, and especially the rapid development of information technology, the traditional way of managing communication resources cannot effectively meet the needs of resource management. Communication resource management still relies on the original tools and methods for managing and maintaining equipment, which has brought many problems: it is very difficult for non-professionals to understand the equipment and its status, resource utilization is relatively low, and managers cannot quickly and accurately assess resource conditions. Aimed at the above problems, this paper proposes to introduce computer graphics technology into communication resource management. The introduction of computer graphics not only makes communication resource management more vivid, but also reduces the cost of resource management and improves work efficiency.

  18. Computer use and addiction in Romanian children and teenagers--an observational study.

    PubMed

    Chiriţă, V; Chiriţă, Roxana; Stefănescu, C; Chele, Gabriela; Ilinca, M

    2006-01-01

    The computer has provided some wonderful opportunities for our children. Although research on the effects of children's use of computers is still ambiguous, some initial indications of positive and negative effects are beginning to emerge. Children commonly use computers for playing games, completing school assignments, email, and connecting to the Internet. This may sometimes come at the expense of other activities such as homework or normal social interchange. Although most children seem to correct the problem naturally, parents and educators must monitor the signs of misuse. Studies of general computer users suggest that some children may experience psychological problems such as social isolation, depression, loneliness, and time mismanagement related to their computer use, as well as failure at school. The purpose of this study is to investigate issues related to computer use by school students from 11 to 18 years old. The survey included a representative sample of 439 school students aged 11 to 18. All of the students came from 3 gymnasium schools and 5 high schools of Iaşi, Romania. The students answered a questionnaire comprising 34 questions related to computer activities, and the children's parents answered a second questionnaire on the same subject. Most questions asked respondents to rate on a scale the frequency of occurrence of a certain event or issue; some questions solicited an open answer or a choice from a list. These were aimed at highlighting: (1) the frequency of computer use by the students; (2) the interference of excessive use with school performance and social life; (3) the identification of a possible computer addiction. The data were processed using the SPSS statistics software, version 11.0. Results show that the school students prefer to spend a considerable amount of time, over 3 hours/day, with their computers. More than 65.7% of the students have a computer at home. More than 70% of the parents admit they do not, or only occasionally, discuss computer use with their children. This indicates that, although they bought a computer for their children, they do not supervise the way it is used. The family is rather a passive presence, vaguely responsible and lacking involvement. Yet the parents consider that, for better school results, their children should use their computers. This study also tried to identify aspects of computer addiction in gymnasium and high school students.

  19. WPS mediation: An approach to process geospatial data on different computing backends

    NASA Astrophysics Data System (ADS)

    Giuliani, Gregory; Nativi, Stefano; Lehmann, Anthony; Ray, Nicolas

    2012-10-01

    The OGC Web Processing Service (WPS) specification allows generating information by processing distributed geospatial data made available through Spatial Data Infrastructures (SDIs). However, current SDIs have limited analytical capacities, and various problems emerge when trying to use them in data- and computing-intensive domains such as environmental sciences. These problems are usually not, or only partially, solvable using single computing resources. Therefore, the Geographic Information (GI) community is trying to benefit from the superior storage and computing capabilities offered by distributed computing (e.g., Grids, Clouds) methods and technologies. Currently, there is no commonly agreed approach to grid-enable WPS. No implementation allows one to seamlessly execute a geoprocessing calculation following user requirements on different computing backends, ranging from a stand-alone GIS server up to computer clusters and large Grid infrastructures. Considering this issue, this paper presents a proof of concept by mediating different geospatial and Grid software packages, and by proposing an extension of the WPS specification through two optional parameters. The applicability of this approach will be demonstrated using a Normalized Difference Vegetation Index (NDVI) mediated WPS process, highlighting benefits and issues that need to be further investigated to improve performance.

  20. Extracting Depth From Motion Parallax in Real-World and Synthetic Displays

    NASA Technical Reports Server (NTRS)

    Hecht, Heiko; Kaiser, Mary K.; Aiken, William; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    In psychophysical studies on human sensitivity to visual motion parallax (MP), the use of computer displays is pervasive. However, a number of potential problems are associated with such displays: cue conflicts arise when observers accommodate to the screen surface, and observer head and body movements are often not reflected in the displays. We investigated observers' sensitivity to depth information in MP (slant, depth order, relative depth) using various real-world displays and their computer-generated analogs. Angle judgments of real-world stimuli were consistently superior to judgments that were based on computer-generated stimuli. Similar results were found for perceived depth order and relative depth. Perceptual competence of observers tends to be underestimated in research that is based on computer generated displays. Such findings cannot be generalized to more realistic viewing situations.

  1. Computer-associated health complaints and sources of ergonomic instructions in computer-related issues among Finnish adolescents: a cross-sectional study.

    PubMed

    Hakala, Paula T; Saarni, Lea A; Ketola, Ritva L; Rahkola, Erja T; Salminen, Jouko J; Rimpelä, Arja H

    2010-01-11

    The use of computers has increased among adolescents, as have musculoskeletal symptoms. There is evidence that these symptoms can be reduced through an ergonomics approach and through education. The purpose of this study was to examine where adolescents had received ergonomic instructions related to computer use, and whether receiving these instructions was associated with a reduced prevalence of computer-associated health complaints. Mailed survey with nationally representative sample of 12 to 18-year-old Finns in 2001 (n = 7292, response rate 70%). In total, 6961 youths reported using a computer. We tested the associations of computer use time and received ergonomic instructions (predictor variables) with computer-associated health complaints (outcome variables) using logistic regression analysis. To prevent computer-associated complaints, 61.2% reported having been instructed to arrange their desk/chair/screen in the right position, 71.5% to take rest breaks. The older age group (16-18 years) reported receiving instructions or being self-instructed more often than the 12- to 14-year-olds (p < 0.001). Among both age groups the sources of instructions included school (33.1%), family (28.6%), self (self-instructed) (12.5%), ICT-related (8.6%), friends (1.5%) and health professionals (0.8%). Receiving instructions was not related to lower prevalence of computer-associated health complaints. This report shows that ergonomic instructions on how to prevent computer-related musculoskeletal problems fail to reach a substantial number of children. Furthermore, the reported sources of instructions vary greatly in terms of reliability.

  2. Computer-associated health complaints and sources of ergonomic instructions in computer-related issues among Finnish adolescents: A cross-sectional study

    PubMed Central

    2010-01-01

    Background The use of computers has increased among adolescents, as have musculoskeletal symptoms. There is evidence that these symptoms can be reduced through an ergonomics approach and through education. The purpose of this study was to examine where adolescents had received ergonomic instructions related to computer use, and whether receiving these instructions was associated with a reduced prevalence of computer-associated health complaints. Methods Mailed survey with nationally representative sample of 12 to 18-year-old Finns in 2001 (n = 7292, response rate 70%). In total, 6961 youths reported using a computer. We tested the associations of computer use time and received ergonomic instructions (predictor variables) with computer-associated health complaints (outcome variables) using logistic regression analysis. Results To prevent computer-associated complaints, 61.2% reported having been instructed to arrange their desk/chair/screen in the right position, 71.5% to take rest breaks. The older age group (16-18 years) reported receiving instructions or being self-instructed more often than the 12- to 14-year-olds (p < 0.001). Among both age groups the sources of instructions included school (33.1%), family (28.6%), self (self-instructed) (12.5%), ICT-related (8.6%), friends (1.5%) and health professionals (0.8%). Receiving instructions was not related to lower prevalence of computer-associated health complaints. Conclusions This report shows that ergonomic instructions on how to prevent computer-related musculoskeletal problems fail to reach a substantial number of children. Furthermore, the reported sources of instructions vary greatly in terms of reliability. PMID:20064250

  3. A filtering approach to edge preserving MAP estimation of images.

    PubMed

    Humphrey, David; Taubman, David

    2011-05-01

    The authors present a computationally efficient technique for maximum a posteriori (MAP) estimation of images in the presence of both blur and noise. The image is divided into statistically independent regions, and each region is modelled with a wide-sense stationary (WSS) Gaussian prior. Classical Wiener filter theory is used to generate a set of convex sets in the solution space, with the solution to the MAP estimation problem lying at the intersection of these sets. The proposed algorithm uses an underlying segmentation of the image, and a means of determining and refining that segmentation is described. The algorithm is suitable for a range of image restoration problems, as it provides a computationally efficient means to deal with the shortcomings of Wiener filtering without sacrificing the computational simplicity of the filtering approach. The algorithm is also of interest from a theoretical viewpoint, as it provides a continuum of solutions between Wiener filtering and inverse filtering depending upon the segmentation used. We do not attempt to show here that the proposed method is the best general approach to the image reconstruction problem. However, related work referenced herein shows excellent performance on the specific problem of demosaicing.
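    The Wiener/MAP connection can be shown in miniature (a hypothetical sketch, not the authors' algorithm): for a zero-mean Gaussian prior and additive white Gaussian noise, the per-sample MAP estimate reduces to Wiener shrinkage, and a segmentation simply assigns each region its own prior variance.

```python
# Per-sample MAP = Wiener (LMMSE) shrinkage under a zero-mean Gaussian prior
# with variance var_x and white Gaussian noise with variance var_n:
#     x_hat = var_x / (var_x + var_n) * y

def wiener_gain(var_x, var_n):
    """Wiener shrinkage factor, which is also the MAP gain in this setting."""
    return var_x / (var_x + var_n)

def map_estimate(observed, var_x, var_n):
    """Apply the shrinkage to every sample of a region."""
    g = wiener_gain(var_x, var_n)
    return [g * y for y in observed]

# Region-wise use, mirroring the segmentation idea: each (hypothetical)
# region carries its own prior variance, so smooth regions are shrunk
# strongly while textured regions are left nearly intact.
var_noise = 0.1
segments = {"smooth": ([0.10, -0.20, 0.15], 0.05),
            "textured": ([1.20, -0.80, 0.90], 2.00)}
restored = {name: map_estimate(y, vx, var_noise)
            for name, (y, vx) in segments.items()}
```

    The paper's contribution goes beyond this scalar case (it handles blur via convex sets in the solution space), but the region-dependent prior is the same idea.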

  4. A Computer-Assisted Instructional Software Program in Mathematical Problem-Solving Skills for Medication Administration for Beginning Baccalaureate Nursing Students at San Jose State University.

    ERIC Educational Resources Information Center

    Wahl, Sharon C.

    Nursing educators and administrators are concerned about medication errors made by students which jeopardize patient safety. The inability to conceptualize and calculate medication dosages, often related to math anxiety, is implicated in such errors. A computer-assisted instruction (CAI) program is seen as a viable method of allowing students to…

  5. Read My Lips: The Importance of the Face in a Computer-Animated Tutor for Vocabulary Learning by Children with Autism

    ERIC Educational Resources Information Center

    Massaro, Dominic W.; Bosseler, Alexis

    2006-01-01

    A computer-animated tutor, Baldi, has been successful in teaching vocabulary and grammar to children with autism and those with hearing problems. The present study assessed to what extent the face facilitated this learning process relative to the voice alone. Baldi was implemented in a Language Wizard/Tutor, which allows easy creation and…

  6. Communication Avoiding and Overlapping for Numerical Linear Algebra

    DTIC Science & Technology

    2012-05-08

    To scale numerical linear algebra problems to future exascale systems, communication cost must be avoided or overlapped. Communication-avoiding 2.5D algorithms improve scalability by reducing... will continue to grow relative to the cost of computation. With exascale computing as the long-term goal, the community needs to develop techniques

  7. Discriminative Learning with Markov Logic Networks

    DTIC Science & Technology

    2009-10-01

    Discriminative Learning with Markov Logic Networks. Tuyen N. Huynh, Department of Computer Sciences, University of Texas at Austin, Austin, TX 78712... an emerging area of research that addresses the problem of learning from noisy structured/relational data. Markov logic networks (MLNs), sets of weighted...

  8. Use of a Computer Language in Teaching Dynamic Programming. Final Report.

    ERIC Educational Resources Information Center

    Trimble, C. J.; And Others

    Most optimization problems of any degree of complexity must be solved using a computer. In the teaching of dynamic programming courses, it is often desirable to use a computer in problem solution. The solution process involves conceptual formulation and computational solution. Generalized computer codes for dynamic programming problem solution…
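    A minimal example of the kind of computational solution such a course would produce (our illustration, sketched in Python) is the 0/1 knapsack problem solved by the standard dynamic programming recursion:

```python
# 0/1 knapsack via the Bellman recursion
#     best[c] = max(best[c], best[c - w] + v)
# where best[c] is the best value achievable with capacity c.

def knapsack(items, capacity):
    """items: list of (weight, value) pairs; returns the maximum total value
    of a subset of items whose weights sum to at most capacity."""
    best = [0] * (capacity + 1)
    for w, v in items:
        # Iterate capacities backwards so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```

    The backward iteration over capacities is what distinguishes the 0/1 variant from the unbounded one; iterating forwards would let an item be reused.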

  9. Using process groups to implement failure detection in asynchronous environments

    NASA Technical Reports Server (NTRS)

    Ricciardi, Aleta M.; Birman, Kenneth P.

    1991-01-01

    Agreement on the membership of a group of processes in a distributed system is a basic problem that arises in a wide range of applications. Such groups occur when a set of processes cooperate to perform some task, share memory, monitor one another, subdivide a computation, and so forth. The group membership problem is discussed as it relates to failure detection in asynchronous, distributed systems. A rigorous, formal specification for group membership is presented under this interpretation. A solution is then presented for this problem.

  10. Computational Psychometrics for Modeling System Dynamics during Stressful Disasters.

    PubMed

    Cipresso, Pietro; Bessi, Alessandro; Colombo, Desirée; Pedroli, Elisa; Riva, Giuseppe

    2017-01-01

    Disasters can be very stressful events. However, computational models of stress require data that might be very difficult to collect during disasters. Moreover, personal experiences are not repeatable, so it is not possible to collect bottom-up information when building a coherent model. To overcome these problems, we propose the use of computational models and virtual reality integration to recreate disaster situations, while examining possible dynamics in order to understand human behavior and relative consequences. By providing realistic parameters associated with disaster situations, computational scientists can work more closely with emergency responders to improve the quality of interventions in the future.

  11. Aircraft cybernetics

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The use of computers for aircraft control, flight simulation, and inertial navigation is explored. The man-machine relation problem in aviation is addressed. Simple and self-adapting autopilots are described and the assets and liabilities of digital navigation techniques are assessed.

  12. Symbolic-Graphical Calculators: Teaching Tools for Mathematics.

    ERIC Educational Resources Information Center

    Dick, Thomas P.

    1992-01-01

    Explores the role that symbolic-graphical calculators can play in the current calls for reform in the mathematics curriculum. Discusses symbolic calculators and graphing calculators in relation to problem solving, computational skills, and mathematics instruction. (MDH)

  13. Usefulness of computed tomography in pre-surgical evaluation of maxillo-facial pathology with rapid prototyping and surgical pre-planning by virtual reality.

    PubMed

    Toso, Francesco; Zuiani, Chiara; Vergendo, Maurizio; Salvo, Iolanda; Robiony, Massimo; Politi, Massimo; Bazzocchi, Massimo

    2005-01-01

    To validate a protocol for creating virtual models to be used in the construction of solid prototypes useful for the planning and simulation of maxillo-facial surgery, in particular for very complex anatomic and pathologic problems. To optimize communications between the radiology, engineering and surgical laboratories. We studied 16 patients with different clinical problems of the maxillo-facial district. Exams were performed with multidetector computed tomography (MDCT) and single slice computed tomography (SDCT) with axial scans, a collimation of 0.5-2 mm, and a reconstruction interval of 1 mm. Subsequently we performed 2D multiplanar reconstructions and 3D volume-rendering reconstructions. We exported the DICOM images to the engineering laboratory, where the bony structures were recognized and isolated by software. With these data the solid prototypes were generated using stereolithography. To date, surgery has been performed on 12 patients after simulation of the procedure on the stereolithographic model. The solid prototypes constructed in the difficult cases were sufficiently detailed despite problems related to the artefacts generated by dental fillings and prostheses. In the remaining cases the MPR/3D images were sufficiently detailed for surgical planning. The surgical results were excellent in all patients who underwent surgery, and the surgeons were satisfied with the improvement in quality and the reduction in time required for the procedure. MDCT enables rapid prototyping using solid replication, which was very helpful in maxillo-facial surgery, despite problems related to artifacts from dental fillings and prostheses within the acquisition field; solutions for this problem are a work in progress. The protocol used for communication between the different laboratories was valid and reproducible.

  14. Visual problems in young adults due to computer use.

    PubMed

    Moschos, M M; Chatziralli, I P; Siasou, G; Papazisis, L

    2012-04-01

    Computer use can cause visual problems. The purpose of our study was to evaluate visual problems due to computer use in young adults. Participants in our study were 87 adults, 48 male and 39 female, with a mean age of 31.3 years (SD 7.6). All the participants completed a questionnaire regarding visual problems detected after computer use. The mean daily use of computers was 3.2 hours (SD 2.7). 65.5% of the participants complained of dry eye, mainly after more than 2.5 hours of computer use. 32 persons (36.8%) had a foreign body sensation in their eyes, while 15 participants (17.2%) complained of blurred vision, which caused difficulties in driving, after 3.25 hours of continuous computer use. 10.3% of the participants sought medical advice for their problem. There was a statistically significant correlation between the frequency of visual problems and the duration of computer use (p = 0.021). 79.3% of the participants use artificial tears during or after long use of computers, so as not to feel any ocular discomfort. The main symptom after computer use in young adults was dry eye. All visual problems were associated with the duration of computer use. Artificial tears play an important role in the treatment of ocular discomfort after computer use. © Georg Thieme Verlag KG Stuttgart · New York.

  15. Solution of axisymmetric and two-dimensional inviscid flow over blunt bodies by the method of lines

    NASA Technical Reports Server (NTRS)

    Hamilton, H. H., II

    1978-01-01

    Comparisons with experimental data and the results of other computational methods demonstrated that very accurate solutions can be obtained by using relatively few lines with the method of lines approach. This method is semidiscrete and has relatively low core storage requirements compared with fully discrete methods, since very little data is stored across the shock layer. This feature is very attractive for three-dimensional problems because it enables computer storage requirements to be reduced by approximately an order of magnitude. In the present study it was found that nine lines was a practical upper limit for two-dimensional and axisymmetric problems. This condition limits application of the method to smooth body geometries, where relatively few lines are adequate to describe changes in the flow variables around the body. Extension of the method to three dimensions is conceptually straightforward; however, three-dimensional applications would also be limited to smooth body geometries, although not necessarily to a total of nine lines.
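    The method of lines itself is easy to sketch on a simpler problem (a generic 1-D heat equation, not the blunt-body flow of the paper): discretize in space along a small number of lines, keep time continuous, and integrate the resulting system of ODEs. Only the current values on each line need to be stored, which is the storage advantage the abstract describes.

```python
# Method of lines for u_t = u_xx on a 1-D grid of "lines": the spatial
# derivative is replaced by a central difference, leaving one ODE per line,
# here integrated with explicit Euler (stable for dt/dx**2 <= 0.5).

def method_of_lines_heat(n_lines=9, dx=0.1, dt=0.001, steps=200):
    """March an initial bump forward in time with fixed zero boundaries."""
    u = [0.0] * n_lines
    u[n_lines // 2] = 1.0                     # initial condition: central spike
    for _ in range(steps):
        du = [0.0] * n_lines                  # du/dt on each interior line
        for i in range(1, n_lines - 1):
            du[i] = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / dx ** 2
        u = [ui + dt * di for ui, di in zip(u, du)]
    return u
```

    Nine lines, as in the paper, is deliberately coarse; the point is that a semidiscrete formulation trades spatial resolution for a drastic reduction in stored data.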

  16. The relationship between psychosocial work factors, work stress and computer-related musculoskeletal discomforts among computer users in Malaysia.

    PubMed

    Zakerian, Seyed Abolfazl; Subramaniam, Indra Devi

    2009-01-01

    Increasing numbers of workers use computers for work, so, especially among office workers, there is a high risk of musculoskeletal discomforts. This study examined the associations among three factors: psychosocial work factors, work stress and musculoskeletal discomforts. These associations were examined via a questionnaire survey of 30 office workers (at a university in Malaysia) whose jobs required an extensive use of computers. The questionnaire was distributed and collected daily for 20 days. While the results indicated a significant relationship among psychosocial work factors, work stress and musculoskeletal discomfort, 3 psychosocial work factors were found to be more important than the others for both work stress and musculoskeletal discomfort: job demands, negative social interaction and computer-related problems. To further develop the study design, it is necessary to investigate industrial and other workers who have experienced musculoskeletal discomforts and work stress.

  17. Computer ethics and tertiary level education in Hong Kong

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, E.Y.W.; Davison, R.M.; Wade, P.W.

    1994-12-31

    This paper seeks to highlight some ethical issues relating to the increasing proliferation of Information Technology into our everyday lives. The authors explain their understanding of computer ethics, and give some reasons why the study of computer ethics is becoming increasingly pertinent. The paper looks at some of the problems that arise in attempting to develop appropriate ethical concepts in a constantly changing environment, and explores some of the ethical dilemmas arising from the increasing use of computers. Some initial research undertaken to explore the ideas and understanding of tertiary level students in Hong Kong on a number of ethical issues of interest is described, and our findings discussed. We hope that presenting this paper and eliciting subsequent discussion will enable us to draw up more comprehensive guidelines for the teaching of computer related ethics to tertiary level students, as well as reveal some directions for future research.

  18. Fast generation of Fresnel holograms based on multirate filtering.

    PubMed

    Tsang, Peter; Liu, Jung-Ping; Cheung, Wai-Keung; Poon, Ting-Chung

    2009-12-01

    One of the major problems in computer-generated holography is the high computation cost involved for the calculation of fringe patterns. Recently, the problem has been addressed by imposing a horizontal parallax only constraint whereby the process can be simplified to the computation of one-dimensional sublines, each representing a scan plane of the object scene. Subsequently the sublines can be expanded to a two-dimensional hologram through multiplication with a reference signal. Furthermore, economical hardware is available with which sublines can be generated in a computationally free manner with high throughput of approximately 100 M pixels/second. Apart from decreasing the computation loading, the sublines can be treated as intermediate data that can be compressed by simply downsampling the number of sublines. Despite these favorable features, the method is suitable only for the generation of white light (rainbow) holograms, and the resolution of the reconstructed image is inferior to the classical Fresnel hologram. We propose to generate holograms from one-dimensional sublines so that the above-mentioned problems can be alleviated. However, such an approach also leads to a substantial increase in computation loading. To overcome this problem we encapsulated the conversion of sublines to holograms as a multirate filtering process and implemented the latter by use of a fast Fourier transform. Evaluation reveals that, for holograms of moderate size, our method is capable of operating 40,000 times faster than the calculation of Fresnel holograms based on the precomputed table lookup method. Although there is no relative vertical parallax between object points at different distance planes, a global vertical parallax is preserved for the object scene as a whole and the reconstructed image can be observed easily.
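    The key speed-up, implementing a filtering stage with the fast Fourier transform, can be shown generically (an illustrative sketch, not the authors' hologram pipeline): filtering by pointwise multiplication in the DFT domain costs O(n log n) instead of the O(n^2) of direct convolution.

```python
# Circular convolution of a signal with a filter kernel via a radix-2
# Cooley-Tukey FFT: transform both, multiply pointwise, transform back.
import cmath

def fft(x, inverse=False):
    """Recursive radix-2 FFT; len(x) must be a power of two.
    The inverse call omits the 1/n normalization (applied by the caller)."""
    n = len(x)
    if n == 1:
        return list(x)
    sign = 1.0 if inverse else -1.0
    even = fft(x[0::2], inverse)
    odd = fft(x[1::2], inverse)
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def circular_convolve(a, b):
    """Filter a with kernel b by multiplication in the DFT domain."""
    n = len(a)
    prod = [x * y for x, y in zip(fft(a), fft(b))]
    return [v.real / n for v in fft(prod, inverse=True)]
```

    In the paper the same principle is packaged as a multirate filtering process that converts the one-dimensional sublines into a full hologram; the sketch above shows only the FFT-for-filtering trick it rests on.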

  19. Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)

    2002-01-01

    Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image data processing and color picture generation application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms, and reconfigurable computing hardware (RC) technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processors (DSP). It has been shown that this approach can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft.

  20. Advances and trends in structural and solid mechanics; Proceedings of the Symposium, Washington, DC, October 4-7, 1982

    NASA Technical Reports Server (NTRS)

    Noor, A. K. (Editor); Housner, J. M.

    1983-01-01

    The mechanics of materials and material characterization are considered, taking into account micromechanics, the behavior of steel structures at elevated temperatures, and an anisotropic plasticity model for inelastic multiaxial cyclic deformation. Other topics explored are related to advances and trends in finite element technology, classical analytical techniques and their computer implementation, interactive computing and computational strategies for nonlinear problems, advances and trends in numerical analysis, database management systems and CAD/CAM, space structures and vehicle crashworthiness, beams, plates and fibrous composite structures, design-oriented analysis, artificial intelligence and optimization, contact problems, random waves, and lifetime prediction. Earthquake-resistant structures and other advanced structural applications are also discussed, giving attention to cumulative damage in steel structures subjected to earthquake ground motions, and a mixed domain analysis of nuclear containment structures using impulse functions.

  1. A Computationally Inexpensive Optimal Guidance via Radial-Basis-Function Neural Network for Autonomous Soft Landing on Asteroids

    PubMed Central

    Zhang, Peng; Liu, Keping; Zhao, Bo; Li, Yuanchun

    2015-01-01

    Optimal guidance is essential for the soft landing task. However, due to its high computational complexity, it is rarely applied in autonomous guidance. In this paper, a computationally inexpensive optimal guidance algorithm based on the radial basis function neural network (RBFNN) is proposed. The optimization problem of the trajectory for soft landing on asteroids is formulated and transformed into a two-point boundary value problem (TPBVP). Combining a database of initial states with the corresponding initial co-states, an RBFNN is trained offline. The optimal trajectory of the soft landing is determined rapidly by applying the trained network in the online guidance. Monte Carlo simulations of soft landing on Eros 433 are performed to demonstrate the effectiveness of the proposed guidance algorithm. PMID:26367382
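
The offline-training / online-query pattern described above can be sketched with a minimal normalized radial-basis-function interpolator. Everything concrete here (the one-dimensional state, the `sin()` stand-in for TPBVP solutions, the kernel width) is a hypothetical illustration, not the paper's actual guidance model:

```python
import math

def rbf_predict(x, centers, values, gamma=8.0):
    """Normalized Gaussian-RBF network: predictions are kernel-weighted
    averages of stored training outputs. A toy stand-in for the paper's
    offline-trained mapping from initial states to initial co-states."""
    w = [math.exp(-gamma * (x - c) ** 2) for c in centers]
    return sum(wi * vi for wi, vi in zip(w, values)) / sum(w)

# Hypothetical "database": sampled initial states and, for each, the optimal
# initial co-state (faked here with sin(); in the paper each value would come
# from solving the TPBVP offline).
centers = [0.0, 0.5, 1.0, 1.5, 2.0]
values = [math.sin(c) for c in centers]

# Online guidance query: interpolate the co-state for an unseen initial state.
estimate = rbf_predict(0.75, centers, values)
```

The expensive optimization happens once, offline; the online query is a handful of kernel evaluations, which is the source of the claimed speedup.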

  2. Methods for High-Order Multi-Scale and Stochastic Problems Analysis, Algorithms, and Applications

    DTIC Science & Technology

    2016-10-17

    finite volume schemes, discontinuous Galerkin finite element method, and related methods, for solving computational fluid dynamics (CFD) problems and...approximation for finite element methods. (3) The development of methods of simulation and analysis for the study of large scale stochastic systems of...laws, finite element method, Bernstein-Bezier finite elements , weakly interacting particle systems, accelerated Monte Carlo, stochastic networks 16

  3. Space Mathematics: A Resource for Secondary School Teachers

    NASA Technical Reports Server (NTRS)

    Kastner, Bernice

    1985-01-01

    A collection of mathematical problems related to NASA space science projects is presented. In developing the examples and problems, attention was given to preserving the authenticity and significance of the original setting while keeping the level of mathematics within the secondary school curriculum. Computation and measurement, algebra, geometry, probability and statistics, exponential and logarithmic functions, trigonometry, matrix algebra, conic sections, and calculus are among the areas addressed.

  4. A Game Based e-Learning System to Teach Artificial Intelligence in the Computer Sciences Degree

    ERIC Educational Resources Information Center

    de Castro-Santos, Amable; Fajardo, Waldo; Molina-Solana, Miguel

    2017-01-01

    Our students taking the Artificial Intelligence and Knowledge Engineering courses often encounter a large number of problems to solve that are not directly related to the subject being learned. To solve this problem, we have developed a game-based e-learning system. The chosen game, which has been implemented as an e-learning system, allows to…

  5. Phylo: A Citizen Science Approach for Improving Multiple Sequence Alignment

    PubMed Central

    Kam, Alfred; Kwak, Daniel; Leung, Clarence; Wu, Chu; Zarour, Eleyine; Sarmenta, Luis; Blanchette, Mathieu; Waldispühl, Jérôme

    2012-01-01

    Background Comparative genomics, or the study of the relationships of genome structure and function across different species, offers a powerful tool for studying evolution, annotating genomes, and understanding the causes of various genetic disorders. However, aligning multiple sequences of DNA, an essential intermediate step for most types of analyses, is a difficult computational task. In parallel, citizen science, an approach that takes advantage of the fact that the human brain is exquisitely tuned to solving specific types of problems, is becoming increasingly popular. In this approach, instances of hard computational problems are dispatched to a crowd of non-expert human game players and the solutions are sent back to a central server. Methodology/Principal Findings We introduce Phylo, a human-based computing framework applying “crowd sourcing” techniques to solve the Multiple Sequence Alignment (MSA) problem. The key idea of Phylo is to convert the MSA problem into a casual game that can be played by ordinary web users with minimal prior knowledge of the biological context. We applied this strategy to improve the alignment of the promoters of disease-related genes from up to 44 vertebrate species. Since the launch in November 2010, we have received more than 350,000 solutions submitted by more than 12,000 registered users. Our results show that the submitted solutions contributed to improving the accuracy of up to 70% of the alignment blocks considered. Conclusions/Significance We demonstrate that, combined with classical algorithms, crowd computing techniques can be successfully used to help improve the accuracy of MSA. More importantly, we show that an NP-hard computational problem can be embedded in a casual game that can be easily played by people without significant scientific training. This suggests that citizen science approaches can be used to exploit the billions of “human-brain peta-flops” of computation that are spent every day playing games.
Phylo is available at: http://phylo.cs.mcgill.ca. PMID:22412834

  6. Literal algebra for satellite dynamics. [perturbation analysis

    NASA Technical Reports Server (NTRS)

    Gaposchkin, E. M.

    1975-01-01

    A description of the rather general class of operations available is given and the operations are related to problems in satellite dynamics. The implementation of an algebra processor is discussed. The four main categories of symbol processors are related to list processing, string manipulation, symbol manipulation, and formula manipulation. Fundamental required operations for an algebra processor are considered. It is pointed out that algebra programs have been used for a number of problems in celestial mechanics with great success. The advantage of computer algebra is its accuracy and speed.

  7. A strategy for reducing turnaround time in design optimization using a distributed computer system

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

    There is a need to explore methods for reducing lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.

  8. Cellular Automata

    NASA Astrophysics Data System (ADS)

    Gutowitz, Howard

    1991-08-01

    Cellular automata, dynamic systems in which space and time are discrete, are yielding interesting applications in both the physical and natural sciences. The thirty-four contributions in this book cover many aspects of contemporary studies on cellular automata and include reviews, research reports, and guides to recent literature and available software. Chapters cover mathematical analysis; the structure of the space of cellular automata; learning rules with specified properties; cellular automata in biology, physics, chemistry, and computation theory; and generalizations of cellular automata in neural nets, Boolean nets, and coupled map lattices. Current work on cellular automata may be viewed as revolving around two central and closely related problems: the forward problem and the inverse problem. The forward problem concerns the description of properties of given cellular automata. Properties considered include reversibility, invariants, criticality, fractal dimension, and computational power. The role of cellular automata in computation theory is seen as a particularly exciting venue for exploring parallel computers as theoretical and practical tools in mathematical physics. The inverse problem, an area of study gaining prominence particularly in the natural sciences, involves designing rules that possess specified properties or perform specified tasks. A long-term goal is to develop a set of techniques that can find a rule or set of rules that can reproduce quantitative observations of a physical system. Studies of the inverse problem take up the organization and structure of the set of automata, in particular the parameterization of the space of cellular automata. Optimization and learning techniques, like the genetic algorithm and adaptive stochastic cellular automata, are applied to find cellular automaton rules that model such physical phenomena as crystal growth or perform such adaptive-learning tasks as balancing an inverted pole.
    Howard Gutowitz is Collaborateur in the Service de Physique du Solide et Résonance Magnétique, Commissariat à l'Énergie Atomique, Saclay, France.
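
The forward problem mentioned above, describing the behavior of a given rule, is easy to demonstrate for the simplest case, a one-dimensional two-state automaton. A minimal sketch using Wolfram's rule-numbering convention (rule 90, the XOR rule):

```python
def step(cells, rule=90):
    """One synchronous update of an elementary (1-D, two-state) cellular
    automaton with periodic boundaries; `rule` is the Wolfram rule number."""
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]   # 3-cell neighborhood -> next state
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

# Forward problem: observe the behavior of a given rule from a single seed.
row = [0] * 15
row[7] = 1
for _ in range(4):
    row = step(row, rule=90)
```

Starting from a single live cell, rule 90 traces out the Sierpinski triangle (row t is row t of Pascal's triangle mod 2); the inverse problem would run the other way, searching the space of rule numbers for one that reproduces a desired pattern.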

  9. Integrating Computers into the Problem-Solving Process.

    ERIC Educational Resources Information Center

    Lowther, Deborah L.; Morrison, Gary R.

    2003-01-01

    Asserts that within the context of problem-based learning environments, professors can encourage students to use computers as problem-solving tools. The ten-step Integrating Technology for InQuiry (NteQ) model guides professors through the process of integrating computers into problem-based learning activities. (SWM)

  10. Recognition Using Hybrid Classifiers.

    PubMed

    Osadchy, Margarita; Keren, Daniel; Raviv, Dolev

    2016-04-01

    A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars etc., in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve an identical or better classification rate than SVM, while requiring far smaller memory and lower computational complexity to train and apply.
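
The core idea, replacing the huge negative sample by a prior and separating the positives from it, can be caricatured in a few lines. This sketch assumes an identity-covariance Gaussian prior, which collapses the separating hyperplane to a nearest-mean rule; the paper's method (and its kernel and ensemble extensions) is substantially more general:

```python
import random

random.seed(0)

def hybrid_train(pos, mu0):
    """Toy 'hybrid' linear classifier: the negative class is summarized by a
    prior with mean mu0 (identity covariance assumed), so only positive
    samples are stored. A simplification of the paper's approach."""
    d = len(pos[0])
    mu_pos = [sum(x[i] for x in pos) / len(pos) for i in range(d)]
    w = [mu_pos[i] - mu0[i] for i in range(d)]           # separating direction
    mid = [(mu_pos[i] + mu0[i]) / 2 for i in range(d)]   # midpoint between means
    b = sum(w[i] * mid[i] for i in range(d))             # decision threshold
    return w, b

def classify(x, w, b):
    return sum(wi * xi for wi, xi in zip(w, x)) > b

# Small positive sample clustered near (2, 2); prior centered at the origin.
pos = [(2 + random.gauss(0, 0.3), 2 + random.gauss(0, 0.3)) for _ in range(50)]
w, b = hybrid_train(pos, mu0=(0.0, 0.0))
```

Note the memory claim from the abstract is visible even here: nothing about the (arbitrarily large and diverse) negative set is stored beyond the prior's parameters.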

  11. The interaction between a solid body and viscous fluid by marker-and-cell method

    NASA Technical Reports Server (NTRS)

    Cheng, R. Y. K.

    1976-01-01

    A computational method for solving nonlinear problems relating to impact and penetration of a rigid body into a fluid-type medium is presented. The numerical technique, based on the Marker-and-Cell method, gives the pressure and velocity of the flow field. An important feature of this method is that the force and displacement of the rigid body interacting with the fluid during the impact and sinking phases are evaluated from the boundary stresses imposed by the fluid on the rigid body. A sample problem of low-velocity penetration of a rigid block into still water is solved by this method. The computed time histories of the acceleration, pressure, and displacement of the block show good agreement with experimental measurements. A sample problem of high-velocity impact of a rigid block into soft clay is also presented.

  12. Cellular automatons applied to gas dynamic problems

    NASA Technical Reports Server (NTRS)

    Long, Lyle N.; Coopersmith, Robert M.; Mclachlan, B. G.

    1987-01-01

    This paper compares the results of a relatively new computational fluid dynamics method, cellular automatons, with experimental data and analytical results. This technique has been shown to qualitatively predict fluidlike behavior; however, there have been few published comparisons with experiment or other theories. Comparisons are made for a one-dimensional supersonic piston problem, Stokes first problem, and the flow past a normal flat plate. These comparisons are used to assess the ability of the method to accurately model fluid dynamic behavior and to point out its limitations. Reasonable results were obtained for all three test cases, but the fundamental limitations of cellular automatons are numerous. It may be misleading, at this time, to say that cellular automatons are a computationally efficient technique. Other methods, based on continuum or kinetic theory, would also be very efficient if as little of the physics were included.

  13. Burnout in Turkish Computer Teachers: Problems and Predictors

    ERIC Educational Resources Information Center

    Deryakulu, Deniz

    2006-01-01

    Burnout is known to be a job-related syndrome. Freudenberger (1974) introduced the term "burnout" to describe the inability to function effectively in one's job as a consequence of prolonged and extensive job-related stress. Teaching has been identified as a highly stressful job. Selye (1974, as cited in Iwanicki, 1983) used the terms…

  14. Development of guidelines for the definition of the relevant information content in data classes

    NASA Technical Reports Server (NTRS)

    Schmitt, E.

    1973-01-01

    The problem of experiment design is defined as an information system consisting of information source, measurement unit, environmental disturbances, data handling and storage, and the mathematical analysis and usage of data. Based on today's concept of effective computability, general guidelines for the definition of the relevant information content in data classes are derived. The lack of a universally applicable information theory and corresponding mathematical or system structure is restricting the solvable problem classes to a small set. It is expected that a new relativity theory of information, generally described by a universal algebra of relations will lead to new mathematical models and system structures capable of modeling any well defined practical problem isomorphic to an equivalence relation at any corresponding level of abstractness.

  15. Direct Numerical Simulation of Automobile Cavity Tones

    NASA Technical Reports Server (NTRS)

    Kurbatskii, Konstantin; Tam, Christopher K. W.

    2000-01-01

    The Navier Stokes equation is solved computationally by the Dispersion-Relation-Preserving (DRP) scheme for the flow and acoustic fields associated with a laminar boundary layer flow over an automobile door cavity. In this work, the flow Reynolds number is restricted to R(sub delta*) < 3400; the range of Reynolds number for which laminar flow may be maintained. This investigation focuses on two aspects of the problem, namely, the effect of boundary layer thickness on the cavity tone frequency and intensity and the effect of the size of the computation domain on the accuracy of the numerical simulation. It is found that the tone frequency decreases with an increase in boundary layer thickness. When the boundary layer is thicker than a certain critical value, depending on the flow speed, no tone is emitted by the cavity. Computationally, solutions of aeroacoustics problems are known to be sensitive to the size of the computation domain. Numerical experiments indicate that the use of a small domain could result in normal mode type acoustic oscillations in the entire computation domain leading to an increase in tone frequency and intensity. When the computation domain is expanded so that the boundaries are at least one wavelength away from the noise source, the computed tone frequency and intensity are found to be computation domain size independent.

  16. Dynamic discrete tomography

    NASA Astrophysics Data System (ADS)

    Alpers, Andreas; Gritzmann, Peter

    2018-03-01

    We consider the problem of reconstructing the paths of a set of points over time, where, at each of a finite set of moments in time the current positions of points in space are only accessible through some small number of their x-rays. This particular particle tracking problem, with applications, e.g. in plasma physics, is the basic problem in dynamic discrete tomography. We introduce and analyze various different algorithmic models. In particular, we determine the computational complexity of the problem (and various of its relatives) and derive algorithms that can be used in practice. As a byproduct we provide new results on constrained variants of min-cost flow and matching problems.
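
For a handful of particles, the frame-to-frame correspondence subproblem can be solved by brute force as a min-cost bipartite matching; the positions below are made up for illustration, and realistic instances would use the min-cost-flow machinery the paper analyzes rather than enumeration:

```python
from itertools import permutations

def min_cost_matching(cost):
    """Exhaustive min-cost bipartite matching: assign each detected point at
    time t to a point at time t+1, minimizing total squared displacement.
    Fine for a toy instance; real solvers reduce this to min-cost flow,
    as in the constrained variants discussed in the paper."""
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best:
            best, best_perm = c, perm
    return best_perm, best

# Particle positions at two consecutive time steps (hypothetical data).
t0 = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
t1 = [(4.8, 0.3), (0.2, 5.1), (0.1, -0.2)]
cost = [[(ax - bx) ** 2 + (ay - by) ** 2 for bx, by in t1] for ax, ay in t0]
perm, total = min_cost_matching(cost)
```

The dynamic-tomography setting is harder than this sketch suggests because the "positions" are only observed through a few x-rays, but the matching step above is the combinatorial core being constrained.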

  17. An Algorithm for the Weighted Earliness-Tardiness Unconstrained Project Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Afshar Nadjafi, Behrouz; Shadrokh, Shahram

    This research considers a project scheduling problem with the objective of minimizing weighted earliness-tardiness penalty costs, taking into account a deadline for the project and precedence relations among the activities. An exact recursive method has been proposed for solving the basic form of this problem. We present a new depth-first branch and bound algorithm for the extended form of the problem, in which the time value of money is taken into account by discounting the cash flows. The algorithm is extended with two bounding rules in order to reduce the size of the branch and bound tree. Finally, some test problems are solved and computational results are reported.
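
A toy version of the depth-first branch and bound idea can be sketched as follows. The activity data are invented, pruning uses only the incumbent cost, and the paper's discounted-cash-flow extension and its two additional bounding rules are omitted:

```python
def et_cost(finish, due, w_e, w_t):
    """Weighted earliness-tardiness penalty for one activity."""
    return w_e * max(0, due - finish) + w_t * max(0, finish - due)

def branch_and_bound(acts, prec, deadline):
    """Depth-first branch and bound over integer start times. `acts` maps
    name -> (duration, due date, earliness weight, tardiness weight);
    `prec` lists (a, b) pairs meaning a must finish before b starts.
    Names must be listed in an order consistent with `prec`."""
    names = list(acts)
    best = {"cost": float("inf"), "sched": None}

    def dfs(i, sched, cost):
        if cost >= best["cost"]:
            return                                    # bound: prune this branch
        if i == len(names):
            best["cost"], best["sched"] = cost, dict(sched)
            return
        name = names[i]
        dur, due, we, wt = acts[name]
        earliest = max([0] + [sched[a] + acts[a][0]
                              for a, b in prec if b == name and a in sched])
        for start in range(earliest, deadline - dur + 1):
            sched[name] = start
            dfs(i + 1, sched, cost + et_cost(start + dur, due, we, wt))
            del sched[name]

    dfs(0, {}, 0)
    return best["sched"], best["cost"]

acts = {"A": (2, 3, 1, 2), "B": (3, 7, 1, 3)}   # duration, due, w_early, w_tardy
sched, cost = branch_and_bound(acts, prec=[("A", "B")], deadline=10)
```

On this instance the optimum is a just-in-time schedule with zero penalty: A finishes exactly at its due date 3, and B at its due date 7, within the deadline of 10.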

  18. Challenges and opportunities of cloud computing for atmospheric sciences

    NASA Astrophysics Data System (ADS)

    Pérez Montes, Diego A.; Añel, Juan A.; Pena, Tomás F.; Wallom, David C. H.

    2016-04-01

    Cloud computing is an emerging technological solution widely used in many fields. Initially developed as a flexible way of managing peak demand, it has begun to make its way into scientific research. One of the greatest advantages of cloud computing for scientific research is independence from access to a large cyberinfrastructure when funding or performing a research project. Cloud computing can avoid maintenance expenses for large supercomputers and has the potential to 'democratize' access to high-performance computing, giving funding bodies flexibility in allocating budgets for the computational costs associated with a project. Two of the most challenging problems in atmospheric sciences are computational cost and uncertainty in meteorological forecasting and climate projections. Both problems are closely related: usually uncertainty can be reduced with the availability of computational resources to better reproduce a phenomenon or to perform a larger number of experiments. Here we present results of the application of cloud computing resources for climate modeling using cloud computing infrastructures of three major vendors and two climate models. We show how the cloud infrastructure compares in performance to traditional supercomputers and how it provides the capability to complete experiments in shorter periods of time. The associated monetary cost is also analyzed. Finally, we discuss the future potential of this technology for meteorological and climatological applications, both from the point of view of operational use and research.

  19. A Cognitive Model for Problem Solving in Computer Science

    ERIC Educational Resources Information Center

    Parham, Jennifer R.

    2009-01-01

    According to industry representatives, computer science education needs to emphasize the processes involved in solving computing problems rather than their solutions. Most of the current assessment tools used by universities and computer science departments analyze student answers to problems rather than investigating the processes involved in…

  20. Application of evolutionary computation in ECAD problems

    NASA Astrophysics Data System (ADS)

    Lee, Dae-Hyun; Hwang, Seung H.

    1998-10-01

    Design of modern electronic system is a complicated task which demands the use of computer- aided design (CAD) tools. Since a lot of problems in ECAD are combinatorial optimization problems, evolutionary computations such as genetic algorithms and evolutionary programming have been widely employed to solve those problems. We have applied evolutionary computation techniques to solve ECAD problems such as technology mapping, microcode-bit optimization, data path ordering and peak power estimation, where their benefits are well observed. This paper presents experiences and discusses issues in those applications.

  1. Developing an undergraduate geography course on digital image processing of remotely sensed data

    NASA Technical Reports Server (NTRS)

    Baumann, P. R.

    1981-01-01

    Problems relating to the development of a digital image processing course in an undergraduate geography environment are discussed. Computer resource requirements, course prerequisites, and the size of the study area are addressed.

  2. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
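
The flavor of the curvature regularization can be illustrated with a plain BFGS Hessian-approximation update whose curvature term is floored so the approximation cannot lose positive definiteness. This is a sketch in the spirit of RES, not the paper's exact correction, and it uses exact rather than stochastic gradients:

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def bfgs_update(B, s, y, delta=1e-2):
    """One BFGS update of the Hessian approximation B from a step s and the
    corresponding gradient change y. The curvature s'y is floored at
    delta*s's, a crude stand-in for the regularization RES applies when y
    comes from noisy stochastic gradients (the paper's correction differs)."""
    n = len(B)
    Bs = matvec(B, s)
    sBs = dot(s, Bs)
    sy = max(dot(s, y), delta * dot(s, s))   # floor the curvature estimate
    return [[B[i][j] + y[i] * y[j] / sy - Bs[i] * Bs[j] / sBs
             for j in range(n)] for i in range(n)]

# With exact curvature data the floor is inactive and the update satisfies
# the secant condition B_new s = y.
A = [[2.0, 0.5], [0.5, 1.0]]          # true Hessian of a toy quadratic
s = [1.0, 0.0]                        # a step
y = matvec(A, s)                      # gradient change along that step
B_new = bfgs_update([[1.0, 0.0], [0.0, 1.0]], s, y)
```

With noisy gradients, `dot(s, y)` can be tiny or negative, which is exactly when the floor (and RES's more careful regularization) keeps the eigenvalues of B bounded away from zero.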

  3. Numerical propulsion system simulation

    NASA Technical Reports Server (NTRS)

    Lytle, John K.; Remaklus, David A.; Nichols, Lester D.

    1990-01-01

    The cost of implementing new technology in aerospace propulsion systems is becoming prohibitively expensive. One of the major contributors to the high cost is the need to perform many large scale system tests. Extensive testing is used to capture the complex interactions among the multiple disciplines and the multiple components inherent in complex systems. The objective of the Numerical Propulsion System Simulation (NPSS) is to provide insight into these complex interactions through computational simulations. This will allow for comprehensive evaluation of new concepts early in the design phase before a commitment to hardware is made. It will also allow for rapid assessment of field-related problems, particularly in cases where operational problems were encountered during conditions that would be difficult to simulate experimentally. The tremendous progress taking place in computational engineering and the rapid increase in computing power expected through parallel processing make this concept feasible within the near future. However it is critical that the framework for such simulations be put in place now to serve as a focal point for the continued developments in computational engineering and computing hardware and software. The NPSS concept which is described will provide that framework.

  4. Ergonomic guidelines for using notebook personal computers. Technical Committee on Human-Computer Interaction, International Ergonomics Association.

    PubMed

    Saito, S; Piccoli, B; Smith, M J; Sotoyama, M; Sweitzer, G; Villanueva, M B; Yoshitake, R

    2000-10-01

    In the 1980's, the visual display terminal (VDT) was introduced in workplaces of many countries. Soon thereafter, an upsurge in reported cases of related health problems, such as musculoskeletal disorders and eyestrain, was seen. Recently, the flat panel display or notebook personal computer (PC) became the most remarkable feature in modern workplaces with VDTs and even in homes. A proactive approach must be taken to avert foreseeable ergonomic and occupational health problems from the use of this new technology. Because of its distinct physical and optical characteristics, the ergonomic requirements for notebook PCs in terms of machine layout, workstation design, lighting conditions, among others, should be different from the CRT-based computers. The Japan Ergonomics Society (JES) technical committee came up with a set of guidelines for notebook PC use following exploratory discussions that dwelt on its ergonomic aspects. To keep in stride with this development, the Technical Committee on Human-Computer Interaction under the auspices of the International Ergonomics Association worked towards the international issuance of the guidelines. This paper unveils the result of this collaborative effort.

  5. Numerical Simulation of Black Holes

    NASA Astrophysics Data System (ADS)

    Teukolsky, Saul

    2003-04-01

    Einstein's equations of general relativity are prime candidates for numerical solution on supercomputers. There is some urgency in being able to carry out such simulations: Large-scale gravitational wave detectors are now coming on line, and the most important expected signals cannot be predicted except numerically. Problems involving black holes are perhaps the most interesting, yet also particularly challenging computationally. One difficulty is that inside a black hole there is a physical singularity that cannot be part of the computational domain. A second difficulty is the disparity in length scales between the size of the black hole and the wavelength of the gravitational radiation emitted. A third difficulty is that all existing methods of evolving black holes in three spatial dimensions are plagued by instabilities that prohibit long-term evolution. I will describe the ideas that are being introduced in numerical relativity to deal with these problems, and discuss the results of recent calculations of black hole collisions.

  6. A Cognitive Simulator for Learning the Nature of Human Problem Solving

    NASA Astrophysics Data System (ADS)

    Miwa, Kazuhisa

    Problem solving is understood as a process through which states of problem solving are transferred from the initial state to the goal state by applying adequate operators. Within this framework, knowledge and strategies are given as operators for the search. One of the most important points of researchers' interest in the domain of problem solving is to explain the performance of problem solving behavior based on the knowledge and strategies that the problem solver has. We call the interplay between problem solvers' knowledge/strategies and their behavior the causal relation between mental operations and behavior. It is crucially important, we believe, for novice learners in this domain to understand the causal relation between mental operations and behavior. Based on this insight, we have constructed a learning system in which learners can control mental operations of a computational agent that solves a task, such as knowledge, heuristics, and cognitive capacity, and can observe its behavior. We also introduce this system to a university class, and discuss which findings were discovered by the participants.
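
The framing above, transferring states from the initial state to the goal state by applying operators, can be made concrete with a breadth-first state-space search. The classic two-jug puzzle below is a hypothetical stand-in for the tasks the simulator presents, not the system's actual task:

```python
from collections import deque

def solve(start, goal_test, operators):
    """Breadth-first state-space search: repeatedly apply operators to
    transform the initial state into a goal state, returning the shortest
    operator sequence (or None if no goal state is reachable)."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for name, op in operators:
            nxt = op(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# Two jugs of capacity 4 and 3; goal: exactly 2 units in the first jug.
CAP = (4, 3)
ops = [
    ("fill A",  lambda s: (CAP[0], s[1])),
    ("fill B",  lambda s: (s[0], CAP[1])),
    ("empty A", lambda s: (0, s[1])),
    ("empty B", lambda s: (s[0], 0)),
    ("A->B",    lambda s: (s[0] - min(s[0], CAP[1] - s[1]),
                           s[1] + min(s[0], CAP[1] - s[1]))),
    ("B->A",    lambda s: (s[0] + min(s[1], CAP[0] - s[0]),
                           s[1] - min(s[1], CAP[0] - s[0]))),
]
plan = solve((0, 0), lambda s: s[0] == 2, ops)
```

Changing the operator set or the order in which operators are tried is a direct analogue of manipulating a learner-controlled agent's knowledge and strategies and observing how its behavior changes.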

  7. An analysis of spectral envelope-reduction via quadratic assignment problems

    NASA Technical Reports Server (NTRS)

    George, Alan; Pothen, Alex

    1994-01-01

    A new spectral algorithm for reordering a sparse symmetric matrix to reduce its envelope size was described. The ordering is computed by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. In this paper, we provide an analysis of the spectral envelope reduction algorithm. We describe related 1- and 2-sum problems; the former is related to the envelope size, while the latter is related to an upper bound on the work involved in an envelope Cholesky factorization scheme. We formulate the latter two problems as quadratic assignment problems, and then study the 2-sum problem in more detail. We obtain lower bounds on the 2-sum by considering a projected quadratic assignment problem, and then show that finding a permutation matrix closest to an orthogonal matrix attaining one of the lower bounds justifies the spectral envelope reduction algorithm. The lower bound on the 2-sum is seen to be tight for reasonably 'uniform' finite element meshes. We also obtain asymptotically tight lower bounds for the envelope size for certain classes of meshes.
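
The reordering step itself, associating a Laplacian with the matrix and sorting the components of one of its eigenvectors, can be sketched in pure Python. The power-iteration eigensolver and the scrambled path graph below are illustrative simplifications; production codes use sparse eigensolvers and measure envelope size rather than the bandwidth checked here:

```python
def fiedler_order(adj):
    """Order vertices by the Fiedler vector (the eigenvector of the
    second-smallest Laplacian eigenvalue), found by power iteration on
    c*I - L with the constant eigenvector projected out each step."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    c = 2 * max(deg)                                # all eigenvalues of L <= c
    v = [((i * 7919) % 13) - 6 for i in range(n)]   # fixed pseudo-random start
    for _ in range(500):
        mean = sum(v) / n
        v = [x - mean for x in v]                   # deflate the constant vector
        w = [c * v[i] - (deg[i] * v[i] - sum(adj[i][j] * v[j] for j in range(n)))
             for i in range(n)]
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
    return sorted(range(n), key=lambda i: v[i])     # sort by eigenvector entry

def bandwidth(adj, order):
    """Max distance of a nonzero from the diagonal under the given ordering
    (a crude proxy for the envelope size the algorithm targets)."""
    pos = {v: k for k, v in enumerate(order)}
    return max(abs(pos[i] - pos[j])
               for i in range(len(adj)) for j in range(len(adj)) if adj[i][j])

# A path graph given with scrambled vertex labels (hypothetical example).
edges = [(2, 0), (0, 4), (4, 1), (1, 5), (5, 3)]
n = 6
adj = [[0] * n for _ in range(n)]
for a, b in edges:
    adj[a][b] = adj[b][a] = 1

order = fiedler_order(adj)
```

On this example the spectral ordering recovers the underlying path (up to reversal), collapsing the bandwidth from 4 under the original labeling to 1.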

  8. Teaching Reductive Thinking

    ERIC Educational Resources Information Center

    Armoni, Michal; Gal-Ezer, Judith

    2005-01-01

    When dealing with a complex problem, solving it by reduction to simpler problems, or problems for which the solution is already known, is a common method in mathematics and other scientific disciplines, as in computer science and, specifically, in the field of computability. However, when teaching computational models (as part of computability)…

  9. The potential benefits of photonics in the computing platform

    NASA Astrophysics Data System (ADS)

    Bautista, Jerry

    2005-03-01

    The increase in computational requirements for real-time image processing, complex computational fluid dynamics, very large scale data mining in the health industry/Internet, and predictive models for financial markets are driving computer architects to consider new paradigms that rely upon very high speed interconnects within and between computing elements. Further challenges result from reduced power requirements, reduced transmission latency, and greater interconnect density. Optical interconnects may solve many of these problems, with the added benefit of extended reach. In addition, photonic interconnects provide relative EMI immunity, which is becoming an increasing issue with greater dependence on wireless connectivity. However, to be truly functional, the optical interconnect mesh should be able to support arbitration, addressing, etc. completely in the optical domain, with a BER that is more stringent than "traditional" communication requirements. Outlined are challenges in the advanced computing environment, some possible optical architectures and relevant platform technologies, as well as a rough sizing of these opportunities, which are quite large relative to the more "traditional" optical markets.

  10. Scilab software as an alternative low-cost computing in solving the linear equations problem

    NASA Astrophysics Data System (ADS)

    Agus, Fahrul; Haviluddin

    2017-02-01

    Numerical computation packages are widely used both in teaching and research. These packages comprise licensed (proprietary) and open source (non-proprietary) software. One reason to use such a package is the complexity of mathematical functions (i.e., linear problems); moreover, the number of variables in linear and non-linear functions has increased. The aim of this paper was to reflect on key aspects related to method, didactics, and creative praxis in the teaching of linear equations in higher education. If implemented, this could contribute to better learning in mathematics (i.e., solving simultaneous linear equations), which is essential for future engineers. The focus of this study was to introduce the numerical computation package Scilab as an alternative low-cost computing environment. In this paper, some activities related to mathematical models were proposed using Scilab. In this experiment, four numerical methods, namely Gaussian Elimination, Gauss-Jordan, Inverse Matrix, and Lower-Upper (LU) Decomposition, were implemented. The results of this study showed that routines implementing these numerical methods could be created and explored using Scilab procedures, and that these routines could then be exploited as teaching material for courses.
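As a rough illustration of one of the four methods described, Gaussian elimination with partial pivoting fits in a few lines; NumPy stands in here for Scilab (this is a generic sketch, not the paper's Scilab routines, and the example system is invented):

```python
import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting, then back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))      # choose the pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]                # eliminate column k below pivot
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):               # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gauss_solve(A, b))   # → [ 2.  3. -1.]
```

The same system solved by LU decomposition or matrix inversion gives the identical answer; the methods differ in cost and numerical robustness, which is the pedagogical point of comparing them.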

  11. Statistical mechanics of complex neural systems and high dimensional data

    NASA Astrophysics Data System (ADS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-03-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.
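The message passing the review connects to neural computation can be illustrated on the simplest graphical model, a chain of binary variables, where sum-product messages give exact marginals. The potentials below are made up for the sketch; nothing here is taken from the paper:

```python
import numpy as np
from itertools import product

# unary potentials phi[i] on x_i and pairwise potentials psi[i] on (x_i, x_{i+1})
rng = np.random.RandomState(1)
n = 4
phi = [rng.rand(2) + 0.1 for _ in range(n)]
psi = [rng.rand(2, 2) + 0.1 for _ in range(n - 1)]

def brute_marginal(k):
    """Marginal of x_k by summing the joint over all 2^n configurations."""
    p = np.zeros(2)
    for x in product(range(2), repeat=n):
        w = np.prod([phi[i][x[i]] for i in range(n)])
        w *= np.prod([psi[i][x[i], x[i + 1]] for i in range(n - 1)])
        p[x[k]] += w
    return p / p.sum()

def bp_marginal(k):
    """Same marginal from forward and backward sum-product messages."""
    fwd = np.ones(2)
    for i in range(k):                       # messages flowing left-to-right
        fwd = (fwd * phi[i]) @ psi[i]
    bwd = np.ones(2)
    for i in range(n - 1, k, -1):            # messages flowing right-to-left
        bwd = psi[i - 1] @ (phi[i] * bwd)
    m = fwd * phi[k] * bwd
    return m / m.sum()

print(bp_marginal(2), brute_marginal(2))     # identical on a tree
```

On tree-structured graphs message passing is exact in linear time, while brute force is exponential; on loopy graphs the same updates become the approximate inference schemes analyzed with the replica and cavity methods.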

  12. Reciprocal Associations between Electronic Media Use and Behavioral Difficulties in Preschoolers.

    PubMed

    Poulain, Tanja; Vogel, Mandy; Neef, Madlen; Abicht, Franziska; Hilbert, Anja; Genuneit, Jon; Körner, Antje; Kiess, Wieland

    2018-04-21

    The use of electronic media has increased substantially and is already observable in young children. The present study explored associations of preschoolers’ use of electronic media with age, gender, and socio-economic status, investigated time trends, and examined reciprocal longitudinal relations between children’s use of electronic media and their behavioral difficulties. The study participants included 527 German two- to six-year-old children whose parents had provided information on their use of electronic media and their behavioral difficulties at two time points, with approximately 12 months between baseline and follow-up. The analyses revealed that older vs. younger children, as well as children from families with a lower vs. higher socio-economic status, were more often reported to use electronic media. Furthermore, the usage of mobile phones increased significantly between 2011 and 2016. Most interestingly, baseline usage of computer/Internet predicted more emotional and conduct problems at follow-up, and baseline usage of mobile phones was associated with more conduct problems and hyperactivity or inattention at follow-up. Peer relationship problems at baseline, on the other hand, increased the likelihood of using computer/Internet and mobile phones at follow-up. The findings indicate that preschoolers’ use of electronic media, especially newer media such as computer/Internet and mobile phones, and their behavioral difficulties are mutually related over time.

  13. Reciprocal Associations between Electronic Media Use and Behavioral Difficulties in Preschoolers

    PubMed Central

    Vogel, Mandy; Neef, Madlen; Abicht, Franziska; Hilbert, Anja; Körner, Antje; Kiess, Wieland

    2018-01-01

    The use of electronic media has increased substantially and is already observable in young children. The present study explored associations of preschoolers’ use of electronic media with age, gender, and socio-economic status, investigated time trends, and examined reciprocal longitudinal relations between children’s use of electronic media and their behavioral difficulties. The study participants included 527 German two- to six-year-old children whose parents had provided information on their use of electronic media and their behavioral difficulties at two time points, with approximately 12 months between baseline and follow-up. The analyses revealed that older vs. younger children, as well as children from families with a lower vs. higher socio-economic status, were more often reported to use electronic media. Furthermore, the usage of mobile phones increased significantly between 2011 and 2016. Most interestingly, baseline usage of computer/Internet predicted more emotional and conduct problems at follow-up, and baseline usage of mobile phones was associated with more conduct problems and hyperactivity or inattention at follow-up. Peer relationship problems at baseline, on the other hand, increased the likelihood of using computer/Internet and mobile phones at follow-up. The findings indicate that preschoolers’ use of electronic media, especially newer media such as computer/Internet and mobile phones, and their behavioral difficulties are mutually related over time. PMID:29690498

  14. Advanced Computational Aeroacoustics Methods for Fan Noise Prediction

    NASA Technical Reports Server (NTRS)

    Envia, Edmane (Technical Monitor); Tam, Christopher

    2003-01-01

    Direct computation of fan noise is presently not possible. One of the major difficulties is the geometrical complexity of the problem. In the case of fan noise, the blade geometry is critical to the loading on the blade and hence the intensity of the radiated noise. The precise geometry must be incorporated into the computation. In computational fluid dynamics (CFD), there are two general ways to handle problems with complex geometry. One way is to use unstructured grids. The other is to use body-fitted overset grids. In the overset grid method, accurate data transfer is of utmost importance. For acoustic computation, it is not clear that the currently used data transfer methods are sufficiently accurate as not to contaminate the very small amplitude acoustic disturbances. In CFD, low order schemes are, invariably, used in conjunction with unstructured grids. However, low order schemes are known to be numerically dispersive and dissipative. Dispersive and dissipative errors are extremely undesirable for acoustic wave problems. The objective of this project is to develop a high order unstructured grid Dispersion-Relation-Preserving (DRP) scheme that would minimize numerical dispersion and dissipation errors. This report contains the results of the funded portion of the project. A DRP scheme on an unstructured grid has been developed; it is constructed in the wave number space. The characteristics of the scheme can be improved by the inclusion of additional constraints. Stability of the scheme has been investigated. Stability can be improved by adopting an upwinding strategy.
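The dispersion error that motivates DRP schemes can be seen by comparing the modified wavenumber of a finite-difference stencil with the exact wavenumber. This generic sketch compares standard second- and fourth-order central differences; it is unrelated to the project's unstructured-grid scheme:

```python
import numpy as np

def modified_wavenumber(coeffs, kdx):
    """Effective k*dx of a central FD stencil applied to exp(i k x):
    the imaginary part of sum_j a_j exp(i j k dx)."""
    half = len(coeffs) // 2
    js = np.arange(-half, half + 1)
    return np.imag(np.sum(coeffs * np.exp(1j * js * kdx)))

second = np.array([-0.5, 0.0, 0.5])                  # 2nd-order central
fourth = np.array([1/12, -8/12, 0.0, 8/12, -1/12])   # 4th-order central

kdx = 1.0   # roughly 6 grid points per wavelength
for c in (second, fourth):
    err = abs(modified_wavenumber(c, kdx) - kdx)     # dispersion error
    print(err)
```

A non-dispersive scheme would return k*dx = kdx exactly; DRP schemes choose the stencil coefficients to minimize this error over a band of wavenumbers rather than to maximize formal order of accuracy.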

  15. Exact parallel algorithms for some members of the traveling salesman problem family

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pekny, J.F.

    1989-01-01

    The traveling salesman problem and its many generalizations comprise one of the best known combinatorial optimization problem families. Most members of the family are NP-complete problems so that exact algorithms require an unpredictable and sometimes large computational effort. Parallel computers offer hope for providing the power required to meet these demands. A major barrier to applying parallel computers is the lack of parallel algorithms. The contributions presented in this thesis center around new exact parallel algorithms for the asymmetric traveling salesman problem (ATSP), prize collecting traveling salesman problem (PCTSP), and resource constrained traveling salesman problem (RCTSP). The RCTSP is a particularly difficult member of the family since finding a feasible solution is an NP-complete problem. An exact sequential algorithm is also presented for the directed hamiltonian cycle problem (DHCP). The DHCP algorithm is superior to current heuristic approaches and represents the first exact method applicable to large graphs. Computational results presented for each of the algorithms demonstrates the effectiveness of combining efficient algorithms with parallel computing methods. Performance statistics are reported for randomly generated ATSPs with 7,500 cities, PCTSPs with 200 cities, RCTSPs with 200 cities, DHCPs with 3,500 vertices, and assignment problems of size 10,000. Sequential results were collected on a Sun 4/260 engineering workstation, while parallel results were collected using a 14 and 100 processor BBN Butterfly Plus computer. The computational results represent the largest instances ever solved to optimality on any type of computer.
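The thesis's parallel branch-and-bound algorithms are not reproduced here, but the flavor of exact ATSP solution can be shown with the classical Held-Karp dynamic program, practical only for small instances (the distance matrix below is a textbook example, not from the thesis):

```python
from itertools import combinations

def held_karp(dist):
    """Exact asymmetric TSP by Held-Karp dynamic programming, O(n^2 2^n)."""
    n = len(dist)
    # best[(S, j)] = cheapest path from city 0 through set S, ending at j in S
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            fs = frozenset(S)
            for j in S:
                best[(fs, j)] = min(best[(fs - {j}, k)] + dist[k][j]
                                    for k in S if k != j)
    full = frozenset(range(1, n))
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(held_karp(dist))  # → 21
```

The exponential state space is exactly why exact solution of large instances, as in the thesis, requires pruning (branch and bound) and parallel hardware rather than exhaustive dynamic programming.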

  16. Some unsolved problems in discrete mathematics and mathematical cybernetics

    NASA Astrophysics Data System (ADS)

    Korshunov, Aleksei D.

    2009-10-01

    There are many unsolved problems in discrete mathematics and mathematical cybernetics. Writing a comprehensive survey of such problems involves great difficulties. First, such problems are rather numerous and varied. Second, they greatly differ from each other in degree of completeness of their solution. Therefore, even a comprehensive survey should not attempt to cover the whole variety of such problems; only the most important and significant problems should be reviewed. An impersonal choice of problems to include is quite hard. This paper includes 13 unsolved problems related to combinatorial mathematics and computational complexity theory. The problems selected give an indication of the author's studies for 50 years; for this reason, the choice of the problems reviewed here is, to some extent, subjective. At the same time, these problems are very difficult and quite important for discrete mathematics and mathematical cybernetics. Bibliography: 74 items.

  17. REVIEWS OF TOPICAL PROBLEMS: Analytic calculations on digital computers for applications in physics and mathematics

    NASA Astrophysics Data System (ADS)

    Gerdt, V. P.; Tarasov, O. V.; Shirkov, Dmitrii V.

    1980-01-01

    The present state of analytic calculations on computers is reviewed. Several programming systems which are used for analytic calculations are discussed: SCHOONSCHIP, CLAM, REDUCE-2, SYMBAL, CAMAL, AVTO-ANALITIK, MACSYMA, etc. It is shown that these systems can be used to solve a wide range of problems in physics and mathematics. Some physical applications are discussed in celestial mechanics, the general theory of relativity, quantum field theory, plasma physics, hydrodynamics, atomic and molecular physics, and quantum chemistry. Some mathematical applications which are discussed are evaluating indefinite integrals, solving differential equations, and analyzing mathematical expressions. This review is addressed to physicists and mathematicians working in a wide range of fields.
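Systems such as REDUCE and MACSYMA pioneered exactly the kinds of computations surveyed here. As a modern open-source illustration (using SymPy, which postdates the review and is chosen only for availability), evaluating an indefinite integral and solving an ordinary differential equation symbolically:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

# indefinite integral of x * e^x
F = sp.integrate(x * sp.exp(x), x)
print(F)        # (x - 1)*exp(x)

# ordinary differential equation f'' + f = 0
ode = sp.Eq(f(x).diff(x, 2) + f(x), 0)
sol = sp.dsolve(ode, f(x))
print(sol)      # f(x) = C1*sin(x) + C2*cos(x)
```

Differentiating the returned antiderivative recovers the integrand, the standard sanity check for such systems.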

  18. Mathematical modelling and simulation of a tennis racket.

    PubMed

    Brannigan, M; Adali, S

    1981-01-01

    By constructing a mathematical model, we consider the dynamics of a tennis racket hit by a ball. Using this model, known experimental results can be simulated on the computer, and it becomes possible to make a parametric study of a racket. Such a simulation is essential in the study of two important problems related to tennis: computation of the resulting forces and moments transferred to the hand should assist understanding of the medical problem 'tennis elbow'; secondly, simulation will enable a study to be made of the relationships between the impact time, tension in the strings, forces transmitted to the rim and return velocity of the ball, all of which can lead to the optimal design of rackets.

  19. Simulator for multilevel optimization research

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Young, K. C.

    1986-01-01

    A computer program designed to simulate and improve multilevel optimization techniques is described. By using simple analytic functions to represent complex engineering analyses, the simulator can generate and test a large variety of multilevel decomposition strategies in a relatively short time. This type of research is an essential step toward routine optimization of large aerospace systems. The paper discusses the types of optimization problems handled by the simulator and gives input and output listings and plots for a sample problem. It also describes multilevel implementation techniques which have value beyond the present computer program. Thus, this document serves as a user's manual for the simulator and as a guide for building future multilevel optimization applications.

  20. NAS Technical Summaries, March 1993 - February 1994

    NASA Technical Reports Server (NTRS)

    1995-01-01

    NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1993-94 operational year concluded with 448 high-speed processor projects and 95 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.

  1. NAS technical summaries. Numerical aerodynamic simulation program, March 1992 - February 1993

    NASA Technical Reports Server (NTRS)

    1994-01-01

    NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1992-93 operational year concluded with 399 high-speed processor projects and 91 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.

  2. Distributed computer taxonomy based on O/S structure

    NASA Technical Reports Server (NTRS)

    Foudriat, Edwin C.

    1985-01-01

    The taxonomy considers the resource structure at the operating system level. It compares a communication based taxonomy with the new taxonomy to illustrate how the latter does a better job when related to the client's view of the distributed computer. The results illustrate the fundamental features and what is required to construct fully distributed processing systems. The problem of using network computers on the space station is addressed. A detailed discussion of the taxonomy is not given here. Information is given in the form of charts and diagrams that were used to illustrate a talk.

  3. Parallelizing a peanut butter sandwich

    NASA Astrophysics Data System (ADS)

    Quenette, S. M.

    2005-12-01

    This poster aims to demonstrate, in a novel way, why contemporary computational code development is seemingly hard to a geodynamics modeler (i.e. a non-computer-scientist). For example, to utilise comtemporary computer hardware, parallelisation is required. But why do we chose the explicit approach (MPI) over an implicit (OpenMP) one? How does this relate to the typical geodynamics codes. And do we face this same style of problems in every day life? We aim to demonstrate that the little bit of complexity, fore-thought and effort is worth its while.

  4. On Target Localization Using Combined RSS and AoA Measurements

    PubMed Central

    Beko, Marko; Dinis, Rui

    2018-01-01

    This work reviews existing solutions for the problem of target localization in wireless sensor networks (WSNs) utilizing integrated measurements, namely received signal strength (RSS) and angle of arrival (AoA). The problem of RSS/AoA-based target localization has recently become very popular in the research community, owing to its great applicability potential and relatively low implementation cost. Therefore, a comprehensive study of the state-of-the-art (SoA) solutions and their detailed analysis is presented here. The work starts by considering the SoA approaches based on convex relaxation techniques (more computationally complex in general) and then covers less computationally complex approaches as well, such as those based on the generalized trust region sub-problems framework and linear least squares. A detailed analysis of the computational complexity of each solution is reviewed, and an extensive set of simulation results is presented. Finally, the main conclusions are summarized, and a set of future aspects and trends that might be interesting for future research in this area is identified. PMID:29671832
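Among the less computationally complex approaches, a linear least squares fix from bearings alone can be sketched as follows. Each AoA measurement constrains the target to a line through its anchor, and stacking the line equations gives an overdetermined linear system. The anchor positions and noise values are invented for illustration, and the surveyed estimators additionally fuse RSS:

```python
import numpy as np

def aoa_lls(anchors, bearings):
    """Least-squares target fix from azimuth bearings measured at anchors.
    Each bearing th at anchor (ax, ay) gives sin(th)*x - cos(th)*y
    = sin(th)*ax - cos(th)*ay."""
    A, b = [], []
    for (ax, ay), th in zip(anchors, bearings):
        A.append([np.sin(th), -np.cos(th)])
        b.append(np.sin(th) * ax - np.cos(th) * ay)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

target = np.array([3.0, 4.0])
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_bearings = [np.arctan2(target[1] - ay, target[0] - ax)
                 for ax, ay in anchors]
noisy = [t + 0.01 * e for t, e in zip(true_bearings, [1.0, -1.0, 0.5])]
print(aoa_lls(anchors, noisy))   # close to (3, 4)
```

With exact bearings the solution is exact; the convex relaxation methods reviewed in the paper trade extra computation for better robustness when the angular noise is large.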

  5. DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro

    2016-10-01

    This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating directions method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.

  6. Optimal processor assignment for pipeline computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath

    1991-01-01

    The availability of large scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p-processor system and a series-parallel precedence graph with n constituent tasks, an O(np^2) algorithm is provided that finds the optimal assignment for the response time optimization problem; the assignment optimizing the constrained throughput is found in O(np^2 log p) time. Special cases of linear, independent, and tree graphs are also considered.
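For the special case of a linear (chain) pipeline, the response-time assignment can be sketched as an O(np^2) dynamic program over the processor budget. This is an illustrative reconstruction with an invented timing table, not the paper's series-parallel algorithm:

```python
def assign(times, p):
    """Minimize total pipeline response time.
    times[i][k-1] = response of stage i on k processors (nonincreasing in k);
    best[j] = cheapest total response using exactly j processors so far."""
    INF = float('inf')
    best = [INF] * (p + 1)
    best[0] = 0.0
    for stage in times:
        new = [INF] * (p + 1)
        for j in range(p + 1):
            if best[j] == INF:
                continue
            for k in range(1, p - j + 1):    # give this stage k processors
                c = best[j] + stage[k - 1]
                if c < new[j + k]:
                    new[j + k] = c
        best = new
    return min(best)

# three stages, a budget of 4 processors; times shrink with more processors
times = [[9, 5, 4, 4],
         [6, 4, 3, 3],
         [8, 5, 4, 3]]
print(assign(times, 4))   # → 19 (stage 1 gets 2 processors)
```

Swapping the sum for a max over stages, plus a binary search on the bottleneck time, gives the throughput-oriented variant, which is where the extra log p factor in the abstract comes from.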

  7. The future of computing--new architectures and new technologies.

    PubMed

    Warren, P

    2004-02-01

    All modern computers are designed using the 'von Neumann' architecture and built using silicon transistor technology. Both architecture and technology have been remarkably successful. Yet there are a range of problems for which this conventional architecture is not particularly well adapted, and new architectures are being proposed to solve these problems, in particular based on insight from nature. Transistor technology has enjoyed 50 years of continuing progress. However, the laws of physics dictate that within a relatively short time period this progress will come to an end. New technologies, based on molecular and biological sciences as well as quantum physics, are vying to replace silicon, or at least coexist with it and extend its capability. The paper describes these novel architectures and technologies, places them in the context of the kinds of problems they might help to solve, and predicts their possible manner and time of adoption. Finally it describes some key questions and research problems associated with their use.

  8. Exploring quantum computing application to satellite data assimilation

    NASA Astrophysics Data System (ADS)

    Cheung, S.; Zhang, S. Q.

    2015-12-01

    This is an exploratory study of the potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves a large number of variables and data. The new quantum computers open a very different approach, both in conceptual programming and in hardware architecture, to solving optimization problems. In order to explore whether we can utilize the quantum computing machine architecture, we formulate a satellite data assimilation experimental case in the form of a quadratic programming optimization problem. We find a transformation of the problem that maps it into the Quadratic Unconstrained Binary Optimization (QUBO) framework. A Binary Wavelet Transform (BWT) will be applied to the data assimilation variables for its invertible decomposition, and all calculations in BWT are performed by Boolean operations. The transformed problem will then be solved experimentally as QUBO instances defined on the Chimera graphs of the quantum computer.
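A QUBO instance of the kind that gets mapped onto an annealer can be stated and, for tiny sizes, solved classically by brute force. The matrix below is a made-up three-variable example, not the assimilation problem:

```python
import numpy as np
from itertools import product

def qubo_min(Q):
    """Exhaustive minimum of x^T Q x over binary vectors (small n only);
    a quantum annealer searches the same energy landscape natively."""
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in product((0, 1), repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# diagonal entries act as linear terms, off-diagonals as couplings
Q = np.array([[-1.0, 2.0, 0.0],
              [0.0, -1.0, 2.0],
              [0.0, 0.0, -1.0]])
x, e = qubo_min(Q)
print(x, e)   # → [1 0 1] -2.0
```

The real work in such an application is the transformation step: expressing the assimilation cost function, after the binary wavelet decomposition, in exactly this x^T Q x form and embedding Q into the hardware's Chimera connectivity.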

  9. Artificial neural networks: fundamentals, computing, design, and application.

    PubMed

    Basheer, I A; Hajmeer, M

    2000-12-01

    Artificial neural networks (ANNs) are relatively new computational tools that have found extensive utilization in solving many complex real-world problems. The attractiveness of ANNs comes from their remarkable information processing characteristics pertinent mainly to nonlinearity, high parallelism, fault and noise tolerance, and learning and generalization capabilities. This paper aims to familiarize the reader with ANN-based computing (neurocomputing) and to serve as a practical companion guide and toolkit for the ANN modeler over the course of ANN project development. The history of the evolution of neurocomputing and its relation to the field of neurobiology is briefly discussed. ANNs are compared to both expert systems and statistical regression and their advantages and limitations are outlined. A bird's eye review of the various types of ANNs and the related learning rules is presented, with special emphasis on backpropagation (BP) ANNs theory and design. A generalized methodology for developing successful ANNs projects from conceptualization, to design, to implementation, is described. The most common problems that BPANN developers face during training are summarized in conjunction with possible causes and remedies. Finally, as a practical application, BPANNs were used to model the microbial growth curves of S. flexneri. The developed model was reasonably accurate in simulating both training and test time-dependent growth curves as affected by temperature and pH.
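As a toy version of the BP ANNs the paper surveys, here is a two-layer sigmoid network trained by backpropagation on XOR. The architecture, learning rate, iteration count, and seed are arbitrary choices for the sketch, not values from the paper:

```python
import numpy as np

rng = np.random.RandomState(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)          # XOR targets

W1, b1 = rng.randn(2, 4), np.zeros(4)              # 2-4-1 network
W2, b2 = rng.randn(4, 1), np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sig(X @ W1 + b1)                           # forward pass
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)            # backpropagate the error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out                              # gradient descent, lr = 1
    b2 -= d_out.sum(0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(0)

print(np.round(out.ravel(), 2))                    # close to [0 1 1 0]
```

The common training problems the paper catalogues (local minima, saturation, learning rate sensitivity) all show up even in this four-sample example if the seed or learning rate is changed.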

  10. Cumulative trauma disorder risk for children using computer products: results of a pilot investigation with a student convenience sample.

    PubMed

    Burke, Adam; Peper, Erik

    2002-01-01

    Cumulative trauma disorder is a major health problem for adults. Despite a growing understanding of adult cumulative trauma disorder, however, little is known about the risks for younger populations. This investigation examined issues related to child/adolescent computer product use and upper body physical discomfort. A convenience sample of 212 students, grades 1-12, was interviewed at their homes by a college-age sibling or relative. One of the child's parents was also interviewed. A 22-item questionnaire was used for data-gathering. Questionnaire items included frequency and duration of use, type of computer products/games and input devices used, presence of physical discomfort, and parental concerns related to the child's computer use. Many students experienced physical discomfort attributed to computer use, such as wrist pain (30%) and back pain (15%). Specific computer activities, such as using a joystick or playing noneducational games, were significantly predictive of physical discomfort using logistic multiple regression. Many parents reported difficulty getting their children off the computer (46%) and that their children spent less time outdoors (35%). Computer product use within this cohort was associated with self-reported physical discomfort. Results suggest a need for more extensive study, including multiyear longitudinal surveys.

  11. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, and then achieve a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
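The expensive SVD step inside trace-norm solvers is singular value soft-thresholding, the proximal operator of the nuclear norm. A minimal matrix (not tensor) version of that step, on synthetic data, to show why it denoises toward low rank; this sketch is not the TNCP algorithm:

```python
import numpy as np

def svt(M, tau):
    """Proximal operator of the trace (nuclear) norm:
    shrink every singular value of M by tau, clipping at zero."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.RandomState(0)
low_rank = rng.randn(20, 3) @ rng.randn(3, 20)   # rank-3 ground truth
noisy = low_rank + 0.1 * rng.randn(20, 20)       # full-rank after noise

D = svt(noisy, 1.0)
print(np.linalg.matrix_rank(D, tol=1e-8))        # small singular values removed
```

Each application costs a full SVD, which is exactly the per-iteration expense the paper's factor-matrix reformulation avoids by thresholding much smaller matrices.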

  12. On the possibility of control restoration in some inverse problems of heat and mass transfer

    NASA Astrophysics Data System (ADS)

    Bilchenko, G. G.; Bilchenko, N. G.

    2016-11-01

    Problems of effective heat protection for the permeable surfaces of hypersonic aircraft are considered. The physico-chemical processes (dissociation and ionization) in the laminar boundary layer of compressible gas are accounted for in the mathematical model. The statements of the direct problems of heat and mass transfer are given: for preset controls, one must compute the parameters of the boundary layer mathematical model and determine the local and total heat flows, the friction forces, and the power of the blowing system. A. A. Dorodnicyn's generalized integral relations method has been used as the basis of calculation. With this approach, the optimal control, blowing into the boundary layer (for continuous functions), was constructed as the solution of the direct problem in an extreme statement. The statements of the inverse problems are given: the control laws ensuring the preset local heat flow and local tangent friction are restored. The differences between the interpolation and approximation statements are discussed. The possibility of unique control restoration is established and proved (at the stagnation point). The computational experiments results are presented.

  13. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models.

    PubMed

    Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.

  14. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Butler, T.; Graham, L.; Estep, D.; Dawson, C.; Westerink, J. J.

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.
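    The measure-theoretic inversion described above can be caricatured with a crude accept/reject sketch: draw parameter samples from a prior, push them through a toy forward model, and keep the samples whose output lands in the observed set. Everything here (the scalar parameter, the quadratic forward map, the uniform prior) is an illustrative assumption; the actual study uses the ADCIRC solver and a rigorous measure-theoretic algorithm.

```python
import random

def forward_model(n):
    """Toy stand-in for a hydrodynamic solver: maps a (scalar)
    Manning's-n-like roughness parameter to a peak water elevation.
    This quadratic map is purely illustrative."""
    return 2.0 + 3.0 * n - 1.5 * n ** 2

def solve_stochastic_inverse(observed_interval, n_samples=100_000, seed=0):
    """Rejection sketch of a set-valued stochastic inverse: sample the
    parameter from a uniform prior on [0, 1], keep samples whose model
    output falls in the observed interval, and report the retained set
    as an empirical approximation of the inverse probability measure."""
    rng = random.Random(seed)
    lo, hi = observed_interval
    accepted = []
    for _ in range(n_samples):
        n = rng.uniform(0.0, 1.0)
        if lo <= forward_model(n) <= hi:
            accepted.append(n)
    return accepted

samples = solve_stochastic_inverse((3.0, 3.5))
print(len(samples) / 100_000)  # fraction of prior mass mapped into the observed set
```

    The "condition" notion discussed in the abstract concerns, roughly, how well-chosen output quantities of interest keep such inverse sets informative.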

  15. Promoter Sequences Prediction Using Relational Association Rule Mining

    PubMed Central

    Czibula, Gabriela; Bocicor, Maria-Iuliana; Czibula, Istvan Gergely

    2012-01-01

    In this paper we approach, from a computational perspective, the problem of promoter sequence prediction, an important problem within the field of bioinformatics. As the conditions for a DNA sequence to function as a promoter are not known, machine learning based classification models are still being developed to approach the problem of promoter identification in DNA. We propose a classification model based on relational association rule mining. Relational association rules are a particular type of association rules that describe numerical orderings between attributes which commonly occur over a data set. Our classifier is based on the discovery of relational association rules for predicting whether or not a DNA sequence contains a promoter region. An experimental evaluation of the proposed model and a comparison with similar existing approaches are provided. The obtained results show that our classifier outperforms existing techniques for identifying promoter sequences, confirming the potential of our proposal. PMID:22563233
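    The core idea of relational association rules (orderings between attributes that hold over most of a data set) can be sketched in a few lines. This is a minimal illustration of length-2 rule discovery only, with an invented toy data set; the published classifier is richer (longer rules, class-specific rule sets, DNA feature encodings).

```python
from itertools import combinations

def mine_relational_rules(rows, min_support=0.8):
    """For every attribute pair (i, j), keep each ordering relation
    ('<', '=', '>') whose support (fraction of rows satisfying it)
    meets the threshold."""
    n_attrs = len(rows[0])
    rules = []
    for i, j in combinations(range(n_attrs), 2):
        for name, rel in (("<", lambda a, b: a < b),
                          ("=", lambda a, b: a == b),
                          (">", lambda a, b: a > b)):
            support = sum(rel(r[i], r[j]) for r in rows) / len(rows)
            if support >= min_support:
                rules.append((i, name, j, support))
    return rules

# Rows could encode numeric features of fixed-length DNA windows.
data = [(1, 3, 3), (2, 5, 5), (0, 4, 4), (3, 6, 7)]
print(mine_relational_rules(data))
```

    A record is then classified by how many of the rules mined from each class it satisfies.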

  16. Artificial intelligence and design: Opportunities, research problems and directions

    NASA Technical Reports Server (NTRS)

    Amarel, Saul

    1990-01-01

    The issues of industrial productivity and economic competitiveness are of major significance in the U.S. at present. By advancing the science of design, and by creating a broad computer-based methodology for automating the design of artifacts and of industrial processes, we can attain dramatic improvements in productivity. It is our thesis that developments in computer science, especially in Artificial Intelligence (AI) and in related areas of advanced computing, provide us with a unique opportunity to push beyond the present level of computer aided automation technology and to attain substantial advances in the understanding and mechanization of design processes. To attain these goals, we need to build on top of the present state of AI, and to accelerate research and development in areas that are especially relevant to design problems of realistic complexity. We propose an approach to the special challenges in this area, which combines 'core work' in AI with the development of systems for handling significant design tasks. We discuss the general nature of design problems, the scientific issues involved in studying them with the help of AI approaches, and the methodological/technical issues that one must face in developing AI systems for handling advanced design tasks. Looking at basic work in AI from the perspective of design automation, we identify a number of research problems that need special attention. These include finding solution methods for handling multiple interacting goals, formation problems, problem decompositions, and redesign problems; choosing representations for design problems with emphasis on the concept of a design record; and developing approaches for the acquisition and structuring of domain knowledge with emphasis on finding useful approximations to domain theories. 
Progress in handling these research problems will have major impact both on our understanding of design processes and their automation, and also on several fundamental questions that are of intrinsic concern to AI. We present examples of current AI work on specific design tasks, and discuss new directions of research, both as extensions of current work and in the context of new design tasks where domain knowledge is either intractable or incomplete. The domains discussed include Digital Circuit Design, Mechanical Design of Rotational Transmissions, Design of Computer Architectures, Marine Design, Aircraft Design, and Design of Chemical Processes and Materials. Work in these domains is significant on technical grounds, and it is also important for economic and policy reasons.

  17. Gas tungsten arc welding in a microgravity environment: Work done on GAS payload G-169

    NASA Technical Reports Server (NTRS)

    Welcher, Blake A.; Kolkailah, Faysal A.; Muir, Arthur H., Jr.

    1987-01-01

    GAS payload G-169 is discussed. G-169 contains a computer-controlled Gas Tungsten Arc Welder. The equipment design, problem analysis, and problem solutions are presented. Analysis of data gathered from other microgravity arc welding and terrestrial Gas Tungsten Arc Welding (GTAW) experiments are discussed in relation to the predicted results for the GTAW to be performed in microgravity with payload G-169.

  18. HEMP 3D: A finite difference program for calculating elastic-plastic flow, appendix B

    NASA Astrophysics Data System (ADS)

    Wilkins, Mark L.

    1993-05-01

    The HEMP 3D program can be used to solve problems in solid mechanics involving dynamic plasticity and time dependent material behavior and problems in gas dynamics. The equations of motion, the conservation equations, and the constitutive relations listed below are solved by finite difference methods following the format of the HEMP computer simulation program formulated in two space dimensions and time.

  19. The Sizing and Optimization Language (SOL): A computer language to improve the user/optimizer interface

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Scotti, S. J.

    1989-01-01

    The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final stage of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
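    The "recast as objective, design variables, constraints" step can be sketched as plain code. All constants and formulas below are assumed for demonstration (they loosely echo the textbook two-bar truss, not SOL's actual model), and a naive grid search stands in for CONMIN/ADS/NPSOL.

```python
import math

# Illustrative two-bar truss sizing. Constants and constraint formulas are
# assumed for demonstration only.
P, B, RHO, T = 33.0e3, 60.0, 0.3, 0.1   # load (lb), base span (in), density, wall thickness

def weight(d, h):                        # objective: total truss weight
    L = math.sqrt((B / 2) ** 2 + h ** 2) # member length
    return 2 * RHO * math.pi * d * T * L

def stress(d, h):                        # member stress under load P
    L = math.sqrt((B / 2) ** 2 + h ** 2)
    return P * L / (2 * math.pi * d * T * h)

def best_design(max_stress=100e3):
    """Naive grid search over tube diameter d and truss height h,
    standing in for a formal optimizer just to show the problem recast."""
    best = None
    for d in [0.5 + 0.1 * i for i in range(30)]:
        for h in [10 + 1.0 * j for j in range(40)]:
            if stress(d, h) <= max_stress:
                w = weight(d, h)
                if best is None or w < best[0]:
                    best = (w, d, h)
    return best

print(best_design())  # (weight, diameter, height) of the best feasible design
```

    SOL's contribution is letting the designer state the objective and constraints declaratively instead of hand-coding this interface layer.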

  20. Video game access, parental rules, and problem behavior: a study of boys with autism spectrum disorder.

    PubMed

    Engelhardt, Christopher R; Mazurek, Micah O

    2014-07-01

    Environmental correlates of problem behavior among individuals with autism spectrum disorder remain relatively understudied. The current study examined the contribution of in-room (i.e. bedroom) access to a video game console as one potential correlate of problem behavior among a sample of 169 boys with autism spectrum disorder (ranging from 8 to 18 years of age). Parents of these children reported on (1) whether they had specific rules regulating their child's video game use, (2) whether their child had in-room access to a variety of screen-based media devices (television, computer, and video game console), and (3) their child's oppositional behaviors. Multivariate regression models showed that in-room access to a video game console predicted oppositional behavior while controlling for in-room access to other media devices (computer and television) and relevant variables (e.g. average number of video game hours played per day). Additionally, the association between in-room access to a video game console and oppositional behavior was particularly large when parents reported no rules on their child's video game use. The current findings indicate that both access and parental rules regarding video games warrant future experimental and longitudinal research as they relate to problem behavior in boys with autism spectrum disorder. © The Author(s) 2013.

  1. Neutron therapy of cancer

    NASA Technical Reports Server (NTRS)

    Frigerio, N. A.; Nellans, H. N.; Shaw, M. J.

    1969-01-01

    Reports relate applications of neutrons to the problem of cancer therapy. The biochemical and biophysical aspects of fast-neutron therapy, neutron-capture and neutron-conversion therapy with intermediate-range neutrons are presented. Also included is a computer program for neutron-gamma radiobiology.

  2. Sleep problems and computer use during work and leisure: Cross-sectional study among 7800 adults.

    PubMed

    Andersen, Lars Louis; Garde, Anne Helene

    2015-01-01

    Previous studies linked heavy computer use to disturbed sleep. This study investigates the association between computer use during work and leisure and sleep problems in working adults. From the 2010 round of the Danish Work Environment Cohort Study, currently employed wage earners on daytime schedule (N = 7883) replied to the Bergen insomnia scale and questions on weekly duration of computer use. Results showed that sleep problems for three or more days per week (average of six questions) were experienced by 14.9% of the respondents. Logistic regression analyses, controlled for gender, age, physical and psychosocial work factors, lifestyle, chronic disease and mental health showed that computer use during leisure for 30 or more hours per week (reference 0-10 hours per week) was associated with increased odds of sleep problems (OR 1.83 [95% CI 1.06-3.17]). Computer use during work and shorter duration of computer use during leisure were not associated with sleep problems. In conclusion, excessive computer use during leisure - but not work - is associated with sleep problems in adults working on daytime schedule.

  3. Cognition and asynchronous distribution between human and machine building accidents.

    PubMed

    Martins, Edgard; Soares, Marcelo; Augusto, Lia; Laura, Laura

    2012-01-01

    The creation of meaning in communication is a negotiating activity, resulting from a construction born of the interaction between subjects. That is, meaning is not inherent in the relationship between words, signs, and symbols; it arises from a negotiation that is necessary and unavoidable. Just as conceptions of meaning as discrete, static representations imply a classical notion of computing and a corresponding design of a cognitive system, conceptions of meaning construction as situated and shared among agents imply different notions of computing and cognition. Several efforts have been developed to meet these demands, among them connectionism (also known as neural networks). Records on the mental health and stress of flight professionals have been present in the official reports of aviation accident investigation bodies worldwide since their inception. Problems related to the physical and mental health of pilots (fatigue, stress, physiological and psychosocial problems) account for 19% of causal factors in aircraft accidents. This seems a paradox when we know that these professionals receive regular training and have high levels of education and technical training. Nevertheless, problems arise in the application of learning that can reduce their cognitive capacity, making them, in practice, relatively unable to exercise their functions effectively and safely.

  4. Weighted Watson-Crick automata

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamrin, Mohd Izzuddin Mohd; Turaev, Sherzod; Sembok, Tengku Mohd Tengku

    There is a tremendous body of work in biotechnology, especially in the area of DNA molecules. The computing community is attempting to develop smaller computing devices through computational models based on operations performed on DNA molecules. A Watson-Crick automaton, a theoretical model for DNA-based computation, has two reading heads and works on double-stranded sequences whose strands are related by a complementarity relation similar to the Watson-Crick complementarity of DNA nucleotides. Over time, several variants of Watson-Crick automata have been introduced and investigated. However, they cannot be used as suitable DNA-based computational models for molecular stochastic processes and fuzzy processes that are related to important practical problems such as molecular parsing, gene disease detection, and food authentication. In this paper we define new variants of Watson-Crick automata, called weighted Watson-Crick automata, developing theoretical models for molecular stochastic and fuzzy processes. We define weighted Watson-Crick automata by adapting weight restriction mechanisms associated with formal grammars and automata. We also study the generative capacities of weighted Watson-Crick automata, including probabilistic and fuzzy variants. We show that weighted variants of Watson-Crick automata increase their generative power.

  5. Weighted Watson-Crick automata

    NASA Astrophysics Data System (ADS)

    Tamrin, Mohd Izzuddin Mohd; Turaev, Sherzod; Sembok, Tengku Mohd Tengku

    2014-07-01

    There is a tremendous body of work in biotechnology, especially in the area of DNA molecules. The computing community is attempting to develop smaller computing devices through computational models based on operations performed on DNA molecules. A Watson-Crick automaton, a theoretical model for DNA-based computation, has two reading heads and works on double-stranded sequences whose strands are related by a complementarity relation similar to the Watson-Crick complementarity of DNA nucleotides. Over time, several variants of Watson-Crick automata have been introduced and investigated. However, they cannot be used as suitable DNA-based computational models for molecular stochastic processes and fuzzy processes that are related to important practical problems such as molecular parsing, gene disease detection, and food authentication. In this paper we define new variants of Watson-Crick automata, called weighted Watson-Crick automata, developing theoretical models for molecular stochastic and fuzzy processes. We define weighted Watson-Crick automata by adapting weight restriction mechanisms associated with formal grammars and automata. We also study the generative capacities of weighted Watson-Crick automata, including probabilistic and fuzzy variants. We show that weighted variants of Watson-Crick automata increase their generative power.
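    The weighted idea can be caricatured in a few lines: two heads read the upper and lower strands, the strands must satisfy the Watson-Crick complementarity relation, and each consumed symbol contributes a multiplicative, probability-like weight. Real weighted WK automata have states and transition rules; this sketch (with invented weights) collapses them away to show only the weighting mechanism.

```python
COMPLEMENT = {"a": "t", "t": "a", "g": "c", "c": "g"}

def weighted_run(upper, lower, weights):
    """Return the product of per-symbol weights if the two strands are
    Watson-Crick complementary, else 0.0 (the run is rejected)."""
    if len(upper) != len(lower):
        return 0.0
    score = 1.0
    for u, l in zip(upper, lower):
        if COMPLEMENT[u] != l:        # complementarity relation violated
            return 0.0
        score *= weights.get(u, 1.0)  # weight attached to reading u
    return score

w = {"a": 0.9, "t": 0.9, "g": 0.8, "c": 0.8}
print(weighted_run("atgc", "tacg", w))  # accepted, with a weighted score
print(weighted_run("atgc", "aaaa", w))  # rejected: not complementary
```

    A probabilistic variant would require weights on outgoing transitions to sum to one; a fuzzy variant would combine them with min/max instead of a product.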

  6. Computational Psychometrics for Modeling System Dynamics during Stressful Disasters

    PubMed Central

    Cipresso, Pietro; Bessi, Alessandro; Colombo, Desirée; Pedroli, Elisa; Riva, Giuseppe

    2017-01-01

    Disasters can be very stressful events. However, computational models of stress require data that might be very difficult to collect during disasters. Moreover, personal experiences are not repeatable, so it is not possible to collect bottom-up information when building a coherent model. To overcome these problems, we propose the use of computational models and virtual reality integration to recreate disaster situations, while examining possible dynamics in order to understand human behavior and relative consequences. By providing realistic parameters associated with disaster situations, computational scientists can work more closely with emergency responders to improve the quality of interventions in the future. PMID:28861026

  7. Java and its future in biomedical computing.

    PubMed Central

    Rodgers, R P

    1996-01-01

    Java, a new object-oriented computing language related to C++, is receiving considerable attention due to its use in creating network-sharable, platform-independent software modules (known as "applets") that can be used with the World Wide Web. The Web has rapidly become the most commonly used information-retrieval tool associated with the global computer network known as the Internet, and Java has the potential to further accelerate the Web's application to medical problems. Java's potentially wide acceptance due to its Web association and its own technical merits also suggests that it may become a popular language for non-Web-based, object-oriented computing. PMID:8880677

  8. Outpatient follow-up system using a personal computer for patients with hepatocellular carcinoma after surgery.

    PubMed

    Itasaka, H; Matsumata, T; Taketomi, A; Yamamoto, K; Yanaga, K; Takenaka, K; Akazawa, K; Sugimachi, K

    1994-12-01

    A simple outpatient follow-up system was developed with a laptop personal computer to assist in the management of patients with hepatocellular carcinoma after hepatic resection. Since it is based on a non-relational database program and the graphical user interface of the Macintosh operating system, even those who are not computer specialists can use it. It is helpful for promptly recognizing the current status and problems of the patients, for diagnosing recurrences of the disease, and for preventing patients from being lost to follow-up. The portability of the computer also facilitates use of these data everywhere, such as in clinical conferences and laboratories.

  9. Computing singularities of perturbation series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kvaal, Simen; Jarlebring, Elias; Michiels, Wim

    2011-03-15

    Many properties of current ab initio approaches to the quantum many-body problem, both perturbational and otherwise, are related to the singularity structure of the Rayleigh-Schrödinger perturbation series. A numerical procedure is presented that in principle computes the complete set of singularities, including the dominant singularity which limits the radius of convergence. The method approximates the singularities as eigenvalues of a certain generalized eigenvalue equation which is solved using iterative techniques. It relies on computation of the action of the Hamiltonian matrix on a vector and does not rely on the terms in the perturbation series. The method can be useful for studying perturbation series of typical systems of moderate size, for fundamental development of resummation schemes, and for understanding the structure of singularities for typical systems. Some illustrative model problems are studied, including a helium-like model with δ-function interactions for which Møller-Plesset perturbation theory is considered and the radius of convergence found.
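    The standard setting behind the abstract can be summarized schematically (this is the textbook formulation, not the paper's generalized eigenvalue construction):

```latex
H(z) = H_0 + zV, \qquad
E(z) = \sum_{k=0}^{\infty} E^{(k)} z^{k}, \qquad
R = |z_c| = \min\{\, |z| : E(z) \text{ is singular} \,\},
```

    so the perturbation series converges at the physical coupling z = 1 only if the nearest singularity z_c of the energy function E(z) lies outside the unit disk; in finite dimensions such singularities occur where eigenvalues of H(z) coalesce.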

  10. Student Research in Computational Astrophysics

    NASA Astrophysics Data System (ADS)

    Blondin, J. M.

    1999-12-01

    Computational physics can shorten the long road from freshman physics major to independent research by providing students with powerful tools to deal with the complexities of modern research problems. At North Carolina State University we have introduced dozens of students to astrophysics research using the tools of computational fluid dynamics. We have used several formats for working with students, including the traditional approach of one-on-one mentoring, a more group-oriented format in which several students work together on one or more related projects, and a novel attempt to involve an entire class in a coordinated semester research project. The advantages and disadvantages of these formats will be discussed at length, but the single most important influence has been peer support. Having students work in teams or learn the tools of research together but tackle different problems has led to more positive experiences than a lone student diving into solo research. This work is supported by an NSF CAREER Award.

  11. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
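    The data-decomposition and load-balancing concepts the survey discusses can be illustrated with a toy image-space decomposition: scanlines are assigned to workers either in contiguous blocks (better coherence, worse static load balance) or interleaved (the reverse). This sketch is illustrative only and not drawn from the article itself.

```python
def assign_scanlines(height, n_workers, interleaved=True):
    """Split image scanlines among workers: interleaved assignment for
    static load balance, or contiguous blocks for spatial coherence."""
    if interleaved:
        return [list(range(w, height, n_workers)) for w in range(n_workers)]
    block = -(-height // n_workers)  # ceiling division
    return [list(range(w * block, min((w + 1) * block, height)))
            for w in range(n_workers)]

print(assign_scanlines(8, 3, interleaved=True))   # e.g. worker 0 gets rows 0, 3, 6
print(assign_scanlines(8, 3, interleaved=False))  # worker 0 gets rows 0, 1, 2
```

    In a real parallel renderer, the choice interacts with communication cost and the image-assembly stage mentioned in the survey.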

  12. Lattice Boltzmann and Navier-Stokes Cartesian CFD Approaches for Airframe Noise Predictions

    NASA Technical Reports Server (NTRS)

    Barad, Michael F.; Kocheemoolayil, Joseph G.; Kiris, Cetin C.

    2017-01-01

    Lattice Boltzmann (LB) and compressible Navier-Stokes (NS) equations based computational fluid dynamics (CFD) approaches are compared for simulating airframe noise. Both LB and NS CFD approaches are implemented within the Launch Ascent and Vehicle Aerodynamics (LAVA) framework. Both schemes utilize the same underlying Cartesian structured mesh paradigm with provision for local adaptive grid refinement and sub-cycling in time. We choose a prototypical massively separated, wake-dominated flow ideally suited for Cartesian-grid based approaches in this study: the partially-dressed, cavity-closed nose landing gear (PDCC-NLG) noise problem from AIAA's Benchmark problems for Airframe Noise Computations (BANC) series of workshops. The relative accuracy and computational efficiency of the two approaches are systematically compared. Detailed comments are made on the potential held by LB to significantly reduce time-to-solution for a desired level of accuracy within the context of modeling airframe noise from first principles.
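    The stream-and-collide structure that makes LB attractive on Cartesian grids fits in a few lines even though production solvers are vastly larger. Below is a deliberately tiny 1D (D1Q3, BGK) diffusion sketch, purely illustrative and unrelated to the 3D compressible LB scheme in LAVA.

```python
W = (2 / 3, 1 / 6, 1 / 6)   # lattice weights for velocities (0, +1, -1)
TAU = 0.8                   # BGK relaxation time

def lbm_step(f):
    n = len(f[0])
    rho = [f[0][x] + f[1][x] + f[2][x] for x in range(n)]
    # collide: relax each population toward its local equilibrium w_i * rho
    post = [[f[i][x] - (f[i][x] - W[i] * rho[x]) / TAU for x in range(n)]
            for i in range(3)]
    # stream: rest population stays; +1/-1 populations shift (periodic domain)
    return [post[0],
            [post[1][(x - 1) % n] for x in range(n)],
            [post[2][(x + 1) % n] for x in range(n)]]

# initial density spike in the middle of a periodic domain
n = 16
rho0 = [1.0] * n
rho0[n // 2] += 1.0
f = [[W[i] * rho0[x] for x in range(n)] for i in range(3)]
for _ in range(50):
    f = lbm_step(f)
rho = [sum(f[i][x] for i in range(3)) for x in range(n)]
print(sum(rho))  # total mass is conserved by stream-and-collide
```

    Both steps are local or nearest-neighbor, which is why LB parallelizes and refines so naturally on block-structured Cartesian meshes.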

  13. Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers

    NASA Technical Reports Server (NTRS)

    Morgan, Philip E.

    2004-01-01

    This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Lark-Eddy Turbulent FLow Simulations Using Massively Parallel Computers" and "Devleop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications. The discussion of Scalable High Performance Computing reports on three objectives: validate, access scalability, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for Direct Numerical Simulations (DNS) and Large Eddy Simulation (LES) problems; and Investigate and develop a high-order Reynolds averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhancement of an electromagnetics code (CHARGE) to be able to effectively model antenna problems; utilize lessons learned in high-order/spectral solution of swirling 3D jets to apply to solving electromagnetics project; transition a high-order fluids code, FDL3DI, to be able to solve Maxwell's Equations using compact-differencing; develop and demonstrate improved radiation absorbing boundary conditions for high-order CEM; and extend high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.

  14. Recent advances in computational-analytical integral transforms for convection-diffusion problems

    NASA Astrophysics Data System (ADS)

    Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.

    2017-10-01

    A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work is aimed at bringing together some of the most recent developments on both accuracy and convergence improvements on this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single domain reformulation strategy for handling complex geometries, an integral balance scheme in dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Then, selected examples are presented that illustrate the improvement achieved in each class of extension, in terms of convergence acceleration and accuracy gain, which are related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement schemes and commercial or dedicated purely numerical approaches.
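    At the core of GITT is the standard eigenfunction-expansion transform/inversion pair (written here schematically, in its usual textbook form rather than in any of the paper's enhanced variants):

```latex
\bar{T}_i(t) = \int_{V} w(\mathbf{x})\,\tilde{\psi}_i(\mathbf{x})\,T(\mathbf{x},t)\,dV
\quad \text{(transform)}, \qquad
T(\mathbf{x},t) = \sum_{i=1}^{\infty} \tilde{\psi}_i(\mathbf{x})\,\bar{T}_i(t)
\quad \text{(inversion)},
```

    where the normalized eigenfunctions \tilde{\psi}_i = \psi_i / \sqrt{N_i}, with norms N_i = \int_V w\,\psi_i^2\,dV, come from an associated Sturm-Liouville eigenvalue problem. The enhancements surveyed in the abstract all act on the choice of this expansion basis.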

  15. Issues in human/computer control of dexterous remote hands

    NASA Technical Reports Server (NTRS)

    Salisbury, K.

    1987-01-01

    Much research on dexterous robot hands has been aimed at the design and control problems associated with their autonomous operation, while relatively little research has addressed the problem of direct human control. It is likely that these two modes can be combined in a complementary manner yielding more capability than either alone could provide. While many of the issues in mixed computer/human control of dexterous hands parallel those found in supervisory control of traditional remote manipulators, the unique geometry and capabilities of dexterous hands pose many new problems. Among these are the control of redundant degrees of freedom, grasp stabilization and specification of non-anthropomorphic behavior. An overview is given of progress made at the MIT AI Laboratory in control of the Salisbury 3 finger hand, including experiments in grasp planning and manipulation via controlled slip. It is also suggested how we might introduce human control into the process at a variety of functional levels.

  16. [Forensic evidence-based medicine in computer communication networks].

    PubMed

    Qiu, Yun-Liang; Peng, Ming-Qi

    2013-12-01

    As an important component of judicial expertise, forensic science is broad and highly specialized. With the development of network technology, the increase of information resources, and the improvement of people's legal awareness, forensic scientists encounter many new problems and are required to meet higher evidentiary standards in litigation. In view of this, an evidence-based concept should be established in forensic medicine. We should find the most suitable methods in the forensic science field and other related areas to solve specific problems in the evidence-based mode. Evidence-based practice can solve problems in the legal medical field, and it will play a great role in promoting the progress and development of forensic science. This article reviews the basic theory of evidence-based medicine and its effects, ways, methods, and evaluation in forensic medicine in order to discuss the application value of forensic evidence-based medicine in computer communication networks.

  17. Ubiquitous information for ubiquitous computing: expressing clinical data sets with openEHR archetypes.

    PubMed

    Garde, Sebastian; Hovenga, Evelyn; Buck, Jasmin; Knaup, Petra

    2006-01-01

    Ubiquitous computing requires ubiquitous access to information and knowledge. With the release of openEHR Version 1.0 there is a common model available to solve some of the problems related to accessing information and knowledge by improving semantic interoperability between clinical systems. Considerable work has been undertaken by various bodies to standardise Clinical Data Sets. Notwithstanding their value, several problems remain unsolved with Clinical Data Sets without the use of a common model underpinning them. This paper outlines these problems, such as incompatible basic data types and overlapping and incompatible definitions of clinical content. A solution based on openEHR archetypes is motivated, and an approach to transform existing Clinical Data Sets into archetypes is presented. To avoid significant overlaps and unnecessary effort during archetype development, archetype development needs to be coordinated nationwide and beyond, and also across the various health professions, in a formalized process.

  18. Efficient Simulation Budget Allocation for Selecting an Optimal Subset

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay

    2008-01-01

    We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
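    A sequential budget-allocation heuristic for top-m selection can be sketched as follows. This is a simplified stand-in for the paper's OCBA-based allocation, not its actual rule: after an initial sampling round, each extra batch goes to the design whose sample mean lies closest to the estimated top-m boundary, i.e. the design hardest to classify.

```python
import random
import statistics

def select_top_m(means_sim, m, total_budget, n0=10, batch=10, seed=1):
    """Heuristic sequential allocation for selecting the top-m designs.
    means_sim is a list of callables, each drawing one simulated output."""
    rng = random.Random(seed)
    k = len(means_sim)
    obs = [[means_sim[i](rng) for _ in range(n0)] for i in range(k)]
    spent = n0 * k
    while spent < total_budget:
        mu = [statistics.fmean(o) for o in obs]
        order = sorted(range(k), key=lambda i: -mu[i])
        boundary = (mu[order[m - 1]] + mu[order[m]]) / 2
        # spend the next batch on the design closest to the top-m boundary
        target = min(range(k), key=lambda i: abs(mu[i] - boundary))
        obs[target].extend(means_sim[target](rng) for _ in range(batch))
        spent += batch
    mu = [statistics.fmean(o) for o in obs]
    return set(sorted(range(k), key=lambda i: -mu[i])[:m])

# five designs with true means 5, 4, 3, 2, 1 and unit-variance Gaussian noise
designs = [lambda r, mu=mu: r.gauss(mu, 1.0) for mu in (5, 4, 3, 2, 1)]
print(select_top_m(designs, m=2, total_budget=600))
```

    Equal allocation would waste samples on clearly inferior designs; concentrating the budget near the selection boundary is the intuition the asymptotically optimal allocation makes precise.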

  19. Foraging Behaviors and Potential Computational Ability of Problem-Solving in an Amoeba

    NASA Astrophysics Data System (ADS)

    Nakagaki, Toshiyuki

    We study cell behaviors in complex situations in which multiple food locations are given simultaneously. An amoeba-like organism, the true slime mold, gathered at the multiple food locations while its body shape, made of a tubular network, changed completely. In the end, only a few tubes connected all of the food locations in a network shape. By taking this network shape of body, the plasmodium could meet its own physiological requirements: absorption of nutrients as fast as possible and sufficient circulation of chemical signals and nutrients through the whole body. Optimality of the network shape was evaluated in relation to a combinatorial optimization problem. Here we review the potential computational ability of problem-solving in the amoeba, which is much higher than we had thought. The main message of this article is that we had better change our opinion that an amoeba is stupid.

  20. A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.

    PubMed

    Catarinucci, Luca; Tarricone, Luciano

    2009-01-01

    The finite difference time domain method (FDTD) is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems and, among them, those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high spatial resolution is adopted, so that strong memory and central processing unit power requirements have to be satisfied. To manage the computational effort, the use of parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is still relatively new. In this paper, an easy-to-implement and highly efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting the parallel FDTD performance.
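
    Graded-mesh and parallel machinery aside, the core FDTD update is a simple leapfrog scheme. A minimal one-dimensional sketch (uniform mesh, normalized units, free space; the grid size and source parameters are illustrative, not the paper's graded-mesh scheme) shows the alternating E/H updates that subgridding and mesh grading refine:

```python
import math

def fdtd_1d(steps=200, n=200, source_pos=20):
    """Minimal 1D FDTD in free space (normalized units, Courant
    number 0.5).  Returns the electric field after `steps` steps."""
    ez = [0.0] * n   # electric field samples
    hy = [0.0] * n   # magnetic field samples (staggered half a cell)
    c = 0.5          # Courant factor, < 1 for stability
    for t in range(steps):
        # leapfrog: update H from the spatial derivative of E ...
        for i in range(n - 1):
            hy[i] += c * (ez[i + 1] - ez[i])
        # ... then E from the spatial derivative of H
        for i in range(1, n):
            ez[i] += c * (hy[i] - hy[i - 1])
        # soft Gaussian source injects a pulse into the grid
        ez[source_pos] += math.exp(-((t - 30) ** 2) / 100.0)
    return ez

field = fdtd_1d()
```

    A graded mesh replaces the uniform cell size with position-dependent increments in the two update loops; the parallelization in the paper then partitions the grid across processors.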

  1. The semantic system is involved in mathematical problem solving.

    PubMed

    Zhou, Xinlin; Li, Mengyi; Li, Leinian; Zhang, Yiyun; Cui, Jiaxin; Liu, Jie; Chen, Chuansheng

    2018-02-01

    Numerous studies have shown that the brain regions around bilateral intraparietal cortex are critical for number processing and arithmetical computation. However, the neural circuits for more advanced mathematics such as mathematical problem solving (with little routine arithmetical computation) remain unclear. Using functional magnetic resonance imaging (fMRI), this study (N = 24 undergraduate students) compared neural bases of mathematical problem solving (i.e., number series completion, mathematical word problem solving, and geometric problem solving) and arithmetical computation. Direct subject- and item-wise comparisons revealed that mathematical problem solving typically had greater activation than arithmetical computation in all 7 regions of the semantic system (which was based on a meta-analysis of 120 functional neuroimaging studies on semantic processing). Arithmetical computation typically had greater activation in the supplementary motor area and left precentral gyrus. The results suggest that the semantic system in the brain supports mathematical problem solving. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Reliability enhancement of Navier-Stokes codes through convergence acceleration

    NASA Technical Reports Server (NTRS)

    Merkle, Charles L.; Dulikravich, George S.

    1995-01-01

    Methods for enhancing the reliability of Navier-Stokes computer codes through improved convergence characteristics are presented. Improving these characteristics decreases the likelihood of code unreliability and of user interventions in a design environment. The problem referred to as 'stiffness' in the governing equations for propulsion-related flowfields is investigated, particularly in regard to common sources of equation stiffness that lead to convergence degradation of CFD algorithms. Von Neumann stability theory is employed as a tool to study the convergence difficulties involved. Based on the stability results, improved algorithms are devised to ensure efficient convergence in different situations. A number of test cases are considered to confirm a correlation between stability theory and numerical convergence. Examples of turbulent and reacting flow are presented, and a generalized form of the preconditioning matrix is derived to handle these problems, i.e., problems involving additional differential equations describing the transport of turbulent kinetic energy, dissipation rate, and chemical species. Algorithms for unsteady computations are considered. The extension of the preconditioning techniques and algorithms derived for Navier-Stokes computations to three-dimensional flow problems is discussed. New methods to accelerate the convergence of iterative schemes for the numerical integration of systems of partial differential equations are developed, with a special emphasis on the acceleration of convergence on highly clustered grids.
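
    The Von Neumann analysis referred to above examines the amplification factor of each Fourier mode of a scheme. As a hedged illustration on a much simpler equation than the paper's (first-order upwind advection, not a Navier-Stokes discretization), the stability boundary at CFL = 1 can be checked numerically:

```python
import cmath

def max_amplification(cfl, samples=720):
    """Von Neumann amplification factor for first-order upwind advection:
    g(theta) = 1 - cfl * (1 - exp(-i*theta)).  The scheme is stable
    (no Fourier mode grows) exactly when max over theta of |g| <= 1."""
    best = 0.0
    for k in range(samples):
        theta = 2 * cmath.pi * k / samples
        g = 1 - cfl * (1 - cmath.exp(-1j * theta))
        best = max(best, abs(g))
    return best

stable = max_amplification(0.5)    # max |g| is 1: stable
unstable = max_amplification(1.5)  # max |g| exceeds 1: unstable
```

    For the stiff, coupled systems in the paper the same idea is applied to the full amplification matrix of the preconditioned scheme rather than to a scalar factor.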

  3. Analytical Cost Metrics : Days of Future Past

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prajapati, Nirmal; Rajopadhye, Sanjay; Djidjev, Hristo Nikolov

    As we move towards the exascale era, new architectures must be capable of running massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, and image/signal processing to computational science and bioinformatics. With Moore’s law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore the major challenge that we face in computing systems research is: “how to solve massive-scale computational problems in the most time/power/energy efficient manner?”

  4. Students' Mathematics Word Problem-Solving Achievement in a Computer-Based Story

    ERIC Educational Resources Information Center

    Gunbas, N.

    2015-01-01

    The purpose of this study was to investigate the effect of a computer-based story, which was designed in anchored instruction framework, on sixth-grade students' mathematics word problem-solving achievement. Problems were embedded in a story presented on a computer as computer story, and then compared with the paper-based version of the same story…

  5. New numerical methods for open-loop and feedback solutions to dynamic optimization problems

    NASA Astrophysics Data System (ADS)

    Ghosh, Pradipto

    The topic of the first part of this research is trajectory optimization of dynamical systems via computational swarm intelligence. Particle swarm optimization is a nature-inspired heuristic search method that relies on a group of potential solutions to explore the fitness landscape. Conceptually, each particle in the swarm uses its own memory as well as the knowledge accumulated by the entire swarm to iteratively converge on an optimal or near-optimal solution. It is relatively straightforward to implement and unlike gradient-based solvers, does not require an initial guess or continuity in the problem definition. Although particle swarm optimization has been successfully employed in solving static optimization problems, its application in dynamic optimization, as posed in optimal control theory, is still relatively new. In the first half of this thesis particle swarm optimization is used to generate near-optimal solutions to several nontrivial trajectory optimization problems including thrust programming for minimum fuel, multi-burn spacecraft orbit transfer, and computing minimum-time rest-to-rest trajectories for a robotic manipulator. A distinct feature of the particle swarm optimization implementation in this work is the runtime selection of the optimal solution structure. Optimal trajectories are generated by solving instances of constrained nonlinear mixed-integer programming problems with the swarming technique. For each solved optimal programming problem, the particle swarm optimization result is compared with a nearly exact solution found via a direct method using nonlinear programming. Numerical experiments indicate that swarm search can locate solutions to very great accuracy. The second half of this research develops a new extremal-field approach for synthesizing nearly optimal feedback controllers for optimal control and two-player pursuit-evasion games described by general nonlinear differential equations. 
A notable revelation from this development is that the resulting control law has an algebraic closed-form structure. The proposed method uses an optimal spatial statistical predictor called universal kriging to construct the surrogate model of a feedback controller, which is capable of quickly predicting an optimal control estimate based on current state (and time) information. With universal kriging, an approximation to the optimal feedback map is computed by conceptualizing a set of state-control samples from pre-computed extremals as a particular realization of a jointly Gaussian spatial process. Feedback policies are computed for a variety of example dynamic optimization problems in order to evaluate the effectiveness of this methodology. This feedback synthesis approach is found to combine good numerical accuracy with low computational overhead, making it a suitable candidate for real-time applications. Particle swarm optimization and universal kriging are combined for a capstone example: a near-optimal, near-admissible, full-state feedback control law is computed and tested for the heat-load-limited atmospheric-turn guidance of an aeroassisted transfer vehicle. The performance of this explicit guidance scheme is found to be very promising; initial errors in atmospheric entry due to simulated thruster misfirings are accurately corrected while closely respecting the algebraic state-inequality constraint.
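
    The swarming technique used in the first half of the thesis can be sketched minimally as follows (global-best topology with standard inertia/acceleration constants; the sphere test function and all parameters are illustrative, not the thesis's trajectory optimization problems):

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200, seed=1):
    """Bare-bones particle swarm optimization.  Each particle's velocity
    blends inertia, a pull toward its own best point, and a pull toward
    the swarm's best point; no gradient or initial guess is needed."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: minimum 0 at the origin.
best, val = pso_minimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```

    In the thesis the decision variables are trajectory parameters and the fitness includes dynamics and constraints; the swarm mechanics are the same.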

  6. Visual ergonomics in the workplace.

    PubMed

    Anshel, Jeffrey R

    2007-10-01

    This article provides information about visual function and its role in workplace productivity. By understanding the connection among comfort, health, and productivity and knowing the many options for effective ergonomic workplace lighting, the occupational health nurse can be sensitive to potential visual stress that can affect all areas of performance. Computer vision syndrome, the eye and vision problems associated with near work experienced during or related to computer use, is defined and solutions to it are discussed.

  7. Gravitational field calculations on a dynamic lattice by distributed computing.

    NASA Astrophysics Data System (ADS)

    Mähönen, P.; Punkka, V.

    A new method of numerically calculating the time evolution of a gravitational field in general relativity is introduced. Vierbein (tetrad) formalism, a dynamic lattice, and massively parallelized computation are suggested, as they are expected to speed up the calculations considerably and to facilitate the solution of problems previously considered too hard to solve, such as the time evolution of a system of two or more black holes or the structure of wormholes.

  9. Bayesian inference and decision theory - A framework for decision making in natural resource management

    USGS Publications Warehouse

    Dorazio, R.M.; Johnson, F.A.

    2003-01-01

    Bayesian inference and decision theory may be used in the solution of relatively complex problems of natural resource management, owing to recent advances in statistical theory and computing. In particular, Markov chain Monte Carlo algorithms provide a computational framework for fitting models of adequate complexity and for evaluating the expected consequences of alternative management actions. We illustrate these features using an example based on management of waterfowl habitat.
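
    A hedged sketch of the Markov chain Monte Carlo machinery the abstract refers to: a bare random-walk Metropolis sampler on a one-dimensional target. The target density and tuning constants are illustrative, not the waterfowl-habitat model in the paper:

```python
import math
import random

def metropolis(logpost, x0, steps=20000, scale=1.0, seed=2):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, posterior ratio), and record the chain."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)
        lp_prop = logpost(prop)
        # accept uphill moves always, downhill moves with the ratio
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Target: standard normal posterior; the chain's mean should be near 0
# and its variance near 1.
chain = metropolis(lambda x: -0.5 * x * x, 0.0)
mean = sum(chain) / len(chain)
```

    Fitting a real management model replaces the toy log-posterior with the model's likelihood plus priors; expected consequences of management actions are then averaged over the chain.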

  10. Spurious Numerical Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Lafon, A.; Yee, H. C.

    1995-01-01

    Paper presents detailed study of spurious steady-state numerical solutions of differential equations that contain nonlinear source terms. Main objectives of this study are (1) to investigate how well numerical steady-state solutions of model nonlinear reaction/convection boundary-value problem mimic true steady-state solutions and (2) to relate findings of this investigation to implications for interpretation of numerical results from computational-fluid-dynamics algorithms and computer codes used to simulate reacting flows.
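
    The phenomenon is easy to reproduce. Applying explicit Euler to the logistic source term u' = u(1 - u), an illustrative stand-in for the paper's reaction/convection problem, a small step size converges to the true steady state u = 1, while a large step size locks onto a stable period-2 orbit that the differential equation does not possess:

```python
def euler_logistic(h, u0=0.5, steps=2000):
    """Explicit Euler applied to u' = u(1 - u).  The true ODE always
    relaxes to the steady state u = 1, but for step sizes h > 2 the
    discrete map settles onto a spurious period-2 orbit instead."""
    u = u0
    traj = []
    for _ in range(steps):
        u = u + h * u * (1 - u)
        traj.append(u)
    return traj

small = euler_logistic(0.5)[-1]   # converges to the true steady state 1
big = euler_logistic(2.3)[-4:]    # alternates between two spurious values
```

    The discrete map is conjugate to the logistic map with parameter r = 1 + h, which is why the period-doubling appears at h = 2; the paper studies the analogous behavior for nonlinear reaction/convection boundary-value problems.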

  11. Mass storage system experiences and future needs at the National Center for Atmospheric Research

    NASA Technical Reports Server (NTRS)

    Olear, Bernard T.

    1992-01-01

    This presentation relates some of the experiences of the Scientific Computing Division at NCAR in dealing with the 'data problem'. A brief history is given and some basic Mass Storage System (MSS) principles are developed. An attempt is made to show how these principles apply to the integration of various components into NCAR's MSS. MSS needs for future computing environments are also discussed.

  12. The Role of the Goal in Solving Hard Computational Problems: Do People Really Optimize?

    ERIC Educational Resources Information Center

    Carruthers, Sarah; Stege, Ulrike; Masson, Michael E. J.

    2018-01-01

    The role that the mental, or internal, representation plays when people are solving hard computational problems has largely been overlooked to date, despite the reality that this internal representation drives problem solving. In this work we investigate how performance on versions of two hard computational problems differs based on what internal…

  13. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

    A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.
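
    A quadrature-based (Nystrom) discretization gives the flavor of solving a second-kind integral equation numerically; it is a simpler cousin of the spline collocation scheme in the paper. The manufactured problem below, with known exact solution u(x) = x, is illustrative:

```python
def gauss_solve(A, b):
    """Plain Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def solve_fredholm(n=11):
    """Nystrom discretization of the second-kind integral equation
        u(x) - int_0^1 x*t*u(t) dt = (2/3)*x,
    whose exact solution is u(x) = x.  Trapezoid nodes and weights turn
    the equation into the dense linear system (I - W K) u = f."""
    h = 1.0 / (n - 1)
    xs = [i * h for i in range(n)]
    w = [h] * n
    w[0] = w[-1] = h / 2  # trapezoid weights
    # A[i][j] = delta_ij - w_j * K(x_i, t_j) with kernel K(x, t) = x*t
    A = [[(1.0 if i == j else 0.0) - w[j] * xs[i] * xs[j] for j in range(n)]
         for i in range(n)]
    f = [(2.0 / 3.0) * x for x in xs]
    return xs, gauss_solve(A, f)

xs, u = solve_fredholm()
```

    Spline collocation replaces the quadrature-point representation with a spline expansion enforced at collocation points; the resulting linear system has the same second-kind structure.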

  14. CHEMEX; Understanding and Solving Problems in Chemistry. A Computer-Assisted Instruction Program for General Chemistry.

    ERIC Educational Resources Information Center

    Lower, Stephen K.

    A brief overview of CHEMEX--a problem-solving, tutorial style computer-assisted instructional course--is provided and sample problems are offered. In CHEMEX, students receive problems in advance and attempt to solve them before moving through the computer program, which assists them in overcoming difficulties and serves as a review mechanism.…

  15. A Comparison of the Effects of Lego TC Logo and Problem Solving Software on Elementary Students' Problem Solving Skills.

    ERIC Educational Resources Information Center

    Palumbo, Debra L; Palumbo, David B.

    1993-01-01

    Computer-based problem-solving software exposure was compared to Lego TC LOGO instruction. Thirty fifth-graders received either Lego LOGO instruction, which couples Lego building-block activities with LOGO computer programming, or instruction with various problem-solving computer programs. Although both groups showed significant progress, the Lego…

  16. Graph cuts via l1 norm minimization.

    PubMed

    Bhusnurmath, Arvind; Taylor, Camillo J

    2008-10-01

    Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
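
    For contrast with the l1/interior-point route taken in the paper, the classical combinatorial route computes the same minimum cut via max-flow (here Edmonds-Karp, by the max-flow/min-cut theorem). The tiny capacity matrix below is illustrative, not drawn from the paper's image-processing graphs:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on an adjacency-matrix graph; the returned
    value equals the minimum s-t cut capacity."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total  # no augmenting path left: flow is maximal
        # find the bottleneck along the path, then augment
        bottleneck = float("inf")
        v = t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# 4-node example: s = 0, t = 3; the cut separating {0} has capacity 5.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 3],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
```

    The paper's contribution is to solve the same minimization as an unconstrained l1 problem via interior-point iterations on sparse Laplacian systems, which parallelizes more readily than augmenting-path search.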

  17. Computational complexity in entanglement transformations

    NASA Astrophysics Data System (ADS)

    Chitambar, Eric A.

    In physics, systems having three parts are typically much more difficult to analyze than those having just two. Even in classical mechanics, predicting the motion of three interacting celestial bodies remains an insurmountable challenge while the analogous two-body problem has an elementary solution. It is as if just by adding a third party, a fundamental change occurs in the structure of the problem that renders it unsolvable. In this thesis, we demonstrate how such an effect is likewise present in the theory of quantum entanglement. In fact, the complexity differences between two-party and three-party entanglement become quite conspicuous when comparing the difficulty in deciding what state changes are possible for these systems when no additional entanglement is consumed in the transformation process. We examine this entanglement transformation question and its variants in the language of computational complexity theory, a powerful subject that formalizes the concept of problem difficulty. Since deciding feasibility of a specified bipartite transformation is relatively easy, this task belongs to the complexity class P. On the other hand, for tripartite systems, we find the problem to be NP-Hard, meaning that its solution is at least as hard as the solution to some of the most difficult problems humans have encountered. One can then rigorously defend the assertion that a fundamental complexity difference exists between bipartite and tripartite entanglement since unlike the former, the full range of forms realizable by the latter is incalculable (assuming P≠NP). However, similar to the three-body celestial problem, when one examines a special subclass of the problem---invertible transformations on systems having at least one qubit subsystem---we prove that the problem can be solved efficiently. As a hybrid of the two questions, we find that the question of tripartite to bipartite transformations can be solved by an efficient randomized algorithm. 
Our results are obtained by encoding well-studied computational problems such as polynomial identity testing and tensor rank into questions of entanglement transformation. In this way, entanglement theory provides a physical manifestation of some of the most puzzling and abstract classical computation questions.
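
    Polynomial identity testing, one of the problems the thesis encodes into entanglement transformations, has a classic efficient randomized algorithm (Schwartz-Zippel): evaluate both polynomials at random points and declare them identical only if they always agree. The polynomials and constants below are illustrative:

```python
import random

def probably_identical(p, q, n_vars, trials=20, field=10**9 + 7, seed=3):
    """Schwartz-Zippel randomized polynomial identity test.  A single
    trial errs (misses a difference) with probability at most
    deg / field, so a handful of trials gives high confidence."""
    rng = random.Random(seed)
    for _ in range(trials):
        point = [rng.randrange(field) for _ in range(n_vars)]
        if p(point) % field != q(point) % field:
            return False  # a witness point: definitely not identical
    return True  # identical with high probability

# (x + y)^2 versus x^2 + 2xy + y^2: identical polynomials.
same = probably_identical(lambda v: (v[0] + v[1]) ** 2,
                          lambda v: v[0] ** 2 + 2 * v[0] * v[1] + v[1] ** 2, 2)
# (x + y)^2 versus x^2 + y^2: differ, and a random point exposes it.
diff = probably_identical(lambda v: (v[0] + v[1]) ** 2,
                          lambda v: v[0] ** 2 + v[1] ** 2, 2)
```

    The randomized tripartite-to-bipartite result in the thesis inherits exactly this flavor: a problem with no known deterministic polynomial-time algorithm admits an efficient randomized one.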

  18. The Telecommunications Environment and Its Implications for System Design.

    ERIC Educational Resources Information Center

    Learn, Larry L.; McGill, Michael J.

    1984-01-01

    Discusses changing telecommunications environment and effect these changes might have on information systems design. Major telecommunications factors and trends reviewed are classified as technical (application of computer technologies to classical telecommunications problems), economic, and regulatory policy related (divestiture of American…

  19. Copyright Development in the United States

    ERIC Educational Resources Information Center

    Stedman, John C.

    1976-01-01

    Some of the problems posed by the more significant new technologies in their relation to the copyright law are described. Included in the discussion are cable television, reprography (especially Xeroxing and comparable processes), and the computer. A federal Technology Commission is proposed. (LBH)

  20. Quantum computation with coherent spin states and the close Hadamard problem

    NASA Astrophysics Data System (ADS)

    Adcock, Mark R. A.; Høyer, Peter; Sanders, Barry C.

    2016-04-01

    We study a model of quantum computation based on the continuously parameterized yet finite-dimensional Hilbert space of a spin system. We explore the computational powers of this model by analyzing a pilot problem we refer to as the close Hadamard problem. We prove that the close Hadamard problem can be solved in the spin system model with arbitrarily small error probability in a constant number of oracle queries. We conclude that this model of quantum computation is suitable for solving certain types of problems. The model is effective for problems where symmetries between the structure of the information associated with the problem and the structure of the unitary operators employed in the quantum algorithm can be exploited.

  1. Matrix Interdiction Problem

    NASA Astrophysics Data System (ADS)

    Kasiviswanathan, Shiva Prasad; Pan, Feng

    In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove a set of k matrix columns that minimizes, in the residual matrix, the sum of the row values, where the value of a row is defined to be the largest entry in that row. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to minimize the probability that an adversary can successfully smuggle weapons. After introducing the matrix interdiction problem, we study its computational complexity. We show that the matrix interdiction problem is NP-hard and that there exists a constant γ such that it is even NP-hard to approximate this problem within an n^γ additive factor. We also present an algorithm for this problem that achieves an (n - k) multiplicative approximation ratio.
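
    Since the problem is NP-hard, only tiny instances can be solved exactly; a brute-force sketch makes the objective concrete. The matrix below is illustrative:

```python
from itertools import combinations

def matrix_interdiction(matrix, k):
    """Exhaustive solver for small matrix interdiction instances:
    remove k columns so that the sum over rows of each row's largest
    remaining entry is minimized.  Exponential in k, so viable only
    for tiny inputs; the paper gives an (n - k)-approximation for the
    general NP-hard case."""
    n_cols = len(matrix[0])
    best_val, best_cols = float("inf"), None
    for removed in combinations(range(n_cols), k):
        keep = [j for j in range(n_cols) if j not in removed]
        val = sum(max(row[j] for j in keep) for row in matrix)
        if val < best_val:
            best_val, best_cols = val, removed
    return best_val, best_cols

m = [[5, 1, 2],
     [4, 1, 3],
     [1, 1, 6]]
```

    Removing column 2 here drops the residual row maxima to 5, 4, and 1, which no other single-column removal beats.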

  2. On Convergence Acceleration Techniques for Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1998-01-01

    A discussion of convergence acceleration techniques as they relate to computational fluid dynamics problems on unstructured meshes is given. Rather than providing a detailed description of particular methods, the various different building blocks of current solution techniques are discussed and examples of solution strategies using one or several of these ideas are given. Issues relating to unstructured grid CFD problems are given additional consideration, including suitability of algorithms to current hardware trends, memory and cpu tradeoffs, treatment of non-linearities, and the development of efficient strategies for handling anisotropy-induced stiffness. The outlook for future potential improvements is also discussed.

  3. [Management of the visual risk in VDT workers and the role of the occupational physician (Medico competente)].

    PubMed

    Signorelli, C; Lepratto, M; Summa, A

    2005-01-01

    The enormous increase in computer use in work activities has brought great progress and many advantages, but it has also brought possible health problems for workers. The occupational risk in VDT workers involves the visual system, work-related musculoskeletal disorders, and also the mental state. This article concerns the major problems related to the obligations of the employer and to health surveillance, with particular attention to the ophthalmological examination for fitness, the responsibility and duty of occupational physicians (medici competenti), and the possible role of ophthalmologists.

  4. Substructure analysis using NICE/SPAR and applications of force to linear and nonlinear structures. [spacecraft masts

    NASA Technical Reports Server (NTRS)

    Razzaq, Zia; Prasad, Venkatesh; Darbhamulla, Siva Prasad; Bhati, Ravinder; Lin, Cai

    1987-01-01

    Parallel computing studies are presented for a variety of structural analysis problems. Included are the substructure planar analysis of rectangular panels with and without a hole, the static analysis of space mast, using NICE/SPAR and FORCE, and substructure analysis of plane rigid-jointed frames using FORCE. The computations are carried out on the Flex/32 MultiComputer using one to eighteen processors. The NICE/SPAR runstream samples are documented for the panel problem. For the substructure analysis of plane frames, a computer program is developed to demonstrate the effectiveness of a substructuring technique when FORCE is enforced. Ongoing research activities for an elasto-plastic stability analysis problem using FORCE, and stability analysis of the focus problem using NICE/SPAR are briefly summarized. Speedup curves for the panel, the mast, and the frame problems provide a basic understanding of the effectiveness of parallel computing procedures utilized or developed, within the domain of the parameters considered. Although the speedup curves obtained exhibit various levels of computational efficiency, they clearly demonstrate the excellent promise which parallel computing holds for the structural analysis problem. Source code is given for the elasto-plastic stability problem and the FORCE program.

  5. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1990-01-01

    Very little use is made of the multiple processors available on current supercomputers (computers with a theoretical peak performance capability of 100 MFLOPS or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) through multi-tasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single-processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds-averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to a generic oblique-wing aircraft problem on a four-processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.

  6. Computer Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pronskikh, V. S.

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimentation with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  7. Randomized Trial of Desktop Humidifier for Dry Eye Relief in Computer Users.

    PubMed

    Wang, Michael T M; Chan, Evon; Ea, Linda; Kam, Clifford; Lu, Yvonne; Misra, Stuti L; Craig, Jennifer P

    2017-11-01

    Dry eye is a frequently reported problem among computer users. Low relative humidity environments are recognized to exacerbate signs and symptoms of dry eye, yet are common in offices of computer operators. Desktop USB-powered humidifiers are available commercially, but their efficacy for dry eye relief has not been established. This study aims to evaluate the potential for a desktop USB-powered humidifier to improve tear-film parameters, ocular surface characteristics, and subjective comfort of computer users. Forty-four computer users were enrolled in a prospective, masked, randomized crossover study. On separate days, participants were randomized to 1 hour of continuous computer use, with and without exposure to a desktop humidifier. Lipid-layer grade, noninvasive tear-film breakup time, and tear meniscus height were measured before and after computer use. Following the 1-hour period, participants reported whether ocular comfort was greater, equal, or lesser than that at baseline. The desktop humidifier effected a relative difference in humidity between the two environments of +5.4 ± 5.0% (P < .001). Participants demonstrated no significant differences in lipid-layer grade and tear meniscus height between the two environments (all P > .05). However, a relative increase in the median noninvasive tear-film breakup time of +4.0 seconds was observed in the humidified environment (P < .001), which was associated with a higher proportion of subjects reporting greater comfort relative to baseline (36% vs. 5%, P < .001). Even with a modest increase in relative humidity locally, the desktop humidifier shows potential to improve tear-film stability and subjective comfort during computer use.Trial registration no: ACTRN12617000326392.

  8. Are Technology Interruptions Impacting Your Bottom Line? An Innovative Proposal for Change.

    PubMed

    Ledbetter, Tamera; Shultz, Sarah; Beckham, Roxanne

    2017-10-01

    Nursing interruptions are a costly and dangerous variable in acute care hospitals. Malfunctioning technology equipment interrupts nursing care and prevents full utilization of computer safety systems to prevent patient care errors. This paper identifies an innovative approach to nursing interruptions related to computer and computer cart malfunctions. The impact on human resources is defined and outcome measures were proposed. A multifaceted proposal, based on a literature review, aimed at reducing nursing interruptions is presented. This proposal is expected to increase patient safety, as well as patient and nurse satisfaction. Acute care hospitals utilizing electronic medical records and bar-coded medication administration technology. Nurses, information technology staff, nursing informatics staff, and all leadership teams affected by technology problems and their proposed solutions. Literature from multiple fields was reviewed to evaluate research related to computer/computer cart failures, and the approaches used to resolve these issues. Outcome measured strategic goals related to patient safety, and nurse and patient satisfaction. Specific help desk metrics will demonstrate the effect of interventions. This paper addresses a gap in the literature and proposes practical and innovative solutions. A comprehensive computer and computer cart repair program is essential for patient safety, financial stewardship, and utilization of resources. © 2015 Wiley Periodicals, Inc.

  9. A performance comparison of scalar, vector, and concurrent vector computers including supercomputers for modeling transport of reactive contaminants in groundwater

    NASA Astrophysics Data System (ADS)

    Tripathi, Vijay S.; Yeh, G. T.

    1993-06-01

    Sophisticated and highly computation-intensive models of transport of reactive contaminants in groundwater have been developed in recent years. Application of such models to real-world contaminant transport problems, e.g., simulation of groundwater transport of 10-15 chemically reactive elements (e.g., toxic metals) and relevant complexes and minerals in two and three dimensions over a distance of several hundred meters, requires high-performance computers including supercomputers. Although not widely recognized as such, the computational complexity and demand of these models compare with well-known computation-intensive applications including weather forecasting and quantum chemical calculations. A survey of the performance of a variety of available hardware, as measured by the run times for a reactive transport model HYDROGEOCHEM, showed that while supercomputers provide the fastest execution times for such problems, relatively low-cost reduced instruction set computer (RISC) based scalar computers provide the best performance-to-price ratio. Because supercomputers like the Cray X-MP are inherently multiuser resources, often the RISC computers also provide much better turnaround times. Furthermore, RISC-based workstations provide the best platforms for "visualization" of groundwater flow and contaminant plumes. The most notable result, however, is that current workstations costing less than $10,000 provide performance within a factor of 5 of a Cray X-MP.

  10. Scalable Faceted Ranking in Tagging Systems

    NASA Astrophysics Data System (ADS)

    Orlicki, José I.; Alvarez-Hamelin, J. Ignacio; Fierens, Pablo I.

    Web-based collaborative tagging systems, which allow users to upload, comment on, and recommend content, are growing. Such systems can be represented as graphs where nodes correspond to users and tagged links to recommendations. In this paper we analyze the problem of computing a ranking of users with respect to a facet described as a set of tags. A straightforward solution is to run a PageRank-like algorithm on a facet-related graph, but this is not feasible for online computation. We propose an alternative: (i) a ranking for each tag is computed offline on the basis of tag-related subgraphs; (ii) a faceted order is generated online by merging the rankings corresponding to all the tags in the facet. Based on graph analysis of YouTube and Flickr, we show that step (i) is scalable. We also present efficient algorithms for step (ii), which are evaluated by comparing their results with two gold standards.
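    The online merging step (ii) can be sketched as simple score aggregation over precomputed per-tag rankings. This is an editor's illustration with invented data and function names, not the paper's actual algorithms:

    ```python
    # Step (ii) of the faceted ranking idea: combine precomputed per-tag
    # scores (from step (i), a PageRank-like pass per tag-subgraph) for the
    # tags that make up a facet. Data and names are hypothetical.

    def merge_rankings(per_tag_scores, facet):
        """Sum each user's scores over the facet's tags; rank by total."""
        combined = {}
        for tag in facet:
            for user, score in per_tag_scores.get(tag, {}).items():
                combined[user] = combined.get(user, 0.0) + score
        return sorted(combined, key=combined.get, reverse=True)

    per_tag = {
        "music":  {"alice": 0.6, "bob": 0.3},
        "guitar": {"bob": 0.5, "carol": 0.2},
    }
    print(merge_rankings(per_tag, {"music", "guitar"}))
    # ['bob', 'alice', 'carol']  (bob: 0.8, alice: 0.6, carol: 0.2)
    ```

    Because the expensive graph computation happens offline per tag, the online cost is only this linear merge over the facet's tags.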

  11. A new parallel DNA algorithm to solve the task scheduling problem based on inspired computational model.

    PubMed

    Wang, Zhaocai; Ji, Zuwen; Wang, Xiaoming; Wu, Tunhua; Huang, Wei

    2017-12-01

    As a promising approach to computationally intractable problems, DNA computing is an emerging research area spanning mathematics, computer science, and molecular biology. The task scheduling problem, a well-known NP-complete problem, assigns n jobs to m individuals so as to minimize the completion time of the last-finished individual. In this paper, we use a biologically inspired computational model and describe a new parallel algorithm that solves the task scheduling problem by basic DNA molecular operations. We design flexible-length DNA strands to represent the elements of the allocation matrix, apply appropriate biological experimental operations, and obtain solutions of the task scheduling problem in the proper length range with less than O(n²) time complexity. Copyright © 2017. Published by Elsevier B.V.
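    The underlying objective — assign n jobs to m individuals and minimize the finishing time of the last one — can be stated concretely with a brute-force reference implementation (an editor's sketch of the problem definition only; the DNA-based parallel algorithm itself cannot be reproduced in conventional code):

    ```python
    from itertools import product

    def min_makespan(jobs, m):
        """Try every assignment of jobs to m individuals; return the minimum
        possible completion time of the last-finishing individual."""
        best = float("inf")
        for assignment in product(range(m), repeat=len(jobs)):
            loads = [0] * m
            for job, individual in zip(jobs, assignment):
                loads[individual] += job
            best = min(best, max(loads))
        return best

    # 4 jobs, 2 individuals: optimal split is {3, 5} vs {2, 6}, makespan 8.
    print(min_makespan([3, 5, 2, 6], 2))  # 8
    ```

    The m**n assignments enumerated here are exactly what makes the problem NP-complete and motivates massively parallel approaches such as DNA computing.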

  12. Finite element methods on supercomputers - The scatter-problem

    NASA Technical Reports Server (NTRS)

    Loehner, R.; Morgan, K.

    1985-01-01

    Certain problems arise in connection with the use of supercomputers for the implementation of finite-element methods. These problems are related to the desirability of utilizing the power of the supercomputer as fully as possible for the rapid execution of the required computations, taking into account the gain in speed possible with the aid of pipelining operations. For the finite-element method, the time-consuming operations may be divided into three categories. The first two present no problems, while the third type of operation can be a reason for the inefficient performance of finite-element programs. Two possibilities for overcoming certain difficulties are proposed, giving attention to a scatter-process.
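    The scatter operation at issue is the indirect-addressed accumulation of element-local contributions into a global array. A minimal sketch (invented mesh data) shows why it resists naive vectorization: two elements sharing a node write to the same global entry, creating a data dependence:

    ```python
    # The FEM "scatter" step: element contributions are added into a global
    # vector at indirectly addressed node positions. When adjacent elements
    # share a node (here node 1 and node 2), the += targets collide, which
    # is what prevents straightforward pipelined/vector execution.

    def scatter_add(global_vec, element_nodes, element_values):
        for nodes, values in zip(element_nodes, element_values):
            for node, value in zip(nodes, values):
                global_vec[node] += value
        return global_vec

    g = [0.0] * 4
    elems = [(0, 1), (1, 2), (2, 3)]              # node indices per element
    vals  = [(1.0, 1.0), (1.0, 1.0), (1.0, 1.0)]  # per-element contributions
    print(scatter_add(g, elems, vals))  # [1.0, 2.0, 2.0, 1.0]
    ```

    Remedies of the kind the paper discusses typically color the elements so that no two elements in the same group share a node, letting each group scatter conflict-free.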

  13. Control optimization, stabilization and computer algorithms for aircraft applications

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.

  14. Forest management and economics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buongiorno, J.; Gilless, J.K.

    1987-01-01

    This volume provides a survey of quantitative methods, guiding the reader through formulation and analysis of models that address forest management problems. The authors use simple mathematics, graphics, and short computer programs to explain each method. Emphasizing applications, they discuss linear, integer, dynamic, and goal programming; simulation; network modeling; and econometrics, as these relate to problems of determining economic harvest schedules in even-aged and uneven-aged forests, the evaluation of forest policies, multiple-objective decision making, and more.

  15. Insight and analysis problem solving in microbes to machines.

    PubMed

    Clark, Kevin B

    2015-11-01

    A key feature for obtaining solutions to difficult problems, insight is oftentimes vaguely regarded as a special discontinuous intellectual process and/or a cognitive restructuring of problem representation or goal approach. However, this nearly century-old state of the art, devised by the Gestalt tradition to explain the non-analytical or non-trial-and-error, goal-seeking aptitude of primate mentality, tends to neglect the problem-solving capabilities of lower animal phyla, Kingdoms other than Animalia, and advancing smart computational technologies built from biological, artificial, and composite media. Attempting to provide an inclusive, precise definition of insight, two major criteria of insight, discontinuous processing and problem restructuring, are here reframed using terminology and statistical mechanical properties of computational complexity classes. Discontinuous processing becomes abrupt state transitions in algorithmic/heuristic outcomes or in types of algorithms/heuristics executed by agents using classical and/or quantum computational models. And problem restructuring becomes combinatorial reorganization of resources, problem-type substitution, and/or exchange of computational models. With insight bounded by computational complexity, humans, ciliated protozoa, and complex technological networks, for example, show insight when restructuring time requirements, combinatorial complexity, and problem type to solve polynomial and nondeterministic polynomial decision problems. Similar effects are expected from other problem types, supporting the idea that insight might be an epiphenomenon of analytical problem solving and consequently a larger information processing framework. Thus, this computational complexity definition of insight improves the power, external and internal validity, and reliability of operational parameters with which to classify, investigate, and produce the phenomenon for computational agents ranging from microbes to man-made devices. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Academic computer science and gender: A naturalistic study investigating the causes of attrition

    NASA Astrophysics Data System (ADS)

    Declue, Timothy Hall

    Far fewer women than men take computer science classes in high school, enroll in computer science programs in college, or complete advanced degrees in computer science. The computer science pipeline begins to shrink for women even before entering college, but it is at the college level that the "brain drain" is the most evident numerically, especially in the first class taken by most computer science majors, called "Computer Science 1" or CS-I. The result, for both academia and industry, is a pronounced technological gender disparity in academic and industrial computer science. The study revealed the existence of several factors influencing success in CS-I. First, and most clearly, the effect of attribution processes seemed to be quite strong. These processes tend to work against success for females and in favor of success for males. Likewise, evidence was discovered which strengthens theories related to prior experience and the perception that computer science has a culture which is hostile to females. Two unanticipated themes emerged, related to the motivation and persistence of successful computer science majors. The findings did not support the belief that females have greater logistical problems in computer science than males, or that females tend to have a different programming style than males which adversely affects the females' ability to succeed in CS-I.

  17. Knowledge Based Systems: A Critical Survey of Major Concepts, Issues, and Techniques. M.S. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Kavi, Srinu

    1984-01-01

    This Working Paper Series entry presents a detailed survey of knowledge based systems. After being in a relatively dormant state for many years, only recently is Artificial Intelligence (AI) - that branch of computer science that attempts to have machines emulate intelligent behavior - accomplishing practical results. Most of these results can be attributed to the design and use of Knowledge-Based Systems, KBSs (or expert systems) - problem solving computer programs that can reach a level of performance comparable to that of a human expert in some specialized problem domain. These systems can act as a consultant for various requirements like medical diagnosis, military threat analysis, project risk assessment, etc. These systems possess knowledge to enable them to make intelligent decisions. They are, however, not meant to replace the human specialists in any particular domain. A critical survey of recent work in interactive KBSs is reported. A case study (MYCIN) of a KBS, a list of existing KBSs, and an introduction to the Japanese Fifth Generation Computer Project are provided as appendices. Finally, an extensive set of KBS-related references is provided at the end of the report.

  18. Quantum Computing: Solving Complex Problems

    ScienceCinema

    DiVincenzo, David

    2018-05-22

    One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.

  19. A nonlinear bi-level programming approach for product portfolio management.

    PubMed

    Ma, Shuang

    2016-01-01

    Product portfolio management (PPM) is a critical decision-making activity for companies across various industries in today's competitive environment. Traditional studies of the PPM problem have been oriented toward engineering feasibility and marketing, paying relatively little attention to competitors' actions and competitive relations, especially in the mathematical optimization domain. The key challenge lies in how to construct a mathematical optimization model that describes this Stackelberg game-based leader-follower PPM problem and the competitive relations between the players. The primary contribution of this paper is a decision framework and an optimization model that leverage the PPM problem of leader and follower. A nonlinear, integer bi-level programming model is developed based on the decision framework. Furthermore, a bi-level nested genetic algorithm is put forward to solve this nonlinear bi-level programming model for the leader-follower PPM problem. A case study of notebook computer product portfolio optimization is reported. Results and analyses reveal that the leader-follower bi-level optimization model is robust and can empower product portfolio optimization.

  20. An Enhanced Memetic Algorithm for Single-Objective Bilevel Optimization Problems.

    PubMed

    Islam, Md Monjurul; Singh, Hemant Kumar; Ray, Tapabrata; Sinha, Ankur

    2017-01-01

    Bilevel optimization, as the name reflects, deals with optimization at two interconnected hierarchical levels. The aim is to identify the optimum of an upper-level  leader problem, subject to the optimality of a lower-level follower problem. Several problems from the domain of engineering, logistics, economics, and transportation have an inherent nested structure which requires them to be modeled as bilevel optimization problems. Increasing size and complexity of such problems has prompted active theoretical and practical interest in the design of efficient algorithms for bilevel optimization. Given the nested nature of bilevel problems, the computational effort (number of function evaluations) required to solve them is often quite high. In this article, we explore the use of a Memetic Algorithm (MA) to solve bilevel optimization problems. While MAs have been quite successful in solving single-level optimization problems, there have been relatively few studies exploring their potential for solving bilevel optimization problems. MAs essentially attempt to combine advantages of global and local search strategies to identify optimum solutions with low computational cost (function evaluations). The approach introduced in this article is a nested Bilevel Memetic Algorithm (BLMA). At both upper and lower levels, either a global or a local search method is used during different phases of the search. The performance of BLMA is presented on twenty-five standard test problems and two real-life applications. The results are compared with other established algorithms to demonstrate the efficacy of the proposed approach.
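    The nested structure that drives up function-evaluation counts can be sketched in a few lines: every upper-level candidate triggers a complete lower-level optimization. The objectives and grids below are invented for illustration and are not the BLMA benchmark problems:

    ```python
    # Toy nested bilevel evaluation: the leader picks x, the follower then
    # optimally picks y given that x, and the leader's objective is scored
    # at the follower's best response. Illustrative quadratics only.

    def follower_best_response(x, ys):
        # Lower level: minimize (y - x)^2 over the follower's grid.
        return min(ys, key=lambda y: (y - x) ** 2)

    def leader_objective(x, y):
        # Upper level: minimize x^2 + y^2, with y fixed to the reply above.
        return x ** 2 + y ** 2

    xs = [i / 10 for i in range(-20, 21)]
    ys = [i / 10 for i in range(-20, 21)]
    best_x = min(xs, key=lambda x: leader_objective(x, follower_best_response(x, ys)))
    print(best_x, follower_best_response(best_x, ys))  # 0.0 0.0
    ```

    Even this grid search performs a full lower-level scan per upper-level candidate; memetic approaches such as BLMA aim to cut exactly this nested evaluation cost by mixing global and local search at both levels.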

  1. Computer vision syndrome prevalence, knowledge and associated factors among Saudi Arabia University Students: Is it a serious problem?

    PubMed

    Al Rashidi, Sultan H; Alhumaidan, H

    2017-01-01

    Computers and other visual display devices are now an essential part of our daily life. With increased use, a very large population globally is experiencing sundry ocular symptoms, such as dry eyes, eye strain, irritation, and redness of the eyes, to name a few. Collectively, all such computer-related symptoms are usually referred to as computer vision syndrome (CVS). The current study aims to define the prevalence of CVS, knowledge of it in the community, its pathophysiology, associated factors, and prevention. This is a cross-sectional study conducted in Qassim University College of Medicine over a period of 1 year, from January 2015 to January 2016, using a questionnaire to collect relevant data including demographics and the various variables to be studied. 634 students were inducted from a public sector University of Qassim, Saudi Arabia, regardless of their age and gender. The data were then statistically analyzed in SPSS version 22, and the descriptive data were expressed as percentages, mode, and median, using graphs where needed. A total of 634 students with a mean age of 21.40 years (SD 1.997, range 18-25) were included as study subjects, with a male predominance (77.28%). Of the total subjects, the majority (459, 72%) presented with acute symptoms while the remainder had chronic problems. A clear-cut majority had carried the symptoms for <5 days and >1 month. The statistical analysis revealed serious symptoms in the majority of study subjects, especially those who are permanent users of a computer for long hours. Continuous use of computers for long hours is found to produce severe vision problems, especially in those who use computers and similar devices for a long duration.

  2. Computer vision syndrome prevalence, knowledge and associated factors among Saudi Arabia University Students: Is it a serious problem?

    PubMed Central

    Al Rashidi, Sultan H.; Alhumaidan, H.

    2017-01-01

    Objectives: Computers and other visual display devices are now an essential part of our daily life. With increased use, a very large population globally is experiencing sundry ocular symptoms, such as dry eyes, eye strain, irritation, and redness of the eyes, to name a few. Collectively, all such computer-related symptoms are usually referred to as computer vision syndrome (CVS). The current study aims to define the prevalence of CVS, knowledge of it in the community, its pathophysiology, associated factors, and prevention. Methods: This is a cross-sectional study conducted in Qassim University College of Medicine over a period of 1 year, from January 2015 to January 2016, using a questionnaire to collect relevant data including demographics and the various variables to be studied. 634 students were inducted from a public sector University of Qassim, Saudi Arabia, regardless of their age and gender. The data were then statistically analyzed in SPSS version 22, and the descriptive data were expressed as percentages, mode, and median, using graphs where needed. Results: A total of 634 students with a mean age of 21.40 years (SD 1.997, range 18-25) were included as study subjects, with a male predominance (77.28%). Of the total subjects, the majority (459, 72%) presented with acute symptoms while the remainder had chronic problems. A clear-cut majority had carried the symptoms for <5 days and >1 month. The statistical analysis revealed serious symptoms in the majority of study subjects, especially those who are permanent users of a computer for long hours. Conclusion: Continuous use of computers for long hours is found to produce severe vision problems, especially in those who use computers and similar devices for a long duration. PMID:29114189

  3. Towards an Autonomic Cluster Management System (ACMS) with Reflex Autonomicity

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Hinchey, Mike; Sterritt, Roy

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of providing a fault-tolerant environment and achieving significant computational capabilities for high-performance computing applications. However, the task of manually managing and configuring a cluster quickly becomes daunting as the cluster grows in size. Autonomic computing, with its vision to provide self-management, can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management and its evolution to include reflex reactions via pulse monitoring.

  4. Computers in Medical Education: A Cooperative Approach to Planning and Implementation

    PubMed Central

    Ellis, Lynda B.M.; Fuller, Sherrilynne

    1988-01-01

    After years of ‘ad hoc’ growth in the use of computers in the curriculum, the University of Minnesota Medical School in cooperation with the Bio-Medical Library and Health Sciences Computing Services developed and began implementation of a plan for integration of medical informatics into all phases of medical education. Objectives were developed which focus on teaching skills related to: 1) accessing, retrieving, evaluating and managing medical information; 2) appropriate utilization of computer-assisted instruction lessons; 3) electronic communication with fellow students and medical faculty; and 4) fostering a lifelong commitment to effective use of computers to solve clinical problems. Surveys assessed the status of computer expertise among faculty and entering students. The results of these surveys, lessons learned from this experience, and implications for the future of computers in medical education are discussed.

  5. Identifying controlling variables for math computation fluency through experimental analysis: the interaction of stimulus control and reinforcing consequences.

    PubMed

    Hofstadter-Duke, Kristi L; Daly, Edward J

    2015-03-01

    This study investigated a method for conducting experimental analyses of academic responding. In the experimental analyses, academic responding (math computation), rather than problem behavior, was reinforced across conditions. Two separate experimental analyses (one with fluent math computation problems and one with non-fluent math computation problems) were conducted with three elementary school children using identical contingencies while math computation rate was measured. Results indicate that the experimental analysis with non-fluent problems produced undifferentiated responding across participants; however, differentiated responding was achieved for all participants in the experimental analysis with fluent problems. A subsequent comparison of the single-most effective condition from the experimental analyses replicated the findings with novel computation problems. Results are discussed in terms of the critical role of stimulus control in identifying controlling consequences for academic deficits, and recommendations for future research refining and extending experimental analysis to academic responding are made. © The Author(s) 2014.

  6. Apollo experience report: Apollo lunar surface experiments package data processing system

    NASA Technical Reports Server (NTRS)

    Eason, R. L.

    1974-01-01

    Apollo Program experience in the processing of scientific data from the Apollo lunar surface experiments package, in which computers and associated hardware and software were used, is summarized. The facility developed for the preprocessing of the lunar science data is described, as are several computer facilities and programs used by the Principal Investigators. The handling, processing, and analyzing of lunar science data and the interface with the Principal Investigators are discussed. Pertinent problems that arose in the development of the data processing schemes are discussed so that future programs may benefit from the solutions to the problems. The evolution of the data processing techniques for lunar science data is related to recommendations for future programs of this type.

  7. Structural Characteristics and Reactivity Relationships of Nitroaromatic and Nitramine Explosives – A Review of Our Computational Chemistry and Spectroscopic Research

    PubMed Central

    Qasim, Mohammad M.; Moore, Brett; Taylor, Lyssa; Honea, Patricia; Gorb, Leonid; Leszczynski, Jerzy

    2007-01-01

    Although much has been discovered, discussed and written as to problems of contamination by various military unique compounds, particularly by the nitrogen based energetics (NOCs), remaining problems dictate further evaluation of actual and potential risk to the environment by these energetics and their derivatives and metabolites through determination of their environmental impact—transport, fate and toxicity. This work comprises an effort to understand structural relationships and degradation mechanisms of current and emerging explosives, including nitroaromatic; cyclic and cage cyclic nitramine; and a nitrocubane. This review of our computational chemistry and spectroscopic research describes and compares competitive degradation mechanisms by free radical oxidative, reductive and alkali hydrolysis, relating them, when possible, to environmental risk.

  8. Resource Constrained Planning of Multiple Projects with Separable Activities

    NASA Astrophysics Data System (ADS)

    Fujii, Susumu; Morita, Hiroshi; Kanawa, Takuya

    In this study we consider a resource constrained planning problem of multiple projects with separable activities. This problem provides a plan to process the activities considering a resource availability with time window. We propose a solution algorithm based on the branch and bound method to obtain the optimal solution minimizing the completion time of all projects. We develop three methods for improvement of computational efficiency, that is, to obtain initial solution with minimum slack time rule, to estimate lower bound considering both time and resource constraints and to introduce an equivalence relation for bounding operation. The effectiveness of the proposed methods is demonstrated by numerical examples. Especially as the number of planning projects increases, the average computational time and the number of searched nodes are reduced.
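    The branch-and-bound pattern described above can be sketched on a simplified stand-in problem. This is an editor's illustration (assigning jobs to machines to minimize completion time); the paper's actual bounding rules use both time and resource constraints, which are not modeled here:

    ```python
    # Minimal branch and bound: depth-first branching assigns each job to a
    # machine; a partial plan is pruned as soon as its makespan lower bound
    # (current max load) already meets or exceeds the best complete plan.

    def branch_and_bound(jobs, m):
        best = [sum(jobs)]  # initial upper bound: everything on one machine

        def recurse(i, loads):
            if max(loads) >= best[0]:   # bound: prune dominated partial plans
                return
            if i == len(jobs):          # complete plan: record improvement
                best[0] = max(loads)
                return
            for k in range(m):          # branch: put job i on machine k
                loads[k] += jobs[i]
                recurse(i + 1, loads)
                loads[k] -= jobs[i]

        recurse(0, [0] * m)
        return best[0]

    print(branch_and_bound([3, 5, 2, 6], 2))  # 8
    ```

    The paper's three refinements map onto this skeleton: a better initial solution tightens `best` from the start, a stronger lower bound sharpens the prune test, and an equivalence relation lets whole branches of symmetric assignments be discarded at once.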

  9. The ADL Registry and CORDRA. Volume 1: General Overview

    DTIC Science & Technology

    2008-08-01

    and problems encountered by others in related fields, such as library science, computer and network systems design, and publishing. As ADL...in and exist in isolated islands, limiting their visibility, access, and reuse. Compared to publishing and library science, the learning

  10. Image databases: Problems and perspectives

    NASA Technical Reports Server (NTRS)

    Gudivada, V. Naidu

    1989-01-01

    With the increasing number of computer graphics, image processing, and pattern recognition applications, economical storage, efficient representation and manipulation, and powerful and flexible query languages for retrieval of image data are of paramount importance. These and related issues pertinent to image data bases are examined.

  11. An Efficient Statistical Computation Technique for Health Care Big Data using R

    NASA Astrophysics Data System (ADS)

    Sushma Rani, N.; Srinivasa Rao, P., Dr; Parimala, P.

    2017-08-01

    Due to changes in living conditions and other factors, many critical health-related problems are arising. Diagnosis of a problem at an earlier stage increases the chances of survival and fast recovery, reducing the time of recovery and the cost associated with treatment. One such medical issue is cancer, and breast cancer has been identified as the second leading cause of cancer death. If detected at an early stage it can be cured. Once a patient is detected with a breast cancer tumor, it should be classified as cancerous or non-cancerous. So the paper uses the k-nearest neighbors (KNN) algorithm, one of the simplest machine learning algorithms and an instance-based learning algorithm, to classify the data. Day-to-day, new records are added, which leads to an increase in the data to be classified, and this tends toward a big data problem. The algorithm is implemented in R, which is the most popular platform for applying machine learning algorithms in statistical computing. Experimentation is conducted using various classification evaluation metrics on various values of k. The results show that the KNN algorithm performs better than existing models.
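    The classification step is plain k-nearest neighbors: a query record takes the majority label of its k closest training records. The study ran this in R; the sketch below is an equivalent Python illustration with made-up two-feature records:

    ```python
    from collections import Counter
    import math

    # k-nearest-neighbors classification: find the k training records
    # closest to the query (Euclidean distance) and return their majority
    # label. Records and labels below are invented for illustration.

    def knn_predict(train, query, k):
        """train: list of (features, label) pairs."""
        nearest = sorted(train, key=lambda rec: math.dist(rec[0], query))[:k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

    train = [((1.0, 1.1), "benign"), ((0.9, 1.0), "benign"),
             ((3.0, 3.2), "malignant"), ((3.1, 2.9), "malignant")]
    print(knn_predict(train, (1.1, 0.9), k=3))  # benign
    ```

    Since KNN stores and scans every record at prediction time, each day's added records make classification slower, which is the big-data pressure the paper notes.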

  12. Strategy alternatives for homeland air and cruise missile defense.

    PubMed

    Murphy, Eric M; Payne, Michael D; Vanderwoude, Glenn W

    2010-10-01

    Air and cruise missile defense of the U.S. homeland is characterized by a requirement to protect a large number of critical assets nonuniformly dispersed over a vast area with relatively few defensive systems. In this article, we explore strategy alternatives to make the best use of existing defense resources and suggest this approach as a means of reducing risk while mitigating the cost of developing and acquiring new systems. We frame the issue as an attacker-defender problem with simultaneous moves. First, we outline and examine the relatively simple problem of defending comparatively few locations with two surveillance systems. Second, we present our analysis and findings for a more realistic scenario that includes a representative list of U.S. critical assets. Third, we investigate sensitivity to defensive strategic choices in the more realistic scenario. As part of this investigation, we describe two complementary computational methods that, under certain circumstances, allow one to reduce large computational problems to a more manageable size. Finally, we demonstrate that strategic choices can be an important supplement to material solutions and can, in some cases, be a more cost-effective alternative. © 2010 Society for Risk Analysis.
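    The simultaneous-move attacker-defender framing can be made concrete with a toy payoff matrix. The postures, targets, and damage values below are invented; a conservative defender picks the pure strategy minimizing worst-case damage (minimax), the simplest version of the strategic choice the article analyzes:

    ```python
    # Rows: defender surveillance postures. Columns: attacker target choices.
    # Entries: expected damage. All numbers are hypothetical.

    damage = [
        [2, 9],   # defend coast: low damage if coast hit, high if inland hit
        [7, 3],   # defend inland
        [5, 5],   # split coverage
    ]

    def minimax_row(matrix):
        """Index of the row whose worst-case (max) entry is smallest."""
        worst = [max(row) for row in matrix]
        return min(range(len(matrix)), key=worst.__getitem__)

    print(minimax_row(damage))  # 2: split coverage caps worst-case damage at 5
    ```

    Mixed (randomized) strategies generally do better still in simultaneous-move games, which is why reallocating existing sensors strategically can substitute for buying new ones.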

  13. Computer-assisted resilience training to prepare healthcare workers for pandemic influenza: a randomized trial of the optimal dose of training

    PubMed Central

    2010-01-01

    Background Working in a hospital during an extraordinary infectious disease outbreak can cause significant stress and contribute to healthcare workers choosing to reduce patient contact. Psychological training of healthcare workers prior to an influenza pandemic may reduce stress-related absenteeism, however, established training methods that change behavior and attitudes are too resource-intensive for widespread use. This study tests the feasibility and effectiveness of a less expensive alternative - an interactive, computer-assisted training course designed to build resilience to the stresses of working during a pandemic. Methods A "dose-finding" study compared pre-post changes in three different durations of training. We measured variables that are likely to mediate stress-responses in a pandemic before and after training: confidence in support and training, pandemic-related self-efficacy, coping style and interpersonal problems. Results 158 hospital workers took the course and were randomly assigned to the short (7 sessions, median cumulative duration 111 minutes), medium (12 sessions, 158 minutes) or long (17 sessions, 223 minutes) version. Using an intention-to-treat analysis, the course was associated with significant improvements in confidence in support and training, pandemic self-efficacy and interpersonal problems. Participants who under-utilized coping via problem-solving or seeking support or over-utilized escape-avoidance experienced improved coping. Comparison of doses showed improved interpersonal problems in the medium and long course but not in the short course. There was a trend towards higher drop-out rates with longer duration of training. Conclusions Computer-assisted resilience training in healthcare workers appears to be of significant benefit and merits further study under pandemic conditions. Comparing three "doses" of the course suggested that the medium course was optimal. PMID:20307302

  14. Sixty years of aeronautical research, 1917-1977. [Langley Research Center

    NASA Technical Reports Server (NTRS)

    Anderton, D. A.

    1978-01-01

The history of Langley Research Center and its contributions to solving problems related to flight over the past six decades is recounted. Technical innovations described include those related to aircraft construction materials, jet and rocket propulsion, flight testing and simulation, wind tunnel tests, noise reduction, supersonic flight, air traffic control, structural analysis, computational aerodynamics, and fuel efficiency.

  15. Concepts for Developing and Utilizing Crowdsourcing for Neurotechnology Advancement

    DTIC Science & Technology

    2013-05-01

understanding of brain function and related neuroimaging tools, which is mostly limited to highly trained neuroscientists and engineers who wish to...Included are some programmatic suggestions, as well as exemplar applications to fit this end goal. Subject terms: modular, EEG, neuroscience. ...neuroscience-related problems among professionals in other fields, such as engineering and computer science, utilizing this approach to inspire true

  16. Determinants of Information Behaviour and Information Literacy Related to Healthy Eating among Internet Users in Five European Countries

    ERIC Educational Resources Information Center

    Niedzwiedzka, Barbara; Mazzocchi, Mario; Aschemann-Witzel, Jessica; Gennaro, Laura; Verbeke, Wim; Traill, W. Bruce

    2014-01-01

    Introduction: This study investigates how Europeans seek information related to healthy eating, what determines their information seeking and whether any problems are encountered in doing so. Method: A survey was administered through computer-assisted on-line web-interviewing. Respondents were grouped by age and sex (n = 3003, age +16) in Belgium,…

  17. A model for the control mode man-computer interface dialogue

    NASA Technical Reports Server (NTRS)

    Chafin, R. L.

    1981-01-01

A four-stage model is presented for the control mode man-computer interface dialogue. It consists of context development, semantic development, syntactic development, and command execution. Each stage is discussed in terms of the operator skill levels (naive, novice, competent, and expert) and pertinent human factors issues. These issues are human problem solving, human memory, and schemata. The execution stage is discussed in terms of the operator's typing skills. This model provides an understanding of the human process in command mode activity for computer systems and a foundation for relating system characteristics to operator characteristics.

  18. Human performance models for computer-aided engineering

    NASA Technical Reports Server (NTRS)

    Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)

    1989-01-01

    This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.

  19. A new graph-based method for pairwise global network alignment

    PubMed Central

    Klau, Gunnar W

    2009-01-01

    Background In addition to component-based comparative approaches, network alignments provide the means to study conserved network topology such as common pathways and more complex network motifs. Yet, unlike in classical sequence alignment, the comparison of networks becomes computationally more challenging, as most meaningful assumptions instantly lead to NP-hard problems. Most previous algorithmic work on network alignments is heuristic in nature. Results We introduce the graph-based maximum structural matching formulation for pairwise global network alignment. We relate the formulation to previous work and prove NP-hardness of the problem. Based on the new formulation we build upon recent results in computational structural biology and present a novel Lagrangian relaxation approach that, in combination with a branch-and-bound method, computes provably optimal network alignments. The Lagrangian algorithm alone is a powerful heuristic method, which produces solutions that are often near-optimal and – unlike those computed by pure heuristics – come with a quality guarantee. Conclusion Computational experiments on the alignment of protein-protein interaction networks and on the classification of metabolic subnetworks demonstrate that the new method is reasonably fast and has advantages over pure heuristics. Our software tool is freely available as part of the LISA library. PMID:19208162

  20. Prospects for Finite-Difference Time-Domain (FDTD) Computational Electrodynamics

    NASA Astrophysics Data System (ADS)

    Taflove, Allen

    2002-08-01

FDTD is the most powerful numerical solution of Maxwell's equations for structures having internal details. Relative to moment-method and finite-element techniques, FDTD can accurately model such problems with 100 times more field unknowns and with nonlinear and/or time-variable parameters. Hundreds of FDTD theory and applications papers are published each year. Currently, there are at least 18 commercial FDTD software packages for solving problems in: defense (especially vulnerability to electromagnetic pulse and high-power microwaves); design of antennas and microwave devices/circuits; electromagnetic compatibility; bioelectromagnetics (especially assessment of cellphone-generated RF absorption in human tissues); signal integrity in computer interconnects; and design of micro-photonic devices (especially photonic bandgap waveguides, microcavities, and lasers). This paper explores emerging prospects for FDTD computational electromagnetics brought about by continuing advances in computer capabilities and FDTD algorithms. We conclude that advances already in place point toward the usage by 2015 of ultralarge-scale (up to 1E11 field unknowns) FDTD electromagnetic wave models covering the frequency range from about 0.1 Hz to 1E17 Hz. We expect that this will yield significant benefits for our society in areas as diverse as computing, telecommunications, defense, and public health and safety.
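The leapfrog update at the heart of every FDTD package is compact enough to sketch. The following is a minimal 1-D vacuum toy (normalized units, Courant number 1, arbitrary grid size and Gaussian source), illustrative of the update structure only and not of any commercial code mentioned in the record:

```python
import math

N, STEPS = 400, 100
ez = [0.0] * N          # electric field at integer grid points
hy = [0.0] * N          # magnetic field at staggered half points

for n in range(STEPS):
    for k in range(N - 1):
        hy[k] += ez[k + 1] - ez[k]          # Faraday-law half step
    for k in range(1, N):
        ez[k] += hy[k] - hy[k - 1]          # Ampere-law half step
    # soft Gaussian source injected at the grid midpoint
    ez[N // 2] += math.exp(-((n - 30) ** 2) / 100.0)

peak = max(abs(v) for v in ez)              # pulse amplitude after 100 steps
```

At the "magic" Courant number of 1 the 1-D scheme is dispersionless, which is why this toy propagates the pulse one cell per step without distortion.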

  1. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a computationally complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
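The core trick in the abstract, replacing the expensive forward model with a fast surrogate and folding the quantified modeling error into the likelihood, can be sketched in a scalar toy problem. Everything here (the surrogate, its bias term, the noise levels, the observed value) is a hypothetical stand-in, not the paper's neural network or GPR setup:

```python
import math, random

random.seed(1)

def forward_surrogate(m):
    # cheap approximation of the physics; the 0.05*m term plays the role
    # of a known surrogate bias (hypothetical)
    return m * m + 0.05 * m

d_obs = 4.0                        # toy "observed traveltime"
sigma_noise2 = 0.1 ** 2            # measurement-noise variance
sigma_model2 = 0.05 ** 2           # quantified surrogate modeling-error variance
sigma2 = sigma_noise2 + sigma_model2   # total variance used in the likelihood

def log_likelihood(m):
    r = d_obs - forward_surrogate(m)
    return -0.5 * r * r / sigma2

# Metropolis sampling of p(m | d) with a flat prior on [0, 5]
m, ll = 1.0, log_likelihood(1.0)
samples = []
for _ in range(20000):
    m_new = m + random.gauss(0.0, 0.2)
    if 0.0 <= m_new <= 5.0:
        ll_new = log_likelihood(m_new)
        if math.log(random.random()) < ll_new - ll:
            m, ll = m_new, ll_new
    samples.append(m)

post_mean = sum(samples[5000:]) / len(samples[5000:])   # discard burn-in
```

Because the surrogate is cheap, each likelihood evaluation is nearly free; the modeling error simply widens the likelihood instead of biasing the answer.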

  2. A Fresh Math Perspective Opens New Possibilities for Computational Chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vu, Linda; Govind, Niranjan; Yang, Chao

    2017-05-26

By reformulating the TDDFT problem as a matrix function approximation, making use of a special transformation and taking advantage of the underlying symmetry with respect to a non-Euclidean metric, Yang and his colleagues were able to apply the Lanczos algorithm and a Kernel Polynomial Method (KPM) to approximate the absorption spectrum of several molecules. Both of these algorithms require relatively little memory compared to non-symmetric alternatives, which is the key to the computational savings.
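The Lanczos idea behind this kind of spectrum approximation is that a symmetric matrix projected onto a Krylov subspace yields a small tridiagonal matrix T whose eigenvalues (Ritz values) approximate the spectrum, while only matrix-vector products and a few vectors are stored. A pure-Python toy on a 3x3 symmetric matrix (the matrix and starting vector are arbitrary, not from the TDDFT problem):

```python
import math, random

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def lanczos(A, v0, m):
    """Three-term Lanczos recurrence; returns diagonals of tridiagonal T."""
    n = len(v0)
    norm = math.sqrt(sum(x * x for x in v0))
    q = [x / norm for x in v0]
    q_prev = [0.0] * n
    alphas, betas = [], []
    beta = 0.0
    for _ in range(m):
        w = matvec(A, q)
        alpha = sum(wi * qi for wi, qi in zip(w, q))
        w = [wi - alpha * qi - beta * pi for wi, qi, pi in zip(w, q, q_prev)]
        alphas.append(alpha)
        beta = math.sqrt(sum(x * x for x in w))
        if beta < 1e-12:            # invariant subspace found
            break
        betas.append(beta)
        q_prev, q = q, [x / beta for x in w]
    return alphas, betas

random.seed(0)
A = [[3.0, 1.0, 0.0], [1.0, 2.0, 0.5], [0.0, 0.5, 1.0]]
alphas, betas = lanczos(A, [random.random() for _ in range(3)], 3)
# After a full run in exact arithmetic T is orthogonally similar to A,
# so the diagonal of T carries the same trace (here 3 + 2 + 1 = 6).
```

The low memory footprint noted in the record comes from this recurrence needing only the current and previous Lanczos vectors, not the full basis.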

  3. Nuclear Fuel Depletion Analysis Using Matlab Software

    NASA Astrophysics Data System (ADS)

    Faghihi, F.; Nematollahi, M. R.

Coupled first-order IVPs are frequently used in many parts of engineering and the sciences. In this article, we present a code comprising three computer programs, coupled with the Matlab software, that solve and plot the solutions of first-order coupled stiff or non-stiff IVPs. Some engineering and scientific problems related to IVPs are given, and fuel depletion (production of the 239Pu isotope) in a Pressurized Water Nuclear Reactor (PWR) is computed by the present code.
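A minimal analogue of such coupled depletion IVPs is a two-species decay chain, dN1/dt = -k1*N1 and dN2/dt = k1*N1 - k2*N2, which has a closed-form solution to check against. A sketch using classic RK4 (the rate constants are arbitrary, not PWR data):

```python
import math

k1, k2 = 1.0, 0.3      # hypothetical decay/transmutation rates

def deriv(n1, n2):
    return -k1 * n1, k1 * n1 - k2 * n2

def rk4_step(n1, n2, h):
    a1, a2 = deriv(n1, n2)
    b1, b2 = deriv(n1 + h * a1 / 2, n2 + h * a2 / 2)
    c1, c2 = deriv(n1 + h * b1 / 2, n2 + h * b2 / 2)
    d1, d2 = deriv(n1 + h * c1, n2 + h * c2)
    return (n1 + h * (a1 + 2 * b1 + 2 * c1 + d1) / 6,
            n2 + h * (a2 + 2 * b2 + 2 * c2 + d2) / 6)

n1, n2, h = 1.0, 0.0, 0.01
for _ in range(100):               # integrate to t = 1
    n1, n2 = rk4_step(n1, n2, h)

# Analytic check: N1(t) = exp(-k1 t); N2(t) = k1/(k2-k1) * (exp(-k1 t) - exp(-k2 t))
```

For genuinely stiff chains an implicit method would replace RK4, which is the stiff/non-stiff distinction the abstract draws.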

  4. Indirection and computer security.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berg, Michael J.

    2011-09-01

The discipline of computer science is built on indirection. David Wheeler famously said, 'All problems in computer science can be solved by another layer of indirection. But that usually will create another problem'. We propose that every computer security vulnerability is yet another problem created by the indirections in system designs and that focusing on the indirections involved is a better way to design, evaluate, and compare security solutions. We are not proposing that indirection be avoided when solving problems, but that understanding the relationships between indirections and vulnerabilities is key to securing computer systems. Using this perspective, we analyze common vulnerabilities that plague our computer systems, consider the effectiveness of currently available security solutions, and propose several new security solutions.

  5. Limits on efficient computation in the physical world

    NASA Astrophysics Data System (ADS)

    Aaronson, Scott Joel

More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence Ω(n^(1/5)) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs Ω(2^(n/4)/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible.
I find their arguments unconvincing without a "Sure/Shor separator"---a criterion that separates the already-verified quantum states from those that appear in Shor's factoring algorithm. I argue that such a separator should be based on a complexity classification of quantum states, and go on to create such a classification. Next I ask what happens to the quantum computing model if we take into account that the speed of light is finite---and in particular, whether Grover's algorithm still yields a quadratic speedup for searching a database. Refuting a claim by Benioff, I show that the surprising answer is yes. Finally, I analyze hypothetical models of computation that go even beyond quantum computing. I show that many such models would be as powerful as the complexity class PP, and use this fact to give a simple, quantum computing based proof that PP is closed under intersection. On the other hand, I also present one model---wherein we could sample the entire history of a hidden variable---that appears to be more powerful than standard quantum computing, but only slightly so.
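For contrast with the quantum query bound on the collision problem, the classical birthday-style strategy is easy to make concrete: sampling a two-to-one sequence at random positions finds a repeated value after roughly sqrt(n) queries. A toy sketch with an arbitrary n:

```python
import random

random.seed(4)
n = 10000
# a two-to-one sequence: each value in 0..n/2-1 appears exactly twice
two_to_one = list(range(n // 2)) * 2
random.shuffle(two_to_one)

seen, queries = set(), 0
for v in two_to_one:            # querying positions in random order
    queries += 1
    if v in seen:               # collision found: sequence is two-to-one
        break
    seen.add(v)
# Expected query count is Theta(sqrt(n)), i.e. on the order of 100-150 here.
```

A one-to-one sequence, by contrast, never produces a collision, so the number of queries before giving up is what separates the two cases classically.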

  6. Thermodynamics of natural selection III: Landauer's principle in computation and chemistry.

    PubMed

    Smith, Eric

    2008-05-21

    This is the third in a series of three papers devoted to energy flow and entropy changes in chemical and biological processes, and their relations to the thermodynamics of computation. The previous two papers have developed reversible chemical transformations as idealizations for studying physiology and natural selection, and derived bounds from the second law of thermodynamics, between information gain in an ensemble and the chemical work required to produce it. This paper concerns the explicit mapping of chemistry to computation, and particularly the Landauer decomposition of irreversible computations, in which reversible logical operations generating no heat are separated from heat-generating erasure steps which are logically irreversible but thermodynamically reversible. The Landauer arrangement of computation is shown to produce the same entropy-flow diagram as that of the chemical Carnot cycles used in the second paper of the series to idealize physiological cycles. The specific application of computation to data compression and error-correcting encoding also makes possible a Landauer analysis of the somewhat different problem of optimal molecular recognition, which has been considered as an information theory problem. It is shown here that bounds on maximum sequence discrimination from the enthalpy of complex formation, although derived from the same logical model as the Shannon theorem for channel capacity, arise from exactly the opposite model for erasure.
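The Landauer bound invoked in this abstract is easy to state numerically: erasing one bit dissipates at least k_B * T * ln 2 of heat, which is exactly where the logically irreversible erasure steps in the decomposition pay their thermodynamic cost. A minimal illustration (the temperature and data size are arbitrary choices):

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # room temperature, K

# Minimum heat per erased bit (about 2.87e-21 J at 300 K)
landauer_joules_per_bit = K_B * T * math.log(2)

# Minimum heat to erase one megabyte (8e6 bits) at 300 K
heat_mb = landauer_joules_per_bit * 8e6
```

The reversible logical operations in the Landauer arrangement generate no such heat; only the erasures do, which is what makes the entropy-flow diagram match the chemical Carnot cycles of the second paper.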

  7. Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP

    NASA Technical Reports Server (NTRS)

    Long, Lyle N.; Brentner, Kenneth S.

    2000-01-01

    This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but often they must be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but due to the number of instances that must be run, they do become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where someone would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is utilized to automate the noise computation for a large number of observers.
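The self-scheduling pattern described here, a pool of workers pulling independent serial cases until all are done, can be sketched compactly. The following uses a thread pool as a portable stand-in for the paper's parallel machines, and `solve_case` (a tiny 2x2 "panel" solve per angle of attack, by Cramer's rule) is a hypothetical miniature of one serial run:

```python
from concurrent.futures import ThreadPoolExecutor

def solve_case(alpha_deg):
    # hypothetical stand-in for one serial job: solve
    # [[2, 1], [1, 3]] x = [alpha, 1] by Cramer's rule
    det = 2 * 3 - 1 * 1
    x0 = (alpha_deg * 3 - 1 * 1) / det
    x1 = (2 * 1 - alpha_deg * 1) / det
    return alpha_deg, (x0, x1)

angles = list(range(16))                 # 16 independent cases
with ThreadPoolExecutor(max_workers=4) as pool:
    # workers pick up cases as they finish previous ones (self-scheduling)
    results = dict(pool.map(solve_case, angles))
```

Because each case is independent, load balancing comes for free: a worker that finishes early simply takes the next undone case, which is the whole point of the scheme for many-observer WOPWOP runs.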

  8. Solutions of the Taylor-Green Vortex Problem Using High-Resolution Explicit Finite Difference Methods

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2013-01-01

A computational fluid dynamics code that solves the compressible Navier-Stokes equations was applied to the Taylor-Green vortex problem to examine the code's ability to accurately simulate the vortex decay and subsequent turbulence. The code, WRLES (Wave Resolving Large-Eddy Simulation), uses explicit central-differencing to compute the spatial derivatives and explicit Low Dispersion Runge-Kutta methods for the temporal discretization. The flow was first studied and characterized using Bogey & Bailly's 13-point dispersion relation preserving (DRP) scheme. The kinetic energy dissipation rate, computed both directly and from the enstrophy field, vorticity contours, and the energy spectra are examined. Results are in excellent agreement with a reference solution obtained using a spectral method and provide insight into computations of turbulent flows. In addition, the following studies were performed: a comparison of 4th-, 8th-, and 12th-order and DRP spatial differencing schemes, the effect of solution filtering on the results, the effect of large-eddy simulation sub-grid scale models, and the effect of high-order discretization of the viscous terms.
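The kinetic-energy diagnostic mentioned in this abstract has a clean analytic check in the 2-D version of the problem: for the Taylor-Green initial field u = cos x sin y, v = -sin x cos y on a periodic box, the volume-averaged kinetic energy is exactly 1/4. A sketch of computing it on a uniform grid (grid size arbitrary):

```python
import math

N = 64
h = 2 * math.pi / N
ke = 0.0
for i in range(N):
    for j in range(N):
        x, y = i * h, j * h
        u = math.cos(x) * math.sin(y)
        v = -math.sin(x) * math.cos(y)
        ke += 0.5 * (u * u + v * v)      # pointwise kinetic energy
ke_mean = ke / (N * N)                   # volume average; analytically 1/4
```

Tracking how this quantity decays in time, and comparing its decay rate with the rate inferred from the enstrophy field, is the consistency check the study applies to the 3-D simulations.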

  9. Accelerating nuclear configuration interaction calculations through a preconditioned block iterative eigensolver

    NASA Astrophysics Data System (ADS)

    Shao, Meiyue; Aktulga, H. Metin; Yang, Chao; Ng, Esmond G.; Maris, Pieter; Vary, James P.

    2018-01-01

    We describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. We also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.
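The benefit of a block method over single-vector Lanczos can be illustrated with the simplest member of the family: subspace (block power) iteration, which updates several eigenvector guesses at once and re-orthonormalizes them each sweep. This is a simplified sketch of the block idea only, not the paper's preconditioned solver, and the small matrix is arbitrary:

```python
import math

A = [[5.0, 2.0, 0.0], [2.0, 2.0, 0.0], [0.0, 0.0, 1.0]]   # eigenvalues 6, 1, 1

def matvec(v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def orthonormalize(vs):
    """Gram-Schmidt QR of the block."""
    out = []
    for v in vs:
        for u in out:
            dot = sum(a * b for a, b in zip(v, u))
            v = [a - dot * b for a, b in zip(v, u)]
        norm = math.sqrt(sum(x * x for x in v))
        out.append([x / norm for x in v])
    return out

block = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]     # two starting guesses
for _ in range(100):
    block = orthonormalize([matvec(v) for v in block])

# Rayleigh quotients approximate the two dominant eigenvalues (6 and 1)
rq = [sum(x * y for x, y in zip(v, matvec(v))) for v in block]
```

Applying A to the whole block at once is what raises arithmetic intensity relative to data movement; good starting guesses and a preconditioner, as the paper discusses, are what make the convergence competitive with Lanczos.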

  10. Cloud Computing for Complex Performance Codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin

This report describes the use of cloud computing services for running complex public-domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure; Phase 2 focused on proving that non-trivial, large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  11. A Volunteer Computing Project for Solving Geoacoustic Inversion Problems

    NASA Astrophysics Data System (ADS)

    Zaikin, Oleg; Petrov, Pavel; Posypkin, Mikhail; Bulavintsev, Vadim; Kurochkin, Ilya

    2017-12-01

    A volunteer computing project aimed at solving computationally hard inverse problems in underwater acoustics is described. This project was used to study the possibilities of the sound speed profile reconstruction in a shallow-water waveguide using a dispersion-based geoacoustic inversion scheme. The computational capabilities provided by the project allowed us to investigate the accuracy of the inversion for different mesh sizes of the sound speed profile discretization grid. This problem suits well for volunteer computing because it can be easily decomposed into independent simpler subproblems.
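The decomposition property that makes the problem suit volunteer computing can be shown in miniature: a grid search over a sound-speed parameter splits into independent chunks that separate hosts could evaluate in isolation, with a final reduction on the server. The one-parameter misfit below is a hypothetical stand-in for the dispersion-based inversion:

```python
def misfit(c):
    # hypothetical misfit with its minimum at c = 1500 m/s
    return (c - 1500.0) ** 2

grid = [1400.0 + i for i in range(201)]      # candidate sound speeds, m/s
chunks = [grid[i::4] for i in range(4)]      # 4 independent subproblems

# each "volunteer" minimizes over its own chunk, with no communication...
partial = [min(chunk, key=misfit) for chunk in chunks]
# ...and the server reduces the partial answers to the global best
best = min(partial, key=misfit)
```

Finer discretization grids simply mean more chunks, which is why the project could trade volunteer capacity directly for inversion accuracy.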

  12. The application of generalized, cyclic, and modified numerical integration algorithms to problems of satellite orbit computation

    NASA Technical Reports Server (NTRS)

    Chesler, L.; Pierce, S.

    1971-01-01

    Generalized, cyclic, and modified multistep numerical integration methods are developed and evaluated for application to problems of satellite orbit computation. Generalized methods are compared with the presently utilized Cowell methods; new cyclic methods are developed for special second-order differential equations; and several modified methods are developed and applied to orbit computation problems. Special computer programs were written to generate coefficients for these methods, and subroutines were written which allow use of these methods with NASA's GEOSTAR computer program.
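A minimal multistep example in the spirit of the methods compared above is the two-step Adams-Bashforth rule, y_{n+1} = y_n + h*(3 f_n - f_{n-1})/2, which reuses the previous derivative evaluation instead of recomputing stages, exactly the economy that matters when a force model is expensive. A sketch on y' = -y (a toy, not an orbit model), bootstrapped with one Heun step:

```python
import math

def f(y):
    return -y

h, y0 = 0.01, 1.0
# one Heun (RK2) step to obtain the second starting value y1
k1 = f(y0)
k2 = f(y0 + h * k1)
y1 = y0 + h * (k1 + k2) / 2

ys = [y0, y1]
for n in range(1, 100):                 # advance to t = 1
    y_next = ys[n] + h * (3 * f(ys[n]) - f(ys[n - 1])) / 2
    ys.append(y_next)

err = abs(ys[100] - math.exp(-1.0))     # second-order accurate: err = O(h^2)
```

The generalized and cyclic variants studied in the report tune the coefficients of exactly this kind of recurrence, and the special second-order forms exploit that orbital equations come as y'' = f(y).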

  13. Learning of state-space models with highly informative observations: A tempered sequential Monte Carlo solution

    NASA Astrophysics Data System (ADS)

    Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik

    2018-05-01

Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present, relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p(data | parameters). To this end we suggest an algorithm which initially assumes that there is a substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or possibly a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC2 method. Another natural link is also made to the ideas underlying approximate Bayesian computation (ABC). We illustrate it with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.
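The tempering idea, treat a (nearly) noiseless observation with an artificial noise variance that shrinks over SMC stages, reweighting, resampling, and jittering a particle population at each stage, can be sketched in a scalar toy. This is only the annealing skeleton with arbitrary numbers, not the paper's full state-space SMC sampler:

```python
import math, random

random.seed(2)
y_obs = 2.0                      # noiseless observation of theta (toy model y = theta)

# diffuse initial particles and a large starting artificial noise
particles = [random.uniform(-5.0, 5.0) for _ in range(2000)]
for sigma in [2.0, 1.0, 0.5, 0.25, 0.1]:          # annealed noise schedule
    # reweight under the current artificial Gaussian noise level
    w = [math.exp(-0.5 * ((y_obs - th) / sigma) ** 2) for th in particles]
    # multinomial resampling proportional to weight
    particles = random.choices(particles, weights=w, k=len(particles))
    # jitter (move step) to restore particle diversity
    particles = [th + random.gauss(0.0, sigma / 4) for th in particles]

est = sum(particles) / len(particles)    # posterior mean estimate, near 2
```

With a fixed small noise from the start, almost all initial particles would receive negligible weight; the decreasing schedule is what keeps the population alive, which is the problem the abstract describes for highly informative observations.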

  14. A comparison of several methods of solving nonlinear regression groundwater flow problems

    USGS Publications Warehouse

    Cooley, Richard L.

    1985-01-01

Computational efficiency and computer memory requirements for four methods of minimizing functions were compared for four test nonlinear-regression steady state groundwater flow problems. The fastest methods were the Marquardt and quasi-linearization methods, which required almost identical computer times and numbers of iterations; the next fastest was the quasi-Newton method, and last was the Fletcher-Reeves method, which did not converge in 100 iterations for two of the problems. The fastest method per iteration was the Fletcher-Reeves method, and this was followed closely by the quasi-Newton method. The Marquardt and quasi-linearization methods were slower. For all four methods the speed per iteration was directly related to the number of parameters in the model. However, this effect was much more pronounced for the Marquardt and quasi-linearization methods than for the other two. Hence the quasi-Newton (and perhaps Fletcher-Reeves) method might be more efficient than either the Marquardt or quasi-linearization methods if the number of parameters in a particular model were large, although this remains to be proven. The Marquardt method required somewhat less central memory than the quasi-linearization method for three of the four problems. For all four problems the quasi-Newton method required roughly two thirds to three quarters of the memory required by the Marquardt method, and the Fletcher-Reeves method required slightly less memory than the quasi-Newton method. Memory requirements were not excessive for any of the four methods.
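The trade-off in this comparison, few expensive iterations (Marquardt family) versus many cheap ones (gradient-based methods), shows up even in a one-parameter toy. Marquardt's method with zero damping reduces to Gauss-Newton, so the sketch below contrasts Gauss-Newton with plain gradient descent (a simplified stand-in for the gradient-based family, not Fletcher-Reeves itself) on fitting y = exp(-b x), counting iterations to convergence:

```python
import math

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(-0.8 * x) for x in xs]    # noise-free data from b_true = 0.8

def grad_terms(b):
    # residuals r = y - exp(-b x); Jacobian J = dr/db = x * exp(-b x)
    g = jj = 0.0
    for x, y in zip(xs, ys):
        e = math.exp(-b * x)
        r, j = y - e, x * e
        g += j * r                 # J^T r
        jj += j * j                # J^T J
    return g, jj

def fit(gauss_newton, eta=0.05):
    b = 0.1
    for iters in range(1, 10001):
        g, jj = grad_terms(b)
        step = -g / jj if gauss_newton else -eta * g
        b += step
        if abs(step) < 1e-12:
            break
    return b, iters

b_gn, it_gn = fit(True)     # few iterations, each needs J^T J
b_gd, it_gd = fit(False)    # hundreds of cheap iterations
```

Both reach the same answer; the per-iteration cost of forming and inverting the J^T J system is what grows with parameter count, mirroring the paper's observation about the Marquardt and quasi-linearization methods.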

  15. Computational neuroscience across the lifespan: Promises and pitfalls.

    PubMed

    van den Bos, Wouter; Bruckner, Rasmus; Nassar, Matthew R; Mata, Rui; Eppinger, Ben

    2017-10-13

In recent years, the application of computational modeling in studies on age-related changes in decision making and learning has gained in popularity. One advantage of computational models is that they provide access to latent variables that cannot be directly observed from behavior. In combination with experimental manipulations, these latent variables can help to test hypotheses about age-related changes in behavioral and neurobiological measures at a level of specificity that is not achievable with descriptive analysis approaches alone. This level of specificity can in turn be beneficial to establish the identity of the corresponding behavioral and neurobiological mechanisms. In this paper, we will illustrate applications of computational methods using examples of lifespan research on risk taking, strategy selection and reinforcement learning. We will elaborate on problems that can occur when computational neuroscience methods are applied to data of different age groups. Finally, we will discuss potential targets for future applications and outline general shortcomings of computational neuroscience methods for research on human lifespan development. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
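The "latent variable" point is concrete in the reinforcement-learning case: a delta-rule (Rescorla-Wagner) learner's learning rate alpha is never directly observable in choice data, yet it governs how fast the value estimate tracks reward, and fitting alpha per participant is how such models compare age groups. A minimal simulation (the reward probability and alpha are arbitrary):

```python
import random

random.seed(3)
alpha = 0.1              # latent learning rate (the fitted quantity in studies)
p_reward = 0.8           # true reward probability of the chosen option
v = 0.0                  # value estimate, updated by prediction error
for _ in range(2000):
    r = 1.0 if random.random() < p_reward else 0.0
    v += alpha * (r - v)         # delta-rule update
# v fluctuates around p_reward, with spread set by alpha
```

Two simulated learners with different alpha produce different adaptation speeds from identical reward schedules, which is exactly the kind of mechanism-level difference the descriptive analyses mentioned in the abstract cannot isolate.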

  16. Computer-aided design of antenna structures and components

    NASA Technical Reports Server (NTRS)

    Levy, R.

    1976-01-01

    This paper discusses computer-aided design procedures for antenna reflector structures and related components. The primary design aid is a computer program that establishes cross sectional sizes of the structural members by an optimality criterion. Alternative types of deflection-dependent objectives can be selected for designs subject to constraints on structure weight. The computer program has a special-purpose formulation to design structures of the type frequently used for antenna construction. These structures, in common with many in other areas of application, are represented by analytical models that employ only the three translational degrees of freedom at each node. The special-purpose construction of the program, however, permits coding and data management simplifications that provide advantages in problem size and execution speed. Size and speed are essentially governed by the requirements of structural analysis and are relatively unaffected by the added requirements of design. Computation times to execute several design/analysis cycles are comparable to the times required by general-purpose programs for a single analysis cycle. Examples in the paper illustrate effective design improvement for structures with several thousand degrees of freedom and within reasonable computing times.
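One classical optimality criterion of the kind the program applies is "fully stressed design": each member's cross-sectional area is rescaled by the ratio of its computed stress to the allowable stress until the areas stop changing. A toy two-member sketch with hypothetical loads (for constant member forces, i.e. a statically determinate case, it converges immediately to area = force/allowable):

```python
allowable = 100.0                     # allowable stress (hypothetical units)
forces = [5000.0, 1200.0]             # constant member forces (toy)
areas = [1.0, 1.0]                    # initial cross-sectional areas

for _ in range(50):
    stresses = [f / a for f, a in zip(forces, areas)]
    # optimality-criterion resizing: scale area by stress/allowable
    areas = [a * s / allowable for a, s in zip(areas, stresses)]
# converged members carry exactly the allowable stress
```

In an indeterminate structure the member forces themselves depend on the areas, so each resizing sweep is followed by a reanalysis, which is the design/analysis cycle whose cost the paper reports.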

  17. Evoking Knowledge and Information Awareness for Enhancing Computer-Supported Collaborative Problem Solving

    ERIC Educational Resources Information Center

    Engelmann, Tanja; Tergan, Sigmar-Olaf; Hesse, Friedrich W.

    2010-01-01

    Computer-supported collaboration by spatially distributed group members still involves interaction problems within the group. This article presents an empirical study investigating the question of whether computer-supported collaborative problem solving by spatially distributed group members can be fostered by evoking knowledge and information…

  18. Students' Activity in Computer-Supported Collaborative Problem Solving in Mathematics

    ERIC Educational Resources Information Center

    Hurme, Tarja-riitta; Jarvela, Sanna

    2005-01-01

    The purpose of this study was to analyse secondary school students' (N = 16) computer-supported collaborative mathematical problem solving. The problem addressed in the study was: What kinds of metacognitive processes appear during computer-supported collaborative learning in mathematics? Another aim of the study was to consider the applicability…

  19. Predicting protein structures with a multiplayer online game.

    PubMed

    Cooper, Seth; Khatib, Firas; Treuille, Adrien; Barbero, Janos; Lee, Jeehyung; Beenen, Michael; Leaver-Fay, Andrew; Baker, David; Popović, Zoran; Players, Foldit

    2010-08-05

    People exert large amounts of problem-solving effort playing computer games. Simple image- and text-recognition tasks have been successfully 'crowd-sourced' through games, but it is not clear if more complex scientific problems can be solved with human-directed computing. Protein structure prediction is one such problem: locating the biologically relevant native conformation of a protein is a formidable computational challenge given the very large size of the search space. Here we describe Foldit, a multiplayer online game that engages non-scientists in solving hard prediction problems. Foldit players interact with protein structures using direct manipulation tools and user-friendly versions of algorithms from the Rosetta structure prediction methodology, while they compete and collaborate to optimize the computed energy. We show that top-ranked Foldit players excel at solving challenging structure refinement problems in which substantial backbone rearrangements are necessary to achieve the burial of hydrophobic residues. Players working collaboratively develop a rich assortment of new strategies and algorithms; unlike computational approaches, they explore not only the conformational space but also the space of possible search strategies. The integration of human visual problem-solving and strategy development capabilities with traditional computational algorithms through interactive multiplayer games is a powerful new approach to solving computationally-limited scientific problems.

  20. Gambling-Related Problems as a Mediator Between Treatment and Mental Health with At-Risk College Student Gamblers.

    PubMed

    Geisner, Irene Markman; Bowen, Sarah; Lostutter, Ty W; Cronce, Jessica M; Granato, Hollie; Larimer, Mary E

    2015-09-01

    Disordered gambling has been linked to increased negative affect, and some promising treatments have been shown to be effective at reducing gambling behaviors and related problems (Larimer et al. in Addiction 107:1148-1158, 2012). The current study seeks to expand upon the findings of Larimer et al. (Addiction 107:1148-1158, 2012) by examining the relationship between gambling-related problems and mental health symptoms in college students. Specifically, the three-group design tested the effects of two brief interventions for gambling—an individual, in-person personalized feedback intervention (PFI) delivered using motivational interviewing and group-based cognitive behavioral therapy, versus assessment only on mood outcomes. The mediating effect of gambling-related problems on mood was also explored. Participants (N = 141; 65% men; 60% Caucasian, 28% Asian) were at-risk college student gamblers [South Oaks Gambling Screen (Lesieur and Blume in Am J Psychiatry 144:1184-1188, 1987) ≥3], assessed at baseline and 6-month follow-up. Gambling problems were assessed using the Gambling Problems Index (Neighbors et al. in J Gamb Stud 18:339-360, 2002). Mental health symptoms were assessed using the depression, anxiety, and hostility subscales of the Brief Symptom Inventory (Derogatis in Brief Symptom Inventory (BSI): administration, scoring, and procedures manual, National Computer Systems, Inc., Minneapolis, 1993). Results revealed that the PFI condition differentially reduced negative mood, and that reductions in gambling-related problems partially mediated this effect. Implications for intervention for comorbid mood and gambling disorders are discussed.

  1. Replicating the benefits of Deutschian closed timelike curves without breaking causality

    NASA Astrophysics Data System (ADS)

    Yuan, Xiao; Assad, Syed M.; Thompson, Jayne; Haw, Jing Yan; Vedral, Vlatko; Ralph, Timothy C.; Lam, Ping Koy; Weedbrook, Christian; Gu, Mile

    2015-11-01

    In general relativity, closed timelike curves can break causality with remarkable and unsettling consequences. At the classical level, they induce causal paradoxes disturbing enough to motivate conjectures that explicitly prevent their existence. At the quantum level such problems can be resolved through the Deutschian formalism; however, this resolution brings radical benefits, from cloning unknown quantum states to solving problems intractable to quantum computers. Instinctively, one expects these benefits to vanish if causality is respected. Here we show that in harnessing entanglement, we can efficiently solve NP-complete problems and clone arbitrary quantum states, even when all time-travelling systems are completely isolated from the past. Thus, the many defining benefits of Deutschian closed timelike curves can still be harnessed, even when causality is preserved. Our results unveil a subtle interplay between entanglement and general relativity, and significantly improve the potential of probing the radical effects that may exist at the interface between relativity and quantum theory.

  2. Protecting software agents from malicious hosts using quantum computing

    NASA Astrophysics Data System (ADS)

    Reisner, John; Donkor, Eric

    2000-07-01

    We evaluate how quantum computing can be applied to security problems for software agents. Agent-based computing, which merges technological advances in artificial intelligence and mobile computing, is a rapidly growing domain, especially in applications such as electronic commerce, network management, information retrieval, and mission planning. System security is one of the more prominent research areas in agent-based computing, and the specific problem of protecting a mobile agent from a potentially hostile host is one of the most difficult of these challenges. In this work, we describe our agent model, and discuss the capabilities and limitations of classical solutions to the malicious host problem. Quantum computing may be extremely helpful in addressing the limitations of classical solutions to this problem. This paper highlights some of the areas where quantum computing could be applied to agent security.

  3. A study of selected radiation and propagation problems related to antennas and probes in magneto-ionic media

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Research consisted of computations toward the solution of the problem of the current distribution on a cylindrical antenna in a magnetoplasma. The case of an antenna parallel to the applied magnetic field was investigated. A systematic method of asymptotic expansion was found which simplifies the solution in the general case by giving the field of a dipole even at relatively short range. Some useful properties of the dispersion surfaces in a lossy medium have also been found. A laboratory experiment was directed toward evaluating nonlinear effects, such as those due to power level, bias voltage and electron heating. The problem of reflection and transmission of waves in an electron-heated plasma was treated theoretically. The profile inversion problem has been pursued. Some results are very encouraging; however, the general question of the stability of the solution remains unsolved.

  4. P- and S-wave Receiver Function Imaging with Scattering Kernels

    NASA Astrophysics Data System (ADS)

    Hansen, S. M.; Schmandt, B.

    2017-12-01

    Full waveform inversion provides a flexible approach to the seismic parameter estimation problem and can account for the full physics of wave propagation using numeric simulations. However, this approach requires significant computational resources due to the demanding nature of solving the forward and adjoint problems. This issue is particularly acute for temporary passive-source seismic experiments (e.g. PASSCAL) that have traditionally relied on teleseismic earthquakes as sources resulting in a global scale forward problem. Various approximation strategies have been proposed to reduce the computational burden such as hybrid methods that embed a heterogeneous regional scale model in a 1D global model. In this study, we focus specifically on the problem of scattered wave imaging (migration) using both P- and S-wave receiver function data. The proposed method relies on body-wave scattering kernels that are derived from the adjoint data sensitivity kernels which are typically used for full waveform inversion. The forward problem is approximated using ray theory yielding a computationally efficient imaging algorithm that can resolve dipping and discontinuous velocity interfaces in 3D. From the imaging perspective, this approach is closely related to elastic reverse time migration. An energy stable finite-difference method is used to simulate elastic wave propagation in a 2D hypothetical subduction zone model. The resulting synthetic P- and S-wave receiver function datasets are used to validate the imaging method. The kernel images are compared with those generated by the Generalized Radon Transform (GRT) and Common Conversion Point stacking (CCP) methods. These results demonstrate the potential of the kernel imaging approach to constrain lithospheric structure in complex geologic environments with sufficiently dense recordings of teleseismic data. 
This is demonstrated using a receiver function dataset from the Central California Seismic Experiment which shows several dipping interfaces related to the tectonic assembly of this region. Figure 1. Scattering kernel examples for three receiver function phases. A) direct P-to-s (Ps), B) direct S-to-p and C) free-surface PP-to-s (PPs).

  5. Asymptotic density and effective negligibility

    NASA Astrophysics Data System (ADS)

    Astor, Eric P.

    In this thesis, we join the study of asymptotic computability, a project attempting to capture the idea that an algorithm might work correctly in all but a vanishing fraction of cases. In collaboration with Hirschfeldt and Jockusch, broadening the original investigation of Jockusch and Schupp, we introduce dense computation, the weakest notion of asymptotic computability (requiring only that the correct answer is produced on a set of density 1), and effective dense computation, where every computation halts with either the correct answer or (on a set of density 0) a symbol denoting uncertainty. A few results make more precise the relationship between these notions and work already done with Jockusch and Schupp's original definitions of coarse and generic computability. For all four types of asymptotic computation, including generic computation, we demonstrate that non-trivial upper cones have measure 0, building on recent work of Hirschfeldt, Jockusch, Kuyper, and Schupp in which they establish this for coarse computation. Their result transfers to yield a minimal pair for relative coarse computation; we generalize their method and extract a similar result for relative dense computation (and thus for its corresponding reducibility). However, all of these notions of near-computation treat a set as negligible iff it has asymptotic density 0. This definition, however, is not computably invariant, which produces some failures of intuition and a break with standard expectations in computability theory. For instance, as shown by Hamkins and Miasnikov, the halting problem is (in some formulations) effectively densely computable, even in polynomial time, yet this result appears fragile, as indicated by Rybalov.
In independent work, we respond to this by strengthening the approach of Jockusch and Schupp to avoid such phenomena; specifically, we introduce a new notion of intrinsic asymptotic density, invariant under computable permutation, with rich relations to both randomness and classical computability theory. For instance, we prove that the stochasticities corresponding to permutation randomness and injection randomness coincide, and identify said stochasticity as intrinsic density 1/2. We then define sets of intrinsic density 0 to be effectively negligible, and classify this as a new immunity property, determining its position in the standard hierarchy from immune to cohesive for both general and Delta^0_2 sets. We further characterize the Turing degrees of effectively negligible sets as those which are either high (a' ≥_T 0'') or compute a DNC (diagonally non-computable) function. In fact, this result holds over RCA_0, demonstrating the reverse-mathematical equivalence of the principles ID_0 and DOM \sext DNR. Replacing Jockusch and Schupp's negligibility (density 0) by effective negligibility (intrinsic density 0), we then obtain new notions of intrinsically dense computation. Finally, we generalize Rice's Theorem to all forms of intrinsic dense computation, showing that no set that is 1-equivalent to a non-trivial index set is intrinsically densely computable; in particular, in contrast to ordinary dense computation, we see that the halting problem cannot be intrinsically densely computable.
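
    The density notions at work in this record are easy to make concrete. A minimal sketch (the sets and the cutoff below are chosen for illustration, not taken from the thesis): the partial density of a set A below n is |A ∩ {0, ..., n-1}|/n, and the asymptotic density is its limit when it exists.

```python
# Partial density of a set A of naturals: the fraction of
# {0, ..., n-1} belonging to A.  Asymptotic density is the limit of
# this ratio as n grows; "dense computation" tolerates wrong or
# undefined answers only on a set of density 0.

def partial_density(in_A, n):
    """Fraction of {0, ..., n-1} that belongs to the set A."""
    return sum(1 for k in range(n) if in_A(k)) / n

def is_even(k):
    return k % 2 == 0

def is_square(k):
    r = int(k ** 0.5)
    return r * r == k

print(partial_density(is_even, 10_000))    # 0.5: the evens have density 1/2
print(partial_density(is_square, 10_000))  # 0.01: the squares have density 0
```

    An algorithm allowed to fail only on the perfect squares would thus still be "densely" correct; the thesis's point is that a computable permutation can rearrange which sets look negligible, motivating the permutation-invariant notion of intrinsic density.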

  6. Numerical Computation of Sensitivities and the Adjoint Approach

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael

    1997-01-01

    We discuss the numerical computation of sensitivities via the adjoint approach in optimization problems governed by differential equations. We focus on the adjoint problem in its weak form. We show how one can avoid some of the problems with the adjoint approach, such as deriving suitable boundary conditions for the adjoint equation. We discuss the convergence of numerical approximations of the costate computed via the weak form of the adjoint problem and show the significance for the discrete adjoint problem.
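    The adjoint idea this abstract describes can be illustrated on a toy finite-dimensional analogue (a 2x2 linear system rather than a differential equation; the matrix, data, and objective below are invented): for A(p)u = b with objective J = c·u, a single adjoint solve A(p)^T λ = c yields the sensitivity dJ/dp = -λ·((dA/dp)u) without differentiating the forward solution.

```python
# Toy adjoint-based sensitivity (not the paper's PDE setting):
# forward problem  A(p) u = b,  objective  J = c . u,
# adjoint problem  A(p)^T lambda = c,
# sensitivity      dJ/dp = -lambda . ((dA/dp) u)   (b independent of p).

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

p = 3.0
A = [[p, 1.0], [0.0, 2.0]]        # dA/dp = [[1, 0], [0, 0]]
b = [1.0, 1.0]
c = [1.0, 0.0]

u = solve2(A, b)                  # forward solve
lam = solve2(transpose(A), c)     # adjoint solve
dAdp_u = [u[0], 0.0]              # (dA/dp) u
dJdp_adjoint = -(lam[0] * dAdp_u[0] + lam[1] * dAdp_u[1])

# Finite-difference check of the same sensitivity.
eps = 1e-6
u_eps = solve2([[p + eps, 1.0], [0.0, 2.0]], b)
dJdp_fd = (u_eps[0] - u[0]) / eps
print(dJdp_adjoint, dJdp_fd)      # both near -0.5 / p**2
```

    The payoff, as in the article's setting, is that one adjoint solve gives the sensitivity with respect to every parameter at once, whereas finite differences need one extra forward solve per parameter.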

  7. SAMSAN- MODERN NUMERICAL METHODS FOR CLASSICAL SAMPLED SYSTEM ANALYSIS

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1994-01-01

    SAMSAN was developed to aid the control system analyst by providing a self-consistent set of computer algorithms that support large order control system design and evaluation studies, with an emphasis placed on sampled system analysis. Control system analysts have access to a vast array of published algorithms to solve an equally large spectrum of controls related computational problems. The analyst usually spends considerable time and effort bringing these published algorithms to an integrated operational status and often finds them less general than desired. SAMSAN reduces the burden on the analyst by providing a set of algorithms that have been well tested and documented, and that can be readily integrated for solving control system problems. Algorithm selection for SAMSAN has been biased toward numerical accuracy for large order systems with computational speed and portability being considered important but not paramount. In addition to containing relevant subroutines from EISPACK for eigen-analysis and from LINPACK for the solution of linear systems and related problems, SAMSAN contains the following not so generally available capabilities: 1) Reduction of a real non-symmetric matrix to block diagonal form via a real similarity transformation matrix which is well conditioned with respect to inversion, 2) Solution of the generalized eigenvalue problem with balancing and grading, 3) Computation of all zeros of the determinant of a matrix of polynomials, 4) Matrix exponentiation and the evaluation of integrals involving the matrix exponential, with option to first block diagonalize, 5) Root locus and frequency response for single variable transfer functions in the S, Z, and W domains, 6) Several methods of computing zeros for linear systems, and 7) The ability to generate documentation "on demand". All matrix operations in the SAMSAN algorithms assume non-symmetric matrices with real double precision elements.
There is no fixed size limit on any matrix in any SAMSAN algorithm; however, it is generally agreed by experienced users, and in the numerical error analysis literature, that computation with non-symmetric matrices of order greater than about 200 should be avoided or treated with extreme care. SAMSAN attempts to support the needs of application oriented analysis by providing: 1) a methodology with unlimited growth potential, 2) a methodology to ensure that associated documentation is current and available "on demand", 3) a foundation of basic computational algorithms that most controls analysis procedures are based upon, 4) a set of check out and evaluation programs which demonstrate usage of the algorithms on a series of problems which are structured to expose the limits of each algorithm's applicability, and 5) capabilities which support both a priori and a posteriori error analysis for the computational algorithms provided. The SAMSAN algorithms are coded in FORTRAN 77 for batch or interactive execution and have been implemented on a DEC VAX computer under VMS 4.7. An effort was made to assure that the FORTRAN source code was portable and thus SAMSAN may be adaptable to other machine environments. The documentation is included on the distribution tape or can be purchased separately at the price below. SAMSAN version 2.0 was developed in 1982 and updated to version 3.0 in 1988.
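
    Capability 4 above, matrix exponentiation, can be sketched in a few lines. This is a naive truncated Taylor series for a small matrix, not SAMSAN's algorithm (which can block-diagonalize first; a plain series is numerically unreliable for matrices of large norm):

```python
# Naive matrix exponential exp(M) = sum_k M^k / k!  via a truncated
# Taylor series, for illustration only (SAMSAN offers block
# diagonalization first, which is far more robust).

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def expm(M, terms=30):
    n = len(M)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I
    term = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        term = mat_mul(term, M)                        # M^k ...
        term = [[x / k for x in row] for row in term]  # ... / k!
        result = mat_add(result, term)
    return result

# The exponential of a diagonal matrix exponentiates each entry.
E = expm([[0.0, 0.0], [0.0, 1.0]])
print(E[1][1])  # close to e = 2.71828...
```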

  8. Job Skills of the Financial Aid Professional.

    ERIC Educational Resources Information Center

    Heist, Vali

    2002-01-01

    Describes the skills practiced by student financial aid professionals which are valued by all employers, including problem solving, human relations, computer programming, teaching/training, information management, money management, business management, and science and math. Also describes how to develop skills outside of the office. (EV)

  9. Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, V. N.; Toussaint, U. V.; Timucin, D. A.

    2002-01-01

    We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with a slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.
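
    For context, the Number Partitioning problem the algorithm targets asks for a split of a multiset of integers into two subsets whose sums are as close as possible. A brute-force sketch (illustrative only; it enumerates all 2^n subsets, which is exactly the exponential search space the quantum algorithm confronts):

```python
# Number Partitioning: split a multiset of integers into two subsets
# minimizing the residue |sum(S) - sum(complement of S)|.
# Brute force over all subsets -- feasible only for small n.

from itertools import combinations

def best_residue(nums):
    total = sum(nums)
    best = total
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            best = min(best, abs(total - 2 * sum(subset)))
    return best

print(best_residue([3, 1, 1, 2, 2, 1]))  # 0: e.g. {3, 2} vs {1, 1, 2, 1}
```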

  10. Definition and solution of a stochastic inverse problem for the Manning’s n parameter field in hydrodynamic models

    DOE PAGES

    Butler, Troy; Graham, L.; Estep, D.; ...

    2015-02-03

    The uncertainty in spatially heterogeneous Manning’s n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure, and the physics-based model considered here is the state-of-the-art ADCIRC model, although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented in this paper. Technical details that arise in practice when applying the framework to determine the Manning’s n parameter field in a shallow water equation model used for coastal hydrodynamics are presented, and an efficient computational algorithm and open source software package are developed. A new notion of “condition” for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. Finally, this notion of condition is investigated to determine effective output quantities of interest (maximum water elevations) to use for the inverse problem for the Manning’s n parameter, and the effect on model predictions is analyzed.

  11. Making objective decisions in mechanical engineering problems

    NASA Astrophysics Data System (ADS)

    Raicu, A.; Oanta, E.; Sabau, A.

    2017-08-01

    The decision-making process has a great influence on the development of a given project, the goal being to select an optimal choice in a given context. Because of its great importance, decision making has been studied with various scientific methods, ultimately giving rise to game theory, which is considered the foundation of the science of logical decision making in various fields. The paper presents some basic ideas regarding game theory in order to offer the information necessary to understand multiple-criteria decision making (MCDM) problems in engineering. The solution is to transform the multiple-criteria problem into a single-criterion decision problem, using the notion of utility together with the weighted sum model or the weighted product model. The weighted importance of the criteria is computed using the so-called Step method applied to a relation of preferences between the criteria. Two relevant examples from engineering are also presented. Future directions of research consist of the use of other types of criteria, the development of computer-based instruments for general decision-making problems, and the design of a software module based on expert system principles to be included in the Wiki software applications for polymeric materials that are already operational.
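
    The weighted sum model mentioned above can be sketched concretely (the criteria, weights, and scores below are invented for illustration): each alternative's utility is the weighted sum of its normalized criterion scores, collapsing the multiple-criteria problem into a single ranking.

```python
# Weighted sum model for multiple-criteria decision making: scores
# are per-criterion utilities already normalized to [0, 1] (higher is
# better); weights express criterion importance and sum to 1.

def weighted_sum(scores, weights):
    return sum(s * w for s, w in zip(scores, weights))

weights = [0.5, 0.3, 0.2]          # e.g. cost, strength, manufacturability
alternatives = {
    "steel":     [0.6, 0.9, 0.8],
    "aluminium": [0.8, 0.6, 0.9],
    "composite": [0.4, 1.0, 0.5],
}

ranking = sorted(alternatives,
                 key=lambda a: weighted_sum(alternatives[a], weights),
                 reverse=True)
print(ranking)  # ['aluminium', 'steel', 'composite']
```

    In the paper's approach, the weights themselves would come from the Step method applied to a preference relation between criteria rather than being assigned directly as here.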

  12. Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadius; vonToussaint, Udo V.; Timucin, Dogan A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with a slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.

  13. Elders' nonadherence, its assessment, and computer assisted instruction for medication recall training.

    PubMed

    Leirer, V O; Morrow, D G; Pariante, G M; Sheikh, J I

    1988-10-01

    This study investigates three questions related to the problem of medication nonadherence among elders. First, does recall failure play a significant role in nonadherence? Recent research suggests that it may not. Second, can the new portable bar code scanner technology be used to study nonadherence? Other forms of monitoring are obtrusive or inaccurate. Finally, can inexpensive computer assisted instruction (CAI) be used to teach mnemonic techniques specifically designed to improve medication schedule recall? Current research on memory training teaches nonspecific mnemonics and uses the expensive classroom approach. Results of the present study suggest that physically active and cognitively alert elders do have significant nonadherence problems related to forgetting (control group = 32.0%) and that CAI courseware can significantly reduce this form of nonadherence (medication recall training group = 10.0%). Portable bar code technology proved easy for elderly patients to use and provided detailed information about the type of forgetting underlying nonadherence. The most significant recall failure was completely forgetting to take medication, rather than delaying doses or overmedicating.

  14. A computer program to find the kernel of a polynomial operator

    NASA Technical Reports Server (NTRS)

    Gejji, R. R.

    1976-01-01

    This paper presents a FORTRAN program written to solve for the kernel of a matrix of polynomials with real coefficients. It is an implementation of Sain's free modular algorithm for solving the minimal design problem of linear multivariable systems. The structure of the program is discussed, together with some features as they relate to questions of implementing the above method. An example of the use of the program to solve a design problem is included.

  15. A Generalization of the Karush-Kuhn-Tucker Theorem for Approximate Solutions of Mathematical Programming Problems Based on Quadratic Approximation

    NASA Astrophysics Data System (ADS)

    Voloshinov, V. V.

    2018-03-01

    In computations related to mathematical programming problems, one often has to consider approximate, rather than exact, solutions satisfying the constraints of the problem and the optimality criterion with a certain error. For determining stopping rules for iterative procedures, in the stability analysis of solutions with respect to errors in the initial data, etc., a justified characteristic of such solutions that is independent of the numerical method used to obtain them is needed. A necessary δ-optimality condition in the smooth mathematical programming problem that generalizes the Karush-Kuhn-Tucker theorem for the case of approximate solutions is obtained. The Lagrange multipliers corresponding to the approximate solution are determined by solving an approximating quadratic programming problem.
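
    For orientation, the classical conditions being generalized, together with one plausible reading of their δ-relaxation (the paper's exact tolerances and norms may differ), can be written as:

```latex
% Classical KKT conditions for  min f(x)  subject to  g_i(x) <= 0:
\nabla f(x^*) + \sum_i \lambda_i \nabla g_i(x^*) = 0, \qquad
\lambda_i \ge 0, \qquad \lambda_i \, g_i(x^*) = 0.

% A delta-approximate version (one plausible relaxation): an
% approximate solution x_delta need satisfy these only up to delta,
\Bigl\| \nabla f(x_\delta) + \sum_i \lambda_i \nabla g_i(x_\delta) \Bigr\| \le \delta,
\qquad g_i(x_\delta) \le \delta, \qquad \bigl| \lambda_i \, g_i(x_\delta) \bigr| \le \delta.
```

    The paper's contribution is that the multipliers λ_i certifying such a δ-optimal point can be obtained by solving an approximating quadratic programming problem, independently of the iterative method that produced x_δ.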

  16. A Study of the Correlation between Computer Games and Adolescent Behavioral Problems

    PubMed Central

    Shokouhi-Moqhaddam, Solmaz; Khezri-Moghadam, Noshiravan; Javanmard, Zeinab; Sarmadi-Ansar, Hassan; Aminaee, Mehran; Shokouhi-Moqhaddam, Majid; Zivari-Rahman, Mahmoud

    2013-01-01

    Background Today, owing to developing communication technologies, computer games and other audio-visual media have become very attractive social phenomena with a great effect on children and adolescents. The increasing popularity of these games among children and adolescents has raised public uncertainty about their possible harmful effects. This study aimed to investigate the correlation between computer games and behavioral problems among male guidance school students. Methods This was a descriptive-correlational study of 384 randomly chosen male guidance school students. They were asked to answer the researcher's questionnaire about computer games and Achenbach’s Youth Self-Report (YSR). Findings The results of this study indicated a significant direct correlation (at the 95% confidence level) between the amount of game playing among adolescents and anxiety/depression, withdrawal/depression, rule-breaking behaviors, aggression, and social problems. However, there was no statistically significant correlation between the amount of computer game use and physical complaints, thinking problems, or attention problems. In addition, there was a significant correlation between the students’ place of residence and their parents’ jobs, and computer game use. Conclusion Computer games lead to anxiety, depression, withdrawal, rule-breaking behavior, aggression, and social problems in adolescents. PMID:24494157

  17. A Study of the Correlation between Computer Games and Adolescent Behavioral Problems.

    PubMed

    Shokouhi-Moqhaddam, Solmaz; Khezri-Moghadam, Noshiravan; Javanmard, Zeinab; Sarmadi-Ansar, Hassan; Aminaee, Mehran; Shokouhi-Moqhaddam, Majid; Zivari-Rahman, Mahmoud

    2013-01-01

    Today, owing to developing communication technologies, computer games and other audio-visual media have become very attractive social phenomena with a great effect on children and adolescents. The increasing popularity of these games among children and adolescents has raised public uncertainty about their possible harmful effects. This study aimed to investigate the correlation between computer games and behavioral problems among male guidance school students. This was a descriptive-correlational study of 384 randomly chosen male guidance school students. They were asked to answer the researcher's questionnaire about computer games and Achenbach's Youth Self-Report (YSR). The results of this study indicated a significant direct correlation (at the 95% confidence level) between the amount of game playing among adolescents and anxiety/depression, withdrawal/depression, rule-breaking behaviors, aggression, and social problems. However, there was no statistically significant correlation between the amount of computer game use and physical complaints, thinking problems, or attention problems. In addition, there was a significant correlation between the students' place of residence and their parents' jobs, and computer game use. Computer games lead to anxiety, depression, withdrawal, rule-breaking behavior, aggression, and social problems in adolescents.

  18. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    PubMed

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    The present paper presents a fast and quasi-optimal method of muscle force estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline by solving a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than a classical optimization approach, with a relative mean error of 4% on cost function evaluation.
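
    The interpolate-then-correct idea can be sketched on a schematic one-dimensional analogue (the paper's musculoskeletal models, criteria, and database are far richer; the moment arms, grid, and cost below are invented): offline, store cost-optimal muscle forces for a grid of joint torques; online, interpolate the table and rescale the result so the dynamics, here a simple torque balance, holds exactly.

```python
# Schematic 1-D analogue of a database-interpolation muscle force
# estimator.  Offline: optimal forces (minimizing f1^2 + f2^2 subject
# to R . f = torque, closed form for this toy case) tabulated on a
# torque grid.  Online: interpolate, then correct so R . f matches
# the requested torque exactly.

from bisect import bisect_right

R = (0.04, 0.05)  # hypothetical moment arms (m)

def optimize(torque):
    """Offline 'classical optimization' step (toy closed form)."""
    scale = torque / (R[0] ** 2 + R[1] ** 2)
    return (R[0] * scale, R[1] * scale)

grid = [0.0, 5.0, 10.0, 15.0, 20.0]
database = [optimize(t) for t in grid]   # built once, offline

def estimate(torque):
    # 1) interpolate the database
    i = min(bisect_right(grid, torque), len(grid) - 1)
    t0, t1 = grid[i - 1], grid[i]
    w = (torque - t0) / (t1 - t0)
    f = [a + w * (b - a) for a, b in zip(database[i - 1], database[i])]
    # 2) correct so the motion dynamics holds exactly
    produced = R[0] * f[0] + R[1] * f[1]
    k = torque / produced
    return [k * x for x in f]

f = estimate(7.3)
print(R[0] * f[0] + R[1] * f[1])  # 7.3, by construction of the correction
```

    In this linear toy the interpolation is already near-exact, so the correction factor is close to 1; the point is only the mechanism, a cheap online lookup whose dynamics error is repaired by a final correction step.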

  19. Near-Optimal Guidance Method for Maximizing the Reachable Domain of Gliding Aircraft

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Takeshi

    This paper proposes a guidance method for gliding aircraft that uses onboard computers to calculate a near-optimal trajectory in real time, thereby expanding the reachable domain. The results are applicable to advanced aircraft and future space transportation systems that require high safety. The calculation load of the optimal control problem that is used to maximize the reachable domain is too large for current computers to handle in real time. Thus, the optimal control problem is divided into two problems: a gliding distance maximization problem in which the aircraft motion is limited to a vertical plane, and an optimal turning flight problem in a horizontal direction. First, the former problem is solved using a shooting method. It can be solved easily because its scale is smaller than that of the original problem, and because some of the features of the optimal solution are obtained in the first part of this paper. Next, in the latter problem, the optimal bank angle is computed from the solution of the former; this is an analytical computation, rather than an iterative one. Finally, the reachable domain obtained from the proposed near-optimal guidance method is compared with that obtained from the original optimal control problem.

  20. Predicting Development of Mathematical Word Problem Solving Across the Intermediate Grades

    PubMed Central

    Tolar, Tammy D.; Fuchs, Lynn; Cirino, Paul T.; Fuchs, Douglas; Hamlett, Carol L.; Fletcher, Jack M.

    2012-01-01

    This study addressed predictors of the development of word problem solving (WPS) across the intermediate grades. At beginning of 3rd grade, 4 cohorts of students (N = 261) were measured on computation, language, nonverbal reasoning skills, and attentive behavior and were assessed 4 times from beginning of 3rd through end of 5th grade on 2 measures of WPS at low and high levels of complexity. Language skills were related to initial performance at both levels of complexity and did not predict growth at either level. Computational skills had an effect on initial performance in low- but not high-complexity problems and did not predict growth at either level of complexity. Attentive behavior did not predict initial performance but did predict growth in low-complexity, whereas it predicted initial performance but not growth for high-complexity problems. Nonverbal reasoning predicted initial performance and growth for low-complexity WPS, but only growth for high-complexity WPS. This evidence suggests that although mathematical structure is fixed, different cognitive resources may act as limiting factors in WPS development when the WPS context is varied. PMID:23325985
