Comfort and experience with online learning: trends over nine years and associations with knowledge.
Cook, David A; Thompson, Warren G
2014-07-01
Some evidence suggests that attitude toward computer-based instruction is an important determinant of success in online learning. We sought to determine how comfort using computers and perceptions of prior online learning experiences have changed over the past decade, and how these associate with learning outcomes. Each year from 2003-2011 we conducted a prospective trial of online learning. As part of each year's study, we asked medicine residents about their comfort using computers and whether their previous experiences with online learning were favorable. We assessed knowledge using a multiple-choice test. We used regression to analyze associations and changes over time. 371 internal medicine and family medicine residents participated. Neither comfort with computers nor perceptions of prior online learning experiences showed a significant change across years (p > 0.61), with mean comfort rating 3.96 (maximum 5 = very comfortable) and mean experience rating 4.42 (maximum 6 = strongly agree [favorable]). Comfort showed no significant association with knowledge scores (p = 0.39) but perceptions of prior experiences did, with a 1.56% rise in knowledge score for a 1-point rise in experience score (p = 0.02). Correlations among comfort, perceptions of prior experiences, and number of prior experiences were all small and not statistically significant. Comfort with computers and perceptions of prior experience with online learning remained stable over nine years. Prior good experiences (but not comfort with computers) demonstrated a modest association with knowledge outcomes, suggesting that prior course satisfaction may influence subsequent learning. PMID:24985690
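For readers unfamiliar with how such a slope is obtained: a coefficient like "1.56% per 1-point rise" is the output of an ordinary least squares regression of knowledge score on experience rating. A minimal sketch with synthetic data (all numbers, sizes, and names below are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical data: prior-experience ratings (1-6) and knowledge scores (%).
rng = np.random.default_rng(0)
experience = rng.integers(1, 7, size=371).astype(float)
knowledge = 60 + 1.56 * experience + rng.normal(0, 10, size=371)

# OLS fit: knowledge = intercept + slope * experience
X = np.column_stack([np.ones_like(experience), experience])
beta, *_ = np.linalg.lstsq(X, knowledge, rcond=None)
print(f"slope = {beta[1]:.2f} percentage points per 1-point rise in experience")
```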
Mental Rotation Ability and Computer Game Experience
ERIC Educational Resources Information Center
Gecu, Zeynep; Cagiltay, Kursat
2015-01-01
Computer games, which are currently very popular among students, can affect different cognitive abilities. The purpose of the present study is to examine undergraduate students' experiences and preferences in playing computer games as well as their mental rotation abilities. A total of 163 undergraduate students participated. The results showed a…
ERIC Educational Resources Information Center
Wang, X. Christine; Ching, Cynthia Carter
2003-01-01
This ethnographic study investigated first-graders' social construction of their classroom computer experience. Findings showed that children constantly negotiate between their own individual and collective goals in the classroom as they create their own definition of computer use while conforming to the teacher's rules. Considers the usefulness…
Note-Taking with Computers: Exploring Alternative Strategies for Improved Recall
ERIC Educational Resources Information Center
Bui, Dung C.; Myerson, Joel; Hale, Sandra
2013-01-01
Three experiments examined note-taking strategies and their relation to recall. In Experiment 1, participants were instructed either to take organized lecture notes or to try and transcribe the lecture, and they either took their notes by hand or typed them into a computer. Those instructed to transcribe the lecture using a computer showed the…
ERIC Educational Resources Information Center
An, Yun-Jo; Haynes, Linda; D'Alba, Adriana; Chumney, Frances
2016-01-01
Science teachers' experiences, attitudes, perceptions, concerns, and support needs related to the use of educational computer games were investigated in this study. Data were collected from an online survey, which was completed by 111 science teachers. The results showed that 73% of participants had used computer games in teaching. Participants…
Stretching the Traditional Notion of Experiment in Computing: Explorative Experiments.
Schiaffonati, Viola
2016-06-01
Experimentation is today a 'hot' topic in computing. While experiments made with the support of computers, such as computer simulations, have received increasing attention from philosophers of science and technology, questions such as "what does it mean to do experiments in computer science and engineering, and what are their benefits?" emerged only recently as central to the debate over the disciplinary status of computing. In this work we aim to show, partly by means of paradigmatic examples, how the traditional notion of the controlled experiment should be revised to accommodate a part of the experimental practice in computing along the lines of experimentation as exploration. Taking inspiration from the discussion of exploratory experimentation in the philosophy of science (experimentation that is not theory-driven), we advance the idea of explorative experiments which, although not new, can help enlarge the debate about the nature and role of experimental methods in computing. To refine this concept further, we recast explorative experiments as socio-technical experiments that test new technologies in their socio-technical contexts. We suggest that, when experiments are explorative, control should be understood in an a posteriori form, as opposed to the a priori form that usually obtains in traditional experimental contexts.
The Development of Acoustic Experiments for Off-Campus Teaching and Learning
ERIC Educational Resources Information Center
Wild, Graham; Swan, Geoff
2011-01-01
In this article, we show the implementation of a computer-based digital storage oscilloscope (DSO) and function generator (FG) using the computer's soundcard for off-campus acoustic experiments. The microphone input is used for the DSO, and a speaker jack is used as the FG. In an effort to reduce the cost of implementing the experiment, we examine…
AnnotCompute: annotation-based exploration and meta-analysis of genomics experiments
Zheng, Jie; Stoyanovich, Julia; Manduchi, Elisabetta; Liu, Junmin; Stoeckert, Christian J.
2011-01-01
The ever-increasing scale of biological data sets, particularly those arising in the context of high-throughput technologies, requires the development of rich data exploration tools. In this article, we present AnnotCompute, an information discovery platform for repositories of functional genomics experiments such as ArrayExpress. Our system leverages semantic annotations of functional genomics experiments with controlled vocabulary and ontology terms, such as those from the MGED Ontology, to compute conceptual dissimilarities between pairs of experiments. These dissimilarities are then used to support two types of exploratory analysis—clustering and query-by-example. We show that our proposed dissimilarity measures correspond to a user's intuition about conceptual dissimilarity, and can be used to support effective query-by-example. We also evaluate the quality of clustering based on these measures. While AnnotCompute can support a richer data exploration experience, its effectiveness is limited in some cases, due to the quality of available annotations. Nonetheless, tools such as AnnotCompute may provide an incentive for richer annotations of experiments. Code is available for download at http://www.cbil.upenn.edu/downloads/AnnotCompute. Database URL: http://www.cbil.upenn.edu/annotCompute/ PMID:22190598
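The abstract does not spell out the dissimilarity measures themselves; as a rough illustration of the idea of annotation-based dissimilarity, a Jaccard-style distance over sets of controlled-vocabulary terms might look like this (the term sets below are hypothetical):

```python
def annotation_dissimilarity(terms_a: set, terms_b: set) -> float:
    """Jaccard distance between two experiments' annotation term sets."""
    if not terms_a and not terms_b:
        return 0.0
    return 1.0 - len(terms_a & terms_b) / len(terms_a | terms_b)

# Hypothetical MGED Ontology-style annotations for two experiments.
exp1 = {"organism:Homo sapiens", "assay:transcription profiling", "factor:dose"}
exp2 = {"organism:Homo sapiens", "assay:transcription profiling", "factor:time"}
print(annotation_dissimilarity(exp1, exp2))  # 0.5
```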
Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments
NASA Astrophysics Data System (ADS)
Munsky, Brian; Shepherd, Douglas
2014-03-01
Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.
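To make "full probability distribution analyses" concrete, here is a hedged sketch (a generic birth-death gene expression model, not the authors' model of IL-1alpha induction) of computing a steady-state mRNA copy-number distribution from a truncated chemical master equation:

```python
import numpy as np

# Assumed illustrative rates; the state space is truncated at n_max copies.
k_tx = 10.0   # transcription rate
g_deg = 1.0   # per-molecule degradation rate
n_max = 60

# Generator matrix A of the master equation dp/dt = A p.
A = np.zeros((n_max + 1, n_max + 1))
for n in range(n_max + 1):
    if n < n_max:
        A[n + 1, n] += k_tx          # birth: n -> n+1
        A[n, n] -= k_tx
    if n > 0:
        A[n - 1, n] += g_deg * n     # death: n -> n-1
        A[n, n] -= g_deg * n

# Steady state: eigenvector of A with eigenvalue closest to zero.
w, v = np.linalg.eig(A)
p = np.abs(np.real(v[:, np.argmin(np.abs(w))]))
p /= p.sum()
print(p.argmax())  # mode near k_tx / g_deg = 10 (Poisson-like)
```

Moment-based analyses would summarize this distribution by its mean and variance; the point made in the abstract is that fitting the full distribution extracts more information per measured cell.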
Helfer, Peter; Shultz, Thomas R
2014-12-01
The widespread availability of calorie-dense food is believed to be a contributing cause of an epidemic of obesity and associated diseases throughout the world. One possible countermeasure is to empower consumers to make healthier food choices with useful nutrition labeling. An important part of this endeavor is to determine the usability of existing and proposed labeling schemes. Here, we report an experiment on how four different labeling schemes affect the speed and nutritional value of food choices. We then apply decision field theory, a leading computational model of human decision making, to simulate the experimental results. The psychology experiment shows that quantitative, single-attribute labeling schemes have greater usability than multiattribute and binary ones, and that they remain effective under moderate time pressure. The computational model simulates these psychological results and provides explanatory insights into them. This work shows how experimental psychology and computational modeling can contribute to the evaluation and improvement of nutrition-labeling schemes. © 2014 New York Academy of Sciences.
NASA Astrophysics Data System (ADS)
Li, Haiqing; Chatterjee, Samir
With rapid advances in information and communication technology, computer-mediated communication (CMC) technologies are utilizing multiple IT platforms such as email, websites, cell-phones/PDAs, social networking sites, and gaming environments. However, no studies have compared the effectiveness of a persuasive system using such alternative channels and various persuasive techniques. Moreover, how affective computing impacts the effectiveness of persuasive systems is not clear. This study proposes that (1) persuasive technology channels in combination with persuasive strategies will have different persuasive effectiveness; and (2) adding positive emotion to a message that leads to a better overall user experience could increase persuasive effectiveness. The affective computing or emotion information was added to the experiment using emoticons. The initial results of a pilot study show that computer-mediated communication channels along with various persuasive strategies can affect the persuasive effectiveness to varying degrees. These results also show that adding a positive emoticon to a message leads to a better user experience, which increases the overall persuasive effectiveness of a system.
ERIC Educational Resources Information Center
Gasyna, Zbigniew L.
2008-01-01
A computational experiment is proposed in which a linear algebra method is applied to the solution of the Schrodinger equation for a diatomic oscillator. Calculations of the vibration-rotation spectrum of the HCl molecule are presented, and the results show excellent agreement with experimental data. (Contains 1 table and 1 figure.)
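The abstract does not specify the paper's exact discretization; a common classroom version of the linear algebra method puts the vibrational Hamiltonian on a grid and diagonalizes it. A minimal sketch in illustrative (non-physical) units, using a harmonic potential rather than a fitted HCl potential:

```python
import numpy as np

hbar, mu, kf = 1.0, 1.0, 1.0   # illustrative units, not HCl parameters

n = 1000
x = np.linspace(-10, 10, n)
dx = x[1] - x[0]

# Kinetic energy -hbar^2/(2 mu) d^2/dx^2 via a three-point stencil.
T = (hbar**2 / (2 * mu * dx**2)) * (
    2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
)
V = np.diag(0.5 * kf * x**2)     # harmonic potential
E, psi = np.linalg.eigh(T + V)   # eigenvalues in ascending order

# Harmonic levels are E_v = hbar * omega * (v + 1/2) with omega = sqrt(kf/mu).
print(E[:4])  # approx [0.5, 1.5, 2.5, 3.5]
```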
ERIC Educational Resources Information Center
Lenard, Mary Jane; Wessels, Susan; Khanlarian, Cindi
2010-01-01
Using a model developed by Young (2000), this paper explores the relationship between performance in the Accounting Information Systems course, self-assessed computer skills, and attitudes toward computers. Results show that after taking the AIS course, students experience a change in perception about their use of computers. Females'…
ERIC Educational Resources Information Center
Buche, Mari W.; Davis, Larry R.; Vician, Chelley
2007-01-01
Computers are pervasive in business and education, and it would be easy to assume that all individuals embrace technology. However, evidence shows that roughly 30 to 40 percent of individuals experience some level of computer anxiety. Many academic programs involve computing-intensive courses, but the actual effects of this exposure on computer…
View of MISSE-8 taken during a session of EVA
2011-07-12
ISS028-E-016111 (12 July 2011) --- This close-up image, recorded during a July 12 spacewalk, shows the Materials on International Space Station Experiment - 8 (MISSE-8). The experiment package is a test bed for materials and computing elements attached to the outside of the orbiting complex. These materials and computing elements are being evaluated for the effects of atomic oxygen, ultraviolet, direct sunlight, radiation, and extremes of heat and cold. This experiment allows the development and testing of new materials and computing elements that can better withstand the rigors of space environments. Results will provide a better understanding of the durability of various materials and computing elements when they are exposed to the space environment, with applications in the design of future spacecraft.
An Educational Software for Simulating the Sample Size of Molecular Marker Experiments
ERIC Educational Resources Information Center
Helms, T. C.; Doetkott, C.
2007-01-01
We developed educational software to show graduate students how to plan molecular marker experiments. These computer simulations give the students feedback on the precision of their experiments. The objective of the software was to show students using a hands-on approach how: (1) environmental variation influences the range of the estimates of the…
[Computer technologies in teaching pathological anatomy].
Ponomarev, A B; Fedorov, D N
2015-01-01
The paper describes experience with personal computers used at the Academician A.L. Strukov Department of Pathological Anatomy for more than 20 years. It shows the objective necessity of introducing computer technologies at all stages of acquiring skills in anatomical pathology, including lectures, students' independent work, knowledge testing, etc.
Learning and teaching with a computer scanner
NASA Astrophysics Data System (ADS)
Planinsic, G.; Gregorcic, B.; Etkina, E.
2014-09-01
This paper introduces the readers to simple inquiry-based activities (experiments with supporting questions) that one can do with a computer scanner to help students learn and apply the concepts of relative motion in 1 and 2D, vibrational motion and the Doppler effect. We also show how to use these activities to help students think like scientists. They will conduct simple experiments, construct different explanations for their observations, test their explanations in new experiments and represent their ideas in multiple ways.
Quantum computing: Quantum advantage deferred
NASA Astrophysics Data System (ADS)
Childs, Andrew M.
2017-12-01
A type of optics experiment called a boson sampler could be among the easiest routes to demonstrating the power of quantum computers. But recent work shows that super-classical boson sampling may be a long way off.
View of MISSE-8 taken during a session of EVA
2011-07-12
ISS028-E-016107 (12 July 2011) --- This medium close-up image, recorded during a July 12 spacewalk, shows the Materials on International Space Station Experiment - 8 (MISSE-8). The experiment package is a test bed for materials and computing elements attached to the outside of the orbiting complex. These materials and computing elements are being evaluated for the effects of atomic oxygen, ultraviolet, direct sunlight, radiation, and extremes of heat and cold. This experiment allows the development and testing of new materials and computing elements that can better withstand the rigors of space environments. Results will provide a better understanding of the durability of various materials and computing elements when they are exposed to the space environment, with applications in the design of future spacecraft.
NASA Astrophysics Data System (ADS)
Chiner, Esther; Garcia-Vera, Victoria E.
2017-11-01
The purpose of this study was to examine students' computer attitudes and experience, as well as students' perceptions about the use of two specific software applications (Google Drive Spreadsheets and Arquimedes) in the Building Engineering context. The relationships among these variables were also examined. Ninety-two students took part in this study. Results suggest that students hold favourable computer attitudes. Moreover, a significant positive relationship was found between students' attitudes and their computer experience. Findings also show that students find the Arquimedes software more useful and of higher output quality than Google Drive Spreadsheets, while the latter is perceived to be easier to use. Regarding the relationship between students' attitudes towards the use of computers and their perceptions about the use of both software applications, a significant positive relationship was found only in the case of Arquimedes. Findings are discussed in terms of their implications for practice and further research.
NASA Astrophysics Data System (ADS)
Choi, Byung-Soon; Gennaro, Eugene
Several researchers have suggested that the computer holds much promise as a tool for science teachers to use in their classrooms (Bork, 1979; Lunetta & Hofstein, 1981). It has also been said that more research is needed to determine the effectiveness of computer software (Tinker, 1983). This study compared the effectiveness of microcomputer simulated experiences with that of parallel instruction involving hands-on laboratory experiences for teaching the concept of volume displacement to junior high school students. This study also assessed the differential effect on students' understanding of the volume displacement concept using sex of the students as another independent variable. In addition, it compared the degree of retention, after 45 days, of both treatment groups. It was found that computer simulated experiences were as effective as hands-on laboratory experiences, and that males, having had hands-on laboratory experiences, performed better on the posttest than females having had the hands-on laboratory experiences. There were no significant differences in performance when comparing males with females using the computer simulation in the learning of the displacement concept. This study also showed that there were no significant differences in the retention levels when the retention scores of the computer simulation groups were compared to those that had the hands-on laboratory experiences. However, an ANOVA of the retention test scores revealed that males in both treatment conditions retained knowledge of volume displacement better than females.
Integrating Data Base into the Elementary School Science Program.
ERIC Educational Resources Information Center
Schlenker, Richard M.
This document describes seven science activities that combine scientific principles and computers. The objectives for the activities are to show students how the computer can be used as a tool to store and arrange scientific data, provide students with experience using the computer as a tool to manage scientific data, and provide students with…
Edwardson, S R; Pejsa, J
1993-01-01
A computer-based tutorial for teaching nursing financial management concepts was developed using the macro function of a commercially available spreadsheet program. The goals of the tutorial were to provide students with an experience with spreadsheets as a computer tool and to teach selected financial management concepts. Preliminary results show the tutorial was well received by students. Suggestions are made for overcoming the general lack of computer sophistication among students.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L.
1997-04-01
This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies show that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.
Mass storage system experiences and future needs at the National Center for Atmospheric Research
NASA Technical Reports Server (NTRS)
Olear, Bernard T.
1992-01-01
This presentation is designed to relate some of the experiences of the Scientific Computing Division at NCAR dealing with the 'data problem'. A brief history and a development of some basic Mass Storage System (MSS) principles are given. An attempt is made to show how these principles apply to the integration of various components into NCAR's MSS. There is discussion of MSS needs for future computing environments.
NASA Astrophysics Data System (ADS)
Brodyn, M. S.; Starkov, V. N.
2007-07-01
It is shown that in laser experiments performed with an 'imperfect' setup, where instrumental distortions are considerable, sufficiently accurate results can be obtained by modern methods of computational physics. It is found for the first time that a new instrumental function, the 'cap' function, a 'sister' of the Gaussian curve, is precisely what is needed in laser experiments. A new mathematical model of the measurement path and a carefully performed computational experiment show that a light beam transmitted through a mesoporous film actually has a narrower intensity distribution than the detected beam, and that the amplitude of the real intensity distribution is twice as large as that of the measured distribution.
NASA Astrophysics Data System (ADS)
Liang, Ruiyu; Xi, Ji; Bao, Yongqiang
2017-07-01
To improve the performance of gain compensation based on a three-segment sound pressure level (SPL) in hearing aids, an improved multichannel loudness compensation method based on an eight-segment SPL was proposed. Firstly, a uniform cosine modulated filter bank was designed. Then, adjacent channels with low or gradual slopes were adaptively merged to obtain the corresponding non-uniform cosine modulated filter bank according to the audiogram of the hearing impaired person. Secondly, the input speech was decomposed into sub-band signals and the SPL of every sub-band signal was computed. Meanwhile, the audible SPL range from 0 dB SPL to 120 dB SPL was equally divided into eight segments. Based on these segments, a different prescription formula was designed to compute a more detailed compensation gain according to the audiogram and the computed SPL. Finally, the enhanced signal was synthesized. Objective experiments showed that the decomposed signals after the cosine modulated filter bank have little distortion, and that the hearing aids speech perception index (HASPI) and hearing aids speech quality index (HASQI) increased by 0.083 and 0.082 on average, respectively. Subjective experiments showed the proposed algorithm can effectively improve the speech recognition of six hearing impaired persons.
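A minimal sketch of the per-band SPL and segment-lookup steps described above (the reference pressure and the equal 15-dB segments follow from the stated 0-120 dB range; the calibration of samples to pascals is an assumption):

```python
import numpy as np

P_REF = 20e-6  # 20 micropascals, the standard dB SPL reference

def band_spl(subband: np.ndarray) -> float:
    # Assumes samples are calibrated to pascals; a real hearing-aid
    # pipeline would apply a microphone calibration constant first.
    rms = np.sqrt(np.mean(subband ** 2))
    return 20.0 * np.log10(max(rms, 1e-12) / P_REF)

def spl_segment(spl: float) -> int:
    # Eight equal segments of the 0-120 dB SPL range, 15 dB each.
    return int(np.clip(spl // 15, 0, 7))
```

Each segment would then index its own prescription formula to produce the compensation gain for that band.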
Cutting Costs on Computer Forms.
ERIC Educational Resources Information Center
Rupp, Robert V., Jr.
1989-01-01
Using the experience of Ford Motor Company, Oscar Meyer, and IBM, this article shows that companies are enjoying high quality product performance and substantially lower costs by converting from premium white bond computer stock forms to blended bond forms. School administrators are advised to do likewise. (MLH)
Computational Experiments for Science and Engineering Education
NASA Technical Reports Server (NTRS)
Xie, Charles
2011-01-01
How to integrate simulation-based engineering and science (SBES) into the science curriculum smoothly is a challenging question. For the importance of SBES to be appreciated, the core value of simulations-that they help people understand natural phenomena and solve engineering problems-must be taught. A strategy to achieve this goal is to introduce computational experiments to the science curriculum to replace or supplement textbook illustrations and exercises and to complement or frame hands-on or wet lab experiments. In this way, students will have an opportunity to learn about SBES without compromising other learning goals required by the standards and teachers will welcome these tools as they strengthen what they are already teaching. This paper demonstrates this idea using a number of examples in physics, chemistry, and engineering. These exemplary computational experiments show that it is possible to create a curriculum that is both deeper and wider.
Flexible Launch Vehicle Stability Analysis Using Steady and Unsteady Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2012-01-01
Launch vehicles frequently experience a reduced stability margin through the transonic Mach number range. This reduced stability margin can be caused by the aerodynamic undamping of one of the lower-frequency flexible or rigid body modes. Analysis of the behavior of a flexible vehicle is routinely performed with quasi-steady aerodynamic line loads derived from steady rigid aerodynamics. However, a quasi-steady aeroelastic stability analysis can be unconservative at the critical Mach numbers, where experiment or unsteady computational aeroelastic analysis shows a reduced or even negative aerodynamic damping. A method of enhancing the quasi-steady aeroelastic stability analysis of a launch vehicle with unsteady aerodynamics is developed that uses unsteady computational fluid dynamics to compute the response of selected lower-frequency modes. The response is contained in a time history of the vehicle line loads. A proper orthogonal decomposition of the unsteady aerodynamic line-load response is used to reduce the scale of the data volume, and system identification is used to derive the aerodynamic stiffness, damping, and mass matrices. The results are compared with the damping and frequency computed from unsteady computational aeroelasticity and from a quasi-steady analysis. The results show that incorporating unsteady aerodynamics in this way brings the enhanced quasi-steady aeroelastic stability analysis into close agreement with the unsteady computational aeroelastic results.
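Proper orthogonal decomposition of the line-load time history is commonly computed with an SVD of a snapshot matrix; a generic sketch (array shapes and names are illustrative, not taken from the paper):

```python
import numpy as np

def pod_modes(snapshots: np.ndarray, n_modes: int):
    """POD of a line-load time history.

    snapshots: (n_stations, n_timesteps), one column per CFD time step.
    Returns the leading spatial modes and their modal time histories.
    """
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    modes = U[:, :n_modes]                         # spatial POD modes
    coeffs = np.diag(s[:n_modes]) @ Vt[:n_modes]   # time coefficients
    return modes, coeffs

# Example: 200 load stations, 5000 unsteady CFD time steps.
modes, coeffs = pod_modes(np.random.rand(200, 5000), n_modes=5)
```

System identification (e.g., a state-space fit to the modal coefficients) would then yield the aerodynamic stiffness, damping, and mass matrices mentioned above.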
Computers in mathematics: teacher-inservice training at a distance
NASA Astrophysics Data System (ADS)
Friedman, Edward A.; Jurkat, M. P.
1993-01-01
While research and experience show many advantages for incorporation of computer technology into secondary school mathematics instruction, less than 5 percent of the nation's teachers are actively using computers in their classrooms. This is the case even though mathematics teachers in grades 7 - 12 are often familiar with computer technology and have computers available to them in their schools. The implementation bottleneck is in-service teacher training, and there are few models of effective implementation available for teachers to emulate. Stevens Institute of Technology has been active since 1988 in research and development efforts to incorporate computers into classroom use. We have found that teachers need to see examples of classroom experience with hardware and software, and they need to have assistance as they experiment with applications of software and the development of lesson plans. High-bandwidth technology can greatly facilitate teacher training in this area through transmission of video documentaries, software discussions, teleconferencing, peer interactions, classroom observations, etc. We discuss the experience that Stevens has had with face-to-face teacher training as well as with satellite-based teleconferencing using one-way video and two-way audio. Included are reviews of analyses of this project by researchers from Educational Testing Service, Princeton University, and Bank Street School of Education.
Nass, C; Lee, K M
2001-09-01
Would people exhibit similarity-attraction and consistency-attraction toward unambiguously computer-generated speech even when personality is clearly not relevant? In Experiment 1, participants (extrovert or introvert) heard a synthesized voice (extrovert or introvert) on a book-buying Web site. Participants accurately recognized personality cues in text to speech and showed similarity-attraction in their evaluation of the computer voice, the book reviews, and the reviewer. Experiment 2, in a Web auction context, added personality of the text to the previous design. The results replicated Experiment 1 and demonstrated consistency (voice and text personality)-attraction. To maximize liking and trust, designers should set parameters, for example, words per minute or frequency range, that create a personality that is consistent with the user and the content being presented.
Accommodative and convergence response to computer screen and printed text
NASA Astrophysics Data System (ADS)
Ferreira, Andreia; Lira, Madalena; Franco, Sandra
2011-05-01
The aim of this work was to find out whether differences exist in accommodative and convergence responses for different computer monitors and a printed text. It also attempted to relate the horizontal heterophoria value and the accommodative response to the symptoms associated with computer use. Two independent experiments were carried out in this study. The first experiment measured the accommodative response of 89 subjects using the Grand Seiko WAM-5500 (Grand Seiko Co., Ltd., Japan). The accommodative response was measured using three computer monitors: a 17-inch cathode ray tube (CRT), two liquid crystal displays, one 17-inch (LCD17) and one 15-inch (LCD15), and a printed text. The text displayed was always the same for all subjects and tests. A second experiment measured the value of habitual horizontal heterophoria in 80 subjects using the Von Graefe technique. The measurements were obtained using the same target presented on two different computer monitors, a 19-inch cathode ray tube (CRT) and a 19-inch liquid crystal display (LCD), and printed on paper. A small survey about the incidence and prevalence of symptoms was performed similarly in both experiments. In the first experiment, the accommodative response was higher for the CRT and LCDs than for paper. No significantly different response was found between the two LCD monitors. The second experiment showed that the heterophoria values were similar for all the stimuli. On average, participants presented a small exophoria. In both experiments, asthenopia was the symptom with the highest incidence. There are different accommodative responses when reading on paper and on computer monitors. This difference is more significant for CRT monitors. On the other hand, there was no difference in the convergence values for the computer monitors and paper. The symptoms associated with the use of computers are not related to the increase in accommodation or to the horizontal heterophoria values.
A Weibull distribution accrual failure detector for cloud computing.
Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin
2017-01-01
Failure detectors are a fundamental component used to build high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing. PMID:28278229
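For context, an accrual failure detector outputs a continuous suspicion level rather than a binary alive/failed verdict. A hedged sketch of a Weibull-based variant, in the spirit of the phi accrual detector (the paper's exact estimator and suspicion formula may differ):

```python
import numpy as np
from scipy import stats

class WeibullAccrualDetector:
    """Suspicion = -log10 P(next heartbeat is still on its way)."""

    def __init__(self, window: int = 1000):
        self.intervals: list[float] = []
        self.window = window
        self.last_arrival: float | None = None

    def heartbeat(self, t: float) -> None:
        if self.last_arrival is not None:
            self.intervals.append(t - self.last_arrival)
            self.intervals = self.intervals[-self.window:]
        self.last_arrival = t

    def suspicion(self, now: float) -> float:
        # Fit a Weibull to observed inter-arrival times (location fixed
        # at zero) and evaluate the survival function at the elapsed time.
        shape, loc, scale = stats.weibull_min.fit(self.intervals, floc=0)
        p_late = stats.weibull_min.sf(now - self.last_arrival,
                                      shape, loc=loc, scale=scale)
        return -np.log10(max(p_late, 1e-300))
```

An application would compare the suspicion value against a threshold chosen for its desired detection-speed/accuracy trade-off.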
Experiments in Computing: A Survey
Tedre, Matti; Moisseinen, Nella
2014-01-01
Experiments play a central role in science. The role of experiments in computing is, however, unclear. Questions about the relevance of experiments in computing attracted little attention until the 1980s. As the discipline then saw a push towards experimental computer science, a variety of technically, theoretically, and empirically oriented views on experiments emerged. As a consequence of those debates, today's computing fields use experiments and experiment terminology in a variety of ways. This paper analyzes experimentation debates in computing. It presents five ways in which debaters have conceptualized experiments in computing: feasibility experiment, trial experiment, field experiment, comparison experiment, and controlled experiment. This paper has three aims: to clarify experiment terminology in computing; to contribute to disciplinary self-understanding of computing; and, due to computing's centrality in other fields, to promote understanding of experiments in modern science in general. PMID:24688404
Consulting room computers and their effect on general practitioner-patient communication.
Noordman, Janneke; Verhaak, Peter; van Beljouw, Ilse; van Dulmen, Sandra
2010-12-01
In the western medical world, computers form part of the standard equipment in the consulting rooms of most GPs. As the use of a computer requires time and attention from GPs, this may well interfere with the communication process. Yet the information accessed on the computer may also enhance communication. The present study affords insight into the relationship between computer use and GP-patient communication recorded by the same GPs over two periods. Videotaped GP consultations collected in 2001 and 2008 were used to observe computer use and GP-patient communication. In addition, patients' questionnaires about their experiences with communication by the GP were analysed using multilevel models with patients (Level 1) nested within GPs (Level 2). Both in 2008 and in 2001, GPs used their computer in almost every consultation. Still, our study showed a change in computer use by the GPs over time. In addition, the results indicate that computer use is negatively related to some communication aspects: the patient-directed gaze of the GP and the amount of information given by GPs. There is also a negative association between computer use and the body posture of the GP. Computer use by GPs is not associated with other (analysed) non-verbal and verbal behaviour of GPs and patients. Moreover, computer use is scarcely related to patients' experiences with the communication behaviour of the GP. GPs show greater reluctance to use computers in 2008 compared to 2001. Computer use can indeed affect the communication between GPs and patients. Therefore, GPs ought to remain aware of their computer use during consultations and at the same time keep the interaction with the patient alive.
Classical boson sampling algorithms with superior performance to near-term experiments
NASA Astrophysics Data System (ADS)
Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony
2017-12-01
It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
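The "intractable matrix functions" are matrix permanents: each output probability in boson sampling is proportional to |perm(A)|^2 for a photon-indexed submatrix A of the interferometer unitary. The paper's simulator is a Metropolised independence sampler; the core primitive any classical approach leans on is permanent evaluation, for example via Ryser's formula (a generic implementation, not the paper's code):

```python
import numpy as np
from itertools import combinations

def permanent_ryser(A: np.ndarray):
    """Matrix permanent via Ryser's formula, summing over column subsets."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            rowsums = A[:, cols].sum(axis=1)
            total += (-1) ** r * np.prod(rowsums)
    return (-1) ** n * total

# Sanity check: the permanent of the all-ones 3x3 matrix is 3! = 6.
print(permanent_ryser(np.ones((3, 3))))  # 6.0
```

The exponential cost of this primitive is exactly why the photon count must grow by orders of magnitude before experiments can outpace classical simulation.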
Styopin, Nikita E; Vershinin, Anatoly V; Zingerman, Konstantin M; Levin, Vladimir A
2016-09-01
Different variants of the Uzawa algorithm are compared with one another. The comparison is performed for the case in which this algorithm is applied to large-scale systems of linear algebraic equations. These systems arise in the finite-element solution of the problems of elasticity theory for incompressible materials. A modification of the Uzawa algorithm is proposed. Computational experiments show that this modification improves the convergence of the Uzawa algorithm for the problems of solid mechanics. The results of computational experiments show that each variant of the Uzawa algorithm considered has its advantages and disadvantages and may be convenient in one case or another.
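For reference, the basic Uzawa iteration for the saddle-point systems in question alternates an inner solve with the elliptic block and a gradient-style update of the multipliers. A textbook sketch (not the paper's specific modification):

```python
import numpy as np

def uzawa(A, B, f, g, tau=1.0, tol=1e-10, max_iter=5000):
    """Solve [A B^T; B 0] [u; p] = [f; g], the structure arising from
    finite-element discretizations of incompressible elasticity
    (u: displacements, p: pressure-like Lagrange multipliers)."""
    u = np.zeros(A.shape[0])
    p = np.zeros(B.shape[0])
    for _ in range(max_iter):
        u = np.linalg.solve(A, f - B.T @ p)    # inner elliptic solve
        r = B @ u - g                          # incompressibility residual
        p = p + tau * r                        # multiplier ascent step
        if np.linalg.norm(r) < tol:
            break
    return u, p

# Tiny well-conditioned example.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 1.0]])
u, p = uzawa(A, B, f=np.array([1.0, 2.0]), g=np.array([0.5]))
```

Convergence requires the step tau to be small relative to the spectrum of the Schur complement B A^{-1} B^T, which is where the modifications compared in such studies typically intervene.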
Federated data storage system prototype for LHC experiments and data intensive science
NASA Astrophysics Data System (ADS)
Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.
2017-10-01
Rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area face the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and a reformation of computing style, for instance how a bioinformatics program running on supercomputers can read/write data from the federated storage.
NASA Astrophysics Data System (ADS)
Graves, A. Palmer
This study examines the effect of increasing the visual complexity used in computer-assisted instruction in general chemistry. Traditional recitation instruction was used as a control for the experiment. One tutorial presented a chemistry topic using 3-D animation showing molecular activity and a symbolic representation of the macroscopic view of a chemical phenomenon. A second tutorial presented the same topic but simultaneously presented students with a digital video movie showing the phenomenon and a 3-D animation showing the molecular view of the phenomenon. This experimental set-up was used in two different experiments during the first semester of a college-level general chemistry course. The topics covered were the molecular effect of heating water through the solid-liquid phase change and the kinetic molecular theory used in explaining pressure changes. The subjects were 236 college students enrolled in a freshman chemistry course at a large university. The data indicated that the simultaneous presentation of digital video, showing the solid to liquid phase change of water, with a molecular animation, showing the molecular behavior during the phase change, had a significant effect on students' particulate understanding when compared to traditional recitation. Although the effect of the KMT tutorial was not statistically significant, there was a positive effect on students' particulate understanding. The use of the computer tutorial also had a significant effect on students' attitude toward their comprehension of the lesson.
Anderson, P. S. L.; Rayfield, E. J.
2012-01-01
Computational models such as finite-element analysis offer biologists a means of exploring the structural mechanics of biological systems that cannot be directly observed. Validated against experimental data, a model can be manipulated to perform virtual experiments, testing variables that are hard to control in physical experiments. The relationship between tooth form and the ability to break down prey is key to understanding the evolution of dentition. Recent experimental work has quantified how tooth shape promotes fracture in biological materials. We present a validated finite-element model derived from physical compression experiments. The model shows close agreement with strain patterns observed in photoelastic test materials and reaction forces measured during these experiments. We use the model to measure strain energy within the test material when different tooth shapes are used. Results show that notched blades deform materials for less strain energy cost than straight blades, giving insights into the energetic relationship between tooth form and prey materials. We identify a hypothetical ‘optimal’ blade angle that minimizes strain energy costs and test alternative prey materials via virtual experiments. Using experimental data and computational models offers an integrative approach to understand the mechanics of tooth morphology. PMID:22399789
Games at work: the recreational use of computer games during working hours.
Reinecke, Leonard
2009-08-01
The present study investigated the recreational use of video and computer games in the workplace. In an online survey, 833 employed users of online casual games reported on their use of computer games during working hours. The data indicate that playing computer games in the workplace elicits substantial levels of recovery experience. Recovery experience associated with gameplay was the strongest predictor for the use of games in the workplace. Furthermore, individuals with higher levels of work-related fatigue reported stronger recovery experience during gameplay and showed a higher tendency to play games during working hours than did persons with lower levels of work strain. Additionally, the social situation at work was found to have a significant influence on the use of games. Persons receiving less social support from colleagues and supervisors played games at work more frequently than did individuals with higher levels of social support. Furthermore, job control was positively related to the use of games at work. In sum, the results of the present study illustrate that computer games have a significant recovery potential. Implications of these findings for research on personal computer use during work and for games research in general are discussed.
Bukowski, Henryk; Hietanen, Jari K.; Samson, Dana
2015-09-14
Two paradigms have shown that people automatically compute what or where another person is looking at. In the visual perspective-taking paradigm, participants judge how many objects they see; whereas, in the gaze cueing paradigm, participants identify a target. Unlike in the former task, in the latter task, the influence of what or where the other person is looking at is only observed when the other person is presented alone before the task-relevant objects. We show that this discrepancy across the two paradigms is not due to differences in visual settings (Experiment 1) or available time to extract the directional information (Experiment 2), but that it is caused by how attention is deployed in response to task instructions (Experiment 3). Thus, the mere presence of another person in the field of view is not sufficient to compute where/what that person is looking at, which qualifies the claimed automaticity of such computations. PMID:26924936
Computational analysis of conserved RNA secondary structure in transcriptomes and genomes.
Eddy, Sean R
2014-01-01
Transcriptomics experiments and computational predictions both enable systematic discovery of new functional RNAs. However, many putative noncoding transcripts arise instead from artifacts and biological noise, and current computational prediction methods have high false positive rates. I discuss prospects for improving computational methods for analyzing and identifying functional RNAs, with a focus on detecting signatures of conserved RNA secondary structure. An interesting new front is the application of chemical and enzymatic experiments that probe RNA structure on a transcriptome-wide scale. I review several proposed approaches for incorporating structure probing data into the computational prediction of RNA secondary structure. Using probabilistic inference formalisms, I show how all these approaches can be unified in a well-principled framework, which in turn allows RNA probing data to be easily integrated into a wide range of analyses that depend on RNA secondary structure inference. Such analyses include homology search and genome-wide detection of new structural RNAs.
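As a toy illustration of structure prediction "directed" by probing data (a didactic stand-in for the probabilistic frameworks the review unifies, with an invented weighting scheme): a Nussinov-style base-pair maximization in which high chemical reactivity, which flags likely-unpaired nucleotides, discounts the pairing reward:

```python
import numpy as np

def nussinov_with_probing(seq: str, react: np.ndarray, w: float = 1.0) -> float:
    """Maximize (reactivity-discounted) base pairs; returns the best score.
    Real tools instead add probing-derived pseudo-energies to a full
    thermodynamic model; this toy only conveys the coupling idea."""
    pairs = {("A","U"),("U","A"),("G","C"),("C","G"),("G","U"),("U","G")}
    n = len(seq)
    S = np.zeros((n, n))
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = max(S[i + 1, j], S[i, j - 1])
            if (seq[i], seq[j]) in pairs:
                # Pair reward, discounted by mean reactivity of i and j.
                best = max(best, S[i + 1, j - 1]
                           + 1.0 - w * (react[i] + react[j]) / 2.0)
            for k in range(i + 1, j):          # bifurcation
                best = max(best, S[i, k] + S[k + 1, j])
            S[i, j] = best
    return S[0, n - 1]

seq = "GGGAAAUCCC"
react = np.array([0.1] * 3 + [0.9] * 4 + [0.1] * 3)  # reactive loop region
print(nussinov_with_probing(seq, react))
```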
Computational multicore on two-layer 1D shallow water equations for erodible dambreak
NASA Astrophysics Data System (ADS)
Simanjuntak, C. A.; Bagustara, B. A. R. H.; Gunawan, P. H.
2018-03-01
The simulation of an erodible dambreak using the two-layer shallow water equations and the SCHR scheme is elaborated in this paper. The results show that the two-layer SWE model is in good agreement with the experimental data produced at the Université Catholique de Louvain, Louvain-la-Neuve. Moreover, results for the parallel algorithm on multicore architectures are given. They show that Computer I, with an Intel(R) Core(TM) i5-2500 Quad-Core processor, has the best performance in accelerating the computation, while Computer III, with an AMD A6-5200 APU Quad-Core processor, is observed to have the highest speedup and efficiency. The speedup and efficiency of Computer III with 3200 grid points are 3.716 and 92.9%, respectively.
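For the record, the reported figures are mutually consistent under the standard definitions of speedup and parallel efficiency on p = 4 cores:

```latex
S_p = \frac{T_1}{T_p} = 3.716, \qquad
E_p = \frac{S_p}{p} = \frac{3.716}{4} \approx 0.929 = 92.9\%
```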
Computation of shock wave/target interaction
NASA Technical Reports Server (NTRS)
Mark, A.; Kutler, P.
1983-01-01
Computational results of shock waves impinging on targets and the ensuing diffraction flowfield are presented. A number of two-dimensional cases are computed with finite difference techniques. The classical case of a shock wave/cylinder interaction is compared with shock tube data and shows the quality of the computations on a pressure-time plot. Similar results are obtained for a shock wave/rectangular body interaction. Here resolution becomes important, and the use of grid clustering techniques tends to show good agreement with experimental data. Computational results are also compared with pressure data resulting from shock impingement experiments for a complicated truck-like geometry. Here of significance are the grid generation and clustering techniques used. For these very complicated bodies, grids are generated by numerically solving a set of elliptic partial differential equations.
NASA Technical Reports Server (NTRS)
Arya, L. M. (Principal Investigator)
1980-01-01
Predictive procedures for developing soil hydrologic properties (i.e., relationships of soil water pressure and hydraulic conductivity to soil water content) are presented. Three models of the soil water pressure-water content relationship and one model of the hydraulic conductivity-water content relationship are discussed. Input requirements for the models are indicated, and computational procedures are outlined. Computed hydrologic properties for Keith silt loam, a soil type near Colby, Kansas, on which the 1978 Agricultural Soil Moisture Experiment was conducted, are presented. A comparison of computed results with experimental data in the dry range shows that analytical models utilizing a few basic hydrophysical parameters can produce satisfactory data for large-scale applications.
Computer Pure-Tone and Operator Stress: Report III.
ERIC Educational Resources Information Center
Dow, Caroline; Covert, Douglas C.
Pure-tone sound at 15,750 Hertz generated by flyback transformers in many computer and video display terminal (VDT) monitors has stress-related productivity effects in some operators, especially women. College-age women in a controlled experiment simulating half a normal work day showed responses within the first half hour of exposure to a tone…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Thomas Martin; Celik, Cihangir; Dunn, Michael E
In October 2010, a series of benchmark experiments were conducted at the French Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA) Valduc SILENE facility. These experiments were a joint effort between the United States Department of Energy Nuclear Criticality Safety Program and the CEA. The purpose of these experiments was to create three benchmarks for the verification and validation of radiation transport codes and evaluated nuclear data used in the analysis of criticality accident alarm systems. This series of experiments consisted of three single-pulsed experiments with the SILENE reactor. For the first experiment, the reactor was bare (unshielded), whereas in the second and third experiments, it was shielded by lead and polyethylene, respectively. The polyethylene shield of the third experiment had a cadmium liner on its internal and external surfaces, which was located vertically near the fuel region of SILENE. During each experiment, several neutron activation foils and thermoluminescent dosimeters (TLDs) were placed around the reactor. Nearly half of the foils and TLDs had additional high-density magnetite concrete, high-density barite concrete, standard concrete, and/or BoroBond shields. CEA Saclay provided all the concrete, and the US Y-12 National Security Complex provided the BoroBond. Measurement data from the experiments were published at the 2011 International Conference on Nuclear Criticality (ICNC 2011) and the 2013 Nuclear Criticality Safety Division (NCSD 2013) topical meeting. Preliminary computational results for the first experiment were presented in the ICNC 2011 paper, which showed poor agreement between the computational results and the measured values of the foils shielded by concrete. Recently the hydrogen content, boron content, and density of these concrete shields were further investigated within the constraints of the previously available data. New computational results for the first experiment are now available that show much better agreement with the measured values.
Pasma, Jantsje H.; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C.
2018-01-01
The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted, first, time-domain simulations with added noise and, second, robot experiments implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and in real-world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from the human experiments was implemented in Simulink for computer simulations including noise in the time domain and for robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, frequency response functions, and estimated parameters from the human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed balance behavior and estimated control parameters similar to the human experiments, in both the time and frequency domains. The IC model was also able to control the humanoid robot by keeping it upright, though with small differences from the human experiments in the time and frequency domains, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior in the time domain as well, both in computer simulations with added noise and in real-world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control. PMID:29615886
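As a time-domain companion to the abstract, the following minimal sketch simulates an inverted-pendulum stance model with delayed PD feedback and added noise, which is the general structure the IC model instantiates. The gains, delay, and noise level are illustrative assumptions, not the parameters estimated in the study.

```python
import numpy as np

J, MGH = 80.0, 700.0              # moment of inertia, gravitational stiffness
KP, KD, TAU = 1200.0, 350.0, 0.1  # feedback gains and neural delay (s), assumed
DT, T_END = 0.001, 60.0

n = int(T_END / DT)
delay = int(TAU / DT)
theta = np.zeros(n)               # body-sway angle (rad)
omega = np.zeros(n)               # angular velocity (rad/s)
rng = np.random.default_rng(0)

for k in range(1, n):
    kd = max(k - delay, 0)        # controller sees the delayed state
    torque = -KP * theta[kd] - KD * omega[kd] + 2.0 * rng.standard_normal()
    alpha = (MGH * theta[k - 1] + torque) / J   # small-angle gravity term
    omega[k] = omega[k - 1] + alpha * DT
    theta[k] = theta[k - 1] + omega[k] * DT
```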
Puskaric, Marin; von Helversen, Bettina; Rieskamp, Jörg
2017-08-01
Social information such as observing others can improve performance in decision making. In particular, social information has been shown to be useful when finding the best solution on one's own is difficult, costly, or dangerous. However, past research suggests that when making decisions people do not always consider other people's behaviour when it is at odds with their own experiences. Furthermore, the cognitive processes guiding the integration of social information with individual experiences are still under debate. Here, we conducted two experiments to test whether information about other persons' behaviour influenced people's decisions in a classification task. Furthermore, we examined how social information is integrated with individual learning experiences by testing different computational models. Our results show that social information had a small but reliable influence on people's classifications. The best computational model suggests that in categorization people first make up their own mind based on the non-social information, which is then updated by the social information.
Mass storage system experiences and future needs at the National Center for Atmospheric Research
NASA Technical Reports Server (NTRS)
Olear, Bernard T.
1991-01-01
A summary and viewgraphs of a discussion presented at the National Space Science Data Center (NSSDC) Mass Storage Workshop are included. Some of the experiences of the Scientific Computing Division at the National Center for Atmospheric Research (NCAR) in dealing with the 'data problem' are discussed. A brief history and a development of some basic mass storage system (MSS) principles are given. An attempt is made to show how these principles apply to the integration of various components into NCAR's MSS. Future MSS needs for future computing environments are discussed.
NASA Astrophysics Data System (ADS)
Gilbert-Valencia, Daniel H.
California community colleges contribute alarmingly few computer science degree or certificate earners. While the literature shows clear K-12 impediments to CS matriculation in higher education, very little is known about the experiences of those who overcome initial impediments to CS yet do not persist through to program completion. This phenomenological study explores insights into that specific experience by interviewing underrepresented, low-income, first-generation college students who began community college intending to transfer to 4-year institutions majoring in CS but switched to another field and remain enrolled or graduated. This study explores the lived experiences of students facing barriers, their avenues for developing interest in CS, and the persistence support systems they encountered, specifically looking at how students constructed their academic choice from these experiences. The growing diversity within California's population necessitates that experiences specific to underrepresented students be considered as part of this exploration. Ten semi-structured interviews and observations were conducted, transcribed, and coded. Artifacts supporting student experiences were also collected. Data were analyzed through a social-constructivist lens to provide insight into experiences and how they can be navigated to create actionable strategies for community college computer science departments wishing to increase student success. Three major themes emerged from this research: (1) students shared pre-college characteristics; (2) faced similar challenges in college CS courses; and (3) shared similar reactions to the "work" of computer science. Results of the study included (1) CS interest development hinged on computer ownership in the home; (2) participants shared characteristics that were ideal for college success but not CS success; and (3) encounters in CS departments produced unique challenges for participants. Though CS interest was and remains abundant, opportunities for learning programming skills before college were non-existent and there were few opportunities in college to build skills or establish peer support networks. Recommendations for institutional leaders and further research are also provided.
A study of Mariner 10 flight experiences and some flight piece part failure rate computations
NASA Technical Reports Server (NTRS)
Paul, F. A.
1976-01-01
The problems and failures encountered in the Mariner 10 flight are discussed, and the data available through a quantitative accounting of all electronic piece parts on the spacecraft are summarized. Computed failure rates for electronic piece parts are also shown. It is intended that these computed data be used in the continued updating of the failure rate base used for trade-off studies and predictions for future JPL space missions.
The development of acoustic experiments for off-campus teaching and learning
NASA Astrophysics Data System (ADS)
Wild, Graham; Swan, Geoff
2011-05-01
In this article, we show the implementation of a computer-based digital storage oscilloscope (DSO) and function generator (FG) using the computer's soundcard for off-campus acoustic experiments. The microphone input is used for the DSO, and a speaker jack is used as the FG. In an effort to reduce the cost of implementing the experiment, we examine software available for free online. A small number of applications were compared in terms of their interface and functionality, for both the DSO and the FG. The software was then used to investigate standing waves in pipes using the computer-based DSO. Standing wave theory taught in high school and in first-year physics is based on a one-dimensional model. With the use of the DSO's fast Fourier transform function, the experimental uncertainty alone was not sufficient to account for the difference observed between the measured and calculated frequencies. Hence the original experiment was expanded to include the end-correction effect. The DSO was also used for other simple acoustics experiments, in areas such as the physics of music.
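To make the end-correction point concrete, the short sketch below compares the ideal one-dimensional resonances of a pipe closed at one end with those computed using the standard 0.6 r correction for the open end; the pipe dimensions are illustrative, not those used in the article.

```python
V_SOUND = 343.0   # speed of sound in air at room temperature (m/s)

def resonances(length, radius, n_modes=4, corrected=True):
    """Odd-harmonic resonances (Hz) of a pipe closed at one end.
    With corrected=True the effective length includes the 0.6*r
    end correction for the single open end."""
    l_eff = length + (0.6 * radius if corrected else 0.0)
    return [(2 * n - 1) * V_SOUND / (4.0 * l_eff)
            for n in range(1, n_modes + 1)]

# A 30 cm pipe of 2 cm radius: the correction lowers every resonance.
print(resonances(0.30, 0.02, corrected=False))
print(resonances(0.30, 0.02, corrected=True))
```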
An experiment on the use of disposable plastics as a reinforcement in concrete beams
NASA Technical Reports Server (NTRS)
Chowdhury, Mostafiz R.
1992-01-01
The concept of reinforced concrete structures is illustrated here through computer simulation and an inexpensive hands-on design experiment. The students in our construction management program use disposable plastic as a reinforcement to demonstrate their understanding of reinforced concrete and prestressed concrete beams. The plastics used for such an experiment vary from plastic bottles to steel-reinforced auto tires. The experiment shows the extent to which plastic reinforcement increases the strength of a concrete beam. The procedure of using such throw-away plastics in an experiment to explain the interaction between the reinforcement material and concrete, and a comparison of the test results for different types of waste plastics, are discussed. A computer analysis simulating the structural response is used to compare with the test results and to convey the analytical background of reinforced concrete design. This interaction of using computers to analyze structures and relating the output to real experimentation has proved a very useful method for teaching a math-based analytical subject to our non-engineering students.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Virtanen, E.; Haapalehto, T.; Kouhia, J.
1995-09-01
Three experiments were conducted to study the behavior of the new horizontal steam generator construction of the PACTEL test facility. In the experiments the secondary-side coolant level was reduced stepwise. The experiments were calculated with two computer codes, RELAP5/MOD3.1 and APROS version 2.11. A similar nodalization scheme was used for both codes so that the results may be compared. Only the steam generator was modelled and the rest of the facility was given as a boundary condition. The results show that both codes calculate the behaviour of the primary side of the steam generator well. On the secondary side, both codes calculate lower steam temperatures in the upper part of the heat exchange tube bundle than was measured in the experiments.
Spin-based quantum computation in multielectron quantum dots
NASA Astrophysics Data System (ADS)
Hu, Xuedong; Das Sarma, S.
2001-10-01
In a quantum computer the hardware and software are intrinsically connected because the quantum Hamiltonian (or more precisely its time development) is the code that runs the computer. We demonstrate this subtle and crucial relationship by considering the example of electron-spin-based solid-state quantum computer in semiconductor quantum dots. We show that multielectron quantum dots with one valence electron in the outermost shell do not behave simply as an effective single-spin system unless special conditions are satisfied. Our work compellingly demonstrates that a delicate synergy between theory and experiment (between software and hardware) is essential for constructing a quantum computer.
The Transfer of Abstract Principles Governing Complex Adaptive Systems
ERIC Educational Resources Information Center
Goldstone, Robert L.; Sakamoto, Yasuaki
2003-01-01
Four experiments explored participants' understanding of the abstract principles governing computer simulations of complex adaptive systems. Experiments 1, 2, and 3 showed better transfer of abstract principles across simulations that were relatively dissimilar, and that this effect was due to participants who performed relatively poorly on the…
NASA Technical Reports Server (NTRS)
2004-01-01
A new technology for reducing turbulent skin friction, called the Microblowing Technique (MBT), is presented. Results from proof-of-concept experiments show that this technology could potentially reduce turbulent skin friction by more than 50% relative to a solid flat plate for subsonic and supersonic flow conditions. The primary purpose of this review paper is to provide readers with information on the turbulent skin friction reduction obtained from many experiments using the MBT. Although the MBT carries a penalty associated with supplying the microblowing air, some combinations of the MBT with suction boundary layer control methods are an attractive alternative for a real application. Several computational simulations to understand the flow physics of the MBT are also included. More experiments and computational fluid dynamics (CFD) computations are needed to understand the unsteady flow nature of the MBT and to optimize this new technology.
Experimental Investigation of Jet Impingement Heat Transfer Using Thermochromic Liquid Crystals
NASA Technical Reports Server (NTRS)
Dempsey, Brian Paul
1997-01-01
Jet impingement cooling of a hypersonic airfoil leading edge is experimentally investigated using thermochromic liquid crystals (TLCs) to measure surface temperature. The experiment uses computer data acquisition with digital imaging of the TLCs to determine heat transfer coefficients during a transient experiment. The data reduction relies on analysis of a coupled transient conduction-convection heat transfer problem that characterizes the experiment. The recovery temperature of the jet is accounted for by running two experiments with different heating rates, thereby generating a second equation that is used to solve for the recovery temperature. The resulting solution requires a complicated numerical iteration that is handled by a computer. Because the computational data reduction method is complex, special attention is paid to error assessment. The error analysis considers random and systematic errors generated by the instrumentation along with errors generated by the approximate nature of the numerical methods. Results of the error analysis show that the experimentally determined heat transfer coefficients are accurate to within 15%. The error analysis also shows that the recovery temperature data may be in error by more than 50%. The results show that the recovery temperature data are only reliable when the recovery temperature of the jet is greater than 5 °C, i.e., the jet velocity is in excess of 100 m/s. Parameters that were investigated include nozzle width, distance from the nozzle exit to the airfoil surface, and jet velocity. Heat transfer data are presented in graphical and tabular forms. An engineering analysis of hypersonic airfoil leading edge cooling is performed using the results from these experiments. Several suggestions for the improvement of the experimental technique are discussed.
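The core inversion in this kind of transient TLC data reduction is the semi-infinite-solid response to a step change in gas temperature; the sketch below solves that single equation for h at a measured colour-change event, while the thesis method pairs two such equations from two heating rates to also eliminate the unknown recovery temperature. The substrate property value and measured inputs here are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfcx   # erfcx(x) = exp(x^2)*erfc(x), overflow-safe

RHO_C_K = 1.6e6    # product rho*c*k of the substrate (SI units), assumed

def wall_response(h, t):
    """Dimensionless wall-temperature rise (T_w - T_i)/(T_gas - T_i)
    of a semi-infinite solid at time t under heat transfer coefficient h."""
    beta = h * np.sqrt(t) / np.sqrt(RHO_C_K)
    return 1.0 - erfcx(beta)

def solve_h(theta_measured, t_event):
    """Invert the response for h given the measured temperature ratio
    at the TLC colour-change time t_event (s)."""
    return brentq(lambda h: wall_response(h, t_event) - theta_measured,
                  1e-2, 1e5)

print(solve_h(0.4, 12.0))   # illustrative event -> h of order 200 W/m^2/K
```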
NASA Astrophysics Data System (ADS)
Hassan, Irtaza; Donati, Luca; Stensitzki, Till; Keller, Bettina G.; Heyne, Karsten; Imhof, Petra
2018-04-01
We have combined infrared (IR) experiments with molecular dynamics (MD) simulations in solution at finite temperature to analyse the vibrational signature of the small floppy peptide Alanine-Leucine. IR spectra computed from first-principles MD simulations exhibit no distinct differences between conformational clusters of α-helix- or β-sheet-like folds with different orientations of the bulky leucine side chain. All computed spectra show two prominent bands, in good agreement with the experiment, that are assigned to the stretch vibrations of the carbonyl and carboxyl group, respectively. Variations in band widths and exact maxima are likely due to small fluctuations in the backbone torsion angles.
ERIC Educational Resources Information Center
Kim, Jieun; Ryu, Hokyoung; Katuk, Norliza; Wang, Ruili; Choi, Gyunghyun
2014-01-01
The present study aims to show if a skill-challenge balancing (SCB) instruction strategy can assist learners to motivationally engage in computer-based learning. Csikszentmihalyi's flow theory (self-control, curiosity, focus of attention, and intrinsic interest) was applied to an account of the optimal learning experience in SCB-based learning…
Learning and Teaching with a Computer Scanner
ERIC Educational Resources Information Center
Planinsic, G.; Gregorcic, B.; Etkina, E.
2014-01-01
This paper introduces the readers to simple inquiry-based activities (experiments with supporting questions) that one can do with a computer scanner to help students learn and apply the concepts of relative motion in 1 and 2D, vibrational motion and the Doppler effect. We also show how to use these activities to help students think like…
Slimeware: engineering devices with slime mold.
Adamatzky, Andrew
2013-01-01
The plasmodium of the acellular slime mold Physarum polycephalum is a gigantic single cell visible to the unaided eye. The cell shows a rich spectrum of behavioral patterns in response to environmental conditions. In a series of simple experiments we demonstrate how to make computing, sensing, and actuating devices from the slime mold. We show how to program living slime mold machines by configurations of repelling and attracting gradients and demonstrate the workability of the living machines on tasks of computational geometry, logic, and arithmetic.
Experimental Blind Quantum Computing for a Classical Client.
Huang, He-Liang; Zhao, Qi; Ma, Xiongfeng; Liu, Chang; Su, Zu-En; Wang, Xi-Lin; Li, Li; Liu, Nai-Le; Sanders, Barry C; Lu, Chao-Yang; Pan, Jian-Wei
2017-08-04
To date, blind quantum computing demonstrations require clients to have weak quantum devices. Here we implement a proof-of-principle experiment for completely classical clients. Via classically interacting with two quantum servers that share entanglement, the client accomplishes the task of having the number 15 factorized by servers who are denied information about the computation itself. This concealment is accompanied by a verification protocol that tests servers' honesty and correctness. Our demonstration shows the feasibility of completely classical clients and thus is a key milestone towards secure cloud quantum computing.
Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm
NASA Astrophysics Data System (ADS)
Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Twomey, M. S.; Zaytsev, A.
2014-06-01
With the LHC collider at CERN currently going through the period of Long Shutdown 1, there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. OpenStack has been chosen to provide the cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.
Hagen, Monika E; Wagner, Oliver J; Inan, Ihsan; Morel, Philippe
2009-09-01
Due to improved ergonomics and dexterity, robotic surgery is promoted as being easily performed by surgeons with no special skills necessary. We tested this hypothesis by measuring IQ elements, computer gaming skills, general dexterity with chopsticks, and evaluating laparoscopic experience in correlation to performance ability with the da Vinci robot. Thirty-four individuals were tested for robotic dexterity, IQ elements, computer-gaming skills and general dexterity. Eighteen surgically inexperienced and 16 laparoscopically trained surgeons were included. Each individual performed three different tasks with the da Vinci surgical system and their times were recorded. An IQ test (elements: logical thinking, 3D imagination and technical understanding) was completed by each participant. Computer skills were tested with a simple computer game (hand-eye coordination) and general dexterity was evaluated by the ability to use chopsticks. We found no correlation between logical thinking, 3D imagination and robotic skills. Both computer gaming and general dexterity showed a slight but non-significant improvement in performance with the da Vinci robot (p > 0.05). A significant correlation between robotic skills, technical understanding and laparoscopic experience was observed (p < 0.05). The data support the conclusion that there are no significant correlations between robotic performance and logical thinking, 3D understanding, computer gaming skills and general dexterity. A correlation between robotic skills and technical understanding may exist. Laparoscopic experience seems to be the strongest predictor of performance with the da Vinci surgical system. Generally, it appears difficult to determine non-surgical predictors for robotic surgery.
NASA Astrophysics Data System (ADS)
Grinevich, P. G.; Santini, P. M.
2007-08-01
We study the complexification of the one-dimensional Newtonian particle in a monomial potential. We discuss two classes of motions on the associated Riemann surface: the rectilinear and the cyclic motions, corresponding to two different classes of real and autonomous Newtonian dynamics in the plane. The rectilinear motion has been studied in a number of papers, while the cyclic motion is much less understood. For small data, the cyclic time trajectories lead to isochronous dynamics. For larger data the situation is quite complicated; computer experiments show that, for sufficiently small degree of the monomial, the motion is generically isochronous with integer period, which depends quite sensitively on the initial data. If the degree of the monomial is sufficiently high, computer experiments show essentially chaotic behavior. We suggest a possible theoretical explanation of these different behaviors. We also introduce a two-parameter family of two-dimensional mappings, describing the motion of the center of the circle, as a convenient representation of the cyclic dynamics; we call such a mapping the center map. Computer experiments for the center map show a typical multifractal behavior with periodicity islands. Therefore the above complexification procedure generates dynamics amenable to analytic treatment and possessing a high degree of complexity.
Sukums, Felix; Mensah, Nathan; Mpembeni, Rose; Kaltschmidt, Jens; Haefeli, Walter E.; Blank, Antje
2014-01-01
Background The QUALMAT (Quality of Maternal and Prenatal Care: Bridging the Know-do Gap) project has introduced an electronic clinical decision support system (CDSS) for pre-natal and maternal care services in rural primary health facilities in Burkina Faso, Ghana, and Tanzania. Objective To report an assessment of health providers' computer knowledge, experience, and attitudes prior to the implementation of the QUALMAT electronic CDSS. Design A cross-sectional study was conducted with providers in 24 QUALMAT project sites. Information was collected using structured questionnaires. Chi-squared tests and one-way ANOVA were used to describe the associations between computer knowledge, attitudes, and other factors. Semi-structured interviews and focus groups were conducted to gain further insights. Results A total of 108 providers responded; 63% were from Tanzania and 37% from Ghana. The mean age was 37.6 years, and 79% were female. Only 40% had ever used computers, and 29% had prior computer training. About 80% were computer illiterate or beginners. Educational level, age, and years of work experience were significantly associated with computer knowledge (p<0.01). Most (95.3%) had positive attitudes towards computers, with an average score (±SD) of 37.2 (±4.9). Females had significantly lower scores than males. Interviews and group discussions showed that although most were lacking computer knowledge and experience, they were optimistic about overcoming challenges associated with the introduction of computers in their workplace. Conclusions Given the low levels of computer knowledge among rural health workers in Africa, it is important to provide adequate training and support to ensure the successful uptake of electronic CDSSs in these settings. The positive attitudes to computers found in this study underscore that rural care providers, too, are ready to use such technology. PMID:25361721
In vitro molecular machine learning algorithm via symmetric internal loops of DNA.
Lee, Ji-Hoon; Lee, Seung Hwan; Baek, Christina; Chun, Hyosun; Ryu, Je-Hwan; Kim, Jin-Woo; Deaton, Russell; Zhang, Byoung-Tak
2017-08-01
Programmable biomolecules, such as DNA strands, deoxyribozymes, and restriction enzymes, have been used to solve computational problems, construct large-scale logic circuits, and program simple molecular games. Although studies have shown the potential of molecular computing, the capability of computational learning with DNA molecules, i.e., molecular machine learning, has yet to be experimentally verified. Here, we present a novel in vitro molecular learning model in which symmetric internal loops of double-stranded DNA are exploited to measure the differences between training instances, thus enabling the molecules to learn from small errors. The model was evaluated on a data set of twenty dialogue sentences obtained from the television shows Friends and Prison Break. The wet DNA-computing experiments confirmed that the molecular learning machine was able to generalize the dialogue patterns of each show and successfully identify the show from which the sentences originated. The molecular machine learning model described here opens the way for solving machine learning problems in computer science and biology using in vitro molecular computing with the data encoded in DNA molecules.
On Undecidability Aspects of Resilient Computations and Implications to Exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S
2014-01-01
Future Exascale computing systems with a large number of processors, memory elements, and interconnection links are expected to experience multiple, complex faults, which affect both applications and operating-runtime systems. A variety of algorithms, frameworks, and tools are being proposed to realize and/or verify the resilience properties of computations that guarantee correct results on failure-prone computing systems. We analytically show that certain resilient computation problems in the presence of general classes of faults are undecidable, that is, no algorithms exist for solving them. We first show that the membership verification in a generic set of resilient computations is undecidable. We describe classes of faults that can create infinite loops or non-halting computations, whose detection in general is undecidable. We then show certain resilient computation problems to be undecidable by using reductions from the loop detection and halting problems under two formulations, namely, an abstract programming language and Turing machines, respectively. These two reductions highlight different failure effects: the former represents program and data corruption, and the latter illustrates incorrect program execution. These results call for broad-based, well-characterized resilience approaches that complement purely computational solutions using methods such as hardware monitors, co-designs, and system- and application-specific diagnosis codes.
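For readers who want the shape of such an argument, the following schematic shows the standard halting-problem reduction pattern the abstract alludes to; the set name and construction are illustrative, not the paper's exact formulation.

```latex
\begin{align*}
&\text{Given } \langle M, w \rangle, \text{ build the computation } P_{M,w}:
 \text{ simulate } M \text{ on } w \text{ and, upon halting, emit a correct result.}\\
&\text{Then } P_{M,w} \in \textsc{Resilient} \iff M \text{ halts on } w.\\
&\text{Hence a decider for } \textsc{Resilient} \text{ would decide the halting problem, a contradiction.}
\end{align*}
```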
Kangas, Brian D; Berry, Meredith S; Cassidy, Rachel N; Dallery, Jesse; Vaidya, Manish; Hackenberg, Timothy D
2009-10-01
Adult human subjects engaged in a simulated Rock/Paper/Scissors game against a computer opponent. The computer opponent's responses were determined by programmed probabilities that differed across 10 blocks of 100 trials each. Response allocation in Experiment 1 was well described by a modified version of the generalized matching equation, with undermatching observed in all subjects. To assess the effects of instructions on response allocation, accurate probability-related information on how the computer was programmed to respond was provided to subjects in Experiment 2. Five of 6 subjects played the counter response of the computer's dominant programmed response near-exclusively (e.g., subjects played paper almost exclusively if the probability of rock was high), resulting in minor overmatching, and higher reinforcement rates relative to Experiment 1. On the whole, the study shows that the generalized matching law provides a good description of complex human choice in a gaming context, and illustrates a promising set of laboratory methods and analytic techniques that capture important features of human choice outside the laboratory.
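As a reference for the analysis described, the generalized matching equation is log(B1/B2) = a·log(R1/R2) + log b, with sensitivity a < 1 corresponding to the undermatching reported. The sketch below fits it by least squares; the response and reinforcer ratios are made up for illustration, not the study's data.

```python
import numpy as np

resp_ratio = np.array([0.4, 0.7, 1.0, 1.6, 2.8])    # B1/B2 per block (invented)
reinf_ratio = np.array([0.2, 0.5, 1.0, 2.0, 5.0])   # R1/R2 per block (invented)

# Linear fit in log-log space: slope = sensitivity a, intercept = log(bias b).
a, log_b = np.polyfit(np.log(reinf_ratio), np.log(resp_ratio), 1)
print(f"sensitivity a = {a:.2f} (a < 1 indicates undermatching), "
      f"bias b = {np.exp(log_b):.2f}")
```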
Strategic flexibility in computational estimation for Chinese- and Canadian-educated adults.
Xu, Chang; Wells, Emma; LeFevre, Jo-Anne; Imbo, Ineke
2014-09-01
The purpose of the present study was to examine factors that influence strategic flexibility in computational estimation for Chinese- and Canadian-educated adults. Strategic flexibility was operationalized as the percentage of trials on which participants chose the problem-based procedure that best balanced proximity to the correct answer with simplification of the required calculation. For example, on 42 × 57, the optimal problem-based solution is 40 × 60 because 2,400 is closer to the exact answer 2,394 than is 40 × 50 or 50 × 60. In Experiment 1 (n = 50), where participants had free choice of estimation procedures, Chinese-educated participants were more likely to choose the optimal problem-based procedure (80% of trials) than Canadian-educated participants (50%). In Experiment 2 (n = 48), participants had to choose 1 of 3 solution procedures. They showed moderate strategic flexibility that was equal across groups (60%). In Experiment 3 (n = 50), participants were given the same 3 procedure choices as in Experiment 2 but different instructions and explicit feedback. When instructed to respond quickly, both groups showed moderate strategic flexibility as in Experiment 2 (60%). When instructed to respond as accurately as possible or to balance speed and accuracy, they showed very high strategic flexibility (greater than 90%). These findings suggest that solvers will show very different levels of strategic flexibility in response to instructions, feedback, and problem characteristics and that these factors interact with individual differences (e.g., arithmetic skills, nationality) to produce variable response patterns.
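To make the procedure-selection criterion concrete, here is a small sketch that, for a two-digit product, picks the pair of rounded-to-ten operands whose product lies closest to the exact answer. This captures only the proximity half of the balance described above, and the function names are ours.

```python
def tens(x):
    """The two multiples of ten bracketing x."""
    lo = (x // 10) * 10
    return lo, lo + 10

def best_estimate(a, b):
    """Rounded operand pair whose product is closest to the exact answer."""
    exact = a * b
    options = [(ra * rb, ra, rb) for ra in tens(a) for rb in tens(b)]
    est, ra, rb = min(options, key=lambda o: abs(o[0] - exact))
    return ra, rb, est

print(best_estimate(42, 57))   # -> (40, 60, 2400), closest to the exact 2394
```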
Advancements in RNASeqGUI towards a Reproducible Analysis of RNA-Seq Experiments
Russo, Francesco; Righelli, Dario
2016-01-01
We present the advancements and novelties recently introduced in RNASeqGUI, a graphical user interface that helps biologists to handle and analyse large data collected in RNA-Seq experiments. This work focuses on the concept of reproducible research and shows how it has been incorporated in RNASeqGUI to provide reproducible (computational) results. The novel version of RNASeqGUI combines graphical interfaces with tools for reproducible research, such as literate statistical programming, human readable report, parallel executions, caching, and interactive and web-explorable tables of results. These features allow the user to analyse big datasets in a fast, efficient, and reproducible way. Moreover, this paper represents a proof of concept, showing a simple way to develop computational tools for Life Science in the spirit of reproducible research. PMID:26977414
A qualitative study of technophobic students' reactions to a technology-rich college science course
NASA Astrophysics Data System (ADS)
Guttschow, Gena Lee
The use of technology in education has grown rapidly in the last 20 years. In fact, many of today's college students have had some sort of computer in their elementary school classrooms. One might think that this consistent exposure to computers would foster positive attitudes about computers, but this is not always the case. Currently, a substantial number of college students dislike interacting with technology. People who dislike interacting with technology are often referred to as "technophobic". Technophobic people have negative thoughts and feelings about technology and they often have a desire to avoid interaction with technology. Technophobic students' negative feelings about technology have the potential to interfere with their learning when technology is utilized as a tool for instruction of school subjects. As computer use becomes prevalent and in many instances mandatory in education, the issue of technophobia increasingly needs to be understood and addressed. This is a qualitative study designed with the intent of gaining an understanding of the experiences of technophobic students who are required to use technology to learn science in a college class. Six developmental college students enrolled in a computer-based anatomy and physiology class were chosen to participate in the study based on their high technophobia scores. They were interviewed three times during the quarter and videotaped once. The interview data were transcribed, coded, and analyzed. The analysis resulted in six case studies describing each participant's experience and 11 themes representing overlapping areas in the participants' worlds of experience. A discussion of the themes, the meaning they hold for me as a science educator, and how they relate to the existing literature, is presented. The participants' descriptions of their experiences showed that the technophobic students did use the computers and learned skills when they had to in order to complete assignments. It was also revealed that the technophobic participants' negative attitudes did not improve after learning computer skills. Lastly, based on the participants' experiences it seems important to start a class with step-by-step computer training, teaching foundational computer skills, and slowly progress towards autonomous computer exploration.
The Effect of Social Grounding on Collaboration in a Computer-Mediated Small Group Discussion
ERIC Educational Resources Information Center
Ahern, Terence C.; Thomas, Julie A.; Tallent-Runnels, Mary K.; Lan, William Y.; Cooper, Sandra; Lu, Xiaoming; Cyrus, Jacqui
2006-01-01
There is a tremendous amount of pressure on educators to incorporate highly advanced computer-mediated communication (CMC) into the classroom, but the research shows that this is not an easy task. Part of the difficulty learners experience within current network applications is a lack of support from the design of the software for the development…
Computer-Based Experiment for Determining Planck's Constant Using LEDs
ERIC Educational Resources Information Center
Zhou, Feng; Cloninger, Todd
2008-01-01
Visible light emitting diodes (LEDs) have been widely used as power indicators. However, after the power is switched off, it takes a while for the LED to go off. Many students were fascinated by this simple demonstration. In this paper, by making use of computer-based data acquisition and modeling, we show the voltage across the LED undergoing an…
ERIC Educational Resources Information Center
Benbow, Ross J.; Vivyan, Erika
2016-01-01
Building from findings showing that undergraduate computer science continues to have the highest attrition rates proportionally for women within postsecondary science, technology, engineering, and mathematics disciplines--a phenomenon that defies basic social equity goals in a high status field--this paper seeks to better understand how student…
Design of a specialized computer for on-line monitoring of cardiac stroke volume
NASA Technical Reports Server (NTRS)
Webb, J. A., Jr.; Gebben, V. D.
1972-01-01
The design of a specialized analog computer for on-line determination of cardiac stroke volume by means of a modified version of the pressure pulse contour method is presented. The design consists of an analog circuit for computation and a timing circuit for detecting necessary events on the pressure waveform. Readouts of arterial pressures, systolic duration, heart rate, percent change in stroke volume, and percent change in cardiac output are provided for monitoring cardiac patients. Laboratory results showed that computational accuracy was within 3 percent, while animal experiments verified the operational capability of the computer. Patient safety considerations are also discussed.
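As a digital illustration of the pulse-contour principle the analog computer implements, the sketch below integrates arterial pressure above its systole-onset level over the systolic interval to obtain a relative stroke-volume index. The sampling rate, the test waveform, and the assumption that systole's bounds are already detected are ours; the paper's circuit additionally derives those timing events from the waveform itself.

```python
import numpy as np

FS = 200.0   # sampling rate of the pressure waveform (Hz), assumed

def stroke_volume_index(pressure, sys_start, sys_end):
    """Relative stroke volume: rectangle-rule integral of arterial
    pressure above its value at systole onset, over the systolic
    interval [sys_start, sys_end) given in samples."""
    seg = pressure[sys_start:sys_end] - pressure[sys_start]
    return seg.sum() / FS   # arbitrary units until calibrated

# Illustrative half-sine "systolic" bump riding on a diastolic level.
t = np.arange(0, 0.3, 1.0 / FS)
wave = 80.0 + 40.0 * np.sin(np.pi * t / 0.3)
print(stroke_volume_index(wave, 0, len(wave)))
```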
NASA Technical Reports Server (NTRS)
Guruswamy, Guru
2004-01-01
A procedure to accurately generate aerodynamic influence coefficients (AIC) using a Navier-Stokes solver, including grid deformation, is presented. Preliminary results show good agreement between experimental and computed flutter boundaries for a rectangular wing. A full wing-body configuration of an orbital space plane is selected for demonstration on a large number of processors. In the final paper the AIC of the full wing-body configuration will be computed, and the scalability of the procedure on a supercomputer will be demonstrated.
Analysis of Biosignals During Immersion in Computer Games.
Yeo, Mina; Lim, Seokbeen; Yoon, Gilwon
2017-11-17
The number of computer game users is increasing as computers and various IT devices connected to the Internet become commonplace across all ages. In this research, to relate behavioral activity to its associated biosignals, biosignal changes before, during, and after computer games were measured and analyzed for 31 subjects. For this purpose, a device to measure electrocardiogram, photoplethysmogram, and skin temperature was developed such that the effect of motion artifacts could be minimized. The device was made wearable for convenient measurement. The game selected for the experiments was League of Legends™. Analysis of the pulse transit time, heart rate variability, and skin temperature showed increased sympathetic nerve activity during computer games, while the parasympathetic nerves became less active. Interestingly, the sympathetic predominance group showed less change in heart rate variability than the normal group. The results can be valuable for studying internet gaming disorder.
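For orientation, the sketch below computes two standard time-domain heart-rate-variability summaries from RR intervals; RMSSD is the usual short-term parasympathetic (vagal) marker. The interval values are invented for illustration, and the study's full analysis also covers pulse transit time and skin temperature.

```python
import numpy as np

def hrv_summary(rr_ms):
    """Time-domain HRV summaries from a list of RR intervals in ms."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),
        "sdnn_ms": rr.std(ddof=1),                  # overall variability
        "rmssd_ms": np.sqrt(np.mean(diffs ** 2)),   # vagal marker
    }

print(hrv_summary([812, 790, 845, 830, 801, 778, 820]))
```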
Sabouri, Sepideh; Matene, Elhacene; Vinet, Alain; Richer, Louis-Philippe; Cardinal, René; Armour, J Andrew; Pagé, Pierre; Kus, Teresa; Jacquemet, Vincent
2014-01-01
Epicardial high-density electrical mapping is a well-established experimental instrument to monitor in vivo the activity of the atria in response to modulations of the autonomic nervous system in sinus rhythm. In regions that are not accessible by epicardial mapping, noncontact endocardial mapping performed through a balloon catheter may provide a more comprehensive description of atrial activity. We developed a computer model of the canine right atrium to compare epicardial and noncontact endocardial mapping. The model was derived from an experiment in which electroanatomical reconstruction, epicardial mapping (103 electrodes), noncontact endocardial mapping (2048 virtual electrodes computed from a 64-channel balloon catheter), and direct-contact endocardial catheter recordings were simultaneously performed in a dog. The recording system was simulated in the computer model. For simulations and experiments (after atrio-ventricular node suppression), activation maps were computed during sinus rhythm. Repolarization was assessed by measuring the area under the atrial T wave (ATa), a marker of repolarization gradients. Results showed epicardial-endocardial correlation coefficients of 0.80 and 0.63 (two dog experiments) and 0.96 (simulation) between activation times, and correlation coefficients of 0.57 and 0.46 (two dog experiments) and 0.92 (simulation) between ATa values. Despite distance (balloon-atrial wall) and dimension reduction (64 electrodes), some information about atrial repolarization remained present in noncontact signals.
A method of non-contact reading code based on computer vision
NASA Astrophysics Data System (ADS)
Zhang, Chunsen; Zong, Xiaoyu; Guo, Bingxuan
2018-03-01
To guarantee secure computer information exchange between an internal and an external network (a trusted and an un-trusted network), a non-contact code-reading method based on machine vision is proposed, which differs from existing physical network isolation methods. Using a computer monitor, a camera, and other equipment, the information to be exchanged is processed in several steps: image coding, generation of the standard image, display and capture of the actual image, computation of the homography matrix, image distortion correction, and decoding. This achieves secure, non-contact, one-way transmission of computer information between the internal and external networks. The effectiveness of the proposed method is verified by experiments on real computer text data, with a data transfer speed of 24 kb/s. The experiments show that the algorithm offers high security, high speed, and little loss of information, which can meet the daily needs of confidentiality departments to update data effectively and reliably. It resolves the difficulty of exchanging computer information between a secret network and a non-secret network, with distinctive originality, practicability, and practical research value.
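The geometric core of the pipeline, homography estimation followed by perspective-distortion correction, can be sketched with OpenCV as below; the corner coordinates stand in for a detector's output and the file names are placeholders, so this illustrates the step rather than reproducing the paper's implementation.

```python
import cv2
import numpy as np

# Four corners of the on-screen code as detected in the camera image
# (placeholder values; a real pipeline detects these automatically).
corners_cam = np.array([[102, 88], [518, 95], [509, 472], [96, 465]],
                       dtype=np.float32)
W, H = 480, 480
corners_ref = np.array([[0, 0], [W, 0], [W, H], [0, H]], dtype=np.float32)

frame = cv2.imread("capture.png")                 # camera shot of the monitor
Hmat, _ = cv2.findHomography(corners_cam, corners_ref)
rectified = cv2.warpPerspective(frame, Hmat, (W, H))
cv2.imwrite("rectified.png", rectified)           # input to the decoder stage
```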
2013-01-01
Objective. This study compared the relationship between computer experience and performance on computerized cognitive tests and a traditional paper-and-pencil cognitive test in a sample of older adults (N = 634). Method. Participants completed computer experience and computer attitudes questionnaires, three computerized cognitive tests (Useful Field of View (UFOV) Test, Road Sign Test, and Stroop task) and a paper-and-pencil cognitive measure (Trail Making Test). Multivariate analysis of covariance was used to examine differences in cognitive performance across the four measures between those with and without computer experience after adjusting for confounding variables. Results. Although computer experience had a significant main effect across all cognitive measures, the effect sizes were similar. After controlling for computer attitudes, the relationship between computer experience and UFOV was fully attenuated. Discussion. Findings suggest that computer experience is not uniquely related to performance on computerized cognitive measures compared with paper-and-pencil measures. Because the relationship between computer experience and UFOV was fully attenuated by computer attitudes, this may imply that motivational factors are more influential to UFOV performance than computer experience. Our findings support the hypothesis that computer use is related to cognitive performance, and this relationship is not stronger for computerized cognitive measures. Implications and directions for future research are provided. PMID:22929395
NASA Astrophysics Data System (ADS)
Antoine, Marilyn V.
2011-12-01
The purpose of this research was to extend earlier research on sources of self-efficacy (Lent, Lopez, & Biechke, 1991; Usher & Pajares, 2009) to the information technology domain. The principal investigator examined how Bandura's (1977) sources of self-efficacy information (mastery experience, vicarious experience, verbal persuasion, and physiological states) shape computer self-efficacy beliefs and influence the decision to use or not use computers. The study took place at a mid-sized Historically Black College or University in the South. A convenience sample of 105 undergraduates was drawn from students enrolled in multiple sections of two introductory computer courses. There were 67 females and 38 males. This research was a correlational study of the following variables: sources of computer self-efficacy, general computer self-efficacy, outcome expectations, computer anxiety, and intention to use computers. The principal investigator administered a survey questionnaire containing 52 Likert items to measure the major study variables. Additionally, the survey instrument collected demographic variables such as gender, age, race, intended major, classification, technology use, technology adoption category, and whether the student owns a computer. The results reveal the following: (1) Mastery experience and verbal persuasion had statistically significant relationships to general computer self-efficacy, while vicarious experience and physiological states had non-significant relationships. Mastery experience had the strongest correlation to general computer self-efficacy. (2) All of the sources of computer self-efficacy had statistically significant relationships to personal outcome expectations. Vicarious experience had the strongest correlation to personal outcome expectations. (3) All of the sources of self-efficacy had statistically significant relationships to performance outcome expectations. Vicarious experience had the strongest correlation to performance outcome expectations. (4) Mastery experience and physiological states had statistically significant relationships to computer anxiety, while vicarious experience and verbal persuasion had non-significant relationships. Physiological states had the strongest correlation to computer anxiety. (5) Mastery experience, vicarious experience, and physiological states had statistically significant relationships to intention to use computers, while verbal persuasion had a non-significant relationship. Mastery experience had the strongest correlation to intention to use computers. Gender-related findings indicate that females reported higher average mastery experience, vicarious experience, physiological states, and intention to use computers than males. Females reported lower average general computer self-efficacy, computer anxiety, verbal persuasion, personal outcome expectations, and performance outcome expectations than males. The results of this study can be used to develop strategies for increasing general computer self-efficacy, outcome expectations, and intention to use computers. The results can also be used to develop strategies for reducing computer anxiety.
NASA Astrophysics Data System (ADS)
Suárez Araujo, Carmen Paz; Barahona da Fonseca, Isabel; Barahona da Fonseca, José; Simões da Fonseca, J.
2004-08-01
A theoretical approach aiming at the identification of the information processing that may be responsible for the emotional dimensions of subjective experience is studied as an initial step in the construction of a neural net model of the affective dimensions of psychological experiences. In this paper it is suggested that a form of oriented recombination of attributes can be present not only in perceptual processing but also in cognitive processing. We present an analysis of the most important emotion theories, show their neural organization, and propose the neural computation approach as an appropriate framework for generating knowledge about the neural basis of emotional experience. Finally, we present a scheme corresponding to a framework for designing a computational neural multi-system for emotion (CONEMSE).
Retrieving relevant time-course experiments: a study on Arabidopsis microarrays.
Şener, Duygu Dede; Oğul, Hasan
2016-06-01
Understanding time-course regulation of genes in response to a stimulus is a major concern in current systems biology. The problem is usually approached by computational methods to model the gene behaviour or its networked interactions with the others by a set of latent parameters. The model parameters can be estimated through a meta-analysis of available data obtained from other relevant experiments. The key question here is how to find the relevant experiments which are potentially useful in analysing current data. In this study, the authors address this problem in the context of time-course gene expression experiments from an information retrieval perspective. To this end, they introduce a computational framework that takes a time-course experiment as a query and reports a list of relevant experiments retrieved from a given repository. These retrieved experiments can then be used to associate the environmental factors of query experiment with the findings previously reported. The model is tested using a set of time-course Arabidopsis microarrays. The experimental results show that relevant experiments can be successfully retrieved based on content similarity.
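A minimal content-based sketch of this retrieval setting is shown below: each experiment is a genes-by-timepoints matrix, similarity to the query is the mean per-gene Pearson correlation of time courses, and the repository is returned ranked. Gene alignment, normalization, and the paper's actual similarity measure are abstracted away, so this is an illustration of the setting rather than the authors' method.

```python
import numpy as np

def experiment_similarity(query, candidate):
    """Mean per-gene Pearson correlation between two genes x timepoints
    matrices assumed to share gene order and timepoint count."""
    scores = [np.corrcoef(q, c)[0, 1] for q, c in zip(query, candidate)]
    return float(np.nanmean(scores))

def retrieve(query, repository):
    """Repository experiment names ranked by similarity to the query."""
    ranked = sorted(repository.items(),
                    key=lambda kv: experiment_similarity(query, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked]

rng = np.random.default_rng(1)
repo = {"exp_a": rng.random((5, 8)), "exp_b": rng.random((5, 8))}
print(retrieve(rng.random((5, 8)), repo))
```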
Quantum spin liquid signatures in Kitaev-like frustrated magnets
NASA Astrophysics Data System (ADS)
Gohlke, Matthias; Wachtel, Gideon; Yamaji, Youhei; Pollmann, Frank; Kim, Yong Baek
2018-02-01
Motivated by recent experiments on α-RuCl3, we investigate a possible quantum spin liquid ground state of the honeycomb-lattice spin model with bond-dependent interactions. We consider the K-Γ model, where K and Γ represent the Kitaev and symmetric-anisotropic interactions between spin-1/2 moments on the honeycomb lattice. Using the infinite density matrix renormalization group, we provide compelling evidence for the existence of quantum spin liquid phases in an extended region of the phase diagram. In particular, we use transfer-matrix spectra to show the evolution of two-particle excitations with well-defined two-dimensional dispersion, which is a strong signature of a quantum spin liquid. These results are compared with predictions from Majorana mean-field theory and used to infer the quasiparticle excitation spectra. Further, we compute the dynamical structure factor using finite-size cluster computations and show that the results resemble the scattering continuum seen in neutron-scattering experiments on α-RuCl3. We discuss these results in light of recent and future experiments.
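For reference, the K-Γ Hamiltonian has the standard bond-dependent form below, where on each bond type γ ∈ {x, y, z} of the honeycomb lattice (α, β, γ) is the corresponding permutation of (x, y, z); this is the conventional writing of the model, though sign and normalization conventions vary between papers.

```latex
H \;=\; \sum_{\langle ij \rangle \in \gamma}
\Bigl[\, K \, S_i^{\gamma} S_j^{\gamma}
\;+\; \Gamma \,\bigl( S_i^{\alpha} S_j^{\beta} + S_i^{\beta} S_j^{\alpha} \bigr) \Bigr]
```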
Adiabatic Quantum Computation: Coherent Control Back Action.
Goswami, Debabrata
2006-11-22
Though attractive from scalability aspects, optical approaches to quantum computing are highly prone to decoherence and rapid population loss due to nonradiative processes such as vibrational redistribution. We show that such effects can be reduced by adiabatic coherent control, in which quantum interference between multiple excitation pathways is used to cancel coupling to the unwanted, non-radiative channels. We focus on experimentally demonstrated adiabatic controlled population transfer experiments wherein the details on the coherence aspects are yet to be explored theoretically but are important for quantum computation. Such quantum computing schemes also form a back-action connection to coherent control developments.
Implementation of DFT application on ternary optical computer
NASA Astrophysics Data System (ADS)
Junjie, Peng; Youyi, Fu; Xiaofeng, Zhang; Shuai, Kong; Xinyu, Wei
2018-03-01
Owing to its huge number of data bits and low energy consumption, optical computing may be used in applications such as the DFT, which require a large amount of computation and can be implemented in parallel. Accordingly, DFT implementation methods in full parallel as well as in partial parallel are presented. Based on the resources of a ternary optical computer (TOC), extensive experiments were carried out. Experimental results show that the proposed schemes are correct and feasible. They provide a foundation for further exploration of applications on the TOC that need a large amount of calculation and can be processed in parallel.
Colour computer-generated holography for point clouds utilizing the Phong illumination model.
Symeonidou, Athanasia; Blinder, David; Schelkens, Peter
2018-04-16
A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend the method by integrating the Phong illumination model so that the properties of the objects' surfaces are taken into account to achieve natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm provides realistic looking objects without any noteworthy increase to the computational cost.
A novel computational approach towards the certification of large-scale boson sampling
NASA Astrophysics Data System (ADS)
Huh, Joonsuk
Recent proposals of boson sampling and the corresponding experiments exhibit a possible disproof of the extended Church-Turing Thesis. Furthermore, the application of boson sampling to molecular computation has been suggested theoretically. Until now, however, only small-scale experiments with a few photons have been successfully performed. Boson sampling experiments with 20-30 photons are expected to reveal the computational superiority of the quantum device. A novel theoretical proposal for large-scale boson sampling using microwave photons is highly promising due to the deterministic photon sources and the scalability. Therefore, a certification protocol for large-scale boson sampling experiments should be presented to complete the exciting story. We propose, in this presentation, a computational protocol towards the certification of large-scale boson sampling. The correlations of paired photon modes and the time-dependent characteristic functional with its Fourier component can show the fingerprint of large-scale boson sampling. This work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2015R1A6A3A04059773), the ICT R&D program of MSIP/IITP [2015-019, Fundamental Research Toward Secure Quantum Communication] and a Mueunjae Institute for Chemistry (MIC) postdoctoral fellowship.
1001 Ways to run AutoDock Vina for virtual screening
NASA Astrophysics Data System (ADS)
Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.
2016-03-01
Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
1001 Ways to run AutoDock Vina for virtual screening.
Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D
2016-03-01
Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
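The three numbered points above lend themselves to a compact illustration. Below is a minimal sketch, assuming a hypothetical ligands/ directory of PDBQT files and a box.cfg search-space file; the flags used (--receptor, --ligand, --config, --out, --seed, --cpu, --exhaustiveness) are standard AutoDock Vina options, but this is an illustrative set-up, not the paper's actual pipeline.

```python
# Minimal sketch: per-ligand parallelism for an AutoDock Vina screen with an
# explicitly recorded random seed. Paths and parameters are illustrative.
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

RECEPTOR = "receptor.pdbqt"                       # hypothetical input files
LIGANDS = sorted(Path("ligands").glob("*.pdbqt"))
SEED = 42                                         # capture the seed for reproducibility

def dock(ligand: Path) -> str:
    out = ligand.parent / (ligand.stem + "_docked.pdbqt")
    cmd = ["vina",
           "--receptor", RECEPTOR,
           "--ligand", str(ligand),
           "--config", "box.cfg",                 # hypothetical search-box definition
           "--out", str(out),
           "--seed", str(SEED),                   # necessary (not sufficient) for reproducibility
           "--cpu", "1",                          # one core per job; parallelism is per ligand
           "--exhaustiveness", "8"]
    subprocess.run(cmd, check=True, capture_output=True)
    return f"{ligand.name} done (seed={SEED})"

if __name__ == "__main__":
    # One Vina process per ligand keeps all cores busy: the extra level of
    # parallelization discussed in the paper.
    with ProcessPoolExecutor(max_workers=8) as pool:
        for msg in pool.map(dock, LIGANDS):
            print(msg)
```

On heterogeneous distributed systems the paper's caveat applies: an identical seed does not by itself guarantee bit-identical results across different hardware or builds.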
NASA Astrophysics Data System (ADS)
Dattani, Nikesh S.
2017-06-01
Ideally, the cataloguing of spectroscopic linelists would not demand laborious and expensive experiments. Whatever an experiment might achieve, the same information would be attainable by running a calculation on a computer. Kolos and Wolniewicz were the first to demonstrate that calculations on a computer can outperform even the most sophisticated molecular spectroscopic experiments of the time, when their 1964 calculations of the dissociation energies of H_2 and D_2 were found to be more than 1 cm^{-1} larger than the best experiments by Gerhard Herzberg, suggesting the experiment violated a strict variational principle. As explained in his Nobel Lecture, it took 5 more years for Herzberg to perform an experiment which caught up to the accuracy of the 1964 calculations. Today, numerical solutions to the Schrödinger equation, supplemented with relativistic and higher-order quantum electrodynamics (QED) corrections can provide ro-vibrational spectra for molecules that we strongly believe to be correct, even in the absence of experimental data. Why do we believe these calculated spectra are correct if we do not have experiments against which to test them? All evidence seen so far suggests that corrections due to gravity or other forces are not needed for a computer simulated QED spectrum of ro-vibrational energy transitions to be correct at the precision of typical spectrometers. Therefore a computer-generated spectrum can be considered to be as good as one coming from a more conventional spectrometer, and this has been shown to be true not just for the H_2 energies back in 1964, but now also for several other molecules. So are we at the stage where we can launch an array of calculations, each with just the atomic number changed in the input file, to reproduce the NIST energy level databases? Not quite. But I will show that for the 6e^- molecule Li_2, we have reproduced the vibrational spacings to within 0.001 cm^{-1} of the experimental spectrum, and I will discuss present-day prospects for replacing laborious experiments for spectra of certain systems within the reach of today's "computer spectrometers".
2006-06-05
Space shuttle STS-121 FIT (Fly Immunity and Tumors) payload. Using Drosophila (fruit fly) to complete the experiments. Computer screen showing green fluorescent protein used to visualize blood cells in Drosophila (fruit fly).
Learning Problem-Solving Rules as Search through a Hypothesis Space
ERIC Educational Resources Information Center
Lee, Hee Seung; Betts, Shawn; Anderson, John R.
2016-01-01
Learning to solve a class of problems can be characterized as a search through a space of hypotheses about the rules for solving these problems. A series of four experiments studied how different learning conditions affected the search among hypotheses about the solution rule for a simple computational problem. Experiment 1 showed that a problem…
NASA Technical Reports Server (NTRS)
1994-01-01
STS-59's MAPS (Measurement of Air Pollution from Satellites) experiment is sending real-time data that provides the most comprehensive view of carbon monoxide concentrations on Earth ever recorded. This computer image shows a summary of 'quick look' data obtained by the MAPS instrument during its first days of operations as part of the Space Shuttle Endeavour's SRL-1 payload.
Aquilina, Luc; Roques, Clément; Boisson, Alexandre; Vergnaud-Ayraud, Virginie; Labasque, Thierry; Pauwels, Hélène; Pételet-Giraud, Emmanuelle; Pettenati, Marie; Dufresne, Alexis; Bethencourt, Lorine; Bour, Olivier
2018-04-01
We investigate denitrification mechanisms through batch experiments using crushed rock and groundwater from a granitic aquifer subject to long-term pumping (Ploemeur, France). Except in sterilized experiments, an extensive denitrification reaction induces NO3^- decreases ranging from 0.3 to 0.6 mmol/L. Carbon concentrations, either organic or inorganic, remain relatively stable and do not indicate potential heterotrophic denitrification. Batch experiments show a clear effect of mineral dissolution, which is documented through cation (K, Na, Ca) and fluoride production. These productions are tightly related to denitrification progress during the experiment. Conversely, limited amounts of SO4^{2-}, systematically lower than the stoichiometry of autotrophic denitrification coupled to sulfur oxidation, are produced during the experiments, which indicates that sulfur oxidation is not likely even when pyrite is added to the experiments. Analysis of cation ratios, both in isolated minerals of the granite and within the batch water, allows the mineral dissolution during the experiments to be quantified. Using cation ratios, we show that batch experiments are characterized mainly by biotite dissolution. As biotite contains 21 to 30% Fe and 0.3 to 1.7% F, it constitutes a potential source for these two elements. Denitrification could be attributed to the oxidation of Fe(II) contained in biotite. We computed the amounts of K and F produced through biotite dissolution when attributing denitrification entirely to biotite dissolution. The computed amounts show that this process may account for the observed K and F production. We interpret these results as the development of microbial activity which induces mineral dissolution in order to take up Fe(II), which is used for denitrification. Although pyrite is probably available, SO4^{2-} and cation measurements favor a large biotite dissolution reaction which could account for all the observed Fe production. The chemical composition of groundwater produced from the Ploemeur site indicates similar denitrification processes, although the original composition shows mainly plagioclase dissolution. Copyright © 2017 Elsevier B.V. All rights reserved.
Computers in medicine: patients' attitudes
Cruickshank, P. J.
1984-01-01
Data are presented from two surveys where a 26-item questionnaire was used to measure patients' attitudes to diagnostic computers and to medical computers in general. The first group of respondents were 229 patients who had been given outpatient appointments at a hospital general medical clinic specializing in gastrointestinal problems, where some had experienced a diagnostic computer in use. The second group of respondents were 416 patients attending a group general practice where there was no computer. Patients who had experience of the diagnostic computer or a personal computer had more favourable attitudes to computers in medicine, as did younger people and males. The two samples of patients showed broadly similar attitudes, and a notable finding was that over half of each group believed that, with a computer around, the personal touch of the doctor would be lost. PMID:6471021
ERIC Educational Resources Information Center
Paul, Sandra K.; Kranberg, Susan
The third report from a comprehensive Unesco study, this document traces the history of the application of computer-based technology to the book distribution process in the United States and indicates functional areas currently showing the effects of using this technology. Ways in which computer use is altering book distribution management…
Unraveling the electrolyte properties of Na3SbS4 through computation and experiment
NASA Astrophysics Data System (ADS)
Rush, Larry E.; Hood, Zachary D.; Holzwarth, N. A. W.
2017-12-01
Solid-state sodium electrolytes are expected to improve next-generation batteries on the basis of favorable energy density and reduced cost. Na3SbS4 represents a new solid-state ion conductor with high ionic conductivities in the mS/cm range. Here, we explore the tetragonal phase of Na3SbS4 and its interface with a metallic sodium anode using a combination of experiments and first-principles calculations. The computed Na-ion vacancy migration energies of 0.1 eV are smaller than the value inferred from experiment, suggesting that grain boundaries or other factors dominate the experimental systems. Analysis of symmetric cells of the electrolyte, Na/Na3SbS4/Na, shows that a conductive solid electrolyte interphase forms. Computer simulations infer that the interface is likely to be related to Na3SbS3, involving the conversion of the tetrahedral SbS4^{3-} ions of the bulk electrolyte into trigonal pyramidal SbS3^{3-} ions at the interface.
NASA Astrophysics Data System (ADS)
Anderson, Delia Marie Castro
Computer literacy and use have become commonplace in our colleges and universities. In an environment that demands the use of technology, educators should be knowledgeable of the components that make up the overall computer attitude of students and be willing to investigate the processes and techniques of effective teaching and learning that can take place with computer technology. The purpose of this study is twofold. First, it investigates the relationship between computer attitudes and gender, ethnicity, and computer experience. Second, it addresses the question of whether, and to what extent, students' attitudes toward computers change over a 16-week period in an undergraduate microbiology course that supplements the traditional lecture with computer-driven assignments. Multiple regression analyses, using data from the Computer Attitudes Scale (Loyd & Loyd, 1985), showed that, in the experimental group, no significant relationships were found between computer anxiety and gender or ethnicity or between computer confidence and gender or ethnicity. However, students who used computers the longest (p = .001) and who were self-taught (p = .046) had the lowest computer anxiety levels. Likewise, students who used computers the longest (p = .001) and who were self-taught (p = .041) had the highest confidence levels. No significant relationships between computer liking, usefulness, or the use of Internet resources and gender, ethnicity, or computer experience were found. Dependent t-tests were performed to determine whether computer attitude scores (pretest and posttest) increased over a 16-week period for students who had been exposed to computer-driven assignments and other Internet resources. Results showed that students in the experimental group were less anxious about working with computers and considered computers to be more useful. In the control group, no significant changes in computer anxiety, confidence, liking, or usefulness were noted. Overall, students in the experimental group, who responded to the use of Internet Resources Survey, were positive (mean of 3.4 on the 4-point scale) toward their use of Internet resources, which included the online courseware developed by the researcher. Findings from this study suggest that (1) the digital divide with respect to gender and ethnicity may be narrowing, and (2) students who are exposed to a course that augments computer-driven courseware with traditional teaching methods appear to have less anxiety, have a clearer perception of computer usefulness, and feel that online resources enhance their learning.
Progress on the Fabric for Frontier Experiments Project at Fermilab
NASA Astrophysics Data System (ADS)
Box, Dennis; Boyd, Joseph; Dykstra, Dave; Garzoglio, Gabriele; Herner, Kenneth; Kirby, Michael; Kreymer, Arthur; Levshina, Tanya; Mhashilkar, Parag; Sharma, Neha
2015-12-01
The FabrIc for Frontier Experiments (FIFE) project is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model for Fermilab experiments. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying needs and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of Open Science Grid solutions for high throughput computing, data management, database access and collaboration within experiments. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating into experiment computing operations several services including new job submission services, software and reference data distribution through CVMFS repositories, a flexible data transfer client, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken a leading role in the definition of the computing model for Fermilab experiments, aided in the design of computing for experiments beyond Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.
Cloud computing task scheduling strategy based on improved differential evolution algorithm
NASA Astrophysics Data System (ADS)
Ge, Junwei; He, Qian; Fang, Yiqiu
2017-04-01
In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is derived from it; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to balance global and local search ability. A performance test experiment was carried out on the CloudSim simulation platform. The experimental results show that the improved differential evolution algorithm can reduce cloud computing task execution time and save user cost, achieving optimal scheduling of cloud computing tasks.
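The abstract does not spell out the algorithm, so the following is only a generic differential-evolution (DE/rand/1/bin) sketch for a makespan-style scheduling fitness, with a generation-dependent mutation factor standing in for the paper's dynamic mutation strategy; all names, constants and the fitness function are illustrative assumptions.

```python
# Generic differential evolution sketch for task-to-VM scheduling.
# Fitness, mutation schedule and constants are illustrative assumptions,
# not the paper's actual algorithm.
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_vms = 40, 8
runtime = rng.uniform(1.0, 10.0, size=(n_tasks, n_vms))  # cost of each task on each VM

def fitness(x):
    # Decode a real-valued vector into a task->VM assignment; return makespan.
    assign = np.floor(x).astype(int) % n_vms
    loads = np.zeros(n_vms)
    for t, vm in enumerate(assign):
        loads[vm] += runtime[t, vm]
    return loads.max()

pop_size, gens = 30, 200
pop = rng.uniform(0, n_vms, size=(pop_size, n_tasks))
fit = np.array([fitness(ind) for ind in pop])

for g in range(gens):
    F = 0.9 - 0.5 * g / gens            # dynamic mutation factor, shrinking over generations
    CR = 0.5                            # crossover rate
    for i in range(pop_size):
        a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
        mutant = a + F * (b - c)        # DE/rand/1 mutation
        cross = rng.random(n_tasks) < CR
        trial = np.where(cross, mutant, pop[i])
        f_trial = fitness(trial)
        if f_trial < fit[i]:            # greedy one-to-one selection
            pop[i], fit[i] = trial, f_trial

print("best makespan:", fit.min())
```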
Chang, Shao-Hsia; Yu, Nan-Ying
2014-07-01
The objective of this study was to compare the effect of computer-assisted practice with the sensorimotor approach on the remediation of handwriting problems in children with dysgraphia. In a randomized controlled trial, experiments were conducted to verify the intervention effect. Forty-two children with handwriting deficits were assigned to computer-assisted instruction, sensorimotor training, or a control group. Handwriting performance was measured using the elementary reading/writing test and a computerized handwriting evaluation before and after 6 weeks of intervention. Repeated-measures ANOVA of changed scores was conducted to show whether statistically significant differences across the three groups were present. Significant differences in the elementary reading/writing test were found among the three groups. The computer group showed more significant improvements than the other two groups did. In the kinematic and kinetic analyses, the computer group showed promising results in the remediation of handwriting speed and fluency. This study provides clinical evidence for applying a computer-assisted handwriting program for children with dysgraphia. Clinicians and school teachers are provided with a systematic intervention for the improvement of handwriting difficulties. Copyright © 2014 Elsevier Ltd. All rights reserved.
Challenges and opportunities of cloud computing for atmospheric sciences
NASA Astrophysics Data System (ADS)
Pérez Montes, Diego A.; Añel, Juan A.; Pena, Tomás F.; Wallom, David C. H.
2016-04-01
Cloud computing is an emerging technological solution widely used in many fields. Initially developed as a flexible way of managing peak demand, it has begun to make its way into scientific research. One of the greatest advantages of cloud computing for scientific research is independence from access to a large cyberinfrastructure to fund or perform a research project. Cloud computing can avoid maintenance expenses for large supercomputers and has the potential to 'democratize' access to high-performance computing, giving flexibility to funding bodies for allocating budgets to the computational costs associated with a project. Two of the most challenging problems in atmospheric sciences are computational cost and uncertainty in meteorological forecasting and climate projections. Both problems are closely related. Usually uncertainty can be reduced with the availability of computational resources to better reproduce a phenomenon or to perform a larger number of experiments. Here we present results of the application of cloud computing resources for climate modeling, using the cloud computing infrastructures of three major vendors and two climate models. We show how the cloud infrastructure compares in performance to traditional supercomputers and how it provides the capability to complete experiments in shorter periods of time. The associated monetary cost is also analyzed. Finally we discuss the future potential of this technology for meteorological and climatological applications, both from the point of view of operational use and research.
Blind topological measurement-based quantum computation.
Morimae, Tomoyuki; Fujii, Keisuke
2012-01-01
Blind quantum computation is a novel secure quantum-computing protocol that enables Alice, who does not have sufficient quantum technology at her disposal, to delegate her quantum computation to Bob, who has a fully fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output and algorithm. A recent proof-of-principle experiment demonstrating blind quantum computation in an optical system has raised new challenges regarding the scalability of blind quantum computation in realistic noisy conditions. Here we show that fault-tolerant blind quantum computation is possible in a topologically protected manner using the Raussendorf-Harrington-Goyal scheme. The error threshold of our scheme is 4.3 × 10^{-3}, which is comparable to that (7.5 × 10^{-3}) of non-blind topological quantum computation. As the error per gate of the order 10^{-3} was already achieved in some experimental systems, our result implies that secure cloud quantum computation is within reach.
Blind topological measurement-based quantum computation
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki; Fujii, Keisuke
2012-09-01
Blind quantum computation is a novel secure quantum-computing protocol that enables Alice, who does not have sufficient quantum technology at her disposal, to delegate her quantum computation to Bob, who has a fully fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output and algorithm. A recent proof-of-principle experiment demonstrating blind quantum computation in an optical system has raised new challenges regarding the scalability of blind quantum computation in realistic noisy conditions. Here we show that fault-tolerant blind quantum computation is possible in a topologically protected manner using the Raussendorf-Harrington-Goyal scheme. The error threshold of our scheme is 4.3×10^{-3}, which is comparable to that (7.5×10^{-3}) of non-blind topological quantum computation. As the error per gate of the order 10^{-3} was already achieved in some experimental systems, our result implies that secure cloud quantum computation is within reach.
Rogers, T Ryan; Wang, Feng
2017-10-28
An atomic version of the Millikan oil drop experiment is performed computationally. It is shown that for planar molecules, the atomic version of the Millikan experiment can be used to define an atomic partial charge that is free from charge flow contributions. We refer to this charge as the Millikan-Thomson (MT) charge. Since the MT charge is directly proportional to the atomic forces under a uniform electric field, it is the most relevant charge for force field developments. The MT charge shows good stability with respect to different choices of the basis set. In addition, the MT charge can be easily calculated even at post-Hartree-Fock levels of theory. With the MT charge, it is shown that for a planar water dimer, the charge transfer from the proton acceptor to the proton donor is about -0.052 e. While both planar hydrated cations and anions show signs of charge transfer, anions show a much more significant charge transfer to the hydration water than the corresponding cations. It might be important to explicitly model the ion charge transfer to water in a force field at least for the anions.
The symmetric MSD encoder for one-step adder of ternary optical computer
NASA Astrophysics Data System (ADS)
Kai, Song; LiPing, Yan
2016-08-01
The symmetric Modified Signed-Digit (MSD) encoding is important for achieving the one-step MSD adder of the Ternary Optical Computer (TOC). The paper describes the symmetric MSD encoding algorithm in detail and develops its truth table, which has nine rows and nine columns. According to the truth table, the state table was developed, and the optical-path structure and circuit-implementation scheme of the symmetric MSD encoder (SME) for the one-step adder of the TOC are proposed. Finally, a series of experiments was designed and performed. The observed results of the experiments show that the scheme to implement the SME is correct, feasible and efficient.
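For readers unfamiliar with MSD numbers, the sketch below illustrates the redundant {-1, 0, 1} digit set (base 2) that makes carry-free addition possible; it is generic background only, not the paper's symmetric encoder or its 9×9 truth table.

```python
# Illustration of the redundancy of Modified Signed-Digit (MSD) numbers:
# digits come from {-1, 0, 1} with base 2, so one value has several
# encodings. Generic background, not the paper's symmetric encoder.
def msd_value(digits):
    """digits[i] is the coefficient of 2**i, each in {-1, 0, 1}."""
    assert all(d in (-1, 0, 1) for d in digits)
    return sum(d * 2**i for i, d in enumerate(digits))

# Two distinct MSD encodings of the same value, 3:
print(msd_value([1, 1]))      # 1 + 2      -> 3
print(msd_value([-1, 0, 1]))  # -1 + 4     -> 3
```

This redundancy is what lets an MSD adder absorb carries locally instead of propagating them, which is the basis of the one-step adder the encoder serves.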
Evaluation of two models for predicting elemental accumulation by arthropods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webster, J.R.; Crossley, D.A. Jr.
1978-06-15
Two different models have been proposed for predicting elemental accumulation by arthropods. Parameters of both models can be quantified from radioisotope elimination experiments. Our analysis of the two models shows that both predict identical elemental accumulation for a whole organism, though differing in the accumulation in body and gut. We quantified both models with experimental data from ^{134}Cs and ^{85}Sr elimination by crickets. Computer simulations of radioisotope accumulation were then compared with actual accumulation experiments. Neither model showed an exact fit to the experimental data, though both showed the general pattern of elemental accumulation.
Note: Planetary gravities made simple: Sample test of a Mars rover wheel.
Viera-López, G; Serrano-Muñoz, A; Amigó-Vega, J; Cruzata, O; Altshuler, E
2017-08-01
We introduce an instrument for a wide spectrum of experiments on gravities other than our planet's. It is based on a large Atwood machine where one of the loads is a bucket equipped with a single board computer and different sensors. The computer is able to detect the falling (or rising) and then the stabilization of the effective gravity and to trigger actuators depending on the experiment. Gravities within the range 0.4 g-1.2 g are easily achieved with acceleration noise of the order of 0.01 g. Under Martian gravity, we are able to perform experiments of approximately 1.5 s duration. The system includes features such as WiFi and a web interface with tools for the setup, monitoring, and data analysis of the experiment. We briefly show a case study in testing the performance of a model Mars rover wheel in low gravities.
Note: Planetary gravities made simple: Sample test of a Mars rover wheel
NASA Astrophysics Data System (ADS)
Viera-López, G.; Serrano-Muñoz, A.; Amigó-Vega, J.; Cruzata, O.; Altshuler, E.
2017-08-01
We introduce an instrument for a wide spectrum of experiments on gravities other than our planet's. It is based on a large Atwood machine where one of the loads is a bucket equipped with a single board computer and different sensors. The computer is able to detect the falling (or rising) and then the stabilization of the effective gravity and to trigger actuators depending on the experiment. Gravities within the range 0.4 g-1.2 g are easily achieved with acceleration noise of the order of 0.01 g. Under Martian gravity, we are able to perform experiments of approximately 1.5 s duration. The system includes features such as WiFi and a web interface with tools for the setup, monitoring, and data analysis of the experiment. We briefly show a case study in testing the performance of a model Mars rover wheel in low gravities.
Sensitivity of LES results from turbine rim seals to changes in grid resolution and sector size
NASA Astrophysics Data System (ADS)
O'Mahoney, T.; Hills, N.; Chew, J.
2012-07-01
Large-Eddy Simulations (LES) were carried out for a turbine rim seal, and the sensitivity of the results to changes in grid resolution and the size of the computational domain is investigated. Ingestion of hot annulus gas into the rotor-stator cavity is compared between the LES results, experiments and Unsteady Reynolds-Averaged Navier-Stokes (URANS) calculations. The LES calculations show greater ingestion than the URANS calculation and show better agreement with experiments. Increased grid resolution shows a small improvement in ingestion predictions, whereas increasing the sector model size has little effect on the results. The contrast between the different CFD models is most stark in the inner cavity, where the URANS shows almost no ingestion. Particular attention is also paid to the presence of low-frequency oscillations in the disc cavity. URANS calculations show such low-frequency oscillations at different frequencies than the LES. The oscillations also take a very long time to develop in the LES. The results show that the difficult problem of estimating ingestion through rim seals could be overcome by using LES, but the computational requirements remain restrictive.
Numerical study of the vortex tube reconnection using vortex particle method on many graphics cards
NASA Astrophysics Data System (ADS)
Kudela, Henryk; Kosior, Andrzej
2014-08-01
Vortex Particle Methods are one of the most convenient ways of tracking the vorticity evolution. In this article we present a numerical recreation of a real-life experiment concerning the head-on collision of two vortex rings. In the experiment the evolution and reconnection of the vortex structures are tracked with passive markers (paint particles), which in a viscous fluid do not follow the evolution of the vorticity field. In the numerical computations we show the difference between the vorticity evolution and the movement of passive markers. The agreement with the experiment was very good. Because of the very long computation times on a single processor, the Vortex-in-Cell method was implemented on the multicore architecture of graphics cards (GPUs). Vortex Particle Methods are very well suited for parallel computations. As there are myriads of particles in the flow and the same equations of motion have to be solved for each of them, the SIMD architecture used in GPUs seems to be perfect. The main disadvantage in this case is the small amount of RAM memory. To overcome this problem we created a multiGPU implementation of the VIC method. Some remarks on parallel computing are given in the article.
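As a toy illustration of why vortex particle methods map so well onto SIMD hardware, the sketch below advects 2D point vortices with the all-pairs Biot-Savart sum, the same data-parallel pattern a GPU executes per particle. It is a drastic simplification of the paper's 3D Vortex-in-Cell computations, and all parameters are illustrative.

```python
# Toy 2D point-vortex advection: every particle moves with the velocity
# induced by all the others (Biot-Savart). Far simpler than the paper's
# 3D VIC method, but the same all-pairs, per-particle work pattern.
import numpy as np

rng = np.random.default_rng(6)
n, dt, steps = 200, 1e-3, 100
pos = rng.normal(size=(n, 2))
gamma = rng.normal(size=n) / n               # circulation of each particle

def velocities(pos):
    d = pos[:, None, :] - pos[None, :, :]    # d[i, j] = x_i - x_j
    r2 = (d ** 2).sum(-1) + 1e-4             # softened to avoid singularities
    # 2D Biot-Savart kernel: u_i = sum_j gamma_j * perp(x_i - x_j) / (2 pi r^2)
    k = gamma[None, :] / (2 * np.pi * r2)
    u = (-d[:, :, 1] * k).sum(axis=1)
    v = ( d[:, :, 0] * k).sum(axis=1)
    return np.stack([u, v], axis=1)

for _ in range(steps):                       # forward-Euler time stepping
    pos += dt * velocities(pos)
print("mean position (simple diagnostic):", pos.mean(axis=0))
```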
Computational Simulation of Acoustic Modes in Rocket Combustors
NASA Technical Reports Server (NTRS)
Harper, Brent (Technical Monitor); Merkle, C. L.; Sankaran, V.; Ellis, M.
2004-01-01
A combination of computational fluid dynamic analysis and analytical solutions is being used to characterize the dominant modes in liquid rocket engines in conjunction with laboratory experiments. The analytical solutions are based on simplified geometries and flow conditions and are used for careful validation of the numerical formulation. The validated computational model is then extended to realistic geometries and flow conditions to test the effects of various parameters on chamber modes, to guide and interpret companion laboratory experiments in simplified combustors, and to scale the measurements to engine operating conditions. In turn, the experiments are used to validate and improve the model. The present paper gives an overview of the numerical and analytical techniques along with comparisons illustrating the accuracy of the computations as a function of grid resolution. A representative parametric study of the effect of combustor mean flow Mach number and combustor aspect ratio on the chamber modes is then presented for both transverse and longitudinal modes. The results show that higher mean flow Mach numbers drive the modes to lower frequencies. Estimates of transverse wave mechanics in a high aspect ratio combustor are then contrasted with longitudinal modes in a long and narrow combustor to provide understanding of potential experimental simulations.
NASA Technical Reports Server (NTRS)
Grossi, M. D.; Gay, R. H.
1975-01-01
A computer simulation of the ionospheric experiment of the Apollo-Soyuz Test Project (ASTP) was performed. ASTP is the first example of USA/USSR cooperation in space and is scheduled for summer 1975. The experiment consists of performing dual-frequency Doppler measurements (at 162 and 324 MHz) between the Apollo Command Service Module (CSM) and the ASTP Docking Module (DM), both orbiting at 221-km height and at a relative distance of 300 km. The computer simulation showed that, with the Doppler measurement resolution of approximately 3 mHz provided by the instrumentation (in 10-sec integration time), ionospheric-induced Doppler shifts would be measurable accurately at all times, with some rare exceptions occurring when the radio path crosses regions of minimum ionospheric density. The computer simulation evaluated the ability of the experiment to measure changes of columnar electron content between CSM and DM (from which horizontal gradients of electron density at 221-km height can be obtained) and to measure variations in DM-to-ground columnar content (from which an averaged columnar content and the electron density at the DM can be deduced, under some simplifying assumptions).
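As a reminder of how the dual-frequency technique separates the two effects (standard first-order ionospheric expressions, not taken from the report): the geometric Doppler scales with the carrier frequency while the ionospheric term scales inversely with it,

\[ f_D \;=\; -\frac{f}{c}\,\frac{d\rho}{dt} \;+\; \frac{40.3}{c\,f}\,\frac{d(\mathrm{TEC})}{dt} , \]

so measurements at 162 and 324 MHz (a 1:2 frequency ratio) can be combined to cancel the geometric term and isolate the columnar-content variation.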
Benchmarking gate-based quantum computers
NASA Astrophysics Data System (ADS)
Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans
2017-11-01
With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
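The identity-circuit idea can be illustrated in a few lines of simulation: a sequence of gates followed by their inverses should leave the initial state unchanged, so any residual deviation is attributable to gate errors, and longer identity circuits are proportionally more sensitive. The bit-flip noise model and probabilities below are illustrative assumptions, not the paper's benchmarking protocol.

```python
# Minimal identity-circuit benchmark: apply gates, then their inverses
# (U ... U^dagger = I), and count how often the initial state |0> survives
# under a crude per-gate bit-flip noise channel (illustrative assumption).
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def run_identity_circuit(gates, p_err=0.01, shots=10000):
    seq = gates + [g.conj().T for g in reversed(gates)]  # net unitary is the identity
    hits = 0
    for _ in range(shots):
        psi = np.array([1, 0], dtype=complex)            # start in |0>
        for g in seq:
            psi = g @ psi
            if rng.random() < p_err:                     # bit-flip noise after each gate
                psi = X @ psi
        hits += rng.random() < abs(psi[0]) ** 2          # measure in computational basis
    return hits / shots

# Deeper identity circuits expose the same per-gate error more strongly:
for depth in (1, 5, 20):
    print(depth, run_identity_circuit([H, X] * depth))
```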
Physical Vapor Transport of Mercurous Chloride Crystals: Design of a Microgravity Experiment
NASA Technical Reports Server (NTRS)
Duval, W, M. B.; Singh, N. B.; Glicksman, M. E.
1997-01-01
Flow field characteristics predicted from a computational model show that the dynamical state of the flow, for practical crystal growth conditions of mercurous chloride, can range from steady to unsteady. Evidence that the flow field can be strongly dominated by convection for ground-based conditions is provided by the prediction of asymmetric velocity profiles by the model, which show reasonable agreement with laser Doppler velocimetry experiments in both magnitude and planform. Unsteady flow is shown to be correlated with a degradation of crystal quality as quantified by light scattering pattern measurements. A microgravity experiment is designed to show that an experiment performed with parameters which yield an unsteady flow becomes steady (diffusive-advective) in a microgravity environment of 10^{-3} g_0, as predicted by the model, and hence yields crystals with optimal quality.
A computational model of selection by consequences.
McDowell, J J
2004-05-01
Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied over wide ranges in these experiments, and many of the qualitative features of the model also were varied. The digital organism consistently showed a hyperbolic relation between response and reinforcement rates, and this hyperbolic description of the data was consistently better than the description provided by other, similar, function forms. In addition, the parameters of the hyperbola varied systematically with the quantitative, and some of the qualitative, properties of the model in ways that were consistent with findings from biological organisms. These results suggest that the material events responsible for an organism's responding on RI schedules are computationally equivalent to Darwinian selection by consequences. They also suggest that the computational model developed here is worth pursuing further as a possible dynamic account of behavior.
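The "hyperbolic relation between response and reinforcement rates" is presumably Herrnstein's hyperbola, which in its usual form reads (standard notation, not taken from the paper):

\[ R \;=\; \frac{k\,r}{r + r_e} , \]

where \(R\) is the response rate, \(r\) the obtained reinforcement rate, \(k\) the asymptotic response rate, and \(r_e\) the rate of background reinforcement; the paper's finding is that these fitted parameters track the model's properties the way they track those of biological organisms.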
Computations of the Magnus effect for slender bodies in supersonic flow
NASA Technical Reports Server (NTRS)
Sturek, W. B.; Schiff, L. B.
1980-01-01
A recently reported Parabolized Navier-Stokes code has been employed to compute the supersonic flow field about spinning cone, ogive-cylinder, and boattailed bodies of revolution at moderate incidence. The computations were performed for flow conditions where extensive measurements for wall pressure, boundary layer velocity profiles and Magnus force had been obtained. Comparisons between the computational results and experiment indicate excellent agreement for angles of attack up to six degrees. The comparisons for Magnus effects show that the code accurately predicts the effects of body shape and Mach number for the selected models for Mach numbers in the range of 2-4.
Progress on the FabrIc for Frontier Experiments project at Fermilab
Box, Dennis; Boyd, Joseph; Dykstra, Dave; ...
2015-12-23
The FabrIc for Frontier Experiments (FIFE) project is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model for Fermilab experiments. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying needs and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of Open Science Grid solutions for high throughput computing, data management, database access and collaboration within experiments. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating into experiment computing operations several services including new job submission services, software and reference data distribution through CVMFS repositories, a flexible data transfer client, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken a leading role in the definition of the computing model for Fermilab experiments, aided in the design of computing for experiments beyond Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.
In-Situ Tuff Water Migration/Heater Experiment: posttest thermal analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eaton, R.R.; Johnstone, J.K.; Nunziato, J.W.
This report describes posttest laboratory experiments and thermal computations for the In-Situ Tuff Water Migration/Heater Experiment that was conducted in Grouse Canyon Welded Tuff in G-Tunnel, Nevada Test Site. Posttest laboratory experiments were designed to determine the accuracy of the temperatures measured by the rockwall thermocouples during the in-situ test. The posttest laboratory experiments showed that the measured in-situ rockwall temperatures were 10 to 20 °C higher than the true rockwall temperatures. The posttest computational results, obtained with the thermal conduction code COYOTE, were compared with the experimentally obtained data and with calculated pretest results. Daily heater output power fluctuations (±4%) caused by input power line variations and the sensitivity of temperature to heater output power required care in selecting the average heater output power values used in the code. The posttest calculated results compare reasonably well with the experimental data. 10 references, 14 figures, 5 tables.
Brain-Computer Interface Based on Generation of Visual Images
Bobrov, Pavel; Frolov, Alexander; Cantor, Charles; Fedulova, Irina; Bakhnyan, Mikhail; Zhavoronkov, Alexander
2011-01-01
This paper examines the task of recognizing EEG patterns that correspond to performing three mental tasks: relaxation and imagining two types of pictures: faces and houses. The experiments were performed using two EEG headsets: BrainProducts ActiCap and Emotiv EPOC. The Emotiv headset is becoming widely used in consumer BCI applications, allowing large-scale EEG experiments to be conducted in the future. Since classification accuracy significantly exceeded the level of random classification during the first three days of the experiment with the EPOC headset, a control experiment was performed on the fourth day using the ActiCap. The control experiment showed that utilization of high-quality research equipment can enhance classification accuracy (up to 68% in some subjects) and that the accuracy is independent of the presence of EEG artifacts related to blinking and eye movement. This study also shows that a computationally inexpensive Bayesian classifier based on covariance matrix analysis yields classification accuracy in this problem similar to that of a more sophisticated Multi-class Common Spatial Patterns (MCSP) classifier. PMID:21695206
Metabolic Flux Analysis in Isotope Labeling Experiments Using the Adjoint Approach.
Mottelet, Stephane; Gaullier, Gil; Sadaka, Georges
2017-01-01
Comprehension of metabolic pathways is considerably enhanced by metabolic flux analysis in isotope labeling experiments (MFA-ILE). The balance equations are given by hundreds of algebraic (stationary MFA) or ordinary differential equations (nonstationary MFA), and reducing the number of operations is therefore a crucial part of reducing the computation cost. The main bottleneck for deterministic algorithms is the computation of derivatives, particularly for nonstationary MFA. In this article, we explain how the overall identification process may be sped up by using the adjoint approach to compute the gradient of the residual sum of squares. The proposed approach shows significant improvements in terms of complexity and computation time when compared with the usual (direct) approach. Numerical results are obtained for the central metabolic pathways of Escherichia coli and are validated against reference software in the stationary case. The methods and algorithms described in this paper are included in the sysmetab software package distributed under an Open Source license at http://forge.scilab.org/index.php/p/sysmetab/.
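The adjoint idea is easiest to see on a toy stationary problem: for J(θ) = ½‖C x(θ) − m‖² subject to A(θ) x = b, a single extra linear solve with Aᵀ yields the full gradient, instead of one linear solve per parameter as in the direct approach. The sketch below uses random matrices as stand-ins for a metabolic model; nothing here is the paper's actual system.

```python
# Adjoint gradient of J(theta) = 0.5 * ||C x(theta) - m||^2 with A(theta) x = b.
# One adjoint solve replaces p direct solves. Matrices are random stand-ins.
import numpy as np

rng = np.random.default_rng(2)
n, p = 6, 3                                   # state size, number of parameters
A0 = rng.normal(size=(n, n)) + n * np.eye(n)  # well-conditioned base matrix
dA = [rng.normal(size=(n, n)) for _ in range(p)]  # dA/dtheta_k (constant here)
b = rng.normal(size=n)
C = rng.normal(size=(4, n))
m = rng.normal(size=4)

def assemble(theta):
    return A0 + sum(t * dAk for t, dAk in zip(theta, dA))

def gradient_adjoint(theta):
    A = assemble(theta)
    x = np.linalg.solve(A, b)                      # forward solve
    lam = np.linalg.solve(A.T, C.T @ (C @ x - m))  # the single adjoint solve
    return np.array([-lam @ (dAk @ x) for dAk in dA])

# Sanity check against central finite differences:
theta = np.array([0.1, -0.2, 0.3])
def J(th):
    x = np.linalg.solve(assemble(th), b)
    return 0.5 * np.sum((C @ x - m) ** 2)
eps = 1e-6
fd = np.array([(J(theta + eps * e) - J(theta - eps * e)) / (2 * eps)
               for e in np.eye(p)])
print(np.allclose(gradient_adjoint(theta), fd, atol=1e-5))
```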
NASA Astrophysics Data System (ADS)
Czakó, Gábor
2014-06-01
Motivated by a recent experiment [H. Pan and K. Liu, J. Chem. Phys. 140, 191101 (2014)], we report a quasiclassical trajectory study of the O(3P) + CH4(vk = 0, 1) → OH + CH3 [k = 1 and 3] reactions on an ab initio potential energy surface. The computed angular distributions and cross sections correlated to the OH(v = 0, 1) + CH3(v = 0) coincident product states can be directly compared to experiment for O + CH4(v3 = 0, 1). Both theory and experiment show that the ground-state reaction is backward scattered, whereas the angular distributions shift toward sideways and forward directions upon antisymmetric stretching (v3) excitation of the reactant. Theory predicts similar behavior for the O + CH4(v1 = 1) reaction. The simulations show that stretching excitation enhances the reaction up to about 15 kcal/mol collision energy, whereas the O + CH4(vk = 1) reactions produce smaller cross sections for OH(v = 1) + CH3(v = 0) than those of O + CH4(v = 0) → OH(v = 0) + CH3(v = 0). The former finding agrees with experiment and the latter awaits for confirmation. The computed cold OH rotational distributions of O + CH4(v = 0) are in good agreement with experiment.
Czakó, Gábor
2014-06-21
Motivated by a recent experiment [H. Pan and K. Liu, J. Chem. Phys. 140, 191101 (2014)], we report a quasiclassical trajectory study of the O(3P) + CH4(vk = 0, 1) → OH + CH3 [k = 1 and 3] reactions on an ab initio potential energy surface. The computed angular distributions and cross sections correlated to the OH(v = 0, 1) + CH3(v = 0) coincident product states can be directly compared to experiment for O + CH4(v3 = 0, 1). Both theory and experiment show that the ground-state reaction is backward scattered, whereas the angular distributions shift toward sideways and forward directions upon antisymmetric stretching (v3) excitation of the reactant. Theory predicts similar behavior for the O + CH4(v1 = 1) reaction. The simulations show that stretching excitation enhances the reaction up to about 15 kcal/mol collision energy, whereas the O + CH4(vk = 1) reactions produce smaller cross sections for OH(v = 1) + CH3(v = 0) than those of O + CH4(v = 0) → OH(v = 0) + CH3(v = 0). The former finding agrees with experiment and the latter awaits for confirmation. The computed cold OH rotational distributions of O + CH4(v = 0) are in good agreement with experiment.
Perceptions and performance using computer-based testing: One institution's experience.
Bloom, Timothy J; Rich, Wesley D; Olson, Stephanie M; Adams, Michael L
2018-02-01
The purpose of this study was to evaluate student and faculty perceptions of the transition to a required computer-based testing format and to identify any impact of this transition on student exam performance. Separate questionnaires sent to students and faculty asked about perceptions of and problems with computer-based testing. Exam results from program-required courses for two years prior to and two years following the adoption of computer-based testing were compared to determine if this testing format impacted student performance. Responses to Likert-type questions about perceived ease of use showed no difference between students with one and three semesters experience with computer-based testing. Of 223 student-reported problems, 23% related to faculty training with the testing software. Students most commonly reported improved feedback (46% of responses) and ease of exam-taking (17% of responses) as benefits to computer-based testing. Faculty-reported difficulties were most commonly related to problems with student computers during an exam (38% of responses) while the most commonly identified benefit was collecting assessment data (32% of responses). Neither faculty nor students perceived an impact on exam performance due to computer-based testing. An analysis of exam grades confirmed there was no consistent performance difference between the paper and computer-based formats. Both faculty and students rapidly adapted to using computer-based testing. There was no evidence that switching to computer-based testing had any impact on student exam performance. Copyright © 2017 Elsevier Inc. All rights reserved.
Bayesian Latent Class Analysis Tutorial.
Li, Yuelin; Lord-Bessen, Jennifer; Shiyko, Mariya; Loeb, Rebecca
2018-01-01
This article is a how-to guide on Bayesian computation using Gibbs sampling, demonstrated in the context of Latent Class Analysis (LCA). It is written for students in quantitative psychology or related fields who have a working knowledge of Bayes Theorem and conditional probability and have experience in writing computer programs in the statistical language R. The overall goals are to provide an accessible and self-contained tutorial, along with a practical computation tool. We begin with how Bayesian computation is typically described in academic articles. Technical difficulties are addressed by a hypothetical, worked-out example. We show how Bayesian computation can be broken down into a series of simpler calculations, which can then be assembled together to complete a computationally more complex model. The details are described much more explicitly than what is typically available in elementary introductions to Bayesian modeling so that readers are not overwhelmed by the mathematics. Moreover, the provided computer program shows how Bayesian LCA can be implemented with relative ease. The computer program is then applied in a large, real-world data set and explained line-by-line. We outline the general steps in how to extend these considerations to other methodological applications. We conclude with suggestions for further readings.
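In the spirit of the tutorial (which works in R), here is a bare-bones Gibbs sampler for a two-class latent class model with binary items, transliterated to Python under flat Beta priors; the data are simulated, label switching is ignored for brevity, and nothing here is taken from the article's own code.

```python
# Bare-bones Gibbs sampler for a 2-class LCA with binary items.
# Flat Beta(1,1) priors; simulated data; label switching ignored.
import numpy as np

rng = np.random.default_rng(3)
n, J = 500, 5
true_pi = 0.4                                  # prevalence of class 0
true_p = np.array([[0.9, 0.8, 0.9, 0.2, 0.1],  # item probabilities, class 0
                   [0.2, 0.1, 0.3, 0.8, 0.9]]) # item probabilities, class 1
z_true = rng.random(n) < true_pi
Y = (rng.random((n, J)) < true_p[(~z_true).astype(int)]).astype(int)

pi, p = 0.5, np.full((2, J), 0.5)              # initial values
draws = []
for it in range(2000):
    # 1) sample each subject's class from its full conditional
    like = np.stack([(p[c]**Y * (1 - p[c])**(1 - Y)).prod(axis=1) for c in (0, 1)])
    w = np.array([pi, 1 - pi])[:, None] * like
    z = rng.random(n) < w[0] / w.sum(axis=0)   # True -> class 0
    # 2) sample the class prevalence (Beta posterior under a flat prior)
    pi = rng.beta(1 + z.sum(), 1 + (~z).sum())
    # 3) sample the item-response probabilities per class (Beta posteriors)
    for c, mask in enumerate((z, ~z)):
        hits = Y[mask].sum(axis=0)
        p[c] = rng.beta(1 + hits, 1 + mask.sum() - hits)
    if it >= 500:                              # keep post-burn-in draws
        draws.append(pi)
print("posterior mean prevalence:", np.mean(draws))
```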
Technology: Catalyst for Enhancing Chemical Education for Pre-service Teachers
NASA Astrophysics Data System (ADS)
Kumar, Vinay; Bedell, Julia Yang; Seed, Allen H.
1999-05-01
A DOE/KYEPSCoR-funded project enabled us to introduce a new curricular initiative aimed at improving the chemical education of pre-service elementary teachers. The new curriculum was developed in collaboration with the School of Education faculty. A new course for the pre-service teachers, "Discovering Chemistry with Lab" (CHE 105), was developed. The integrated lecture and lab course covers basic principles of chemistry and their applications in daily life. The course promotes reasoning and problem-solving skills and utilizes hands-on, discovery/guided-inquiry, and cooperative learning approaches. This paper describes the implementation of technology (computer-interfacing and simulation experiments) in the lab. Results of two assessment surveys conducted in the laboratory are also discussed. The key features of the lab course are eight new experiments, including four computer-interfacing/simulation experiments involving the use of Macintosh Power PCs, temperature and pH probes, and a serial box interface, and use of household materials. Several experiments and the midterm and final lab practical exams emphasize the discovery/guided-inquiry approach. The results of pre- and post-surveys showed very significant positive changes in students' attitude toward the relevancy of chemistry, use of technology (computers) in elementary school classrooms, and designing and teaching discovery-based units. Most students indicated that they would be very interested (52%) or interested (36%) in using computers in their science teaching.
Age and Pathway Diagnostics for a Stratospheric General Circulation Model
NASA Technical Reports Server (NTRS)
Schoeberl, Mark R.; Douglass, Anne R.; Polansky, Brian
2004-01-01
Using a variety of age diagnostic experiments, we examine the stratospheric age spectrum of the Goddard Finite Volume General Circulation Model. Pulse tracer release age-of-air computations are compared to forward and backward trajectory computations. These comparisons show good agreement, and the age-of-air also compares well with observed long-lived tracers. Pathway diagnostics show how air arrives in the lowermost stratosphere and the age structure of that region. Using tracers with different lifetimes, we can estimate the age spectrum; this technique should be useful in diagnosing transport from various trace gas observations.
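For context, the mean age-of-air obtained from a pulse-tracer release is the first moment of the age spectrum G, the Green's function of the tracer transport operator; in the standard notation of Hall and Plumb,

\[ \Gamma(\mathbf{r}) \;=\; \int_0^\infty t\, G(\mathbf{r}, t)\, dt , \]

so comparing pulse-tracer, forward-trajectory, and backward-trajectory estimates amounts to estimating G and its first moment by three different routes.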
Demonstration of essentiality of entanglement in a Deutsch-like quantum algorithm
NASA Astrophysics Data System (ADS)
Huang, He-Liang; Goswami, Ashutosh K.; Bao, Wan-Su; Panigrahi, Prasanta K.
2018-06-01
Quantum algorithms can be used to efficiently solve certain classically intractable problems by exploiting quantum parallelism. However, the effectiveness of quantum entanglement in quantum computing remains a question of debate. This study presents a new quantum algorithm that shows entanglement could provide advantages over both classical algorithms and quantum algorithms without entanglement. Experiments are implemented to demonstrate the proposed algorithm using superconducting qubits. Results show the viability of the algorithm and suggest that entanglement is essential in obtaining quantum speedup for certain problems in quantum computing. The study provides reliable and clear guidance for developing useful quantum algorithms.
A parallel computational model for GATE simulations.
Rannou, F R; Vega-Acevedo, N; El Bitar, Z
2013-12-01
GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
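A minimal sketch of the decentralized-generation / centralized-coordination idea: each worker produces its own time-stamped singles stream, and a single coordinator merges the streams in time order to search for coincidences. The rates, window and structure below are illustrative assumptions, not GATE's actual factory classes.

```python
# Sketch: decentralized event generation, centralized time coordination.
# Each "worker" stands in for one GATE process emitting time-stamped
# singles; the coordinator merges streams in time order and pairs
# coincidences. Rates and window are illustrative assumptions.
import heapq
import numpy as np

rng = np.random.default_rng(4)
N_WORKERS, WINDOW = 4, 1e-8          # 10 ns coincidence window (illustrative)

def worker_stream(wid, n=1000):
    """Stand-in for one worker: Poisson-like singles with local timestamps."""
    t = np.cumsum(rng.exponential(1e-6, size=n))   # event times in seconds
    return [(ti, wid) for ti in t]

streams = [worker_stream(w) for w in range(N_WORKERS)]
merged = heapq.merge(*streams)       # the centralized time coordinator

coincidences, last = 0, (-1.0, None)
for t, wid in merged:
    # pair consecutive events from different workers inside the window
    if wid != last[1] and t - last[0] < WINDOW:
        coincidences += 1
    last = (t, wid)
print("coincidence pairs:", coincidences)
```

The point of the design is visible in the structure: workers never exchange events with each other, only the merged, time-ordered stream is processed centrally, which keeps communication overhead low.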
Computational analysis of blood clot dissolution using a vibrating catheter tip.
Lee, Jeong Hyun; Oh, Jin Sun; Yoon, Bye Ri; Choi, Seung Hong; Rhee, Kyehan; Jho, Jae Young; Han, Moon Hee
2012-04-01
We developed a novel concept of endovascular thrombolysis that employs a vibrating electroactive polymer actuator. In order to predict the efficacy of thrombolysis using the developed vibrating actuator, enzyme (plasminogen activator) perfusion into a clot was analyzed by solving flow fields and species transport equations considering the fluid-structure interaction. In vitro thrombolysis experiments were also performed. Computational results showed that plasminogen activator perfusion into a clot was enhanced by actuator vibration at frequencies of 1 and 5 Hz. Plasminogen activator perfusion was affected by the actuator oscillation frequencies and amplitudes that were determined by electromechanical characteristics of a polymer actuator. Computed plasminogen activator perfused volumes were compared with experimentally measured dissolved clot volumes. The computed plasminogen activator perfusion volumes with threshold concentrations of 16% of the initial plasminogen activator concentration agreed well with the in vitro experimental data. This study showed the effectiveness of actuator oscillation on thrombolysis and the validity of the computational plasminogen activator perfusion model for predicting thrombolysis in complex flow fields induced by an oscillating actuator.
Reservoir computer predictions for the Three Meter magnetic field time evolution
NASA Astrophysics Data System (ADS)
Perevalov, A.; Rojas, R.; Lathrop, D. P.; Shani, I.; Hunt, B. R.
2017-12-01
The source of the Earth's magnetic field is the turbulent flow of liquid metal in the outer core. Our experiment's goal is to create an Earth-like dynamo, to explore the mechanisms, and to understand the dynamics of the magnetic and velocity fields. Since it is a complicated system, prediction of the magnetic field is a challenging problem. We present results of mimicking the Three Meter experiment with a reservoir computer deep learning algorithm. The experiment consists of a three-meter-diameter outer sphere and a one-meter-diameter inner sphere, with the gap filled with liquid sodium. The spheres can rotate at up to 4 and 14 Hz respectively, giving a Reynolds number near 10⁸. Two external electromagnets apply magnetic fields, while an array of 31 external and 2 internal Hall sensors measures the resulting induced fields. We use this magnetic probe data to train a reservoir computer to predict the 3M time evolution and mimic waves in the experiment. Surprisingly accurate predictions can be made for several magnetic dipole time scales. This shows that such a complicated MHD system's behavior can be predicted. We gratefully acknowledge support from NSF EAR-1417148.
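The reservoir-computing approach itself is compact enough to sketch. The following is a minimal echo state network for one-step-ahead prediction of a scalar time series, in the spirit of (but not reproducing) the authors' setup; the reservoir size, spectral radius, ridge parameter, and the toy "sensor" signal are all assumptions.

```python
# Minimal echo state network (reservoir computer) for one-step-ahead
# prediction of a scalar time series. Hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, rho, alpha = 500, 0.9, 1e-6          # reservoir size, spectral radius, ridge
u = np.sin(0.02 * np.arange(5000)) + 0.1 * rng.standard_normal(5000)  # toy "sensor"

W_in = rng.uniform(-0.5, 0.5, size=N)   # input weights
W = rng.standard_normal((N, N))
W *= rho / max(abs(np.linalg.eigvals(W)))  # rescale to spectral radius rho

# Drive the reservoir and collect its states
x = np.zeros(N)
states = np.empty((len(u) - 1, N))
for t in range(len(u) - 1):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Ridge-regression readout: predict u[t+1] from the reservoir state
targets = u[1:]
W_out = np.linalg.solve(states.T @ states + alpha * np.eye(N), states.T @ targets)

pred = states @ W_out
print("train RMSE:", np.sqrt(np.mean((pred - targets) ** 2)))
```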
Devitt, P; Cehic, D; Palmer, E
1998-06-01
Student teaching of surgery has been devolved from the university in an effort to increase and broaden undergraduate clinical experience. In order to ensure uniformity of learning we have defined learning objectives and provided a computer-based package to supplement clinical teaching. A study was undertaken to evaluate the place of computer-based learning in a clinical environment. Twelve modules were provided for study during a 6-week attachment. These covered clinical problems related to cardiology, neurosurgery and gastrointestinal haemorrhage. Eighty-four fourth-year students undertook a pre- and post-test assessment on these three topics as well as acute abdominal pain. No extra learning material on the latter topic was provided during the attachment. While all students showed significant improvement in performance in the post-test assessment, those who had access to the computer material performed significantly better than did the controls. Within the topics, students in both groups performed equally well on the post-test assessment of acute abdominal pain but the control group's performance was significantly lacking on the topic of gastrointestinal haemorrhage, suggesting that the bulk of learning on this subject came from the computer material and little from the clinical attachment. This type of learning resource can be used to supplement the student's clinical experience and at the same time monitor what they learn during clinical clerkships and identify areas of weakness.
Foundations of Quantum Mechanics and Quantum Computation
NASA Astrophysics Data System (ADS)
Aspect, Alain; Leggett, Anthony; Preskill, John; Durt, Thomas; Pironio, Stefano
2013-03-01
I ask the question: What can we infer about the nature and structure of the physical world (a) from experiments already done to test the predictions of quantum mechanics (b) from the assumption that all future experiments will agree with those predictions? I discuss existing and projected experiments related to the two classic paradoxes of quantum mechanics, named respectively for EPR and Schrödinger's Cat, and show in particular that one natural conclusion from both types of experiment implies the abandonment of the concept of macroscopic counterfactual definiteness.
Computational gestalts and perception thresholds.
Desolneux, Agnès; Moisan, Lionel; Morel, Jean-Michel
2003-01-01
In 1923, Max Wertheimer proposed a research programme and method in visual perception. He conjectured the existence of a small set of geometric grouping laws governing the perceptual synthesis of phenomenal objects, or "gestalts", from the atomic retinal input. In this paper, we review this set of geometric grouping laws, using the works of Metzger, Kanizsa and their schools. We then explain why the Gestalt theory research programme can be translated into a Computer Vision programme. This translation is not straightforward, since Gestalt theory never addressed two fundamental matters: image sampling and image information measurements. Once these matters are addressed, we show that gestalt grouping laws can be translated into quantitative laws allowing the automatic computation of gestalts in digital images. From the psychophysical viewpoint, a main issue is raised: the computer vision gestalt detection methods deliver predictable perception thresholds. Thus, we are set in a position where we can build artificial images and check whether some kind of agreement can be found between the computationally predicted thresholds and the psychophysical ones. We describe and discuss two preliminary sets of experiments, where we compared the gestalt detection performance of several subjects with the predicted detection curve. In our opinion, the results of this experimental comparison support the idea of a much more systematic interaction between computational predictions in Computer Vision and psychophysical experiments.
Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion
NASA Astrophysics Data System (ADS)
Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.
2017-01-01
We present the features and results of a newly developed code, based on Gauss-Newton optimization technique, for solving three-dimensional Controlled-Source Electromagnetic inverse problem. In this code a special emphasis has been put on representing the operations by block matrices for conjugate gradient iteration. We show how in the computation of Jacobian, the matrix formed by differentiation of system matrix can be made independent of frequency to optimize the operations at conjugate gradient step. The coarse-level parallel computing, using OpenMP framework, is used primarily due to its simplicity in implementation and accessibility of shared memory multi-core computing machine to almost anyone. We demonstrate how the coarseness of modeling grid in comparison to source (computational receivers) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design where the deviation of source from straight tow line is considered. Finally, a real field data inversion experiment is presented to demonstrate robustness of the code.
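The conjugate-gradient step inside such a Gauss-Newton inversion can be illustrated generically. The sketch below solves the regularized normal equations matrix-free, with an explicit random Jacobian standing in for the forward/adjoint solves of a real CSEM code; everything here (sizes, damping, the Jacobian itself) is an illustrative assumption, not the authors' implementation.

```python
# Sketch of one Gauss-Newton model update solved with conjugate gradients.
# In a real CSEM inversion, J and J^T are applied via forward/adjoint
# simulations rather than stored as a dense matrix.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def gauss_newton_step(J, residual, lam=1e-2):
    m = J.shape[1]
    # Normal-equations operator (J^T J + lam I) applied matrix-free
    op = LinearOperator((m, m), matvec=lambda v: J.T @ (J @ v) + lam * v)
    rhs = J.T @ residual
    step, info = cg(op, rhs, maxiter=200)
    assert info == 0, "CG did not converge"
    return step

rng = np.random.default_rng(2)
J = rng.standard_normal((100, 20))      # placeholder Jacobian (data x model)
r = rng.standard_normal(100)            # placeholder data residual
print(gauss_newton_step(J, r)[:3])
```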
NASA Astrophysics Data System (ADS)
Markauskaite, Lina; Kelly, Nick; Jacobson, Michael J.
2017-12-01
This paper gives a grounded cognition account of model-based learning of complex scientific knowledge related to socio-scientific issues, such as climate change. It draws on the results from a study of high school students learning about the carbon cycle through computational agent-based models and investigates two questions: First, how do students ground their understanding about the phenomenon when they learn and solve problems with computer models? Second, what are common sources of mistakes in students' reasoning with computer models? Results show that students ground their understanding in computer models in five ways: direct observation, straight abstraction, generalisation, conceptualisation, and extension. Students also incorporate into their reasoning their knowledge and experiences that extend beyond phenomena represented in the models, such as attitudes about unsustainable carbon emission rates, human agency, external events, and the nature of computational models. The most common difficulties of the students relate to seeing the modelled scientific phenomenon and connecting results from the observations with other experiences and understandings about the phenomenon in the outside world. An important contribution of this study is the constructed coding scheme for establishing different ways of grounding, which helps to understand some challenges that students encounter when they learn about complex phenomena with agent-based computer models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Box, D.; Boyd, J.; Di Benedetto, V.
2016-01-01
The FabrIc for Frontier Experiments (FIFE) project is an initiative within the Fermilab Scientific Computing Division designed to steer the computing model for non-LHC Fermilab experiments across multiple physics areas. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying size, needs, and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of solutions for high throughput computing, data management, database access and collaboration management within an experiment. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid compute sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating into experiment computing operations several services including a common job submission service, software and reference data distribution through CVMFS repositories, flexible and robust data transfer clients, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken the leading role in defining the computing model for Fermilab experiments, aided in the design of experiments beyond those hosted at Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.
NASA Astrophysics Data System (ADS)
Nurjanah; Dahlan, J. A.; Wibisono, Y.
2017-02-01
This paper aims to design and develop computer-based e-learning teaching materials for improving the mathematical understanding ability and spatial sense of junior high school students. The particular aims are (1) producing a teaching material design, an evaluation model, and instruments to measure the mathematical understanding ability and spatial sense of junior high school students; (2) conducting trials of the computer-based e-learning teaching material model, assessment, and instruments; (3) completing the teaching material models of computer-based e-learning and assessment; and (4) producing the final research product, computer-based e-learning teaching materials delivered as an interactive learning disc. The research method used in this study is developmental research, conducted through thought experiments and instruction experiments. The results showed that the teaching materials could be used very well. This is based on the validation of the computer-based e-learning teaching materials by 5 multimedia experts. The face and content validity judgements of the 5 validators agreed for each test item of mathematical understanding ability and spatial sense. The reliability coefficients of the mathematical understanding ability and spatial sense tests are 0.929 and 0.939 respectively, which is very high, while the validity of both tests meets high and very high criteria.
Validation of the actuator line and disc techniques using the New MEXICO measurements
NASA Astrophysics Data System (ADS)
Sarmast, S.; Shen, W. Z.; Zhu, W. J.; Mikkelsen, R. F.; Breton, S. P.; Ivanell, S.
2016-09-01
Actuator line and disc techniques are employed to analyse the wake obtained in the New MEXICO wind turbine experiment. The New MEXICO measurement campaign, conducted in 2014, is a follow-up to the MEXICO campaign, which was completed in 2006. Three flow configurations in axial flow condition are simulated, and both computed loads and velocity fields around the rotor are compared with detailed PIV measurements. The comparisons show that the computed loadings are generally in agreement with the measurements under the rotor's design condition. Both actuator approaches under-predicted the loading on the inboard part of the blade in stall condition, as only 2D airfoil data were used in the simulations. The predicted wake velocities generally agree well with the PIV measurements. In the experiment, PIV measurements are also provided close to the hub and nacelle. To study the effect of the hub and nacelle, numerical simulations are performed both in the presence and absence of the hub geometry. This study shows that the large hub used in the experiment has only small effects on overall wake behaviour.
Computing and Applying Atomic Regulons to Understand Gene Expression and Regulation
Faria, José P.; Davis, James J.; Edirisinghe, Janaka N.; ...
2016-11-24
Understanding gene function and regulation is essential for the interpretation, prediction, and ultimate design of cell responses to changes in the environment. A multitude of technologies, abstractions, and interpretive frameworks have emerged to answer the challenges presented by genome function and regulatory network inference. Here, we propose a new approach for producing biologically meaningful clusters of coexpressed genes, called Atomic Regulons (ARs), based on expression data, gene context, and functional relationships. We demonstrate this new approach by computing ARs for Escherichia coli, which we compare with the coexpressed gene clusters predicted by two prevalent existing methods: hierarchical clustering and k-means clustering. We test the consistency of ARs predicted by all methods against expected interactions predicted by the Context Likelihood of Relatedness (CLR) mutual information based method, finding that the ARs produced by our approach show better agreement with CLR interactions. We then apply our method to compute ARs for four other genomes: Shewanella oneidensis, Pseudomonas aeruginosa, Thermus thermophilus, and Staphylococcus aureus. We compare the AR clusters from all genomes to study the similarity of coexpression among a phylogenetically diverse set of species, identifying subsystems that show remarkable similarity over wide phylogenetic distances. We also study the sensitivity of our method for computing ARs to the expression data used in the computation, showing that our new approach requires less data than competing approaches to converge to a near final configuration of ARs. We go on to use our sensitivity analysis to identify the specific experiments that lead most rapidly to the final set of ARs for E. coli. As a result, this analysis produces insights into improving the design of gene expression experiments.
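The two baseline methods the ARs are compared against are standard and easy to run. The sketch below clusters a synthetic gene-by-condition expression matrix with both, purely to illustrate the kind of partition comparison involved; the data, cluster count, and agreement metric are all placeholders.

```python
# Toy comparison of the two baseline coexpression clustering methods
# mentioned above (k-means and hierarchical) on a synthetic expression matrix.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(4)
expression = rng.standard_normal((200, 50))   # 200 genes x 50 conditions

kmeans_labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(expression)
hier_labels = AgglomerativeClustering(n_clusters=10).fit_predict(expression)

# Agreement between the two partitions (adjusted Rand index)
print("ARI:", adjusted_rand_score(kmeans_labels, hier_labels))
```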
Transitioning EEG experiments away from the laboratory using a Raspberry Pi 2.
Kuziek, Jonathan W P; Shienh, Axita; Mathewson, Kyle E
2017-02-01
Electroencephalography (EEG) experiments are typically performed in controlled laboratory settings to minimise noise and produce reliable measurements. These controlled conditions also reduce the applicability of the obtained results to more varied environments and may limit their relevance to everyday situations. Advances in computer portability may increase the mobility and applicability of EEG results while decreasing costs. In this experiment we show that stimulus presentation using a Raspberry Pi 2 computer provides a low-cost, reliable alternative to a traditional desktop PC in the administration of EEG experimental tasks. Significant and reliable MMN and P3 activity, typical event-related potentials (ERPs) associated with an auditory oddball paradigm, was measured while experiments were administered using the Raspberry Pi 2. While latency differences in ERP triggering were observed between systems, these differences reduced power only marginally, likely due to the reduced processing power of the Raspberry Pi 2. An auditory oddball task administered using the Raspberry Pi 2 produced similar ERPs to those derived from a desktop PC in a laboratory setting. Despite temporal differences and slight increases in the number of trials needed for similar statistical power, the Raspberry Pi 2 can be used to design and present auditory experiments comparable to a PC. Our results show that the Raspberry Pi 2 is a low-cost alternative to the desktop PC when administering EEG experiments and, due to its small size and low power consumption, will enable mobile EEG experiments unconstrained by a traditional laboratory setting.
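The stimulus-sequence logic of such an auditory oddball task is simple to sketch. In the snippet below, `play_tone` and `send_trigger` are hypothetical placeholders for the platform-specific audio and EEG-trigger calls (e.g. a GPIO pulse on a Raspberry Pi); the trial count, oddball probability, frequencies, and interval are illustrative, not the paper's parameters.

```python
# Sketch of an auditory oddball paradigm: a rare deviant tone embedded in a
# stream of standard tones, with a trigger code marking each stimulus onset.
import random
import time

def play_tone(freq_hz):      # placeholder: platform-specific audio output
    pass

def send_trigger(code):      # placeholder: e.g. GPIO pulse to the EEG amplifier
    pass

N_TRIALS, P_ODDBALL = 200, 0.2
STANDARD, ODDBALL = 1000, 1500            # tone frequencies in Hz
ISI = 1.0                                  # inter-stimulus interval in seconds

for trial in range(N_TRIALS):
    is_odd = random.random() < P_ODDBALL
    send_trigger(2 if is_odd else 1)       # mark stimulus type and onset time
    play_tone(ODDBALL if is_odd else STANDARD)
    time.sleep(ISI)
```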
Methods for modeling cytoskeletal and DNA filaments
NASA Astrophysics Data System (ADS)
Andrews, Steven S.
2014-02-01
This review summarizes the models that researchers use to represent the conformations and dynamics of cytoskeletal and DNA filaments. It focuses on models that address individual filaments in continuous space. Conformation models include the freely jointed, Gaussian, angle-biased chain (ABC), and wormlike chain (WLC) models, of which the first three bend at discrete joints and the last bends continuously. Predictions from the WLC model generally agree well with experiment. Dynamics models include the Rouse, Zimm, stiff rod, dynamic WLC, and reptation models, of which the first four apply to isolated filaments and the last to entangled filaments. Experiments show that the dynamic WLC and reptation models are most accurate. They also show that biological filaments typically experience strong hydrodynamic coupling and/or constrained motion. Computer simulation methods that address filament dynamics typically compute filament segment velocities from local forces using the Langevin equation and then integrate these velocities with explicit or implicit methods; the former are more versatile and the latter are more efficient. Much remains to be discovered in biological filament modeling. In particular, filament dynamics in living cells are not well understood, and current computational methods are too slow and not sufficiently versatile. Although primarily a review, this paper also presents new statistical calculations for the ABC and WLC models. Additionally, it corrects several discrepancies in the literature about bending and torsional persistence length definitions, and their relations to flexural and torsional rigidities.
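The Langevin integration scheme described here is simplest to see for the Rouse model, one of the dynamics models the review covers. The following Brownian-dynamics sketch uses illustrative parameters in reduced units and harmonic springs only (no bending or hydrodynamic coupling); it is a toy, not code from the review.

```python
# Minimal Brownian-dynamics (overdamped Langevin) integration of a Rouse
# chain: beads connected by harmonic springs plus thermal noise.
import numpy as np

rng = np.random.default_rng(5)
n_beads, k_spring, zeta, kT, dt = 32, 10.0, 1.0, 1.0, 1e-3
steps = 10_000

r = np.cumsum(rng.standard_normal((n_beads, 3)), axis=0)  # initial random walk

for _ in range(steps):
    # Harmonic spring forces from both neighbours
    f = np.zeros_like(r)
    bond = r[1:] - r[:-1]
    f[:-1] += k_spring * bond
    f[1:] -= k_spring * bond
    # Langevin update: deterministic drift plus thermal kick
    r += (f / zeta) * dt + np.sqrt(2 * kT * dt / zeta) * rng.standard_normal(r.shape)

print("end-to-end distance:", np.linalg.norm(r[-1] - r[0]))
```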
ERIC Educational Resources Information Center
Kordaki, Maria
2011-01-01
This study presents an experiment aimed at the design of short learning courses in the context of LAMS, using a number of specific context-free collaboration design patterns implemented within LAMS. In fact, 25 Prospective Computer Engineers (PCEs) participated in this experiment. The analysis of the data shows that PCEs fully used these context…
Design and implementation of the one-step MSD adder of optical computer.
Song, Kai; Yan, Liping
2012-03-01
On the basis of the symmetric encoding algorithm for the modified signed-digit (MSD), a 7*7 truth table that can be realized with optical methods was developed. And based on the truth table, the optical path structures and circuit implementations of the one-step MSD adder of ternary optical computer (TOC) were designed. Experiments show that the scheme is correct, feasible, and efficient. © 2012 Optical Society of America
Discovering the gas laws and understanding the kinetic theory of gases with an iPad app
NASA Astrophysics Data System (ADS)
Davies, Gary B.
2017-07-01
Carrying out classroom experiments that demonstrate Boyle’s law and Gay-Lussac’s law can be challenging. Even if we are able to conduct classroom experiments using pressure gauges and syringes, the results of these experiments do little to illuminate the kinetic theory of gases. However, molecular dynamics simulations that run on computers allow us to visualise the behaviour of individual particles and to link this behaviour to the bulk properties of the gas e.g. its pressure and temperature. In this article, I describe how to carry out ‘computer experiments’ using a commercial molecular dynamics iPad app called Atoms in Motion [1]. Using the app, I show how to obtain data from simulations that demonstrate Boyle’s law and Gay-Lussac’s law, and hence also the combined gas law.
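The same kind of 'computer experiment' can be run without the app. The sketch below, a minimal 2D ideal-gas simulation in reduced units (all parameters invented), measures pressure from the momentum particles transfer to the walls and checks that P times area stays constant at fixed temperature, which is Boyle's law.

```python
# 2D ideal-gas 'computer experiment': particles reflect elastically off the
# walls of an L x L box; pressure = wall momentum transfer / (time * perimeter).
import numpy as np

def pressure(L, n=2000, steps=4000, dt=0.005, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, L, (n, 2))
    vel = rng.standard_normal((n, 2))          # fixed temperature ~ <v^2>
    impulse = 0.0
    for _ in range(steps):
        pos += vel * dt
        for ax in (0, 1):
            lo, hi = pos[:, ax] < 0, pos[:, ax] > L
            # Each wall collision transfers 2|m v| of momentum (m = 1 here)
            impulse += 2 * np.abs(vel[lo | hi, ax]).sum()
            vel[lo | hi, ax] *= -1
            pos[lo, ax] *= -1
            pos[hi, ax] = 2 * L - pos[hi, ax]
    return impulse / (steps * dt) / (4 * L)    # force per unit boundary length

for L in (10.0, 14.0, 20.0):
    P = pressure(L)
    print(f"L = {L:5.1f}   P*A = {P * L * L:.1f}")   # ~constant: Boyle's law
```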
Laser-driven planar Rayleigh-Taylor instability experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glendinning, S.G.; Weber, S.V.; Bell, P.
1992-08-24
We have performed a series of experiments on the Nova Laser Facility to examine the hydrodynamic behavior of directly driven planar foils with initial perturbations of varying wavelength. The foils were accelerated with a single, frequency-doubled, smoothed and temporally shaped laser beam at 0.8×10¹⁴ W/cm². The experiments are in good agreement with numerical simulations using the computer codes LASNEX and ORCHID, which show growth rates reduced to about 70% of classical for this nonlinear regime.
NASA Technical Reports Server (NTRS)
Bean, T. A.; Bowhill, S. A.
1973-01-01
Partial-reflection data collected for the eclipse of July 10, 1972 as well as for July 9 and 11, 1972, are analyzed to determine eclipse effects on D-region electron densities. The partial-reflection experiment was set up to collect data using an on-line PDP-15 computer and DECtape storage. The electron-density profiles show good agreement with results from other eclipses. The partial-reflection programs were changed after the eclipse data collection to improve the operation of the partial-reflection system. These changes were mainly due to expanded computer hardware and have simplified the operations of the system considerably.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odlyzko, Michael L.; Held, Jacob T.; Mkhoyan, K. Andre, E-mail: mkhoyan@umn.edu
2016-07-15
Quantitatively calibrated annular dark field scanning transmission electron microscopy (ADF-STEM) imaging experiments were compared to frozen phonon multislice simulations adapted to include chemical bonding effects. Having carefully matched simulation parameters to experimental conditions, a depth-dependent bonding effect was observed for high-angle ADF-STEM imaging of aluminum nitride. This result is explained by computational predictions, systematically examined in the preceding portion of this study, showing the propagation of the converged STEM beam to be highly sensitive to net interatomic charge transfer. Thus, although uncertainties in experimental conditions and simulation accuracy remain, the computationally predicted experimental bonding effect withstands the experimental testing reported here.
Brain-computer interface for alertness estimation and improving
NASA Astrophysics Data System (ADS)
Hramov, Alexander; Maksimenko, Vladimir; Hramova, Marina
2018-02-01
Using wavelet analysis of the signals of electrical brain activity (EEG), we study the processes of neural activity associated with the perception of visual stimuli. We demonstrate that the brain can process visual stimuli in two scenarios: (i) perception is characterized by destruction of the alpha-waves and an increase in high-frequency (beta) activity, or (ii) the beta-rhythm is not well pronounced, while the alpha-wave energy remains unchanged. Special experiments show that the motivation factor initiates the first scenario, explained by increasing alertness. Based on the obtained results we built a brain-computer interface and demonstrate how the degree of alertness can be estimated and controlled in a real experiment.
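The core signal-processing step, wavelet-based alpha and beta band power, can be sketched with a Morlet continuous wavelet transform. The snippet below assumes the PyWavelets library and a synthetic alpha-dominated signal; sampling rate, band edges, and the signal itself are illustrative, not the authors' data.

```python
# Sketch: Morlet CWT of an EEG channel, then mean power in the alpha
# (8-12 Hz) and beta (15-30 Hz) bands.
import numpy as np
import pywt

fs = 250.0                                    # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(6)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)  # toy alpha

freqs_target = np.arange(4.0, 40.0)           # analysis frequencies, Hz
scales = pywt.central_frequency("morl") * fs / freqs_target
coefs, freqs = pywt.cwt(eeg, scales, "morl", sampling_period=1 / fs)
power = np.abs(coefs) ** 2

alpha = power[(freqs >= 8) & (freqs <= 12)].mean()
beta = power[(freqs >= 15) & (freqs <= 30)].mean()
print(f"alpha power = {alpha:.3f}, beta/alpha ratio = {beta / alpha:.3f}")
```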
Semi-physical Simulation Platform of a Parafoil Nonlinear Dynamic System
NASA Astrophysics Data System (ADS)
Gao, Hai-Tao; Yang, Sheng-Bo; Zhu, Er-Lin; Sun, Qing-Lin; Chen, Zeng-Qiang; Kang, Xiao-Feng
2013-11-01
Focusing on the problems in the process of simulation and experiment on a parafoil nonlinear dynamic system, such as limited methods, high cost and low efficiency, we present a semi-physical simulation platform. It is designed by connecting parts of physical objects to a computer, and remedies the defect that a pure computer simulation is entirely divorced from the real environment. The main components of the platform and its functions, as well as simulation flows, are introduced. The feasibility and validity are verified through a simulation experiment. The experimental results show that the platform has significance for improving the quality of the parafoil fixed-point airdrop system, shortening the development cycle and saving cost.
Simulating and assessing boson sampling experiments with phase-space representations
NASA Astrophysics Data System (ADS)
Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.
2018-04-01
The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and potential for obtaining a quantum computer to solve problems believed classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply this to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.
Pain Assessment and Management in Nursing Education Using Computer-based Simulations.
Romero-Hall, Enilda
2015-08-01
It is very important for nurses to have a clear understanding of the patient's pain experience and of management strategies. However, a review of the nursing literature shows that one of the main barriers to proper pain management practice is lack of knowledge. Nursing schools are in a unique position to address the gap in pain management knowledge by facilitating the acquisition and use of knowledge by the next generation of nurses. The purpose of this article is to discuss the role of computer-based simulations as a reliable educational technology strategy that can enhance the learning experience of nursing students acquiring pain management knowledge and practice. Computer-based simulations provide a significant number of learning affordances that can help change nursing students' attitudes and behaviors toward and practice of pain assessment and management.
Factors influencing use of an e-health website in a community sample of older adults.
Czaja, Sara J; Sharit, Joseph; Lee, Chin Chin; Nair, Sankaran N; Hernández, Mario A; Arana, Neysarí; Fu, Shih Hua
2013-01-01
The use of the internet as a source of health information and link to healthcare services has raised concerns about the ability of consumers, especially vulnerable populations such as older adults, to access these applications. This study examined the influence of training on the ability of adults (aged 45+ years) to use the Medicare.gov website to solve problems related to health management. The influence of computer experience and cognitive abilities on performance was also examined. Seventy-one participants, aged 47-92, were randomized into a Multimedia training, Unimodal training, or Cold Start condition and completed three healthcare management problems. Computer/internet experience was measured via questionnaire, and cognitive abilities were assessed using standard neuropsychological tests. Performance metrics included measures of navigation, accuracy and efficiency. Data were analyzed using analysis of variance, χ² and regression techniques. The data indicate that there was no difference among the three conditions on measures of accuracy, efficiency, or navigation. However, results of the regression analyses showed that, overall, people who received training performed better on the tasks, as evidenced by greater accuracy and efficiency. Performance was also significantly influenced by prior computer experience and cognitive abilities. Participants with more computer experience and higher cognitive abilities performed better. The findings indicate that training, experience, and abilities are important when using complex health websites. However, training alone is not sufficient. The complexity of web content needs to be considered to ensure successful use of these websites by those with lower abilities.
ELM Meets Urban Big Data Analysis: Case Studies
Chen, Huajun; Chen, Jiaoyan
2016-01-01
In recent years, the rapid progress of urban computing has raised big issues, creating both opportunities and challenges. The heterogeneity and sheer volume of the data, and the large gap between the physical and virtual worlds, make it difficult to solve practical problems in urban computing quickly. In this paper, we propose a general application framework of ELM for urban computing. We present several real case studies of the framework, such as smog-related health hazard prediction and optimal retail store placement. Experiments involving urban data in China show the efficiency, accuracy, and flexibility of our proposed framework. PMID:27656203
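The ELM building block of such a framework is compact: a single hidden layer with random, untrained weights and an analytic least-squares readout. The sketch below uses synthetic placeholder data standing in for urban features; the hidden-layer size and target are assumptions for illustration.

```python
# Minimal extreme learning machine (ELM): random hidden weights, tanh
# activations, and a least-squares output layer.
import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_normal((500, 8))                       # e.g. urban features
y = 2 * X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(500)  # toy target

n_hidden = 100
W = rng.standard_normal((8, n_hidden))   # random input weights (never trained)
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                   # hidden-layer activations

beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic least-squares readout
pred = H @ beta
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```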
NASA Technical Reports Server (NTRS)
Choudhury, A. K.; Djalali, M.
1975-01-01
In this recursive method proposed, the gain matrix for the Kalman filter and the covariance of the state vector are computed not via the Riccati equation, but from certain other differential equations of Chandrasekhar type. The 'invariant imbedding' idea resulted in the reduction of the basic boundary value problem of transport theory to an equivalent initial value system, a significant computational advance. Experience with the initial value formulation showed some computational savings with the method, and the covariance matrix is less vulnerable to loss of positive definiteness.
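For contrast, the conventional route the Chandrasekhar-type equations are designed to avoid computes the gain from the Riccati recursion directly. A minimal discrete-time sketch of that baseline, with generic illustrative matrices rather than anything from the paper, is:

```python
# Conventional discrete-time Kalman gain via the Riccati recursion, the
# computation the Chandrasekhar-type approach replaces.
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])               # observation
Q = 0.01 * np.eye(2)                      # process noise covariance
R = np.array([[0.5]])                     # measurement noise covariance

P = np.eye(2)                             # state covariance
for k in range(50):
    P = A @ P @ A.T + Q                   # time update (prediction)
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    P = (np.eye(2) - K @ H) @ P           # measurement update

print("steady-state gain:\n", K)
```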
Birgin, Osman; Catlioğlu, Hakan; Gürbüz, Ramazan; Aydin, Serhat
2010-10-01
This study aimed to investigate the experiences of pre-service mathematics (PSM) teachers with computers and their attitudes toward them. The Computer Attitude Scale, Computer Competency Survey, and Computer Use Information Form were administered to 180 Turkish PSM teachers. Results revealed that most PSM teachers used computers at home and at Internet cafes, and that their competency was generally intermediate and upper level. The study concludes that PSM teachers' attitudes about computers differ according to their years of study, computer ownership, level of computer competency, frequency of computer use, computer experience, and whether they had attended a computer-aided instruction course. However, computer attitudes were not affected by gender.
Blind topological measurement-based quantum computation
Morimae, Tomoyuki; Fujii, Keisuke
2012-01-01
Blind quantum computation is a novel secure quantum-computing protocol that enables Alice, who does not have sufficient quantum technology at her disposal, to delegate her quantum computation to Bob, who has a fully fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output and algorithm. A recent proof-of-principle experiment demonstrating blind quantum computation in an optical system has raised new challenges regarding the scalability of blind quantum computation in realistic noisy conditions. Here we show that fault-tolerant blind quantum computation is possible in a topologically protected manner using the Raussendorf–Harrington–Goyal scheme. The error threshold of our scheme is 4.3×10⁻³, which is comparable to that (7.5×10⁻³) of non-blind topological quantum computation. As the error per gate of the order 10⁻³ was already achieved in some experimental systems, our result implies that secure cloud quantum computation is within reach. PMID:22948818
Bringing computational science to the public.
McDonagh, James L; Barker, Daniel; Alderson, Rosanna G
2016-01-01
The increasing use of computers in science allows for the scientific analyses of large datasets at an increasing pace. We provided examples and interactive demonstrations at Dundee Science Centre as part of the 2015 Women in Science festival, to present aspects of computational science to the general public. We used low-cost Raspberry Pi computers to provide hands on experience in computer programming and demonstrated the application of computers to biology. Computer games were used as a means to introduce computers to younger visitors. The success of the event was evaluated by voluntary feedback forms completed by visitors, in conjunction with our own self-evaluation. This work builds on the original work of the 4273π bioinformatics education program of Barker et al. (2013, BMC Bioinform. 14:243). 4273π provides open source education materials in bioinformatics. This work looks at the potential to adapt similar materials for public engagement events. It appears, at least in our small sample of visitors (n = 13), that basic computational science can be conveyed to people of all ages by means of interactive demonstrations. Children as young as five were able to successfully edit simple computer programs with supervision. This was, in many cases, their first experience of computer programming. The feedback is predominantly positive, showing strong support for improving computational science education, but also included suggestions for improvement. Our conclusions are necessarily preliminary. However, feedback forms suggest methods were generally well received among the participants; "Easy to follow. Clear explanation" and "Very easy. Demonstrators were very informative." Our event, held at a local Science Centre in Dundee, demonstrates that computer games and programming activities suitable for young children can be performed alongside a more specialised and applied introduction to computational science for older visitors.
Computation of rare transitions in the barotropic quasi-geostrophic equations
NASA Astrophysics Data System (ADS)
Laurie, Jason; Bouchet, Freddy
2015-01-01
We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exist. By means of large deviations and instanton theory with the use of an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
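The minimum action idea is easy to demonstrate in one dimension. The sketch below, a toy analogue of the geophysical setting (not the authors' code), discretizes the Freidlin-Wentzell action for a double-well drift b(x) = x - x³ and minimizes it over paths pinned to the two attractors at x = ±1; for such gradient systems the minimum equals twice the potential barrier.

```python
# Minimum action method in 1D: minimize the discretized action
# S[x] = (1/4) * sum |dx/dt - b(x)|^2 dt with fixed endpoints.
import numpy as np
from scipy.optimize import minimize

T, n = 20.0, 200
dt = T / n
b = lambda x: x - x**3                     # drift with attractors at +/-1

def action(interior):
    x = np.concatenate(([-1.0], interior, [1.0]))   # pin endpoints to attractors
    xdot = np.diff(x) / dt
    xmid = 0.5 * (x[1:] + x[:-1])
    return 0.25 * np.sum((xdot - b(xmid)) ** 2) * dt

x0 = np.linspace(-1, 1, n + 1)[1:-1]       # straight-line initial path
res = minimize(action, x0, method="L-BFGS-B")
print("minimal action:", res.fun)          # ~0.5 = 2 x barrier height here
```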
Task-dependency and structure-dependency in number interference effects in sentence comprehension
Franck, Julie; Colonna, Saveria; Rizzi, Luigi
2015-01-01
We report three experiments on French that explore number mismatch effects in intervention configurations in the comprehension of object A’-dependencies, relative clauses and questions. The study capitalizes on the finding of object attraction in sentence production, in which speakers sometimes erroneously produce a verb that agrees in number with a plural object in object relative clauses. Evidence points to the role of three critical constructs from formal syntax: intervention, intermediate traces and c-command (Franck et al., 2010). Experiment 1, using a self-paced reading procedure on these grammatical structures with an agreement error on the verb, shows an enhancing effect of number mismatch in intervention configurations, with faster reading times with plural (mismatching) objects. Experiment 2, using an on-line grammaticality judgment task on the ungrammatical versions of these structures, shows an interference effect in the form of attraction, with slower response times with plural objects. Experiment 3 with a similar grammaticality judgment task shows stronger attraction from c-commanding than from preceding interveners. Overall, the data suggest that syntactic computations in performance refer to the same syntactic representations in production and comprehension, but that different tasks tap into different processes involved in parsing: whereas performance in self-paced reading reflects the intervention of the subject in the process of building an object A’-dependency, performance in grammaticality judgment reflects intervention of the object on the computation of the subject-verb agreement dependency. The latter shows the hallmarks of structure-dependent attraction effects in sentence production, in particular, a sensitivity to specific characteristics of hierarchical representations. PMID:25914652
Twisting Anderson pseudospins with light: Quench dynamics in THz-pumped BCS superconductors
NASA Astrophysics Data System (ADS)
Chou, Yang-Zhi; Liao, Yunxiang; Foster, Matthew
We study the preparation and the detection of coherent far-from-equilibrium BCS superconductor dynamics in THz pump-probe experiments. In a recent experiment, an intense monocycle THz pulse with center frequency ω = Δ was injected into a superconductor with BCS gap Δ, and the post-pump evolution was detected via the optical conductivity. It was argued that nonlinear coupling of the pump to the Anderson pseudospins of the superconductor induces coherent dynamics of the Higgs mode Δ(t). We validate this picture in a 2D BCS model with a combination of exact numerics and the Lax reduction, and we compute the dynamical phase diagram. The main effect of the pump is to scramble the orientations of Anderson pseudospins along the Fermi surface by twisting them in the xy-plane. We show that more intense pulses can induce a far-from-equilibrium gapless phase (phase I), originally predicted in the context of interaction quenches. We show that the THz pump can reach phase I at much lower energy densities than an interaction quench, and we demonstrate that Lax reduction provides a quantitative tool for computing coherent BCS dynamics. We also compute the optical conductivity for the states discussed here.
Hybrid Quantum-Classical Approach to Quantum Optimal Control.
Li, Jun; Yang, Xiaodong; Peng, Xinhua; Sun, Chang-Pu
2017-04-14
A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. We show that the most computationally demanding part of gradient-based algorithms, namely, computing the fitness function and its gradient for a control input, can be accomplished by the process of evolution and measurement on a quantum simulator. By posing queries to and receiving answers from the quantum simulator, classical computing devices update the control parameters until an optimal control solution is found. To demonstrate the quantum-classical scheme in experiment, we use a seven-qubit nuclear magnetic resonance system, on which we have succeeded in optimizing state preparation without involving classical computation of the large Hilbert space evolution.
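The structure of the hybrid loop can be illustrated with a toy single-qubit state-preparation problem. In the sketch below the "quantum simulator" is simulated numerically, and the classical side does finite-difference gradient ascent on the fidelity; the pulse count, durations, and optimizer are illustrative assumptions, not the paper's NMR protocol.

```python
# Toy hybrid quantum-classical loop: a classical optimizer tunes piecewise-
# constant control amplitudes; the (simulated) quantum system returns the
# fitness, here the fidelity of preparing |1> from |0>.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi0 = np.array([1, 0], dtype=complex)
target = np.array([0, 1], dtype=complex)
n_slices, dt = 10, 0.1

def fidelity(amps):
    """The 'quantum simulator' call: evolve under H = sz + a_k * sx per slice."""
    psi = psi0
    for a in amps:
        psi = expm(-1j * (sz + a * sx) * dt) @ psi
    return abs(target.conj() @ psi) ** 2

rng = np.random.default_rng(11)
amps = 0.5 * rng.standard_normal(n_slices)   # initial control guess
eps, lr = 1e-4, 2.0
for _ in range(300):                          # classical update loop
    grad = np.array([(fidelity(amps + eps * e) - fidelity(amps - eps * e)) / (2 * eps)
                     for e in np.eye(n_slices)])
    amps += lr * grad                         # gradient ascent on the fidelity
print("final fidelity:", fidelity(amps))
```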
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czakó, Gábor, E-mail: czako@chem.elte.hu
Motivated by a recent experiment [H. Pan and K. Liu, J. Chem. Phys. 140, 191101 (2014)], we report a quasiclassical trajectory study of the O(³P) + CH₄(vₖ = 0, 1) → OH + CH₃ [k = 1 and 3] reactions on an ab initio potential energy surface. The computed angular distributions and cross sections correlated to the OH(v = 0, 1) + CH₃(v = 0) coincident product states can be directly compared to experiment for O + CH₄(v₃ = 0, 1). Both theory and experiment show that the ground-state reaction is backward scattered, whereas the angular distributions shift toward sideways and forward directions upon antisymmetric stretching (v₃) excitation of the reactant. Theory predicts similar behavior for the O + CH₄(v₁ = 1) reaction. The simulations show that stretching excitation enhances the reaction up to about 15 kcal/mol collision energy, whereas the O + CH₄(vₖ = 1) reactions produce smaller cross sections for OH(v = 1) + CH₃(v = 0) than those of O + CH₄(v = 0) → OH(v = 0) + CH₃(v = 0). The former finding agrees with experiment and the latter awaits confirmation. The computed cold OH rotational distributions of O + CH₄(v = 0) are in good agreement with experiment.
Hollmann, M; Mönch, T; Mulla-Osman, S; Tempelmann, C; Stadler, J; Bernarding, J
2008-10-30
In functional MRI (fMRI), complex experiments and applications require increasingly complex parameter handling, as the experimental setup usually consists of separate soft- and hardware systems. Advanced real-time applications such as neurofeedback-based training or brain-computer interfaces (BCIs) may even require adaptive changes of the paradigms and experimental setup during the measurement. This would be facilitated by an automated management of the overall workflow and control of the communication between all experimental components. We realized a concept based on an XML software framework called Experiment Description Language (EDL). All parameters relevant for real-time data acquisition, real-time fMRI (rtfMRI) statistical data analysis, stimulus presentation, and activation processing are stored in one central EDL file and processed during the experiment. A usability study comparing the central EDL parameter management with traditional approaches showed an improvement in the overall experimental handling. Based on this concept, a feasibility study realizing a dynamic rtfMRI-based brain-computer interface showed that the developed system in combination with EDL was able to reliably detect and evaluate activation patterns in real time. The implementation of centrally controlled communication between the subsystems involved in the rtfMRI experiments reduced potential inconsistencies and will open new applications for adaptive BCIs.
Computational Fluid Dynamics (CFD) investigation onto passenger car disk brake design
NASA Astrophysics Data System (ADS)
Munisamy, Kannan M.; Kanasan Moorthy, Shangkari K.
2013-06-01
The aim of this study is to investigate the flow and heat transfer in ventilated disc brakes using Computational Fluid Dynamics (CFD). A NACA-series blade is designed for the ventilated disc brake and its cooling characteristics are compared to the baseline design. The ventilated disc brakes are simulated using the commercial CFD software FLUENT™ with a simulation configuration obtained from experimental data. The NACA-series blade design shows improvements in Nusselt number compared to the baseline design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silverman, T. J.; Bosco, N.; Kurtz, S.
2012-03-01
Concentrating photovoltaic (CPV) cell assemblies can fail due to thermomechanical fatigue in the die-attach layer. In this presentation, we show the latest results from our computational model of thermomechanical fatigue. The model is used to estimate the relative lifetime of cell assemblies exposed to various temperature histories consistent with service and with accelerated testing. We also present early results from thermal cycling experiments designed to help validate the computational model.
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2011-01-01
Launch vehicles frequently experience a reduced stability margin through the transonic Mach number range. This reduced stability margin is caused by an undamping of the aerodynamics in one of the lower frequency flexible or rigid body modes. Analysis of the behavior of a flexible vehicle is routinely performed with quasi-steady aerodynamic lineloads derived from steady rigid computational fluid dynamics (CFD). However, a quasi-steady aeroelastic stability analysis can be unconservative at the critical Mach numbers where experiment or unsteady computational aeroelastic (CAE) analysis show a reduced or even negative aerodynamic damping. This paper will present a method of enhancing the quasi-steady aeroelastic stability analysis of a launch vehicle with unsteady aerodynamics. The enhanced formulation uses unsteady CFD to compute the response of selected lower frequency modes. The response is contained in a time history of the vehicle lineloads. A proper orthogonal decomposition of the unsteady aerodynamic lineload response is used to reduce the scale of data volume and system identification is used to derive the aerodynamic stiffness, damping and mass matrices. The results of the enhanced quasi-steady aeroelastic stability analysis are compared with the damping and frequency computed from unsteady CAE analysis and from a quasi-steady analysis. The results show that incorporating unsteady aerodynamics in this way brings the enhanced quasi-steady aeroelastic stability analysis into close agreement with the unsteady CAE analysis.
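The proper orthogonal decomposition step is a direct application of the SVD to the snapshot matrix of lineloads. The sketch below uses a synthetic two-mode lineload history (spatial stations by time steps, all values invented) to show how the dominant modes and their energy content fall out of the decomposition.

```python
# POD of an unsteady lineload time history via the SVD: columns of U are the
# spatial modes, singular values give the energy ranking.
import numpy as np

rng = np.random.default_rng(10)
n_stations, n_steps = 120, 2000
t = np.linspace(0, 10, n_steps)
z = np.linspace(0, 1, n_stations)[:, None]
# Toy lineload history: two spatial modes oscillating at different frequencies
loads = (np.sin(np.pi * z) * np.sin(2 * np.pi * 3 * t)
         + 0.3 * np.sin(2 * np.pi * z) * np.sin(2 * np.pi * 7 * t)
         + 0.01 * rng.standard_normal((n_stations, n_steps)))

mean = loads.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(loads - mean, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("modes needed for 99% energy:", np.searchsorted(energy.cumsum(), 0.99) + 1)
```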
The Improvement and Individualization of Computer-Assisted Instruction
1975-09-15
Spanish experiments had studied at least one Romance language and consequently were able to learn some of the Spanish words by using cognates...involved the acquisition of foreign-language vocabulary items. The first (using German vocabulary) concerned itself with optimizing the selection of...method. Experiments with Spanish and Russian items showed that the method could be a powerful aid in building and retaining a large vocabulary of…
Programming experience promotes higher STEM motivation among first-grade girls.
Master, Allison; Cheryan, Sapna; Moscatelli, Adriana; Meltzoff, Andrew N
2017-08-01
The gender gap in science, technology, engineering, and math (STEM) engagement is large and persistent. This gap is significantly larger in technological fields such as computer science and engineering than in math and science. Gender gaps begin early; young girls report less interest and self-efficacy in technology compared with boys in elementary school. In the current study (N=96), we assessed 6-year-old children's stereotypes about STEM fields and tested an intervention to develop girls' STEM motivation despite these stereotypes. First-grade children held stereotypes that boys were better than girls at robotics and programming but did not hold these stereotypes about math and science. Girls with stronger stereotypes about robotics and programming reported lower interest and self-efficacy in these domains. We experimentally tested whether positive experience with programming robots would lead to greater interest and self-efficacy among girls despite these stereotypes. Children were randomly assigned either to a treatment group that was given experience in programming a robot using a smartphone or to control groups (no activity or other activity). Girls given programming experience reported higher technology interest and self-efficacy compared with girls without this experience and did not exhibit a significant gender gap relative to boys' interest and self-efficacy. These findings show that children's views mirror current American cultural messages about who excels at computer science and engineering and show the benefit of providing young girls with chances to experience technological activities.
Innovative Science Experiments Using Phoenix
ERIC Educational Resources Information Center
Kumar, B. P. Ajith; Satyanarayana, V. V. V.; Singh, Kundan; Singh, Parmanand
2009-01-01
A simple, flexible and very low cost hardware plus software framework for developing computer-interfaced science experiments is presented. It can be used for developing computer-interfaced science experiments without getting into the details of electronics or computer programming. For developing experiments this is a middle path between…
A computational model of selection by consequences.
McDowell, J J
2004-01-01
Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied over wide ranges in these experiments, and many of the qualitative features of the model also were varied. The digital organism consistently showed a hyperbolic relation between response and reinforcement rates, and this hyperbolic description of the data was consistently better than the description provided by other, similar, function forms. In addition, the parameters of the hyperbola varied systematically with the quantitative, and some of the qualitative, properties of the model in ways that were consistent with findings from biological organisms. These results suggest that the material events responsible for an organism's responding on RI schedules are computationally equivalent to Darwinian selection by consequences. They also suggest that the computational model developed here is worth pursuing further as a possible dynamic account of behavior. PMID:15357512
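The hyperbolic relation the digital organism reproduces is Herrnstein's equation, R = kr/(r + rₑ), and fitting it to rate data is a one-liner. The sketch below fits the hyperbola to synthetic placeholder rates (the data points are invented, not from the paper).

```python
# Fit the response-rate hyperbola R = k*r / (r + r_e) (Herrnstein's equation)
# to rate data of the kind generated by the digital organism.
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(r, k, r_e):
    return k * r / (r + r_e)

reinf_rate = np.array([8, 20, 50, 120, 300.0])   # reinforcers per hour
resp_rate = np.array([18, 37, 60, 78, 92.0])     # responses per minute

(k, r_e), _ = curve_fit(hyperbola, reinf_rate, resp_rate, p0=[100, 50])
print(f"k = {k:.1f} responses/min, r_e = {r_e:.1f} reinforcers/hr")
```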
An Investment Behavior Analysis using by Brain Computer Interface
NASA Astrophysics Data System (ADS)
Suzuki, Kyoko; Kinoshita, Kanta; Miyagawa, Kazuhiro; Shiomi, Shinichi; Misawa, Tadanobu; Shimokawa, Tetsuya
In this paper, we construct a new Brain Computer Interface (BCI) for the purpose of analyzing humans' investment decision making. The BCI is made up of three functional parts, which take the roles of measuring brain information, determining market prices in an artificial market, and specifying an investment decision model, respectively. When subjects make decisions, their brain information is conveyed to the part specifying the investment decision model through the brain-measurement part, whereas their investment orders are sent to the artificial market to form market prices. Both a support vector machine and a 3-layered perceptron are used to estimate the investment decision model. To evaluate our BCI, we conduct an experiment in which subjects and a computer trader agent trade shares of stock in the artificial market, and test how well the computer trader agent can forecast market price formation and investment decisions from the subjects' brain information. The result of the experiment shows that brain information can improve the accuracy of forecasts, so the computer trader agent can supply market liquidity to stabilize market volatility without incurring losses.
NASA Astrophysics Data System (ADS)
Mazurowski, Maciej A.; Zhang, Jing; Lo, Joseph Y.; Kuzmiak, Cherie M.; Ghate, Sujata V.; Yoon, Sora
2014-03-01
Providing high quality mammography education to radiology trainees is essential, as good interpretation skills potentially ensure the highest benefit of screening mammography for patients. We have previously proposed a computer-aided education system that utilizes trainee models, which relate human-assessed image characteristics to interpretation error. We proposed that these models be used to identify the most difficult and therefore the most educationally useful cases for each trainee. In this study, as a next step in our research, we propose to build trainee models that utilize features that are automatically extracted from images using computer vision algorithms. To predict error, we used a logistic regression which accepts imaging features as input and returns error as output. Reader data from 3 experts and 3 trainees were used. Receiver operating characteristic analysis was applied to evaluate the proposed trainee models. Our experiments showed that, for three trainees, our models were able to predict error better than chance. This is an important step in the development of adaptive computer-aided education systems since computer-extracted features will allow for faster and more extensive search of imaging databases in order to identify the most educationally beneficial cases.
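A minimal sketch of the modeling step described above, assuming synthetic data: a logistic regression maps computer-extracted image features to the probability of trainee error, and ROC analysis checks whether prediction beats chance (AUC above 0.5). Feature counts and effect sizes are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# hypothetical data: computer-extracted image features -> trainee error (1 = case misread)
X = rng.normal(size=(500, 8))
y = (0.9 * X[:, 2] - 0.6 * X[:, 5] + rng.normal(scale=1.0, size=500) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)           # features in, error probability out
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"ROC AUC (above 0.5 = better than chance): {auc:.2f}")
```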
Zhao, Ming; Rattanatamrong, Prapaporn; DiGiovanna, Jack; Mahmoudi, Babak; Figueiredo, Renato J; Sanchez, Justin C; Príncipe, José C; Fortes, José A B
2008-01-01
Dynamic data-driven brain-machine interfaces (DDDBMI) have great potential to advance the understanding of neural systems and improve the design of brain-inspired rehabilitative systems. This paper presents a novel cyberinfrastructure that couples in vivo neurophysiology experimentation with massive computational resources to provide seamless and efficient support of DDDBMI research. Closed-loop experiments can be conducted with in vivo data acquisition, reliable network transfer, parallel model computation, and real-time robot control. Behavioral experiments with live animals are supported with real-time guarantees. Offline studies can be performed with various configurations for extensive analysis and training. A Web-based portal is also provided to allow users to conveniently interact with the cyberinfrastructure, conducting both experimentation and analysis. New motor control models are developed based on this approach, which include recursive least squares based (RLS) and reinforcement learning based (RLBMI) algorithms. The results from an online RLBMI experiment show that the cyberinfrastructure can successfully support DDDBMI experiments and meet the desired real-time requirements.
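The RLS component lends itself to a compact illustration. The sketch below implements a textbook recursive-least-squares update mapping simulated neural rates to a scalar kinematic target; it is not the authors' DDDBMI pipeline, and the channel count and forgetting factor are assumptions.

```python
import numpy as np

def rls_update(w, P, x, y, lam=0.99):
    """One recursive-least-squares step: weights w, inverse-correlation matrix P,
    input feature vector x (neural rates), scalar target y (e.g., hand position)."""
    Px = P @ x
    k = Px / (lam + x @ Px)              # gain vector
    e = y - w @ x                        # prediction error before the update
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam      # forgetting factor lam discounts old data
    return w, P

d = 32                                   # assumed number of neural channels
w, P = np.zeros(d), np.eye(d) * 100.0
rng = np.random.default_rng(2)
true_w = rng.normal(size=d)              # hypothetical "true" tuning
for _ in range(1000):                    # stream of (rates, position) samples
    x = rng.normal(size=d)
    y = true_w @ x + rng.normal(scale=0.1)
    w, P = rls_update(w, P, x, y)
print("decoder weight error:", np.linalg.norm(w - true_w))
```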
Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays.
Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A; Wetzstein, Gordon
2017-02-28
From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
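The gaze-contingent focus idea reduces to simple vergence arithmetic. The following is a hedged sketch only, assuming a tunable lens that accepts an arbitrary power command and a known spherical refractive error; real systems must also handle gaze-tracker noise, latency, and lens calibration, and sign conventions vary by optical design.

```python
def lens_power_diopters(gaze_depth_m: float, user_refractive_error_d: float = 0.0) -> float:
    """Request a focus-tunable lens power so the virtual image appears at the gazed depth.
    The vergence of a point at distance d is 1/d diopters; adding the user's spherical
    error (e.g., -2.0 D for a myope) folds the eyeglass prescription into the display."""
    vergence = 1.0 / max(gaze_depth_m, 0.1)   # clamp very near depths to avoid blow-up
    return vergence + user_refractive_error_d

# example: the user looks at a virtual object 2 m away and has -2 D of myopia
print(lens_power_diopters(2.0, -2.0))         # -1.5 D requested from the tunable lens
```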
Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays
NASA Astrophysics Data System (ADS)
Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A.; Wetzstein, Gordon
2017-02-01
From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
Intrinsic motivation, curiosity, and learning: Theory and applications in educational technologies.
Oudeyer, P-Y; Gottlieb, J; Lopes, M
2016-01-01
This chapter studies the bidirectional causal interactions between curiosity and learning and discusses how understanding these interactions can be leveraged in educational technology applications. First, we review recent results showing how state curiosity, and more generally the experience of novelty and surprise, can enhance learning and memory retention. Then, we discuss how psychology and neuroscience have conceptualized curiosity and intrinsic motivation, studying how the brain can be intrinsically rewarded by novelty, complexity, or other measures of information. We explain how the framework of computational reinforcement learning can be used to model such mechanisms of curiosity. Then, we discuss the learning progress (LP) hypothesis, which posits a positive feedback loop between curiosity and learning. We outline experiments with robots that show how LP-driven attention and exploration can self-organize a developmental learning curriculum scaffolding efficient acquisition of multiple skills/tasks. Finally, we discuss recent work exploiting these conceptual and computational models in educational technologies, showing in particular how intelligent tutoring systems can be designed to foster curiosity and learning. © 2016 Elsevier B.V. All rights reserved.
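The learning progress (LP) hypothesis can be made concrete with a toy curriculum. In the sketch below, assumed for illustration only, a learner samples among four hypothetical tasks in proportion to its smoothed learning progress and therefore concentrates practice on learnable tasks while ignoring unlearnable noise.

```python
import numpy as np

rng = np.random.default_rng(3)
n_tasks = 4
# hypothetical tasks: error decays at different rates; task 3 is unlearnable (no decay)
decay = np.array([0.01, 0.003, 0.001, 0.0])
errors = np.ones(n_tasks)
lp = np.zeros(n_tasks)                       # running estimate of learning progress per task

for t in range(5000):
    # sample tasks proportionally to |learning progress| plus a small exploration bonus
    p = np.abs(lp) + 0.01
    p /= p.sum()
    task = rng.choice(n_tasks, p=p)
    old = errors[task]
    errors[task] *= 1 - decay[task]          # practicing a learnable task reduces its error
    lp[task] = 0.9 * lp[task] + 0.1 * (old - errors[task])   # smoothed progress signal
print("final errors per task:", np.round(errors, 3))
```

The positive feedback loop is visible in the output: tasks where practice yields progress attract more practice, self-organizing an easy-to-hard curriculum.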
Pore-scale and Continuum Simulations of Solute Transport Micromodel Benchmark Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oostrom, Martinus; Mehmani, Yashar; Romero Gomez, Pedro DJ
Four sets of micromodel nonreactive solute transport experiments were conducted with flow velocity, grain diameter, pore-aspect ratio, and flow focusing heterogeneity as the variables. The data sets were offered to pore-scale modeling groups to test their simulators. Each set consisted of two learning experiments, for which all results were made available, and a challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others were based on a lattice-Boltzmann (LB) approach, and one employed a computational fluid dynamics (CFD) technique. The learning experiments were used by the PN models to modify the standard perfect mixing approach in pore bodies into approaches to simulate the observed incomplete mixing. The LB and CFD models used these experiments to appropriately discretize the grid representations. The continuum model used published non-linear relations between transverse dispersion coefficients and Peclet numbers to compute the required dispersivity input values. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in less dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models needed up to several days on supercomputers to resolve the more complex problems.
Exploring Architectural Details Through a Wearable Egocentric Vision Device
Alletto, Stefano; Abati, Davide; Serra, Giuseppe; Cucchiara, Rita
2016-01-01
Augmented user experiences in the cultural heritage domain are in increasing demand by the new digital native tourists of the 21st century. In this paper, we propose a novel solution that aims at assisting the visitor during an outdoor tour of a cultural site using the unique first person perspective of wearable cameras. In particular, the approach exploits computer vision techniques to retrieve the details by proposing a robust descriptor based on the covariance of local features. Using a lightweight wearable board, the solution can localize the user with respect to the 3D point cloud of the historical landmark and provide him with information about the details at which he is currently looking. Experimental results validate the method both in terms of accuracy and computational effort. Furthermore, user evaluation based on real-world experiments shows that the proposal is deemed effective in enriching a cultural experience. PMID:26901197
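The covariance descriptor at the heart of the retrieval step can be sketched in a few lines: a region of local feature vectors is summarized by their covariance matrix, and two regions are compared with a log-Euclidean distance on the manifold of symmetric positive-definite matrices. The feature dimensionality and data below are illustrative assumptions, not the paper's actual descriptors.

```python
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(F):
    """F: (n_points, d) local features (e.g., pixel coordinates, gradients) from
    one image region; the region is summarized by the d x d feature covariance."""
    return np.cov(F, rowvar=False)

def log_euclidean_distance(C1, C2):
    # covariance matrices live on the SPD manifold; compare after a matrix logarithm
    return np.linalg.norm(logm(C1) - logm(C2), "fro")

rng = np.random.default_rng(4)
A = rng.normal(size=(500, 5))            # synthetic features from region A
B = rng.normal(size=(500, 5)) * 2.0      # region B with different statistics
print(log_euclidean_distance(covariance_descriptor(A), covariance_descriptor(B)))
```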
Topological phases in the Haldane model with spin–spin on-site interactions
NASA Astrophysics Data System (ADS)
Rubio-García, A.; García-Ripoll, J. J.
2018-04-01
Ultracold atom experiments allow the study of topological insulators, such as the non-interacting Haldane model. In this work we study a generalization of the Haldane model with spin–spin on-site interactions that can be implemented on such experiments. We focus on measuring the winding number, a topological invariant, of the ground state, which we compute using a mean-field calculation that effectively captures long-range correlations and a matrix product state computation in a lattice with 64 sites. Our main result is that we show how the topological phases present in the non-interacting model survive until the interactions are comparable to the kinetic energy. We also demonstrate the accuracy of our mean-field approach in efficiently capturing long-range correlations. Based on state-of-the-art ultracold atom experiments, we propose an implementation of our model that can give information about the topological phases.
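For the non-interacting limit, the topological invariant can be computed directly from the Bloch Hamiltonian. The sketch below evaluates the lower-band Chern number of the standard Haldane model with the Fukui-Hatsugai-Suzuki lattice method. Parameter values are illustrative, the sign of the result depends on conventions, and the interacting calculation in the paper (mean-field and matrix product states) is considerably more involved.

```python
import numpy as np

def haldane_bloch(k, t1=1.0, t2=0.2, phi=np.pi / 2, M=0.2):
    """Haldane Bloch Hamiltonian in a gauge periodic over the reciprocal lattice."""
    A1 = np.array([1.5, np.sqrt(3) / 2])          # honeycomb lattice vectors
    A2 = np.array([1.5, -np.sqrt(3) / 2])
    b = [A1, A2 - A1, -A2]                        # next-nearest-neighbour circulation
    f = t1 * (1 + np.exp(1j * k @ A1) + np.exp(1j * k @ A2))   # A-B hopping
    haa = M + 2 * t2 * sum(np.cos(k @ bi + phi) for bi in b)
    hbb = -M + 2 * t2 * sum(np.cos(k @ bi - phi) for bi in b)
    return np.array([[haa, f], [np.conj(f), hbb]])

def chern_number(N=40):
    """Fukui-Hatsugai-Suzuki lattice Chern number of the lower band."""
    A = np.array([[1.5, np.sqrt(3) / 2], [1.5, -np.sqrt(3) / 2]])
    G = 2 * np.pi * np.linalg.inv(A).T            # reciprocal vectors as rows
    u = np.empty((N, N, 2), dtype=complex)
    for i in range(N):
        for j in range(N):
            k = (i / N) * G[0] + (j / N) * G[1]
            u[i, j] = np.linalg.eigh(haldane_bloch(k))[1][:, 0]   # lower-band state
    c = 0.0
    for i in range(N):
        for j in range(N):
            p = (np.vdot(u[i, j], u[(i + 1) % N, j]) *
                 np.vdot(u[(i + 1) % N, j], u[(i + 1) % N, (j + 1) % N]) *
                 np.vdot(u[(i + 1) % N, (j + 1) % N], u[i, (j + 1) % N]) *
                 np.vdot(u[i, (j + 1) % N], u[i, j]))
            c += np.angle(p)                       # gauge-invariant plaquette flux
    return round(c / (2 * np.pi))

print(chern_number())   # expected +/-1 (convention-dependent sign) for these parameters
```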
Takalo, Jouni; Piironen, Arto; Honkanen, Anna; Lempeä, Mikko; Aikio, Mika; Tuukkanen, Tuomas; Vähäsöyrinki, Mikko
2012-01-01
Ideally, neuronal functions would be studied by performing experiments with unconstrained animals whilst they behave in their natural environment. Although this is not feasible currently for most animal models, one can mimic the natural environment in the laboratory by using a virtual reality (VR) environment. Here we present a novel VR system based upon a spherical projection of computer generated images using a modified commercial data projector with an add-on fish-eye lens. This system provides equidistant visual stimulation with extensive coverage of the visual field, high spatio-temporal resolution and flexible stimulus generation using a standard computer. It also includes a track-ball system for closed-loop behavioural experiments with walking animals. We present a detailed description of the system and characterize it thoroughly. Finally, we demonstrate the VR system's performance whilst operating in closed-loop conditions by showing the movement trajectories of the cockroaches during exploratory behaviour in a VR forest.
Semantic Coherence Facilitates Distributional Learning.
Ouyang, Long; Boroditsky, Lera; Frank, Michael C
2017-04-01
Computational models have shown that purely statistical knowledge about words' linguistic contexts is sufficient to learn many properties of words, including syntactic and semantic category. For example, models can infer that "postman" and "mailman" are semantically similar because they have quantitatively similar patterns of association with other words (e.g., they both tend to occur with words like "deliver," "truck," "package"). In contrast to these computational results, artificial language learning experiments suggest that distributional statistics alone do not facilitate learning of linguistic categories. However, experiments in this paradigm expose participants to entirely novel words, whereas real language learners encounter input that contains some known words that are semantically organized. In three experiments, we show that (a) the presence of familiar semantic reference points facilitates distributional learning and (b) this effect crucially depends both on the presence of known words and the adherence of these known words to some semantic organization. Copyright © 2016 Cognitive Science Society, Inc.
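The distributional idea in the opening sentences can be demonstrated on a toy corpus: words are represented by their co-occurrence counts within a small window, and cosine similarity recovers that "postman" and "mailman" share contexts. The corpus and window size below are illustrative assumptions.

```python
import numpy as np

corpus = ("the postman will deliver the package by truck . "
          "the mailman will deliver the truck package today . "
          "the dog will chase the cat today .").split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# co-occurrence counts within a +/- 2 word window
M = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    for j in range(max(0, i - 2), min(len(corpus), i + 3)):
        if j != i:
            M[idx[w], idx[corpus[j]]] += 1

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(M[idx["postman"]], M[idx["mailman"]]))   # high: near-identical contexts
print(cosine(M[idx["postman"]], M[idx["dog"]]))       # lower: different contexts
```

In the experiments, the interesting question is when human learners, not just such models, can exploit these statistics; the known-word "reference points" are what make the difference.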
Agreement processing and attraction errors in aging: evidence from subject-verb agreement in German.
Reifegerste, Jana; Hauer, Franziska; Felser, Claudia
2017-11-01
Effects of aging on lexical processing are well attested, but the picture is less clear for grammatical processing. Where age differences emerge, these are usually ascribed to working-memory (WM) decline. Previous studies on the influence of WM on agreement computation have yielded inconclusive results, and work on aging and subject-verb agreement processing is lacking. In two experiments (Experiment 1: timed grammaticality judgment, Experiment 2: self-paced reading + WM test), we investigated older (OA) and younger (YA) adults' susceptibility to agreement attraction errors. We found longer reading latencies and judgment reaction times (RTs) for OAs. Further, OAs, particularly those with low WM scores, were more accepting of sentences with attraction errors than YAs. OAs showed longer reading latencies for ungrammatical sentences, again modulated by WM, than YAs. Our results indicate that OAs have greater difficulty blocking intervening nouns from interfering with the computation of agreement dependencies. WM can modulate this effect.
Exploring Architectural Details Through a Wearable Egocentric Vision Device.
Alletto, Stefano; Abati, Davide; Serra, Giuseppe; Cucchiara, Rita
2016-02-17
Augmented user experiences in the cultural heritage domain are in increasing demand by the new digital native tourists of the 21st century. In this paper, we propose a novel solution that aims at assisting the visitor during an outdoor tour of a cultural site using the unique first person perspective of wearable cameras. In particular, the approach exploits computer vision techniques to retrieve the details by proposing a robust descriptor based on the covariance of local features. Using a lightweight wearable board, the solution can localize the user with respect to the 3D point cloud of the historical landmark and provide him with information about the details at which he is currently looking. Experimental results validate the method both in terms of accuracy and computational effort. Furthermore, user evaluation based on real-world experiments shows that the proposal is deemed effective in enriching a cultural experience.
Serino, Andrea; Canzoneri, Elisa; Marzolla, Marilena; di Pellegrino, Giuseppe; Magosso, Elisa
2015-01-01
Stimuli from different sensory modalities occurring on or close to the body are integrated in a multisensory representation of the space surrounding the body, i.e., peripersonal space (PPS). PPS dynamically modifies depending on experience, e.g., it extends after using a tool to reach far objects. However, the neural mechanism underlying PPS plasticity after tool use is largely unknown. Here we use a combined computational-behavioral approach to propose and test a possible mechanism accounting for PPS extension. We first present a neural network model simulating audio-tactile representation in the PPS around one hand. Simulation experiments showed that our model reproduced the main property of PPS neurons, i.e., selective multisensory response for stimuli occurring close to the hand. We used the neural network model to simulate the effects of a tool-use training. In terms of sensory inputs, tool use was conceptualized as a concurrent tactile stimulation from the hand, due to holding the tool, and an auditory stimulation from the far space, due to tool-mediated action. Results showed that after exposure to those inputs, PPS neurons responded also to multisensory stimuli far from the hand. The model thus suggests that synchronous pairing of tactile hand stimulation and auditory stimulation from the far space is sufficient to extend PPS, such as after tool-use. Such prediction was confirmed by a behavioral experiment, where we used an audio-tactile interaction paradigm to measure the boundaries of PPS representation. We found that PPS extended after synchronous tactile-hand stimulation and auditory-far stimulation in a group of healthy volunteers. Control experiments both in simulation and behavioral settings showed that the same amount of tactile and auditory inputs administered out of synchrony did not change PPS representation. We conclude by proposing a simple, biologically plausible model to explain plasticity in PPS representation after tool-use, which is supported by computational and behavioral data. PMID:25698947
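The pairing mechanism the model proposes can be caricatured in a few lines. The sketch below is a deliberately minimal stand-in for the paper's neural network: a multisensory unit has strong auditory synapses only near the hand, and a Hebbian update is applied when hand touch and a far sound co-occur; after the "tool-use" pairing, the far sound alone drives the unit. All parameters are illustrative.

```python
import numpy as np

dist = np.linspace(5, 100, 20)                    # auditory stimulus distances (cm)
w_aud = np.exp(-(dist - 5) ** 2 / (2 * 15 ** 2))  # initial synapses: strong near the hand only
w_tac = 1.0
threshold = 0.8

def fires(tactile, aud_at):
    """Multisensory PPS-like unit: fires if total drive exceeds threshold."""
    return w_tac * tactile + w_aud[aud_at] > threshold

# before training: a far sound alone (index -1, ~100 cm) does not drive the unit
print("far sound pre-training :", fires(0.0, -1))

# tool-use training: synchronous hand touch + far sound, Hebbian potentiation
for _ in range(50):
    if fires(1.0, -1):                            # the unit fires (touch drives it)...
        w_aud[-1] += 0.05 * (1.0 - w_aud[-1])     # ...so the far-auditory synapse strengthens

print("far sound post-training:", fires(0.0, -1))
```

Asynchronous presentation leaves the synapse unchanged in this caricature because the touch-driven firing and the far sound never coincide, matching the control result.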
Serino, Andrea; Canzoneri, Elisa; Marzolla, Marilena; di Pellegrino, Giuseppe; Magosso, Elisa
2015-01-01
Stimuli from different sensory modalities occurring on or close to the body are integrated in a multisensory representation of the space surrounding the body, i.e., peripersonal space (PPS). PPS dynamically modifies depending on experience, e.g., it extends after using a tool to reach far objects. However, the neural mechanism underlying PPS plasticity after tool use is largely unknown. Here we use a combined computational-behavioral approach to propose and test a possible mechanism accounting for PPS extension. We first present a neural network model simulating audio-tactile representation in the PPS around one hand. Simulation experiments showed that our model reproduced the main property of PPS neurons, i.e., selective multisensory response for stimuli occurring close to the hand. We used the neural network model to simulate the effects of a tool-use training. In terms of sensory inputs, tool use was conceptualized as a concurrent tactile stimulation from the hand, due to holding the tool, and an auditory stimulation from the far space, due to tool-mediated action. Results showed that after exposure to those inputs, PPS neurons responded also to multisensory stimuli far from the hand. The model thus suggests that synchronous pairing of tactile hand stimulation and auditory stimulation from the far space is sufficient to extend PPS, such as after tool-use. Such prediction was confirmed by a behavioral experiment, where we used an audio-tactile interaction paradigm to measure the boundaries of PPS representation. We found that PPS extended after synchronous tactile-hand stimulation and auditory-far stimulation in a group of healthy volunteers. Control experiments both in simulation and behavioral settings showed that the same amount of tactile and auditory inputs administered out of synchrony did not change PPS representation. We conclude by proposing a simple, biologically plausible model to explain plasticity in PPS representation after tool-use, which is supported by computational and behavioral data.
How does ytterbium chloride interact with DMPC bilayers? A computational and experimental study.
Gonzalez, Miguel A; Barriga, Hanna M G; Richens, Joanna L; Law, Robert V; O'Shea, Paul; Bresme, Fernando
2017-03-29
Lanthanide salts have been studied for many years, primarily in Nuclear Magnetic Resonance (NMR) experiments of mixed lipid-protein systems and more recently to study lipid flip-flop in model membrane systems. It is well recognised that lanthanide salts can influence the behaviour of both lipid and protein systems, however a full molecular level description of lipid-lanthanide interactions is still outstanding. Here we present a study of lanthanide-bilayer interactions, using molecular dynamics computer simulations, fluorescence electrostatic potential experiments and nuclear magnetic resonance. Computer simulations reveal the microscopic structure of DMPC lipid bilayers in the presence of Yb3+, and a surprising ability of the membranes to adsorb significant concentrations of Yb3+ without disrupting the overall membrane structure. At concentrations commonly used in NMR experiments, Yb3+ ions bind strongly to 5 lipids, inducing a small decrease of the area per lipid and a slight increase of the ordering of the aliphatic chains and the bilayer thickness. The area compressibility modulus increases by a factor of two, with respect to the free-salt case, showing that Yb3+ ions make the bilayer more rigid. These modifications of the bilayer properties should be taken into account in the interpretation of NMR experiments.
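The rigidity claim can be tied to a standard simulation observable. Assuming a constant-pressure trajectory that records the bilayer box area, the area compressibility modulus follows from equilibrium fluctuations as K_A = k_B T <A> / var(A); the trajectory below is synthetic and the numbers are purely illustrative.

```python
import numpy as np

def area_compressibility_modulus(areas_nm2, temperature_K=303.0):
    """K_A = kB * T * <A> / var(A), from the equilibrium box-area time series
    of a bilayer patch simulated at constant pressure (result in N/m)."""
    kB = 1.380649e-23                        # Boltzmann constant, J/K
    A = np.asarray(areas_nm2) * 1e-18        # nm^2 -> m^2
    return kB * temperature_K * A.mean() / A.var()

# hypothetical trajectory: mean box area 40 nm^2 with small Gaussian fluctuations
rng = np.random.default_rng(5)
areas = 40.0 + 0.8 * rng.standard_normal(5000)
print(f"K_A ~ {area_compressibility_modulus(areas):.2f} N/m")
```

A doubling of K_A in the salt-containing system, as the abstract reports, would show up directly as halved area fluctuations at the same mean area.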
NASA Astrophysics Data System (ADS)
Zhang, Ning; Du, Yunsong; Miao, Shiguang; Fang, Xiaoyi
2016-08-01
The simulation performance over complex building clusters of a wind simulation model (Wind Information Field Fast Analysis model, WIFFA) in a micro-scale air pollutant dispersion model system (Urban Microscale Air Pollution dispersion Simulation model, UMAPS) is evaluated using various wind tunnel experimental data including the CEDVAL (Compilation of Experimental Data for Validation of Micro-Scale Dispersion Models) wind tunnel experiment data and the NJU-FZ experiment data (Nanjing University-Fang Zhuang neighborhood wind tunnel experiment data). The results show that the wind model can reproduce the vortices triggered by urban buildings well, and the flow patterns in urban street canyons and building clusters can also be represented. Due to the complex shapes of buildings and their distributions, the simulation discrepancies from the measurements are usually caused by the simplification of the building shapes and the determination of the key zone sizes. The computational efficiencies of different cases are also discussed in this paper. The model has a high computational efficiency compared to traditional numerical models that solve the Navier-Stokes equations, and can produce very high-resolution (1-5 m) wind fields of a complex neighborhood-scale urban building canopy (~1 km × 1 km) in less than 3 min when run on a personal computer.
Inhoff, Albrecht W; Radach, Ralph; Eiter, Brianna M; Juhasz, Barbara
2003-07-01
Two experiments examined readers' use of parafoveally obtained word length information for word recognition. Both experiments manipulated the length (number of constituent characters) of a parafoveally previewed target word so that it was either accurately or inaccurately specified. In Experiment 1, previews also either revealed or denied useful orthographic information. In Experiment 2, parafoveal targets were either high- or low-frequency words. Eye movement contingent display changes were used to show the intact target upon its fixation. Examination of target viewing duration showed completely additive effects of word length previews and of orthographic previews in Experiment 1, viewing duration being shorter in the accurate-length and the orthographic preview conditions. Experiment 2 showed completely additive effects of word length and word frequency, target viewing being shorter in the accurate-length and the high-frequency conditions. Together these results indicate that functionally distinct subsystems control the use of parafoveally visible spatial and linguistic information in reading. Parafoveally visible spatial information appears to be used for two distinct extralinguistic computations: visual object selection and saccade specification.
Model-Based and Model-Free Pavlovian Reward Learning: Revaluation, Revision and Revelation
Dayan, Peter; Berridge, Kent C.
2014-01-01
Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation. PMID:24647659
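The contrast between cached and prospective evaluation is easy to make concrete. In the toy sketch below (not the authors' model), a model-free learner caches the trained value of a cue, while a model-based evaluator recomputes value from outcome identity and the current body state, so revaluation is immediate without retraining; the cue, outcomes, and utilities are hypothetical.

```python
# Toy revaluation: a cue predicts an outcome (salt) whose worth depends on body state.
# Model-free caches a scalar value; model-based recomputes it from outcome + state.
outcome_of_cue = {"lever": "salt"}
utility = {("salt", "sated"): -1.0, ("salt", "salt_deprived"): +3.0}

mf_value = {"lever": 0.0}
alpha = 0.5
for _ in range(20):                               # training occurs while sated
    r = utility[("salt", "sated")]
    mf_value["lever"] += alpha * (r - mf_value["lever"])   # cached estimate converges to -1

state = "salt_deprived"                           # novel body state, no retraining allowed
mb_value = utility[(outcome_of_cue["lever"], state)]
print(f"model-free cached value : {mf_value['lever']:+.2f} (stale)")
print(f"model-based value       : {mb_value:+.2f} (instantly revalued)")
```

The Pavlovian revaluation experiments reviewed in the paper show exactly this signature: behavior toward the cue tracks the model-based value on first re-encounter, which a purely cached system cannot explain.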
Model-based and model-free Pavlovian reward learning: revaluation, revision, and revelation.
Dayan, Peter; Berridge, Kent C
2014-06-01
Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations, and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response, and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation.
NASA Astrophysics Data System (ADS)
Márquez Damián, J. I.; Granada, J. R.; Malaspina, D. C.
2014-04-01
In this work we present an evaluation in ENDF-6 format of the scattering law for light and heavy water computed using the LEAPR module of NJOY99. The models used in this evaluation are based on experimental data on light water dynamics measured by Novikov, partial structure factors obtained by Soper, and molecular dynamics calculations performed with GROMACS using a reparameterized version of the flexible SPC model by Toukan and Rahman. The models use the Egelstaff-Schofield diffusion equation for translational motion, and a continuous spectrum calculated from the velocity autocorrelation function computed with GROMACS. The scattering law for H in H2O is computed using the incoherent approximation, and the scattering laws for D and O in D2O are computed using the Sköld approximation for coherent scattering. The calculations show significant improvement over ENDF/B-VI and ENDF/B-VII when compared with measurements of the total cross section, differential scattering experiments, and quasi-elastic neutron scattering (QENS) experiments.
NASA Astrophysics Data System (ADS)
Wan, Junwei; Chen, Hongyan; Zhao, Jing
2017-08-01
To meet the real-time, reliability, and safety requirements of aerospace experiments, a single-center cloud computing application verification platform was constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments was tested and verified. Based on an analysis of the test results, a preliminary conclusion was reached: the cloud computing platform can be applied to compute-intensive aerospace experiment workloads, while for I/O-intensive workloads the traditional physical machine is recommended.
A First Attempt to Bring Computational Biology into Advanced High School Biology Classrooms
Gallagher, Suzanne Renick; Coon, William; Donley, Kristin; Scott, Abby; Goldberg, Debra S.
2011-01-01
Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology element to teach genetic evolution into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach this alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors. PMID:22046118
A first attempt to bring computational biology into advanced high school biology classrooms.
Gallagher, Suzanne Renick; Coon, William; Donley, Kristin; Scott, Abby; Goldberg, Debra S
2011-10-01
Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology element to teach genetic evolution into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach this alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors.
Fast algorithms for computing phylogenetic divergence time.
Crosby, Ralph W; Williams, Tiffani L
2017-12-06
The inference of species divergence time is a key step in most phylogenetic studies. Methods have been available for the last ten years to perform the inference, but the performance of the methods does not yet scale well to studies with hundreds of taxa and thousands of DNA base pairs. For example a study of 349 primate taxa was estimated to require over 9 months of processing time. In this work, we present a new algorithm, AncestralAge, that significantly improves the performance of the divergence time process. As part of AncestralAge, we demonstrate a new method for the computation of phylogenetic likelihood and our experiments show a 90% improvement in likelihood computation time on the aforementioned dataset of 349 primates taxa with over 60,000 DNA base pairs. Additionally, we show that our new method for the computation of the Bayesian prior on node ages reduces the running time for this computation on the 349 taxa dataset by 99%. Through the use of these new algorithms we open up the ability to perform divergence time inference on large phylogenetic studies.
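The likelihood computation being accelerated is, at its core, Felsenstein's pruning algorithm. The sketch below implements it for a hypothetical three-taxon tree under the Jukes-Cantor model; it illustrates the recursion that AncestralAge speeds up, not the paper's optimized algorithm.

```python
import numpy as np

def jc69_P(t):
    """Jukes-Cantor transition matrix for branch length t (expected substitutions/site)."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    return np.where(np.eye(4, dtype=bool), same, diff)

def pruning(node, site):
    """Felsenstein's pruning: 4-vector of conditional likelihoods at `node` for one site."""
    if "state" in node:                           # leaf: observed base at this site
        L = np.zeros(4)
        L["ACGT".index(node["state"][site])] = 1.0
        return L
    L = np.ones(4)
    for child, t in node["children"]:             # multiply over children:
        L *= jc69_P(t) @ pruning(child, site)     # sum over the child's states
    return L

# hypothetical 3-taxon tree with 4-site alignments: ((A:0.1, B:0.1):0.05, C:0.2)
tree = {"children": [
    ({"children": [({"state": "ACGT"}, 0.1), ({"state": "ACGA"}, 0.1)]}, 0.05),
    ({"state": "GCGT"}, 0.2)]}
lnL = sum(np.log(0.25 * pruning(tree, s).sum()) for s in range(4))
print(f"log-likelihood: {lnL:.4f}")
```

On hundreds of taxa and tens of thousands of sites this recursion dominates the running time, which is why the reported 90% reduction matters.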
AMS,Chang-Diaz works with computers in the middeck
2016-08-24
STS091-378-028 (2-12 June 1998) --- Astronaut Franklin R. Chang-Diaz, payload commander, inputs data on a laptop computer associated with the Alpha Magnetic Spectrometer (AMS) hardware located in the aft cargo bay. Reference JSC photo number STS091-367-033, which shows the hardware as seen from Russia's Mir space station, which was docked with Discovery at the time. AMS is the first large magnet experiment ever placed in Earth orbit. The scientific goal of this high-energy physics experiment is to increase our understanding of the composition and origin of the universe. It is designed to search for and measure charged particles, including antimatter, outside Earth's atmosphere. The charge of such particles can be identified by their trajectories in a magnetic field.
Humeniuk, Stephan; Büchler, Hans Peter
2017-12-08
We present a method for computing the full probability distribution function of quadratic observables such as particle number or magnetization for the Fermi-Hubbard model within the framework of determinantal quantum Monte Carlo calculations. Especially in cold atom experiments with single-site resolution, such full counting statistics can be obtained from repeated projective measurements. We demonstrate that the full counting statistics can provide important information on the size of preformed pairs. Furthermore, we compute the full counting statistics of the staggered magnetization in the repulsive Hubbard model at half filling and find excellent agreement with recent experimental results. We show that current experiments are capable of probing the difference between the Hubbard model and the limiting Heisenberg model.
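In the free-fermion limit the full counting statistics has a closed form that makes a useful sanity check: the subsystem particle number is Poisson-binomial distributed with success probabilities given by the eigenvalues of the restricted correlation matrix. The sketch below uses that fact for a tight-binding chain; the interacting determinantal QMC computation in the paper is substantially more involved.

```python
import numpy as np

def fcs_free_fermions(C_sub):
    """Full counting statistics of particle number in a subsystem of a free-fermion
    state: P(N) is the Poisson-binomial distribution of the eigenvalues of the
    restricted correlation matrix C_sub[i, j] = <c_i^dag c_j>."""
    p = np.linalg.eigvalsh(C_sub)            # effective occupation probabilities in [0, 1]
    P = np.array([1.0])
    for pi in p:                             # convolve one Bernoulli mode at a time
        P = (np.concatenate([P * (1 - pi), [0.0]]) +
             np.concatenate([[0.0], P * pi]))
    return P                                 # P[N] = probability of finding N particles

# example: ground state of a 10-site tight-binding chain at half filling, first 5 sites
L, Nf = 10, 5
_, U = np.linalg.eigh(-(np.eye(L, k=1) + np.eye(L, k=-1)))   # hopping Hamiltonian
C = U[:, :Nf] @ U[:, :Nf].T.conj()           # correlation matrix of the filled Fermi sea
print(np.round(fcs_free_fermions(C[:5, :5]), 4))
```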
Potential pitfalls of strain rate imaging: angle dependency
NASA Technical Reports Server (NTRS)
Castro, P. L.; Greenberg, N. L.; Drinko, J.; Garcia, M. J.; Thomas, J. D.
2000-01-01
Strain Rate Imaging (SRI) is a new echocardiographic technique that allows for the real-time determination of myocardial SR, which may be used for the early and accurate detection of coronary artery disease. We sought to study whether SR is affected by scan line alignment in a computer simulation and an in vivo experiment. Through the computer simulation and the in vivo experiment we generated and validated safe scanning sectors within the ultrasound scan sector, and showed that while SRI can be an extremely valuable tool for detecting coronary artery disease, there are potential pitfalls for the unwary clinician. Only after accounting for these effects of angle dependency can clinicians utilize SRI's potential as a valuable tool in detecting coronary artery disease.
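The angle dependency can be illustrated with idealized geometry: Doppler velocities project onto the scan line as cos(theta), and beam-aligned sampling distances map onto tissue distances by another factor of cos(theta), so the measured strain rate falls off roughly as cos^2(theta). The sketch below encodes only this simplified relation, not the paper's full simulation.

```python
import numpy as np

def measured_strain_rate(sr_true, theta_deg):
    """Idealized axial strain rate seen by the beam when the deformation axis is
    tilted by theta relative to the scan line: velocity projection contributes one
    cos(theta) and the beam-to-tissue distance mapping another, giving cos^2."""
    c = np.cos(np.radians(theta_deg))
    return sr_true * c ** 2

for th in (0, 15, 30, 45, 60):
    print(f"angle {th:2d} deg -> measured SR = {measured_strain_rate(1.0, th):.2f} of true")
```

Even a modest 30-degree misalignment already underestimates SR by about 25% in this idealization, which is the kind of pitfall the safe scanning sectors are meant to avoid.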
Mirman, Daniel; Magnuson, James S.
2008-01-01
The authors investigated semantic neighborhood density effects on visual word processing to examine the dynamics of activation and competition among semantic representations. Experiment 1 validated feature-based semantic representations as a basis for computing semantic neighborhood density and suggested that near and distant neighbors have opposite effects on word processing. Experiment 2 confirmed these results: Word processing was slower for dense near neighborhoods and faster for dense distant neighborhoods. Analysis of a computational model showed that attractor dynamics can produce this pattern of neighborhood effects. The authors argue for reconsideration of traditional models of neighborhood effects in terms of attractor dynamics, which allow both inhibitory and facilitative effects to emerge. PMID:18194055
Lupker, Stephen J.
2017-01-01
The experiments reported here used “Reversed-Interior” (RI) primes (e.g., cetupmor-COMPUTER) in three different masked priming paradigms in order to test between different models of orthographic coding/visual word recognition. The results of Experiment 1, using a standard masked priming methodology, showed no evidence of priming from RI primes, in contrast to the predictions of the Bayesian Reader and LTRS models. By contrast, Experiment 2, using a sandwich priming methodology, showed significant priming from RI primes, in contrast to the predictions of open bigram models, which predict that there should be no orthographic similarity between these primes and their targets. Similar results were obtained in Experiment 3, using a masked prime same-different task. The results of all three experiments are most consistent with the predictions derived from simulations of the Spatial-coding model. PMID:29244824
Nordheim, Johanna; Hamm, Sabine; Kuhlmey, Adelheid; Suhr, Ralf
2015-08-01
Initial sporadic experiences in a Berlin nursing home showed that residents with dementia responded well to activating therapy with tablet computers. This innovative technology seemed to provide a differentiated and individual therapeutic access. These observations encouraged the nursing home management to contact the Institute of Medical Sociology and Rehabilitation Science at the Charité Universitätsmedizin Berlin with the aim to examine the practical experiences. The Centre for Quality in Care (ZQP) sponsored the 1 year pilot study. An examination of the feasibility and usability of tablet computers in the daily care of nursing home residents with dementia was carried out. In this study 14 residents (12 women and 2 men) of a special care unit for dementia patients were included in a 3-month intervention of tablet activation 3 times a week. Qualitative and quantitative methods were used to analyze data (e.g. observation protocols and videos, staff interviews, document analysis of nursing records and standardized resident interviews/proxy interviews). Nursing home residents suffering from dementia showed a high degree of acceptance of tablet computers. Most notable benefits were easy handling and the variety of multifunctional applications. Sustainable therapeutic effects resulted in stimulation of communication and interaction, improvement of well-being, memory training and reduction of neuropsychiatric symptoms. Furthermore, contact to family members of several residents was improved. The use of tablet computers was convincing as an activation therapy for nursing home residents with dementia. Further research and development of specially adapted software are required.
Heat Transfer on a Flat Plate with Uniform and Step Temperature Distributions
NASA Technical Reports Server (NTRS)
Bahrami, Parviz A.
2005-01-01
Heat transfer associated with turbulent flow on a step-heated or cooled section of a flat plate at zero angle of attack with an insulated starting section was computationally modeled using the GASP Navier-Stokes code. The algebraic eddy viscosity model of Baldwin-Lomax and the turbulent two-equation models, the K-omega model and the Shear Stress Turbulent model (SST), were employed. The variations from uniformity of the imposed experimental temperature profile were incorporated in the computations. The computations yielded satisfactory agreement with the experimental results for all three models. The Baldwin-Lomax model showed the closest agreement in heat transfer, whereas the SST model was higher and the K-omega model was yet higher than the experiments. In addition to the step temperature distribution case, computations were also carried out for a uniformly heated or cooled plate. The SST model showed the closest agreement with the Von Karman analogy, whereas the K-omega model was higher and the Baldwin-Lomax was lower.
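The step-heated configuration has a classical closed-form benchmark: the turbulent flat-plate Nusselt correlation corrected for an unheated starting length ξ. The sketch below evaluates the textbook form; the coefficient and exponents are the standard handbook values, which may differ from the calibration used in the study.

```python
def local_nusselt_turbulent(Re_x, Pr, xi_over_x):
    """Local Nusselt number on a flat plate in turbulent flow with an insulated
    starting section of length xi; classical correlation
    Nu_x = 0.0296 * Re_x**0.8 * Pr**(1/3) / (1 - (xi/x)**(9/10))**(1/9)."""
    nu_uniform = 0.0296 * Re_x ** 0.8 * Pr ** (1 / 3)
    return nu_uniform / (1.0 - xi_over_x ** 0.9) ** (1 / 9)

# switching the heater on halfway down the plate raises the local Nu just downstream
print(local_nusselt_turbulent(1e6, 0.7, 0.0))   # uniformly heated plate
print(local_nusselt_turbulent(1e6, 0.7, 0.5))   # insulated first half of the plate
```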
Tuomivaara, S; Ketola, R; Huuhtanen, P; Toivonen, R
2008-02-01
Musculoskeletal strain and other symptoms are common in visual display unit (VDU) work. Psychosocial factors are closely related to the outcome and experience of musculoskeletal strain. The user-computer relationship from the viewpoint of the quality of perceived competence in computer use was assessed as a psychosocial stress indicator. It was assumed that the perceived competence in computer use moderates the experience of musculoskeletal strain and the success of the ergonomics intervention. The participants (n = 124, female 58%, male 42%) worked with VDU for more than 4 h per week. They took part in an ergonomics intervention and were allocated into three groups: intensive; education; and reference group. Musculoskeletal strain, the level of ergonomics of the workstation assessed by the experts in ergonomics and amount of VDU work were estimated at the baseline and at the 10-month follow-up. Age, gender and the perceived competence in computer use were assessed at the baseline. The perceived competence in computer use predicted strain in the upper and the lower part of the body at the follow-up. The interaction effect shows that the intensive ergonomics intervention procedure was the most effective among participants with high perceived competence. The interpretation of the results was that an anxiety-provoking and stressful user-computer relationship prevented the participants from being motivated and from learning in the ergonomics intervention. In the intervention it is important to increase the computer competence along with the improvements of physical workstation and work organization.
The Fabric for Frontier Experiments Project at Fermilab
NASA Astrophysics Data System (ADS)
Kirby, Michael
2014-06-01
The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy to use job submission services for processing physics tasks on the Open Science Grid and elsewhere; 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data; 3) custom and generic database applications for calibrations, beam information, and other purposes; 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.
Tam, S F
2000-10-15
The aim of this controlled, quasi-experimental study was to evaluate the effects of both self-efficacy enhancement and social comparison training strategy on computer skills learning and self-concept outcome of trainees with physical disabilities. The self-efficacy enhancement group comprised 16 trainees, the tutorial training group comprised 15 trainees, and there were 25 subjects in the control group. Both the self-efficacy enhancement group and the tutorial training group received a 15 week computer skills training course, including generic Chinese computer operation, Chinese word processing and Chinese desktop publishing skills. The self-efficacy enhancement group received training with tutorial instructions that incorporated self-efficacy enhancement strategies and experienced self-enhancing social comparisons. The tutorial training group received behavioural learning-based tutorials only, and the control group did not receive any training. The following measurements were employed to evaluate the outcomes: the Self-Concept Questionnaire for the Physically Disabled Hong Kong Chinese (SCQPD), the computer self-efficacy rating scale and the computer performance rating scale. The self-efficacy enhancement group showed significantly better computer skills learning outcome, total self-concept, and social self-concept than the tutorial training group. The self-efficacy enhancement group did not show significant changes in their computer self-efficacy; however, the tutorial training group showed a significant lowering of their computer self-efficacy. The training strategy that incorporated self-efficacy enhancement and positive social comparison experiences maintained the computer self-efficacy of trainees with physical disabilities. This strategy was more effective in improving the learning outcome (p = 0.01) and self-concept (p = 0.05) of the trainees than the conventional tutorial-based training strategy.
Computational efficient segmentation of cell nuclei in 2D and 3D fluorescent micrographs
NASA Astrophysics Data System (ADS)
De Vylder, Jonas; Philips, Wilfried
2011-02-01
This paper proposes a new segmentation technique developed for the segmentation of cell nuclei in both 2D and 3D fluorescent micrographs. The proposed method can deal with both blurred edges and touching nuclei. Using a dual scan line algorithm, it is both memory- and computationally efficient, making it interesting for the analysis of images coming from high throughput systems or the analysis of 3D microscopic images. Experiments show good results, i.e., a recall of over 0.98.
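The paper's dual scan line algorithm is not reproduced here, but a generic two-pass scan-line labeling with union-find, sketched below, conveys why such schemes are memory- and compute-efficient: one raster pass assigns provisional labels and records merges, and a second pass resolves them.

```python
import numpy as np

def label_components(binary):
    """Two-pass scan-line labeling of a thresholded nuclei mask (4-connectivity).
    A generic stand-in for the paper's dual scan line algorithm."""
    labels = np.zeros_like(binary, dtype=int)
    parent = [0]                                   # union-find over provisional labels

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    nxt = 1
    for y, x in np.argwhere(binary):               # first pass, raster order
        up = labels[y - 1, x] if y > 0 else 0
        left = labels[y, x - 1] if x > 0 else 0
        if up == 0 and left == 0:
            parent.append(nxt)                     # new provisional label
            labels[y, x] = nxt
            nxt += 1
        elif up and left:
            a, b = find(up), find(left)
            parent[max(a, b)] = min(a, b)          # record that the two runs merge
            labels[y, x] = min(a, b)
        else:
            labels[y, x] = up or left
    for y, x in np.argwhere(labels):               # second pass: resolve merges
        labels[y, x] = find(labels[y, x])
    return labels

mask = np.array([[1, 1, 0, 0], [0, 1, 0, 1], [0, 0, 0, 1]], dtype=bool)
print(label_components(mask))                      # two nuclei -> two labels
```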
Parallel algorithm for computation of second-order sequential best rotations
NASA Astrophysics Data System (ADS)
Redif, Soydan; Kasap, Server
2013-12-01
Algorithms for computing an approximate polynomial matrix eigenvalue decomposition of para-Hermitian systems have emerged as a powerful, generic signal processing tool. A technique that has shown much success in this regard is the sequential best rotation (SBR2) algorithm. Proposed is a scheme for parallelising SBR2 with a view to exploiting the modern architectural features and inherent parallelism of field-programmable gate array (FPGA) technology. Experiments show that the proposed scheme can achieve low execution times while requiring minimal FPGA resources.
Harnessing Disordered-Ensemble Quantum Dynamics for Machine Learning
NASA Astrophysics Data System (ADS)
Fujii, Keisuke; Nakajima, Kohei
2017-08-01
The quantum computer has amazing potential for fast information processing. However, the realization of a digital quantum computer is still a challenging problem requiring highly accurate controls and key application strategies. Here we propose a platform, quantum reservoir computing, to solve these issues successfully by exploiting the natural quantum dynamics of ensemble systems, which are ubiquitous in laboratories nowadays, for machine learning. This framework enables ensemble quantum systems to universally emulate nonlinear dynamical systems, including classical chaos. A number of numerical experiments show that quantum systems consisting of 5-7 qubits possess computational capabilities comparable to conventional recurrent neural networks of 100-500 nodes. This discovery opens up a paradigm for information processing with artificial intelligence powered by quantum physics.
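The scheme can be emulated classically for a handful of qubits. The sketch below follows the spirit of the proposal under simplifying assumptions: a random Ising reservoir, input injected by overwriting one qubit, a linear readout on single-qubit Z expectations, and no temporal multiplexing. Exact numbers vary with the random draw; this is a caricature, not the authors' benchmark setup.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
n = 5
dim = 2 ** n

def op(single, site):
    """Embed a single-qubit operator at `site` in the n-qubit space."""
    m = np.array([[1.0]])
    for q in range(n):
        m = np.kron(m, single if q == site else np.eye(2))
    return m

Z = [op(np.diag([1.0, -1.0]), q) for q in range(n)]
X = [op(np.array([[0.0, 1.0], [1.0, 0.0]]), q) for q in range(n)]
H = sum(rng.uniform(-1, 1) * Z[i] @ Z[j] for i in range(n) for j in range(i + 1, n))
H += sum(rng.uniform(-1, 1) * X[q] for q in range(n))   # random transverse-field Ising
U = expm(-1j * H * 2.0)                                  # fixed evolution between inputs

def inject(rho, u):
    """Overwrite qubit 0 with sqrt(1-u)|0> + sqrt(u)|1>, keep the rest (partial trace)."""
    psi = np.array([np.sqrt(1 - u), np.sqrt(u)])
    rest = rho.reshape(2, dim // 2, 2, dim // 2).trace(axis1=0, axis2=2)
    return np.kron(np.outer(psi, psi.conj()), rest)

rho = np.eye(dim) / dim
T = 400
u = rng.uniform(0, 1, T)
feats = np.zeros((T, n))
for t in range(T):
    rho = U @ inject(rho, u[t]) @ U.conj().T
    feats[t] = [np.real(np.trace(rho @ Zq)) for Zq in Z]   # readout <Z_q>

# linear readout trained to recover the input from 3 steps ago (short-term memory task)
d = 3
A = np.hstack([feats[d:], np.ones((T - d, 1))])
w, *_ = np.linalg.lstsq(A, u[:-d], rcond=None)
print("memory-task correlation:", np.corrcoef(A @ w, u[:-d])[0, 1])
```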
Stenzel, Anna; Dolk, Thomas; Colzato, Lorenza S.; Sellaro, Roberta; Hommel, Bernhard; Liepelt, Roman
2014-01-01
A co-actor's intentionality has been suggested to be a key modulating factor for joint action effects like the joint Simon effect (JSE). However, in previous studies intentionality has often been confounded with agency defined as perceiving the initiator of an action as being the causal source of the action. The aim of the present study was to disentangle the role of agency and intentionality as modulating factors of the JSE. In Experiment 1, participants performed a joint go/nogo Simon task next to a co-actor who either intentionally controlled a response button with own finger movements (agency+/intentionality+) or who passively placed the hand on a response button that moved up and down on its own as triggered by computer signals (agency−/intentionality−). In Experiment 2, we included a condition in which participants believed that the co-actor intentionally controlled the response button with a Brain-Computer Interface (BCI) while placing the response finger clearly besides the response button, so that the causal relationship between agent and action effect was perceptually disrupted (agency−/intentionality+). As a control condition, the response button was computer controlled while the co-actor placed the response finger besides the response button (agency−/intentionality−). Experiment 1 showed that the JSE is present with an intentional co-actor and causality between co-actor and action effect, but absent with an unintentional co-actor and a lack of causality between co-actor and action effect. Experiment 2 showed that the JSE is absent with an intentional co-actor, but no causality between co-actor and action effect. Our findings indicate an important role of the co-actor's agency for the JSE. They also suggest that the attribution of agency has a strong perceptual basis. PMID:25140144
Stenzel, Anna; Dolk, Thomas; Colzato, Lorenza S; Sellaro, Roberta; Hommel, Bernhard; Liepelt, Roman
2014-01-01
A co-actor's intentionality has been suggested to be a key modulating factor for joint action effects like the joint Simon effect (JSE). However, in previous studies intentionality has often been confounded with agency defined as perceiving the initiator of an action as being the causal source of the action. The aim of the present study was to disentangle the role of agency and intentionality as modulating factors of the JSE. In Experiment 1, participants performed a joint go/nogo Simon task next to a co-actor who either intentionally controlled a response button with own finger movements (agency+/intentionality+) or who passively placed the hand on a response button that moved up and down on its own as triggered by computer signals (agency-/intentionality-). In Experiment 2, we included a condition in which participants believed that the co-actor intentionally controlled the response button with a Brain-Computer Interface (BCI) while placing the response finger clearly besides the response button, so that the causal relationship between agent and action effect was perceptually disrupted (agency-/intentionality+). As a control condition, the response button was computer controlled while the co-actor placed the response finger besides the response button (agency-/intentionality-). Experiment 1 showed that the JSE is present with an intentional co-actor and causality between co-actor and action effect, but absent with an unintentional co-actor and a lack of causality between co-actor and action effect. Experiment 2 showed that the JSE is absent with an intentional co-actor, but no causality between co-actor and action effect. Our findings indicate an important role of the co-actor's agency for the JSE. They also suggest that the attribution of agency has a strong perceptual basis.
A Comparison of Computed and Experimental Flowfields of the RAH-66 Helicopter
NASA Technical Reports Server (NTRS)
vanDam, C. P.; Budge, A. M.; Duque, E. P. N.
1996-01-01
This paper compares and evaluates numerical and experimental flowfields of the RAH-66 Comanche helicopter. The numerical predictions were obtained by solving the Thin-Layer Navier-Stokes equations. The computations use actuator disks to investigate the main and tail rotor effects upon the fuselage flowfield. The wind tunnel experiment was performed in the 14 x 22 foot facility located at NASA Langley. A suite of flow conditions, rotor thrusts and fuselage-rotor-tail configurations were tested. In addition, the tunnel model and the computational geometry were based upon the same CAD definition. Computations were performed for an isolated fuselage configuration and for a rotor on configuration. Comparisons between the measured and computed surface pressures show areas of correlation and some discrepancies. Local areas of poor computational grid-quality and local areas of geometry differences account for the differences. These calculations demonstrate the use of advanced computational fluid dynamic methodologies towards a flight vehicle currently under development. It serves as an important verification for future computed results.
Collaborative Working Architecture for IoT-Based Applications.
Mora, Higinio; Signes-Pont, María Teresa; Gil, David; Johnsson, Magnus
2018-05-23
The new sensing applications need enhanced computing capabilities to handle the requirements of complex and huge data processing. The Internet of Things (IoT) concept brings processing and communication features to devices. In addition, the Cloud Computing paradigm provides resources and infrastructures for performing the computations and outsourcing the work from the IoT devices. This scenario opens new opportunities for designing advanced IoT-based applications; however, much research remains to be done to make all of these systems work together properly. This work proposes a collaborative model and an architecture to take advantage of the available computing resources. The resulting architecture involves a novel network design with different levels which combines sensing and processing capabilities based on the Mobile Cloud Computing (MCC) paradigm. An experiment is included to demonstrate that this approach can be used in diverse real applications. The results show the flexibility of the architecture to perform complex computational tasks of advanced applications.
Robust tuning of robot control systems
NASA Technical Reports Server (NTRS)
Minis, I.; Uebel, M.
1992-01-01
The computed torque control problem is examined for a robot arm with flexible, geared, joint drive systems which are typical in many industrial robots. The standard computed torque algorithm is not directly applicable to this class of manipulators because of the dynamics introduced by the joint drive system. The proposed approach to computed torque control combines a computed torque algorithm with torque controller at each joint. Three such control schemes are proposed. The first scheme uses the joint torque control system currently implemented on the robot arm and a novel form of the computed torque algorithm. The other two use the standard computed torque algorithm and a novel model following torque control system based on model following techniques. Standard tasks and performance indices are used to evaluate the performance of the controllers. Both numerical simulations and experiments are used in evaluation. The study shows that all three proposed systems lead to improved tracking performance over a conventional PD controller.
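The baseline that the proposed schemes build on, the standard computed torque law, is easy to state: cancel the modeled nonlinear dynamics and impose linear error dynamics. The sketch below applies it to a one-joint rigid pendulum with an assumed-exact model; the paper's contribution addresses precisely the flexible, geared joints this idealization ignores, so treat this as the rigid-body starting point only.

```python
import numpy as np

# computed-torque control of a single joint modeled as a damped pendulum:
# I*qdd + b*qd + m*g*l*sin(q) = tau        (illustrative parameter values)
I, b, m, g, l = 0.5, 0.1, 1.0, 9.81, 0.4
Kp, Kv = 100.0, 20.0

def computed_torque(q, qd, q_des, qd_des, qdd_des):
    """Cancel the modeled dynamics, then impose linear second-order error dynamics."""
    v = qdd_des + Kv * (qd_des - qd) + Kp * (q_des - q)   # outer-loop acceleration
    return I * v + b * qd + m * g * l * np.sin(q)          # inverse dynamics

dt, q, qd = 1e-3, 0.0, 0.0
for k in range(3000):                        # track q_des(t) = sin(t) for 3 s
    t = k * dt
    tau = computed_torque(q, qd, np.sin(t), np.cos(t), -np.sin(t))
    qdd = (tau - b * qd - m * g * l * np.sin(q)) / I       # plant (model assumed exact)
    qd += qdd * dt
    q += qd * dt
print(f"tracking error at t = 3 s: {abs(q - np.sin(3.0)):.2e} rad")
```

With an exact model the tracking error obeys e'' + Kv e' + Kp e = 0 and decays rapidly; joint flexibility and gearing break this cancellation, which is what motivates the added torque loops.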
Virtual experiment of optical spatial filtering in Matlab environment
NASA Astrophysics Data System (ADS)
Ji, Yunjing; Wang, Chunyong; Song, Yang; Lai, Jiancheng; Wang, Qinghua; Qi, Jing; Shen, Zhonghua
2017-08-01
The principle of the optical spatial filtering experiment is introduced, and a computer simulation platform with a graphical user interface (GUI) has been implemented in the Matlab environment. With it, various filtering processes for different input images or different filtering purposes can be completed accurately, and the filtering effect can be observed clearly while adjusting the experimental parameters. The physical nature of optical spatial filtering can be shown vividly, thereby improving the effectiveness of experimental teaching.
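The core of such a simulation reduces to a forward Fourier transform, a mask in the filter plane, and an inverse transform. A minimal Python sketch of low-pass spatial filtering (the paper's platform is Matlab, and the cutoff here is an illustrative value):

    import numpy as np

    # 4f-style spatial filtering: image -> Fourier (filter) plane -> masked
    # spectrum -> output plane. A circular aperture acts as a low-pass filter.
    def low_pass_filter(image, cutoff):
        F = np.fft.fftshift(np.fft.fft2(image))
        ny, nx = image.shape
        y, x = np.ogrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
        aperture = (x**2 + y**2) <= cutoff**2   # circular stop in filter plane
        return np.abs(np.fft.ifft2(np.fft.ifftshift(F * aperture)))

    filtered = low_pass_filter(np.random.rand(256, 256), cutoff=20)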
Shallow Water Acoustic Experiments and Preliminary Planning for FY06 Fieldwork
2011-03-21
Reporting period: 5/1/2005-12/31/2010. ...numerical computations show horizontal interference patterns within the duct. Richly detailed sound radiation fields are predicted at locations far... (4) for the vertical modal amplitude Tm at x^L is now described in detail. First, the assumption of total transmission at the open-ended...
Ji, S.; Hanes, D.M.; Shen, H.H.
2009-01-01
In this study, we report a direct comparison between a physical test and a computer simulation of rapidly sheared granular materials. An annular shear cell experiment was conducted, and all parameters were kept the same between the physical and the computational systems to the extent possible. Artificially softened particles were used in the simulation to reduce the computational time to a manageable level; a sensitivity study on the particle stiffness ensured that this artificial modification was acceptable. In the experiment, a range of normal stresses was applied to a given amount of particles sheared in an annular trough at a range of controlled shear speeds. Two types of particles, glass and Delrin, were used in the experiment. Qualitatively, the torque required to shear the materials at different rotational speeds in the simulations compared well with that in the physical experiments for both the glass and the Delrin particles. However, the quantitative discrepancies between the measured and simulated shear stresses were nearly a factor of two. Boundary conditions, particle size distribution, and particle damping and friction, including sliding and rolling contact force models, were examined to determine their effects on the computational results. It was found that, of the above, the rolling friction between particles had the most significant effect on the macro stress level. This study shows that discrete element simulation is a viable method for engineering design of granular material systems. Particle-level information is needed to properly conduct these simulations; however, not all particle-level information is equally important in the studied regime. Rolling friction, which is not commonly considered in many discrete element models, appears to play an important role. © 2009 Elsevier Ltd.
The Modeling of Vibration Damping in SMA Wires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynolds, D R; Kloucek, P; Seidman, T I
Through a mathematical and computational model of the physical behavior of shape memory alloy wires, this study shows that localized heating and cooling of such materials provides an effective means of damping vibrational energy. The thermally induced pseudo-elastic behavior of a shape memory wire is modeled using a continuum thermodynamic model and solved computationally as described by the authors in [23]. Computational experiments confirm that up to 80% of an initial shock of vibrational energy can be eliminated at the onset of a thermally-induced phase transformation through the use of spatially-distributed transformation regions along the length of a shape memory alloy wire.
Du, Dongping; Yang, Hui; Ednie, Andrew R; Bennett, Eric S
2016-09-01
Glycan structures account for up to 35% of the mass of cardiac sodium (Nav) channels. To question whether and how reduced sialylation affects Nav activity and cardiac electrical signaling, we conducted a series of in vitro experiments on ventricular apex myocytes under two different glycosylation conditions, reduced protein sialylation (ST3Gal4(-/-)) and full glycosylation (control). Although aberrant electrical signaling is observed under reduced sialylation, realizing a better understanding of the mechanistic details of pathological variations in INa and AP is difficult without performing in silico studies. However, computer models of Nav channels and cardiac myocytes involve great complexity, e.g., a high-dimensional parameter space and nonlinear, nonconvex equations. Traditional linear and nonlinear optimization methods have encountered many difficulties in model calibration. This paper presents a new statistical metamodeling approach for efficient computer experiments and optimization of Nav models. First, we utilize a fractional factorial design to identify control variables from the large set of model parameters, thereby reducing the dimensionality of the parametric space. Further, we develop a Gaussian process model as a surrogate of the expensive and time-consuming computer model and then identify the next best design point as the one that yields the maximal probability of improvement. This process iterates until convergence, and the performance is evaluated and validated with real-world experimental data. Experimental results show the proposed algorithm achieves superior performance in modeling the kinetics of Nav channels under a variety of glycosylation conditions. As a result, in silico models provide a better understanding of glyco-altered mechanistic details in state transitions and distributions of Nav channels. Notably, ST3Gal4(-/-) myocytes are shown to have higher probabilities accumulated in intermediate inactivation during repolarization and to yield a shorter refractory period than WTs. The proposed statistical design of computer experiments is generally extensible to many other disciplines that involve large-scale and computationally expensive models.
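A minimal sketch of the surrogate-assisted loop described above, assuming a stand-in objective in place of the expensive Nav-channel model and using scikit-learn's Gaussian process with a probability-of-improvement criterion:

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    def objective(x):                       # hypothetical calibration error
        return np.sum((x - 0.3)**2)

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(8, 3))      # initial design in a reduced space
    y = np.array([objective(x) for x in X])

    for _ in range(20):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        cand = rng.uniform(0, 1, size=(500, 3))
        mu, sd = gp.predict(cand, return_std=True)
        # probability that a candidate improves on the current best (minimization)
        pi = norm.cdf((y.min() - mu) / np.maximum(sd, 1e-12))
        x_next = cand[np.argmax(pi)]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))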
Evaluation of an eye-pointer interaction device for human-computer interaction.
Cáceres, Enrique; Carrasco, Miguel; Ríos, Sebastián
2018-03-01
Advances in eye-tracking technology have led to better human-computer interaction, and involve controlling a computer without any kind of physical contact. This research describes the transformation of a commercial eye-tracker for use as an alternative peripheral device in human-computer interactions, implementing a pointer that only needs the eye movements of a user facing a computer screen, thus replacing the need to control the software by hand movements. The experiment was performed with 30 test individuals who used the prototype with a set of educational videogames. The results show that, although most of the test subjects would prefer a mouse to control the pointer, the prototype tested has an empirical precision similar to that of the mouse, either when trying to control its movements or when attempting to click on a point of the screen.
Optimizing a reconfigurable material via evolutionary computation
NASA Astrophysics Data System (ADS)
Wilken, Sam; Miskin, Marc Z.; Jaeger, Heinrich M.
2015-08-01
Rapid prototyping by combining evolutionary computation with simulations is becoming a powerful tool for solving complex design problems in materials science. This method of optimization operates in a virtual design space that simulates potential material behaviors and after completion needs to be validated by experiment. However, in principle an evolutionary optimizer can also operate on an actual physical structure or laboratory experiment directly, provided the relevant material parameters can be accessed by the optimizer and information about the material's performance can be updated by direct measurements. Here we provide a proof of concept of such direct, physical optimization by showing how a reconfigurable, highly nonlinear material can be tuned to respond to impact. We report on an entirely computer controlled laboratory experiment in which a 6 × 6 grid of electromagnets creates a magnetic field pattern that tunes the local rigidity of a concentrated suspension of ferrofluid and iron filings. A genetic algorithm is implemented and tasked to find field patterns that minimize the force transmitted through the suspension. Searching within a space of roughly 10^10 possible configurations, after testing only 1500 independent trials the algorithm identifies an optimized configuration of layered rigid and compliant regions.
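The search loop itself is simple to sketch. Below, a toy genetic algorithm evolves 6 × 6 on/off coil patterns; the force measurement is a stub standing in for the instrumented impact test of the actual experiment:

    import numpy as np

    rng = np.random.default_rng(1)

    def measured_force(pattern):            # stand-in for the lab measurement
        return pattern.sum() + rng.normal(scale=0.1)

    def evolve(pop_size=20, generations=50, p_mut=0.05):
        pop = rng.integers(0, 2, size=(pop_size, 36))   # flattened 6x6 patterns
        for _ in range(generations):
            fitness = np.array([measured_force(p) for p in pop])
            parents = pop[np.argsort(fitness)[:pop_size // 2]]  # lowest force wins
            cut = rng.integers(1, 35)                           # one-point crossover
            kids = np.vstack([np.concatenate([a[:cut], b[cut:]])
                              for a, b in zip(parents, np.roll(parents, 1, axis=0))])
            pop = np.vstack([parents, kids])
            flips = rng.random(pop.shape) < p_mut               # bit-flip mutation
            pop = np.where(flips, 1 - pop, pop)
        return pop[np.argmin([measured_force(p) for p in pop])].reshape(6, 6)

    best_pattern = evolve()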
NASA Astrophysics Data System (ADS)
Xu, Ye; Lee, Michael C.; Boroczky, Lilla; Cann, Aaron D.; Borczuk, Alain C.; Kawut, Steven M.; Powell, Charles A.
2009-02-01
Features calculated from different dimensions of images capture quantitative information about lung nodules through one or multiple image slices. Previously published computer-aided diagnosis (CADx) systems have used either two-dimensional (2D) or three-dimensional (3D) features, though there has been little systematic analysis of the relevance of the different dimensions and of the impact of combining different dimensions. The aim of this study is to determine the importance of combining features calculated in different dimensions. We have performed CADx experiments on 125 pulmonary nodules imaged using multi-detector row CT (MDCT). The CADx system computed 192 2D, 2.5D, and 3D image features of the lesions. Leave-one-out experiments were performed using five different combinations of features from different dimensions: 2D, 3D, 2.5D, 2D+3D, and 2D+3D+2.5D. The experiments were performed ten times for each group. Accuracy, sensitivity and specificity were used to evaluate the performance. Wilcoxon signed-rank tests were applied to compare the classification results from these five different combinations of features. Our results showed that 3D image features generate the best result compared with other combinations of features. This suggests one approach to potentially reducing the dimensionality of the CADx data space and the computational complexity of the system while maintaining diagnostic accuracy.
Evolving technologies for Space Station Freedom computer-based workstations
NASA Technical Reports Server (NTRS)
Jensen, Dean G.; Rudisill, Marianne
1990-01-01
Viewgraphs on evolving technologies for Space Station Freedom computer-based workstations are presented. The human-computer software environment modules are described. The following topics are addressed: command and control workstation concept; cupola workstation concept; Japanese experiment module RMS workstation concept; remote devices controlled from workstations; orbital maneuvering vehicle free flyer; remote manipulator system; Japanese experiment module exposed facility; Japanese experiment module small fine arm; flight telerobotic servicer; human-computer interaction; and workstation/robotics related activities.
ERIC Educational Resources Information Center
Mathies, Lorraine
1972-01-01
The ERIC information system is designed for computerized information storage and retrieval. While the computer can play an increasingly vital role in facilitating reference searches of large literature collections, experience shows that manual searching gives the user skills and expertise that are essential to effectively use the computerized…
Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches
NASA Astrophysics Data System (ADS)
Duchaineau, Mark
2001-06-01
Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors. But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results.
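A crude illustration of the multi-resolution idea: represent a field as a coarse-to-fine pyramid and retain only the levels an analysis needs. The production pipelines described above used wavelet transforms; this numpy-only block-averaging sketch just shows the storage trade-off:

    import numpy as np

    # Build a coarse-to-fine pyramid by 2x2 block averaging; each level has a
    # quarter of the previous level's samples.
    def pyramid(field, levels=4):
        out = [field]
        for _ in range(levels):
            f = out[-1]
            f = 0.25 * (f[::2, ::2] + f[1::2, ::2] + f[::2, 1::2] + f[1::2, 1::2])
            out.append(f)
        return out              # out[-1] is the coarsest representation

    levels = pyramid(np.random.rand(512, 512))
    print([l.shape for l in levels])   # (512, 512) down to (32, 32)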
NASA Astrophysics Data System (ADS)
Yohana, Eflita; Yulianto, Mohamad Endy; Kwang-Hwang, Choi; Putro, Bondantio; Yohanes Aditya W., A.
2015-12-01
Simulation of the humidity distribution inside a room has been widely conducted using computational fluid dynamics (CFD). Here, the simulation was driven by inputs from an experiment on air humidity reduction in a sample house. The liquid desiccant CaCl2 was used to absorb humidity from the air, so that the magnitude of the humidity reduction occurring during the experiment could be obtained. The experiment was conducted at 8 in the morning with a liquid desiccant concentration of 50%, a nozzle dimension of 0.2 mm fitted in the dehumidifier, and an air flow rate into the sample house of 2.35 m3/min. At both the inlet and outlet sides of the room, a DHT 11 sensor was installed to record changes in humidity and temperature during the experiment. In the normal condition, without the dehumidifier running, the sensors recorded an average room temperature of 28 °C and RH of 65%. The experimental results showed that the relative humidity inside the sample house decreased to 52% at the inlet position. Further, the CFD simulation provided the temperature and relative humidity distributions inside the sample house. It showed that with a liquid desiccant concentration of 50% the relative humidity distribution was considerably good, with an average RH of 55%, accompanied by an increase in air temperature to 29.2 °C inside the sample house.
Brain Activity Associated with Emoticons: An fMRI Study
NASA Astrophysics Data System (ADS)
Yuasa, Masahide; Saito, Keiichi; Mukawa, Naoki
In this paper, we describe brain activities associated with emoticons using fMRI. In communication over a computer network, we use abstract faces such as computer graphics (CG) avatars and emoticons. These faces convey users' emotions and enrich their communications. However, the manner in which these faces influence mental processes is as yet unknown. The human brain may perceive an abstract face in an entirely different manner depending on its level of reality. We conducted an experiment using fMRI in order to investigate the effects of emoticons. The results show that the right inferior frontal gyrus, which is associated with nonverbal communication, is activated by emoticons. Since the emoticons were created to reflect real human facial expressions as accurately as possible, we expected that they would activate the right fusiform gyrus; however, this region was not found to be activated during the experiment. This finding is useful in understanding how abstract faces affect our behaviors and decision-making in communication over a computer network.
Using a combined computational-experimental approach to predict antibody-specific B cell epitopes.
Sela-Culang, Inbal; Benhnia, Mohammed Rafii-El-Idrissi; Matho, Michael H; Kaever, Thomas; Maybeno, Matt; Schlossman, Andrew; Nimrod, Guy; Li, Sheng; Xiang, Yan; Zajonc, Dirk; Crotty, Shane; Ofran, Yanay; Peters, Bjoern
2014-04-08
Antibody epitope mapping is crucial for understanding B cell-mediated immunity and required for characterizing therapeutic antibodies. In contrast to T cell epitope mapping, no computational tools are in widespread use for prediction of B cell epitopes. Here, we show that, utilizing the sequence of an antibody, it is possible to identify discontinuous epitopes on its cognate antigen. The predictions are based on residue-pairing preferences and other interface characteristics. We combined these antibody-specific predictions with results of cross-blocking experiments that identify groups of antibodies with overlapping epitopes to improve the predictions. We validate the high performance of this approach by mapping the epitopes of a set of antibodies against the previously uncharacterized D8 antigen, using complementary techniques to reduce method-specific biases (X-ray crystallography, peptide ELISA, deuterium exchange, and site-directed mutagenesis). These results suggest that antibody-specific computational predictions and simple cross-blocking experiments allow for accurate prediction of residues in conformational B cell epitopes. Copyright © 2014 Elsevier Ltd. All rights reserved.
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Xu; Tuo, Rui; Jeff Wu, C. F.
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. In simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion designed for single-accuracy experiments.
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
He, Xu; Tuo, Rui; Jeff Wu, C. F.
2017-01-31
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. In simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion designed for single-accuracy experiments.
Minimizing Cache Misses Using Minimum-Surface Bodies
NASA Technical Reports Server (NTRS)
Frumkin, Michael; VanderWijngaart, Rob; Biegel, Bryan (Technical Monitor)
2002-01-01
A number of known techniques for improving cache performance in scientific computations involve the reordering of the iteration space. Some of these reorderings can be considered as coverings of the iteration space with sets having a good surface-to-volume ratio. Use of such sets reduces the number of cache misses in computations of local operators having the iteration space as a domain. First, we derive lower bounds on the cache misses that any algorithm must suffer while computing a local operator on a grid. Then we explore coverings of iteration spaces represented by structured and unstructured grids which allow us to approach these lower bounds. For structured grids we introduce a covering by successive minima tiles of the interference lattice of the grid. We show that the covering has a low surface-to-volume ratio and present a computer experiment showing the actual reduction in cache misses achieved by using these tiles. For planar unstructured grids we show the existence of a covering which reduces the number of cache misses to the level of structured grids. On the other hand, we present a triangulation of a 3-dimensional cube such that any local operator on the corresponding grid has a significantly larger number of cache misses than a similar operator on a structured grid.
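The flavor of such a covering is easy to convey with loop tiling: traverse the iteration space in blocks with a good surface-to-volume ratio so stencil operands are reused while still resident in cache. A Python sketch of a tiled 5-point stencil (the tile shape is illustrative; the paper derives it from the grid's interference lattice):

    import numpy as np

    # Apply a 5-point averaging stencil tile by tile. In a compiled language
    # the tile-local traversal is what keeps operands cache-resident.
    def tiled_stencil(u, tile=64):
        n, m = u.shape
        out = np.zeros_like(u)
        for i0 in range(1, n - 1, tile):
            for j0 in range(1, m - 1, tile):
                i1, j1 = min(i0 + tile, n - 1), min(j0 + tile, m - 1)
                for i in range(i0, i1):
                    for j in range(j0, j1):
                        out[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] +
                                            u[i, j-1] + u[i, j+1])
        return out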
Sergiievskyi, Volodymyr P; Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel
2014-06-05
Molecular density functional theory (MDFT) offers an efficient implicit-solvent method to estimate molecule solvation free-energies while conserving a fully molecular representation of the solvent. Even within a second-order approximation for the free-energy functional, the so-called homogeneous reference fluid approximation, we show that the hydration free-energies computed for a data set of 500 organic compounds are of similar quality as those obtained from molecular dynamics free-energy perturbation simulations, with a computational cost reduced by 2-3 orders of magnitude. This requires introducing the proper partial volume correction to transform the results from the grand canonical to the isobaric-isothermal ensemble that is pertinent to experiments. We show that this correction can be extended to 3D-RISM calculations, giving a sound theoretical justification to empirical partial molar volume corrections that have been proposed recently.
A Computational Clonal Analysis of the Developing Mouse Limb Bud
Marcon, Luciano; Arqués, Carlos G.; Torres, Miguel S.; Sharpe, James
2011-01-01
A comprehensive spatio-temporal description of the tissue movements underlying organogenesis would be an extremely useful resource for developmental biology. Clonal analysis and fate mapping are popular experiments to study tissue movement during morphogenesis. Such experiments allow cell populations to be labeled at an early stage of development and their spatial evolution to be followed over time. However, disentangling the cumulative effects of the multiple events responsible for the expansion of the labeled cell population is not always straightforward. To overcome this problem, we develop a novel computational method that combines accurate quantification of 2D limb bud morphologies and growth modeling to analyze mouse clonal data of early limb development. First, we explore various tissue movements that match experimental limb bud shape changes. Second, by comparing computational clones with newly generated mouse clonal data we are able to choose and characterize the tissue movement map that better matches experimental data. Our computational analysis produces for the first time a two-dimensional model of limb growth based on experimental data that can be used to better characterize limb tissue movement in space and time. The model shows that the distribution and shapes of clones can be described as a combination of anisotropic growth with isotropic cell mixing, without the need for lineage compartmentalization along the AP and PD axes. Lastly, we show that this comprehensive description can be used to reassess spatio-temporal gene regulation taking tissue movement into account and to investigate PD patterning hypotheses. PMID:21347315
Response Surface Model Building Using Orthogonal Arrays for Computer Experiments
NASA Technical Reports Server (NTRS)
Unal, Resit; Braun, Robert D.; Moore, Arlene A.; Lepsch, Roger A.
1997-01-01
This study investigates response surface methods for computer experiments and discusses some of the approaches available. Orthogonal arrays constructed for computer experiments are studied and an example application to a technology selection and optimization study for a reusable launch vehicle is presented.
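As an illustration of the approach, the sketch below fits a main-effects-plus-quadratic response surface over a standard L9(3^4) orthogonal array by least squares; the response values are invented stand-ins for simulation outputs such as vehicle dry mass:

    import numpy as np

    # L9(3^4) orthogonal array: 9 runs, 4 factors at levels 0/1/2; every pair
    # of columns contains each level combination exactly once.
    L9 = np.array([[0,0,0,0],[0,1,1,1],[0,2,2,2],
                   [1,0,1,2],[1,1,2,0],[1,2,0,1],
                   [2,0,2,1],[2,1,0,2],[2,2,1,0]], dtype=float)

    y = np.array([10.2, 11.1, 12.5, 11.8, 12.0, 10.9, 13.1, 11.4, 12.2])

    # Design matrix: intercept + 4 linear terms + 4 pure quadratic terms.
    X = np.hstack([np.ones((9, 1)), L9, L9**2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    def predict(x):
        x = np.asarray(x, dtype=float)
        return coef[0] + coef[1:5] @ x + coef[5:] @ x**2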
Using the Computer as a Laboratory Instrument.
ERIC Educational Resources Information Center
Collings, Peter J.; Greenslade, Thomas B., Jr.
1989-01-01
Reports experiences during a two-year period in introducing the computer to the laboratory and students to the computer as a laboratory instrument. Describes a working philosophy, data acquisition system, and experiments. Summarizes the laboratory procedures of nine experiments, covering mechanics, heat, electromagnetism, and optics. (YP)
The Fabric for Frontier Experiments Project at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirby, Michael
2014-01-01
The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy to use job submission services for processing physics tasks on the Open Science Grid and elsewhere, 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data, 3) custom and generic database applications for calibrations, beam information, and other purposes, 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.
Automated Tests for Telephone Telepathy Using Mobile Phones.
Sheldrake, Rupert; Smart, Pamela; Avraamides, Leonidas
2015-01-01
To carry out automated experiments on mobile phones to test for telepathy in connection with telephone calls. Subjects, aged from 10 to 83, registered online with the names and mobile telephone numbers of three or two senders. A computer selected a sender at random and asked him to call the subject via the computer. The computer then asked the subject to guess the caller's name, and connected the caller and the subject after receiving the guess. A test consisted of six trials. The effects of subjects' sex and age and the effects of time delays on guesses were examined. The main outcome measure was the proportion of correct guesses of the caller's name, compared with the 33.3% or 50% mean chance expectation. In 2080 trials with three callers there were 869 hits (41.8%), above the 33.3% chance level (P < 1 × 10(-15)). The hit rate in incomplete tests was 43.8% (P = .00003), showing that optional stopping could not explain the positive results. In 745 trials with two callers, there were 411 hits (55.2%), above the 50% chance level (P = .003). An analysis of the data made it very unlikely that cheating could explain the positive results. These experiments showed that automated tests for telephone telepathy can be carried out using mobile phones. Copyright © 2015 Elsevier Inc. All rights reserved.
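The headline comparisons against chance reduce to one-sided binomial tests, which can be checked directly from the reported counts:

    from scipy.stats import binomtest

    # 869 hits in 2080 three-caller trials vs. a 1/3 chance rate, and
    # 411 hits in 745 two-caller trials vs. a 1/2 chance rate.
    print(binomtest(869, 2080, p=1/3, alternative='greater').pvalue)
    print(binomtest(411, 745, p=1/2, alternative='greater').pvalue)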
Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays
Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Wetzstein, Gordon
2017-01-01
From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one. PMID:28193871
Experimental realization of universal geometric quantum gates with solid-state spins.
Zu, C; Wang, W-B; He, L; Zhang, W-G; Dai, C-Y; Wang, F; Duan, L-M
2014-10-02
Experimental realization of a universal set of quantum logic gates is the central requirement for the implementation of a quantum computer. In an 'all-geometric' approach to quantum computation, the quantum gates are implemented using Berry phases and their non-Abelian extensions, holonomies, from geometric transformation of quantum states in the Hilbert space. Apart from its fundamental interest and rich mathematical structure, the geometric approach has some built-in noise-resilience features. On the experimental side, geometric phases and holonomies have been observed in thermal ensembles of liquid molecules using nuclear magnetic resonance; however, such systems are known to be non-scalable for the purposes of quantum computing. There are proposals to implement geometric quantum computation in scalable experimental platforms such as trapped ions, superconducting quantum bits and quantum dots, and a recent experiment has realized geometric single-bit gates in a superconducting system. Here we report the experimental realization of a universal set of geometric quantum gates using the solid-state spins of diamond nitrogen-vacancy centres. These diamond defects provide a scalable experimental platform with the potential for room-temperature quantum computing, which has attracted strong interest in recent years. Our experiment shows that all-geometric and potentially robust quantum computation can be realized with solid-state spin quantum bits, making use of recent advances in the coherent control of this system.
The Information Science Experiment System - The computer for science experiments in space
NASA Technical Reports Server (NTRS)
Foudriat, Edwin C.; Husson, Charles
1989-01-01
The concept of the Information Science Experiment System (ISES), potential experiments, and system requirements are reviewed. The ISES is conceived as a computer resource in space whose aim is to assist computer, earth, and space science experiments, to develop and demonstrate new information processing concepts, and to provide an experiment base for developing new information technology for use in space systems. The discussion covers system hardware and architecture, operating system software, the user interface, and the ground communication link.
Development of Computer-Based Experiment Set on Simple Harmonic Motion of Mass on Springs
ERIC Educational Resources Information Center
Musik, Panjit
2017-01-01
The development of computer-based experiment set has become necessary in teaching physics in schools so that students can learn from their real experiences. The purpose of this study is to create and to develop the computer-based experiment set on simple harmonic motion of mass on springs for teaching and learning physics. The average period of…
Planned Axial Reorientation Investigation on Sloshsat
NASA Technical Reports Server (NTRS)
Chato, David J.
2000-01-01
This paper details the design and logic of an experimental investigation to study axial reorientation in low gravity. The Sloshsat free-flyer is described. The planned axial reorientation experiments and test matrices are presented. Existing analytical tools are discussed. Estimates for settling range from 64 to 1127 seconds. The planned experiments are modelled using computational fluid dynamics. These models show promise in reducing settling estimates and demonstrate the ability of pulsed high thrust settling to emulate lower thrust continuous firing.
Time-scheduled delivery of computer health animations: "Installing" healthy habits of computer use.
Wang, Sy-Chyi; Chern, Jin-Yuan
2013-06-01
The development of modern technology brings convenience to our lives but removes physical activity from our daily routines, thereby putting our health at risk. Extended computer use may contribute to symptoms such as visual impairment and musculoskeletal disorders. To help reduce the risk of physical inactivity and promote healthier computer use, this study developed a time-scheduled delivery of health-related animations for users sitting in front of computers for prolonged periods. In addition, we examined the effects that the program had on the computer-related health behavior intentions and actions of participants. Two waves of questionnaires were administered for data collection before and after the intervention. The results showed that the animation program had a positive effect on participants' healthy computer use actions in terms of taking breaks, body massages, and body stretches. It also helped to bridge the intention-action gap of the health behaviors. The development and evaluation are documented, and users' experiences and suggestions are discussed at the end.
Control mechanism of double-rotator-structure ternary optical computer
NASA Astrophysics Data System (ADS)
Kai, SONG; Liping, YAN
2017-03-01
Double-rotator-structure ternary optical processor (DRSTOP) has two key characteristics, namely giant data-bit parallel computing and a reconfigurable processor; it can handle thousands of data bits in parallel and run much faster than electronic computers and other optical computing systems developed so far. To put DRSTOP into practical application, this paper establishes a series of methods: a task classification method, a data-bit allocation method, a control information generation method, a control information formatting and sending method, and a decoded-results retrieval method. These methods form the control mechanism of DRSTOP, which turns it into an automated computing platform. Compared with traditional computing tools, the DRSTOP platform can ease the tension between high energy consumption and big-data computing by greatly reducing the cost of communications and I/O. Finally, the paper designed a set of experiments for the DRSTOP control mechanism to verify its feasibility and correctness. Experimental results showed that the control mechanism is correct, feasible, and efficient.
A Computer Simulation of Community Pharmacy Practice for Educational Use.
Bindoff, Ivan; Ling, Tristan; Bereznicki, Luke; Westbury, Juanita; Chalmers, Leanne; Peterson, Gregory; Ollington, Robert
2014-11-15
To provide a computer-based learning method for pharmacy practice that is as effective as paper-based scenarios, but more engaging and less labor-intensive. We developed a flexible and customizable computer simulation of community pharmacy in which students work through scenarios that encapsulate an entire patient presentation. We compared the traditional paper-based teaching method to our computer-based approach using equivalent scenarios. The paper-based group had 2 tutors while the computer group had none. Both groups were given a prescenario and postscenario clinical knowledge quiz and survey. Students in the computer-based group had generally greater improvements in their clinical knowledge score, and third-year students using the computer-based method also showed more improvement in history taking and counseling competencies. Third-year students also found the simulation fun and engaging. Our simulation of community pharmacy provided an educational experience as effective as the paper-based alternative, despite the lack of a human tutor.
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets
Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...
2017-01-28
Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread level) shared memory and (process level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over the single core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De
Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread level) shared memory and (process level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over the single core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.
All you need is shape: Predicting shear banding in sand with LS-DEM
NASA Astrophysics Data System (ADS)
Kawamoto, Reid; Andò, Edward; Viggiani, Gioacchino; Andrade, José E.
2018-02-01
This paper presents discrete element method (DEM) simulations with experimental comparisons at multiple length scales, underscoring the crucial role of particle shape. The simulations build on technological advances in the DEM furnished by level sets (LS-DEM), which enable the mathematical representation of the surface of arbitrarily-shaped particles such as grains of sand. We show that this ability to model shape enables unprecedented capture of the mechanics of granular materials across scales ranging from macroscopic behavior to local behavior to particle behavior. Specifically, the model is able to predict the onset and evolution of shear banding in sands, replicating the most advanced high-fidelity experiments in triaxial compression equipped with sequential X-ray tomography imaging. We present comparisons of the model and experiment at an unprecedented level of quantitative agreement, building a one-to-one model where every particle in the more than 53,000-particle array has its own avatar or numerical twin. Furthermore, the boundary conditions of the experiment are faithfully captured by modeling the membrane effect as well as the platen displacement and tilting. The results show a computational tool that can give insight into the physics and mechanics of granular materials undergoing shear deformation and failure, with computational times comparable to those of the experiment. One quantitative measure extracted from the LS-DEM simulations that is currently not available experimentally is the evolution of three-dimensional force chains inside and outside of the shear band. We show that the rotations of the force chains are correlated with the rotations of the stress principal directions.
Fernández-Soto, Alicia; Martínez-Rodrigo, Arturo; Moncho-Bogani, José; Latorre, José Miguel; Fernández-Caballero, Antonio
2018-06-01
To establish the neural correlates of phrase quadrature perception in harmonic rhythm, a musical experiment was designed to induce music-evoked stimuli related to one important aspect of harmonic rhythm, namely phrase quadrature. Brain activity is captured through electroencephalography (EEG) using a brain-computer interface. The power spectral value of each EEG channel is estimated to determine how power variance is distributed as a function of frequency. The results of processing the acquired signals are in line with previous studies that use different musical parameters to induce emotions. Indeed, our experiment shows statistical differences in the theta and alpha bands between the fulfillment and the break of phrase quadrature, an important cue of harmonic rhythm, in two classical sonatas.
A method of semi-quantifying β-AP in brain PET-CT 11C-PiB images.
Jiang, Jiehui; Lin, Xiaoman; Wen, Junlin; Huang, Zhemin; Yan, Zhuangzhi
2014-01-01
Alzheimer's disease (AD) is a common health problem in elderly populations. Positron emission tomography-computed tomography (PET-CT) 11C-PiB imaging of amyloid-β peptide (β-AP) is an advanced method for diagnosing AD at an early stage. However, in practice radiologists lack a standardized value to semi-quantify β-AP. This paper proposes such a standardized value: SVβ-AP, which measures the mean ratio between the dimensions of β-AP areas in PET and CT images. A computer aided diagnosis (CAD) approach is also proposed to compute SVβ-AP. A simulation experiment was carried out to pre-test the technical feasibility of the CAD approach and SVβ-AP. The experimental results showed that the approach is technically feasible.
Democratizing Children's Computation: Learning Computational Science as Aesthetic Experience
ERIC Educational Resources Information Center
Farris, Amy Voss; Sengupta, Pratim
2016-01-01
In this essay, Amy Voss Farris and Pratim Sengupta argue that a democratic approach to children's computing education in a science class must focus on the "aesthetics" of children's experience. In "Democracy and Education," Dewey links "democracy" with a distinctive understanding of "experience." For Dewey,…
Advantages of Parallel Processing and the Effects of Communications Time
NASA Technical Reports Server (NTRS)
Eddy, Wesley M.; Allman, Mark
2000-01-01
Many computing tasks involve heavy mathematical calculations, or analyzing large amounts of data. These operations can take a long time to complete using only one computer. Networks such as the Internet provide many computers with the ability to communicate with each other. Parallel or distributed computing takes advantage of these networked computers by arranging them to work together on a problem, thereby reducing the time needed to obtain the solution. The drawback to using a network of computers to solve a problem is the time wasted in communicating between the various hosts. The application of distributed computing techniques to a space environment or to use over a satellite network would therefore be limited by the amount of time needed to send data across the network, which would typically take much longer than on a terrestrial network. This experiment shows how much faster a large job can be performed by adding more computers to the task, what role communications time plays in the total execution time, and the impact a long-delay network has on a distributed computing system.
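The trade-off the experiment probes can be captured in a toy timing model: the compute share of a divisible job shrinks with the number of hosts while coordination cost grows with network delay. The constants below are illustrative, not measurements from the experiment:

    # Total time for a perfectly divisible job of W seconds split over n
    # hosts, plus a per-host communication cost c. As c grows (satellite-like
    # delays), adding hosts stops paying off sooner.
    def total_time(W, n, c):
        return W / n + c * n       # compute shrinks, coordination grows

    for c in (0.01, 1.0, 10.0):    # LAN-ish vs long-delay overheads (assumed)
        times = {n: round(total_time(1000.0, n, c), 1) for n in (1, 2, 4, 8, 16)}
        print(c, times)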
NASA Technical Reports Server (NTRS)
Przybyszewski, J.
1972-01-01
Computer-processed data from low-speed (10 rpm) slipring experiments with two similar (but of opposite polarity) gallium-lubricated tantalum slipring assemblies (hemisphere against disk) carrying 50 amperes dc in vacuum (10^-9 torr) showed that the slipring assembly with the anodic hemisphere had significantly lower peak-to-peak values and standard deviations of coefficient-of-friction samples (a measure of smoothness of operation) than the slipring assembly with the cathodic hemisphere. Similar data from an experiment with the same slipring assemblies running currentless showed more random differences in the frictional behavior between the two assemblies.
Wilkinson, Ann; While, Alison E; Roberts, Julia
2009-04-01
This paper is a report of a review to describe and discuss the psychometric properties of instruments used in healthcare education settings measuring experience and attitudes of healthcare students regarding their information and communication technology skills and their use of computers and the Internet for education. Healthcare professionals are expected to be computer and information literate at registration. A previous review of evaluative studies of computer-based learning suggests that methods of measuring learners' attitudes to computers and computer aided learning are problematic. A search of eight health and social science databases located 49 papers, the majority published between 1995 and January 2007, focusing on the experience and attitudes of students in the healthcare professions towards computers and e-learning. An integrative approach was adopted, with narrative description of findings. Criteria for inclusion were quantitative studies using survey tools with samples of healthcare students and concerning computer and information literacy skills, access to computers, experience with computers and use of computers and the Internet for education purposes. Since the 1980s a number of instruments have been developed, mostly in the United States of America, to measure attitudes to computers, anxiety about computer use, information and communication technology skills, satisfaction and more recently attitudes to the Internet and computers for education. The psychometric properties are poorly described. Advances in computers and technology mean that many earlier tools are no longer valid. Measures of the experience and attitudes of healthcare students to the increased use of e-learning require development in line with computer and technology advances.
Demonstrations with a Liquid Crystal Shutter
ERIC Educational Resources Information Center
Kraftmakher, Yaakov
2012-01-01
The experiments presented show the response of a liquid crystal shutter to applied electric voltages and the delay of the operations. Both properties are important for liquid crystal displays of computers and television sets. Two characteristics of the shutter are determined: (i) the optical transmittance versus applied voltage of various…
An Intelligent Tutor for Intrusion Detection on Computer Systems.
ERIC Educational Resources Information Center
Rowe, Neil C.; Schiavo, Sandra
1998-01-01
Describes an intelligent tutor incorporating a program using artificial-intelligence planning methods to generate realistic audit files reporting actions of simulated users and intruders of a UNIX system, and a program simulating the system afterwards that asks students to inspect the audit and fix problems. Experiments show that students using…
Children Show Selective Trust in Technological Informants
ERIC Educational Resources Information Center
Danovitch, Judith H.; Alzahabi, Reem
2013-01-01
Although children are often exposed to technological devices early in life, little is known about how they evaluate these novel sources of information. In two experiments, children aged 3, 4, and 5 years old ("n" = 92) were presented with accurate and inaccurate computer informants, and they subsequently relied on information provided by…
TRoPICALS: A Computational Embodied Neuroscience Model of Compatibility Effects
ERIC Educational Resources Information Center
Caligiore, Daniele; Borghi, Anna M.; Parisi, Domenico; Baldassarre, Gianluca
2010-01-01
Perceiving objects activates the representation of their affordances. For example, experiments on compatibility effects showed that categorizing objects by producing certain handgrips (power or precision) is faster if the requested responses are compatible with the affordance elicited by the size of objects (e.g., small or large). The article…
Application of an efficient hybrid scheme for aeroelastic analysis of advanced propellers
NASA Technical Reports Server (NTRS)
Srivastava, R.; Sankar, N. L.; Reddy, T. S. R.; Huff, D. L.
1989-01-01
An efficient 3-D hybrid scheme is applied for solving the Euler equations to analyze advanced propellers. The scheme treats the spanwise direction semi-explicitly and the other two directions implicitly, without affecting the accuracy, as compared to a fully implicit scheme. This leads to a reduction in computer time and memory requirements. The calculated power coefficients for two advanced propellers, SR3 and SR7L, at various advance ratios showed good correlation with experiment. Spanwise distributions of the elemental power coefficient and steady pressure coefficient differences also showed good agreement with experiment. A study of the effect of structural flexibility on the performance of the advanced propellers showed that structural deformation due to centrifugal and aerodynamic loading should be included for better correlation.
A maximum entropy reconstruction technique for tomographic particle image velocimetry
NASA Astrophysics Data System (ADS)
Bilsky, A. V.; Lozhkin, V. A.; Markovich, D. M.; Tokarev, M. P.
2013-04-01
This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents the theoretical computation performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and validation of synthetic images demonstrate significant computational time reduction. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART.
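For orientation, the sketch below shows the multiplicative update shared by this family of algebraic techniques, in the simple MART form on a tiny consistent system; MENT's entropy-based derivation differs in detail but has the same iterative correction structure:

    import numpy as np

    # Multiplicative algebraic reconstruction (MART-style) for A f = p with
    # nonnegative entries; each ray's measured/computed projection ratio
    # rescales the voxels it touches.
    def mart(A, p, n_iter=50, relax=1.0):
        f = np.ones(A.shape[1])
        for _ in range(n_iter):
            for i in range(A.shape[0]):
                proj = A[i] @ f
                if proj > 0:
                    f *= (p[i] / proj) ** (relax * A[i])   # per-voxel exponent
        return f

    A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
    p = np.array([2.0, 3.0, 3.0])   # consistent with f = [1, 1, 2]
    print(mart(A, p))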
A molecular dynamics simulation study of chloroform
NASA Astrophysics Data System (ADS)
Tironi, Ilario G.; van Gunsteren, Wilfred F.
Three different chloroform models have been investigated using molecular dynamics computer simulation. The thermodynamic, structural and dynamic properties of the various models were investigated in detail. In particular, the potential energies, diffusion coefficients and rotational correlation times obtained for each model are compared with experiment. It is found that the theory of rotational Brownian motion fails in describing the rotational diffusion of chloroform. The force field of Dietz and Heinzinger was found to give good overall agreement with experiment. An extended investigation of this chloroform model has been performed. Values are reported for the isothermal compressibility, the thermal expansion coefficient and the constant volume heat capacity. The values agree well with experiment. The static and frequency dependent dielectric permittivity were computed from a 1.2 ns simulation conducted under reaction field boundary conditions. Considering the fact that the model is rigid with fixed partial charges, the static dielectric constant and Debye relaxation time compare well with experiment. From the same simulation the shear viscosity was computed using the off-diagonal elements of the pressure tensor, both via an Einstein type relation and via a Green-Kubo equation. The calculated viscosities show good agreement with experimental values. The excess Helmholtz energy is calculated using the thermodynamic integration technique and simulations of 50 and 80 ps. The value obtained for the excess Helmholtz energy matches the theoretical value within a few per cent.
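As an example of the Einstein-relation analyses mentioned, the sketch below estimates a diffusion coefficient from the slope of the mean squared displacement, MSD(t) ≈ 6Dt in three dimensions; the trajectory is synthetic random-walk data standing in for simulation output, and the time step is an assumed value:

    import numpy as np

    rng = np.random.default_rng(2)
    dt = 0.002                                   # ps per step (assumed)
    # Synthetic trajectory: 2000 steps, 32 molecules, 3 coordinates.
    traj = np.cumsum(rng.normal(0, 0.01, size=(2000, 32, 3)), axis=0)

    lags = np.arange(1, 200)
    msd = np.array([np.mean(np.sum((traj[l:] - traj[:-l])**2, axis=-1))
                    for l in lags])
    D = np.polyfit(lags * dt, msd, 1)[0] / 6.0   # slope / 6 gives D
    print(D)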
New method of processing heat treatment experiments with numerical simulation support
NASA Astrophysics Data System (ADS)
Kik, T.; Moravec, J.; Novakova, I.
2017-08-01
In this work, the benefits of combining modern software for numerical simulation of welding processes with laboratory research are described. A new method of processing heat treatment experiments is proposed, leading to relevant input data for numerical simulations of the heat treatment of large parts. It is now possible, using experiments on small test samples, to simulate cooling conditions comparable with the cooling of bigger parts. Results from this method of testing make the boundary conditions of the real cooling process more accurate, and can also be used to improve software databases and optimize computational models. The aim is to refine the computation of temperature fields for large-scale hardened parts, based on a new method for determining the temperature dependence of the heat transfer coefficient into the hardening medium for a particular material, a defined maximal thickness of the processed part, and given cooling conditions. The paper also presents a comparison of the standard and modified (according to the newly suggested methodology) heat transfer coefficient data and their influence on the simulation results. It shows how even small changes influence mainly the distributions of temperature, metallurgical phases, hardness, and stresses. With this experiment it is also possible to obtain not only input data and data enabling optimization of the computational model but also verification data. The greatest advantage of the described method is its independence from the type of cooling medium used.
Social network modulation of reward-related signals
Fareri, Dominic S.; Niznikiewicz, Michael A.; Lee, Victoria K.; Delgado, Mauricio R.
2012-01-01
Everyday goals and experiences are often shared with others who may hold different places within our social networks. We investigated whether the experience of sharing a reward differs with respect to social network. Twenty human participants played a card guessing game for shared monetary outcomes with three partners: a computer, a confederate (out-of-network), and a friend (in-network). Participants subjectively rated the experience of sharing a reward more positively with their friend than the other partners. Neuroimaging results support participants’ subjective reports, as ventral striatal BOLD responses were more robust when sharing monetary gains with a friend, as compared to with the confederate or computer, suggesting a higher value for sharing with an in-network partner. Interestingly, ratings of social closeness co-varied with this activity, resulting in a significant partner × closeness interaction: exploratory analysis showed that only participants reporting higher levels of closeness demonstrated partner-related differences in striatal BOLD response. These results suggest that reward valuation in social contexts is sensitive to distinctions of social network, such that sharing positive experiences with in-network others may carry higher value. PMID:22745503
Enhancing the Undergraduate Computing Experience in Chemical Engineering CACHE Corporation
ERIC Educational Resources Information Center
Edgar, Thomas F.
2006-01-01
This white paper focuses on the integration and enhancement of the computing experience for undergraduates throughout the chemical engineering curriculum. The computing experience for undergraduates in chemical engineering should have continuity and be coordinated from course to course, because a single software solution is difficult to achieve in…
Mobile Cloud Computing with SOAP and REST Web Services
NASA Astrophysics Data System (ADS)
Ali, Mushtaq; Fadli Zolkipli, Mohamad; Mohamad Zain, Jasni; Anwar, Shahid
2018-05-01
Mobile computing in conjunction with mobile web services offers a promising approach to tackling the limitations of mobile devices. Mobile web services are based on two technologies, SOAP and REST, which work with existing protocols to build web services. Both approaches have their own distinct features, but given the resource constraints of mobile devices, the better of the two is the one that minimizes computation and transmission overhead while offloading. Transferring load from a mobile device to remote servers for execution is called computational offloading. Numerous approaches implement computational offloading as a viable solution for easing the resource constraints of mobile devices, yet a dynamic method of computational offloading is always required for a smooth and simple migration of complex tasks. The intention of this work is to present a distinctive approach that does not occupy mobile resources for long periods. We use the concept of web services to delegate computationally intensive tasks for remote execution, and we tested both the SOAP and REST web service approaches for mobile computing. Two parameters were considered in our lab experiments: execution time and energy consumption. The results show that RESTful web service execution is far better than executing the same application through SOAP web services in terms of both metrics: with the developed prototype matrix multiplication app, REST execution time was about 200% better than the SOAP approach, and energy consumption about 250% better.
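A minimal sketch of the REST side of such an offloading test is shown below; the endpoint URL and JSON schema are invented for illustration, since the record does not publish the prototype's interface.

    import time
    import requests

    OFFLOAD_URL = "http://example.org/offload/matmul"   # hypothetical endpoint

    def offload_matmul_rest(a, b):
        """POST two matrices (nested lists) as JSON and time the round trip;
        the analogous SOAP client would wrap the same payload in a much
        heavier XML envelope, one plausible source of the reported gap."""
        t0 = time.perf_counter()
        resp = requests.post(OFFLOAD_URL, json={"a": a, "b": b}, timeout=30)
        resp.raise_for_status()
        return resp.json()["result"], time.perf_counter() - t0

Energy consumption would be measured on the device itself, for example by sampling the battery level during repeated calls.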
Influence of direct computer experience on older adults' attitudes toward computers.
Jay, G M; Willis, S L
1992-07-01
This research examined whether older adults' attitudes toward computers became more positive as a function of computer experience. The sample comprised 101 community-dwelling older adults aged 57 to 87. The intervention involved a 2-week computer training program in which subjects learned to use a desktop publishing software program. A multidimensional computer attitude measure was used to assess differential attitude change and maintenance of change following training. The results indicated that older adults' computer attitudes are modifiable and that direct computer experience is an effective means of change. Attitude change as a function of training was found for the attitude dimensions targeted by the intervention program: computer comfort and efficacy. In addition, maintenance of attitude change was established for at least two weeks following training.
A context-specific latent inhibition effect in a human conditioned suppression task.
Byron Nelson, James; del Carmen Sanjuan, Maria
2006-06-01
Three studies used a computer video game preparation to demonstrate latent inhibition in adult humans. In all studies participants fired torpedoes at a target spaceship by clicking the mouse. Conditioned stimuli (CSs) were presented in the form of coloured "sensors" at the bottom of the screen. Conditioning was conducted by pairing a sensor with an attack from the target spaceship. Participants learned to suppress their rate of mouse clicking in preparation for an attack. In Experiment 1 a total of 10 preexposures to the sensor CS, prior to conditioning, retarded acquisition of suppression. In Experiment 2 the effect of preexposure was shown to be context specific. Experiment 3 showed little generalization of the preexposure effect from one sensor CS to another. Experiment 3 also showed that preexposure did not make the sensor CS inhibitory. Comparisons with conditioned suppression procedures with animals and negative-priming procedures are briefly discussed.
Radar multipath study for rain-on-radome experiments at the Aircraft Landing Dynamics Facility
NASA Technical Reports Server (NTRS)
Mackenzie, Anne I.; Staton, Leo D.
1990-01-01
An analytical study to determine the feasibility of a rain-on-radome experiment at the Aircraft Landing Dynamics Facility (ALDF) at the Langley Research Center is described. The experiment would measure the effects of heavy rain on the transmission of X-band weather radar signals, looking in particular for sources of anomalous attenuation. Feasibility is determined with regard to multipath signals arising from the major structural components of the ALDF. A computer program simulates the transmit and receive antennas, direct-path and multipath signals, and expected attenuation by rain. In the simulation, antenna height, signal polarization, and rainfall rate are variable parameters. The study shows that the rain-on-radome experiment is feasible with regard to multipath signals. The total received signal, taking into account multipath effects, could be measured by commercially available equipment. The study also shows that horizontally polarized signals would produce better experimental results than vertically polarized signals.
BIOSSES: a semantic sentence similarity estimation system for the biomedical domain.
Sogancioglu, Gizem; Öztürk, Hakime; Özgür, Arzucan
2017-07-15
The amount of information available in textual format is rapidly increasing in the biomedical domain. Therefore, natural language processing (NLP) applications are becoming increasingly important to facilitate the retrieval and analysis of these data. Computing the semantic similarity between sentences is an important component in many NLP tasks including text retrieval and summarization. A number of approaches have been proposed for semantic sentence similarity estimation for generic English. However, our experiments showed that such approaches do not effectively cover biomedical knowledge and produce poor results for biomedical text. We propose several approaches for sentence-level semantic similarity computation in the biomedical domain, including string similarity measures and measures based on the distributed vector representations of sentences learned in an unsupervised manner from a large biomedical corpus. In addition, ontology-based approaches are presented that utilize general and domain-specific ontologies. Finally, a supervised regression based model is developed that effectively combines the different similarity computation metrics. A benchmark data set consisting of 100 sentence pairs from the biomedical literature is manually annotated by five human experts and used for evaluating the proposed methods. The experiments showed that the supervised semantic sentence similarity computation approach obtained the best performance (0.836 correlation with gold standard human annotations) and improved over the state-of-the-art domain-independent systems up to 42.6% in terms of the Pearson correlation metric. A web-based system for biomedical semantic sentence similarity computation, the source code, and the annotated benchmark data set are available at: http://tabilab.cmpe.boun.edu.tr/BIOSSES/.
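The general recipe, individual similarity measures combined by a supervised regressor trained on the expert annotations, can be sketched as follows; the two example metrics here are simplifications, and the exact feature set of BIOSSES may differ.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def jaccard(s1, s2):
        """String-overlap similarity between two sentences."""
        w1, w2 = set(s1.lower().split()), set(s2.lower().split())
        return len(w1 & w2) / len(w1 | w2)

    def embedding_cosine(s1, s2, vectors):
        """Cosine of averaged word vectors; `vectors` maps word -> array,
        e.g. embeddings trained on a large biomedical corpus."""
        def mean_vec(s):
            vs = [vectors[w] for w in s.lower().split() if w in vectors]
            return np.mean(vs, axis=0)
        u, v = mean_vec(s1), mean_vec(s2)
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def fit_combiner(X, y):
        """X: one row of metric scores per sentence pair; y: gold scores
        (e.g. averaged over the five expert annotators)."""
        return LinearRegression().fit(np.asarray(X), np.asarray(y))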
Memory states influence value-based decisions.
Duncan, Katherine D; Shohamy, Daphna
2016-11-01
Using memory to guide decisions allows past experience to improve future outcomes. However, the circumstances that modulate how and when memory influences decisions are not well understood. Here, we report that the use of memories to guide decisions depends on the context in which these decisions are made. We show that decisions made in the context of familiar images are more likely to be influenced by past events than are decisions made in the context of novel images (Experiment 1), that this bias persists even when a temporal gap is introduced between the image presentation and the decision (Experiment 2), and that contextual novelty facilitates value learning whereas familiarity facilitates the retrieval and use of previously learned values (Experiment 3). These effects are consistent with neurobiological and computational models of memory, which propose that familiar images evoke a lingering "retrieval state" that facilitates the recollection of other episodic memories. Together, these experiments highlight the importance of episodic memory for decision-making and provide an example of how computational and neurobiological theories can lead to new insights into how and when different types of memories guide our choices.
Using medical knowledge sources on handheld computers--a qualitative study among junior doctors.
Axelson, Christian; Wårdh, Inger; Strender, Lars-Erik; Nilsson, Gunnar
2007-09-01
The emergence of mobile computing could have an impact on how junior doctors learn. To exploit this opportunity it is essential to understand their information seeking process. To explore junior doctors' experiences of using medical knowledge sources on handheld computers. Interviews with five Swedish junior doctors. A qualitative manifest content analysis of a focus group interview followed by a qualitative latent content analysis of two individual interviews. A focus group interview showed that users were satisfied with access to handheld medical knowledge sources, but there was concern about contents, reliability and device dependency. Four categories emerged from individual interviews: (1) A feeling of uncertainty about using handheld technology in medical care; (2) A sense of security that handhelds can provide; (3) A need for contents to be personalized; (4) A degree of adaptability to make the handheld a versatile information tool. A theme was established to link the four categories together, as expressed in the Conclusion section. Junior doctors' experiences of using medical knowledge sources on handheld computers shed light on the need to decrease uncertainty about clinical decisions during medical internship, and to find ways to influence the level of self-confidence in the junior doctor's process of decision-making.
Quo vadimus? The 21st Century and multimedia
NASA Technical Reports Server (NTRS)
Kuhn, Allan D.
1991-01-01
The concept of computer-driven multimedia is related to the NASA Scientific and Technical Information Program (STIP). Multimedia is defined here as the computer integration and output of text, animation, audio, video, and graphics; it is the stage of computer-based information that allows access to experience. The concepts of hypermedia, intermedia, interactive multimedia, hypertext, imaging, cyberspace, and virtual reality are also drawn in. Examples of these technology developments are given for NASA, private industry, and academia, and examples of concurrent technology developments and implementations show how these technologies, along with multimedia, have put us at the threshold of the 21st century. The STI Program sees multimedia as an opportunity to revolutionize the way STI is managed.
Machining fixture layout optimization using particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Dou, Jianping; Wang, Xingsong; Wang, Lei
2011-05-01
Optimization of fixture layout (locator and clamp locations) is critical to reducing geometric error of the workpiece during the machining process. In this paper, the application of the particle swarm optimization (PSO) algorithm is presented to minimize workpiece deformation in the machining region. A PSO-based approach is developed to optimize the fixture layout, integrating the ANSYS parametric design language (APDL) of finite element analysis to compute the objective function for a given layout. A particle library approach is used to decrease the total computation time. A computational experiment on a 2D case shows that the number of function evaluations is decreased by about 96%. A case study illustrates the effectiveness and efficiency of the PSO-based optimization approach.
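For readers unfamiliar with PSO, the skeleton below shows the update loop; the objective f stands in for the workpiece deformation returned by an ANSYS/APDL finite element run, and the hyperparameters are conventional defaults rather than the paper's settings.

    import numpy as np

    def pso_minimize(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
        """Minimize f over the box `bounds` (d x 2 array); each particle
        encodes one candidate fixture layout (locator/clamp coordinates)."""
        rng = np.random.default_rng(0)
        lo, hi = bounds[:, 0], bounds[:, 1]
        x = rng.uniform(lo, hi, (n_particles, lo.size))   # positions
        v = np.zeros_like(x)                              # velocities
        pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
        g = pbest[np.argmin(pbest_f)].copy()              # global best
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            fx = np.array([f(p) for p in x])
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            g = pbest[np.argmin(pbest_f)].copy()
        return g, pbest_f.min()

The particle library mentioned in the abstract amounts to memoizing f, so that layouts revisited by the swarm do not trigger a second finite element evaluation.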
Numerical simulation of steady supersonic flow over spinning bodies of revolution
NASA Technical Reports Server (NTRS)
Sturek, W. B.; Schiff, L. B.
1982-01-01
A recently reported parabolized Navier-Stokes code has been employed to compute the supersonic flowfield about a spinning cone and spinning and nonspinning ogive cylinder and boattailed bodies of revolution at moderate incidence. The computations were performed for flow conditions where extensive measurements for wall pressure, boundary-layer velocity profiles, and Magnus force had been obtained. Comparisons between the computational results and experiment indicate excellent agreement for angles of attack up to 6 deg. At angles greater than 6 deg discrepancies are noted which are tentatively attributed to turbulence modeling errors. The comparisons for Magnus effects show that the code accurately predicts the effects of body shape for the selected models.
Navier-Stokes and Comprehensive Analysis Performance Predictions of the NREL Phase VI Experiment
NASA Technical Reports Server (NTRS)
Duque, Earl P. N.; Burklund, Michael D.; Johnson, Wayne
2003-01-01
A vortex lattice code, CAMRAD II, and a Reynolds-Averaged Navier-Stokes code, OVERFLOW-D2, were used to predict the aerodynamic performance of a two-bladed horizontal axis wind turbine. All computations were compared with experimental data collected at the NASA Ames Research Center 80- by 120-Foot Wind Tunnel. Computations were performed for both axial and yawed operating conditions. Various stall delay models and dynamic stall models were used by the CAMRAD II code. Comparisons between the experimental data and computed aerodynamic loads show that the OVERFLOW-D2 code can accurately predict the power and spanwise loading of a wind turbine rotor.
Self-Organized Service Negotiation for Collaborative Decision Making
Zhang, Bo; Zheng, Ziming
2014-01-01
This paper proposes a self-organized service negotiation method for collaborative decision making (CDM) in an intelligent and automatic manner. It mainly includes three phases: semantic-based capacity evaluation for the CDM sponsor, trust computation for the CDM organization, and negotiation selection of the decision-making service provider (DMSP). In the first phase, the CDM sponsor produces the formal semantic description of the complex decision task for the DMSP and computes capacity evaluation values according to participator instructions from different DMSPs. In the second phase, a novel trust computation approach is presented to compute the subjective belief value, the objective reputation value, and the recommended trust value. In the third phase, based on the capacity evaluation and trust computation, a negotiation mechanism is given to efficiently implement the service selection. The simulation experiment results show that our self-organized service negotiation method is feasible and effective for CDM. PMID:25243228
The role of beliefs in lexical alignment: evidence from dialogs with humans and computers.
Branigan, Holly P; Pickering, Martin J; Pearson, Jamie; McLean, Janet F; Brown, Ash
2011-10-01
Five experiments examined the extent to which speakers' alignment (i.e., convergence) on words in dialog is mediated by beliefs about their interlocutor. To do this, we told participants that they were interacting with another person or a computer in a task in which they alternated between selecting pictures that matched their 'partner's' descriptions and naming pictures themselves (though in reality all responses were scripted). In both text- and speech-based dialog, participants tended to repeat their partner's choice of referring expression. However, they showed a stronger tendency to align with 'computer' than with 'human' partners, and with computers that were presented as less capable than with computers that were presented as more capable. The tendency to align therefore appears to be mediated by beliefs, with the relevant beliefs relating to an interlocutor's perceived communicative capacity.
Self-organized service negotiation for collaborative decision making.
Zhang, Bo; Huang, Zhenhua; Zheng, Ziming
2014-01-01
This paper proposes a self-organized service negotiation method for collaborative decision making (CDM) in an intelligent and automatic manner. It mainly includes three phases: semantic-based capacity evaluation for the CDM sponsor, trust computation for the CDM organization, and negotiation selection of the decision-making service provider (DMSP). In the first phase, the CDM sponsor produces the formal semantic description of the complex decision task for the DMSP and computes capacity evaluation values according to participator instructions from different DMSPs. In the second phase, a novel trust computation approach is presented to compute the subjective belief value, the objective reputation value, and the recommended trust value. In the third phase, based on the capacity evaluation and trust computation, a negotiation mechanism is given to efficiently implement the service selection. The simulation experiment results show that our self-organized service negotiation method is feasible and effective for CDM.
Computer-aided detection of initial polyp candidates with level set-based adaptive convolution
NASA Astrophysics Data System (ADS)
Zhu, Hongbin; Duan, Chaijie; Liang, Zhengrong
2009-02-01
In order to eliminate or weaken the interference between different topological structures on the colon wall, adaptive and normalized convolution methods were used to compute the first and second order spatial derivatives of computed tomographic colonography images, which is the beginning of various geometric analyses. However, the performance of such methods greatly depends on the single-layer representation of the colon wall, which is called the starting layer (SL) in the following text. In this paper, we introduce a level set-based adaptive convolution (LSAC) method to compute the spatial derivatives, in which the level set method is employed to determine a more reasonable SL. The LSAC was applied to a computer-aided detection (CAD) scheme to detect the initial polyp candidates, and experiments showed that it benefits the CAD scheme in both the detection sensitivity and specificity as compared to our previous work.
Distributed Computation of the knn Graph for Large High-Dimensional Point Sets
Plaku, Erion; Kavraki, Lydia E.
2009-01-01
High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k nearest neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As the research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding resources available to a single machine. In this work we efficiently distribute the computation of knn graphs for clusters of processors with message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over one hundred processors and indicate that similar speedup can be obtained with several hundred processors. PMID:19847318
Experimental comparison of two quantum computing architectures.
Linke, Norbert M; Maslov, Dmitri; Roetteler, Martin; Debnath, Shantanu; Figgatt, Caroline; Landsman, Kevin A; Wright, Kenneth; Monroe, Christopher
2017-03-28
We run a selection of algorithms on two state-of-the-art 5-qubit quantum computers that are based on different technology platforms. One is a publicly accessible superconducting transmon device (www.ibm.com/ibm-q) with limited connectivity, and the other is a fully connected trapped-ion system. Even though the two systems have different native quantum interactions, both can be programmed in a way that is blind to the underlying hardware, thus allowing a comparison of identical quantum algorithms between different physical systems. We show that quantum algorithms and circuits that use more connectivity clearly benefit from a better-connected system of qubits. Although the quantum systems here are not yet large enough to eclipse classical computers, this experiment exposes critical factors of scaling quantum computers, such as qubit connectivity and gate expressivity. In addition, the results suggest that codesigning particular quantum applications with the hardware itself will be paramount in successfully using quantum computers in the future.
Improved look-up table method of computer-generated holograms.
Wei, Hui; Gong, Guanghong; Li, Ni
2016-11-10
Heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising yet challenging for three-dimensional displays. To address these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT method. Optical experiments are carried out to validate the effectiveness of the proposed method.
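The core LUT idea, trading per-pixel transcendental evaluation for a table indexed by quantized distance, can be sketched as below; this is the generic scheme, not the authors' distance-factor formulation, and the sampling parameters are placeholders.

    import numpy as np

    def build_fringe_lut(wavelength, r_max, n_bins):
        """Pre-compute cos(2*pi*r/lambda) on a quantized distance grid."""
        r = np.linspace(0.0, r_max, n_bins)
        return r, np.cos(2.0 * np.pi * r / wavelength)

    def hologram(points, grid_x, grid_y, lut_r, lut_v):
        """Accumulate point-source fringes on the hologram plane, replacing
        the per-pixel cosine of the ray-trace method with a table lookup."""
        gx, gy = np.meshgrid(grid_x, grid_y)
        h = np.zeros_like(gx)
        step = lut_r[1] - lut_r[0]
        for x, y, z, amp in points:                 # (x, y, z, amplitude)
            r = np.sqrt((gx - x) ** 2 + (gy - y) ** 2 + z ** 2)
            idx = np.minimum((r / step).astype(int), lut_r.size - 1)
            h += amp * lut_v[idx]
        return h

On a GPU the same lookup is performed per thread, which is where the reported speed-up over on-line evaluation comes from.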
SPREADSHEET-BASED PROGRAM FOR ERGONOMIC ADJUSTMENT OF NOTEBOOK COMPUTER AND WORKSTATION SETTINGS.
Nanthavanij, Suebsak; Prae-Arporn, Kanlayanee; Chanjirawittaya, Sorajak; Paripoonyo, Satirajit; Rodloy, Somsak
2015-06-01
This paper discusses a computer program, ErgoNBC, which provides suggestions regarding the ergonomic settings of a notebook computer (NBC), workstation components, and selected accessories in order to help computer users assume an appropriate work posture during NBC work. From the users' body height and the NBC and workstation component data, ErgoNBC computes the recommended tilt angle of the NBC base unit, NBC screen angle, distance between the user and NBC, seat height and work surface height. If necessary, the NBC base support, seat cushion and footrest, including their settings, are recommended. An experiment involving twenty-four university students was conducted to evaluate the recommendations provided by ErgoNBC. The Rapid Upper Limb Assessment (RULA) technique was used to analyze their work postures both before and after implementing ErgoNBC's recommendations. The results clearly showed that ErgoNBC could significantly help to improve the subjects' work postures.
A decision support model for investment on P2P lending platform.
Zeng, Xiangxiang; Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao
2017-01-01
Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges in making effective investment decisions. In a P2P lending platform, one lender can invest in N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Based on the bipartite graph model, we built an iterative computation model to evaluate the unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace, Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two classification models. The experimental results of the hybrid classification model demonstrate that the logistic classification model and our iterative computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of the iterative computation model and the Logistic classification model) is more efficient and stable than either individual model alone.
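One way to read the iteration on the bipartite graph is as HITS-style mutual reinforcement between lender credibility and loan quality; the sketch below shows that generic scheme, which may differ in detail from the paper's update rule.

    import numpy as np

    def score_loans(M, iters=50):
        """M is the lenders x loans investment (adjacency) matrix: good
        loans attract credible lenders, and credible lenders pick good
        loans. Returns normalized loan and lender scores."""
        lenders = np.ones(M.shape[0])
        loans = np.ones(M.shape[1])
        for _ in range(iters):
            loans = M.T @ lenders              # loan quality from investors
            loans /= np.linalg.norm(loans)
            lenders = M @ loans                # credibility from picks
            lenders /= np.linalg.norm(lenders)
        return loans, lenders

The hybrid model then uses such scores alongside a Logistic classifier's probability as complementary signals.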
Wan, Shixiang; Zou, Quan
2017-01-01
Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing has resulted in a shortage of efficient ultra-large biological sequence alignment approaches capable of coping with different sequence types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g., files of more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient tool, HAlign-II, to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on large-scale DNA and protein data sets, with files of more than 1 GB, showed that HAlign-II saves time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences; it shows extremely high memory efficiency and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure, and its open-source code and datasets are available at http://lab.malab.cn/soft/halign.
A decision support model for investment on P2P lending platform
Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao
2017-01-01
Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges in making effective investment decisions. In a P2P lending platform, one lender can invest in N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Based on the bipartite graph model, we built an iterative computation model to evaluate the unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace, Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two classification models. The experimental results of the hybrid classification model demonstrate that the logistic classification model and our iterative computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of the iterative computation model and the Logistic classification model) is more efficient and stable than either individual model alone. PMID:28877234
Computational models of epileptiform activity.
Wendling, Fabrice; Benquet, Pascal; Bartolomei, Fabrice; Jirsa, Viktor
2016-02-15
We reviewed computer models that have been developed to reproduce and explain epileptiform activity. Unlike other already-published reviews on computer models of epilepsy, the proposed overview starts from the various types of epileptiform activity encountered during both interictal and ictal periods. Computational models proposed so far in the context of partial and generalized epilepsies are classified according to the following taxonomy: neural mass, neural field, detailed network and formal mathematical models. Insights gained about interictal epileptic spikes and high-frequency oscillations, about fast oscillations at seizure onset, about seizure initiation and propagation, about spike-wave discharges and about status epilepticus are described. This review shows the richness and complementarity of the various modeling approaches as well as the fruitful contribution of the computational neuroscience community in the field of epilepsy research. It shows that models have progressively gained acceptance and are now considered as an efficient way of integrating structural, functional and pathophysiological data about neural systems into "coherent and interpretable views". The advantages, limitations and future of modeling approaches are discussed. Perspectives in epilepsy research and clinical epileptology indicate that very promising directions are foreseen, like model-guided experiments or model-guided therapeutic strategy, among others.
Taking Lessons Learned from a Proxy Application to a Full Application for SNAP and PARTISN
Womeldorff, Geoffrey Alan; Payne, Joshua Estes; Bergen, Benjamin Karl
2017-06-09
SNAP is a proxy application which simulates the computational motion of a neutral particle transport code, PARTISN. Here in this work, we have adapted parts of SNAP separately; we have re-implemented the iterative shell of SNAP in the task-model runtime Legion, showing an improvement to the original schedule, and we have created multiple Kokkos implementations of the computational kernel of SNAP, displaying similar performance to the native Fortran. We then translate our Kokkos experiments in SNAP to PARTISN, necessitating engineering development, regression testing, and further thought.
Integration of Openstack cloud resources in BES III computing cluster
NASA Astrophysics Data System (ADS)
Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan
2017-10-01
Cloud computing provides a new technical means for the data processing of high energy physics experiments. However, in a traditional job management system the resources of each queue are fixed and used statically. In order to make the system simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) to integrate IHEPCloud and different batch systems such as Torque and HTCondor. Vpmanager provides dynamic virtual machine scheduling according to the job queue. The BES III use case results show that resource efficiency is greatly improved.
Taking Lessons Learned from a Proxy Application to a Full Application for SNAP and PARTISN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Womeldorff, Geoffrey Alan; Payne, Joshua Estes; Bergen, Benjamin Karl
SNAP is a proxy application which simulates the computational motion of a neutral particle transport code, PARTISN. Here in this work, we have adapted parts of SNAP separately; we have re-implemented the iterative shell of SNAP in the task-model runtime Legion, showing an improvement to the original schedule, and we have created multiple Kokkos implementations of the computational kernel of SNAP, displaying similar performance to the native Fortran. We then translate our Kokkos experiments in SNAP to PARTISN, necessitating engineering development, regression testing, and further thought.
Flexibility of Bricard's linkages and other structures via resultants and computer algebra.
Lewis, Robert H; Coutsias, Evangelos A
2016-07-01
Flexibility of structures is extremely important for chemistry and robotics. Following our earlier work, we study flexibility using polynomial equations, resultants, and a symbolic algorithm of our creation that analyzes the resultant. We show that the software solves a classic arrangement of quadrilaterals in the plane due to Bricard. We fill in several gaps in Bricard's work and discover new flexible arrangements that he was apparently unaware of. This provides strong evidence for the maturity of the software, and is a wonderful example of mathematical discovery via computer assisted experiment.
[Research of controlling of smart home system based on P300 brain-computer interface].
Wang, Jinjia; Yang, Chengjie
2014-08-01
Using electroencephalogram (EEG) signals to control external devices has always been a research focus in the field of brain-computer interfaces (BCI). This is especially significant for people with disabilities who have lost the capacity for movement. In this paper, the P300-based BCI and microcontroller-based wireless radio frequency (RF) technology are used to design a smart home control system, which can directly control household appliances, the lighting system, and security devices. Experimental results showed that the system is simple, reliable and easy to popularise.
One Head Start Classroom's Experience: Computers and Young Children's Development.
ERIC Educational Resources Information Center
Fischer, Melissa Anne; Gillespie, Catherine Wilson
2003-01-01
Contends that early childhood educators need to understand how exposure to computers and constructive computer programs affects the development of children. Specifically examines: (1) research on children's technology experiences; (2) determining best practices; and (3) addressing educators' concerns about computers replacing other developmentally…
Visuomotor learning by passive motor experience
Sakamoto, Takashi; Kondo, Toshiyuki
2015-01-01
Humans can adapt to unfamiliar dynamic and/or kinematic transformations through active motor experience. Recent studies of neurorehabilitation using robots or brain-computer interface (BCI) technology suggest that passive motor experience may play a measurable role in motor recovery; however, our knowledge of passive motor learning is limited. To clarify the effects of passive motor experience on human motor learning, we performed arm reaching experiments guided by a robotic manipulandum. The results showed that the passive motor experience had an anterograde transfer effect on the subsequent motor execution, whereas no retrograde interference was confirmed in the ABA paradigm experiment. This suggests that the passive experience of the error between visual and proprioceptive sensations leads to a limited but real compensation of behavior, although it is fragile and cannot be consolidated as a persistent motor memory. PMID:26029091
Roy, Tapta Kanchan; Sharma, Rahul; Gerber, R Benny
2016-01-21
First-principles quantum calculations for the anharmonic vibrational spectroscopy of three protected dipeptides are carried out and compared with experimental data. Using hybrid HF/MP2 potentials, the Vibrational Self-Consistent Field with Second-Order Perturbation Correction (VSCF-PT2) algorithm is used to compute the spectra without any ad hoc scaling or fitting. All of the vibrational modes (135 for the largest system) are treated quantum mechanically and anharmonically, using full pair-wise coupling potentials to represent the interaction between different modes. In the hybrid potential scheme the MP2 method is used for the harmonic part of the potential and a modified HF method is used for the anharmonic part. The overall agreement between computed spectra and experiment is very good and reveals different signatures for different conformers. This study shows that first-principles spectroscopic calculations of good accuracy are possible for dipeptides, and hence opens possibilities for determining dipeptide conformer structures by comparing spectroscopic calculations with experiment.
Conditional cooperation and confusion in public-goods experiments
Burton-Chellew, Maxwell N.; El Mouden, Claire; West, Stuart A.
2016-01-01
Economic experiments are often used to study if humans altruistically value the welfare of others. A canonical result from public-good games is that humans vary in how they value the welfare of others, dividing into fair-minded conditional cooperators, who match the cooperation of others, and selfish noncooperators. However, an alternative explanation for the data is that individuals vary in their understanding of how to maximize income, with misunderstanding leading to the appearance of cooperation. We show that (i) individuals divide into the same behavioral types when playing with computers, whom they cannot be concerned with the welfare of; (ii) behavior across games with computers and humans is correlated and can be explained by variation in understanding of how to maximize income; (iii) misunderstanding correlates with higher levels of cooperation; and (iv) standard control questions do not guarantee understanding. These results cast doubt on certain experimental methods and demonstrate that a common assumption in behavioral economics experiments, that choices reveal motivations, will not necessarily hold. PMID:26787890
Luo, Yiqi; Zhang, Shen; Tao, Ran; Geng, Haiyan
2016-02-01
We conducted two experiments to explore how social decision making is influenced by the interaction of eye contact and social value orientation (SVO). Specifically, participants with a Prosocial (Prosocials) or a Proself (Proselfs) SVO played Prisoner's Dilemma games with a computer partner following supraliminal (Experiment 1) and subliminal (Experiment 2) direct gaze from that partner. Results showed that participants made more cooperative decisions after supraliminal eye contact than after no eye contact, and the effect existed only for the Prosocials but not for the Proselfs. Nevertheless, when the computer partner made subliminal eye contact with the participants, although more cooperative choices were found among the Prosocials following subliminal eye contact relative to no contact, the Proselfs demonstrated reduced cooperation rates. These findings suggest that Prosocials and Proselfs interpreted eye contact in distinct ways at different levels of awareness, leading to different social decisions.
Scaling analysis applied to the NORVEX code development and thermal energy flight experiment
NASA Technical Reports Server (NTRS)
Skarda, J. Raymond Lee; Namkoong, David; Darling, Douglas
1991-01-01
A scaling analysis is used to study the dominant flow processes that occur in molten phase change material (PCM) under 1 g and microgravity conditions. Results of the scaling analysis are applied to the development of the NORVEX (NASA Oak Ridge Void Experiment) computer program and the preparation of the Thermal Energy Storage (TES) flight experiment. The NORVEX computer program, which is being developed to predict melting and freezing of the PCM with void formation in a 1 g or microgravity environment, is described. NORVEX predictions are compared with the scaling and similarity results. The approach to be used to validate NORVEX with TES flight data is also discussed. Similarity and scaling show that the inertial terms must be included as part of the momentum equation in either the 1 g or microgravity environment (a creeping flow assumption is invalid). A 10^-4 g environment was found to be a suitable microgravity environment for the proposed PCM.
Conditional cooperation and confusion in public-goods experiments.
Burton-Chellew, Maxwell N; El Mouden, Claire; West, Stuart A
2016-02-02
Economic experiments are often used to study if humans altruistically value the welfare of others. A canonical result from public-good games is that humans vary in how they value the welfare of others, dividing into fair-minded conditional cooperators, who match the cooperation of others, and selfish noncooperators. However, an alternative explanation for the data is that individuals vary in their understanding of how to maximize income, with misunderstanding leading to the appearance of cooperation. We show that (i) individuals divide into the same behavioral types when playing with computers, whom they cannot be concerned with the welfare of; (ii) behavior across games with computers and humans is correlated and can be explained by variation in understanding of how to maximize income; (iii) misunderstanding correlates with higher levels of cooperation; and (iv) standard control questions do not guarantee understanding. These results cast doubt on certain experimental methods and demonstrate that a common assumption in behavioral economics experiments, that choices reveal motivations, will not necessarily hold.
ERIC Educational Resources Information Center
Kim, Charles; Jackson, Deborah; Keiller, Peter
2016-01-01
A new, interdisciplinary, team-taught course has been designed to educate students in Electrical and Computer Engineering (ECE) so that they can respond to global and urgent issues concerning computer control systems in nuclear power plants. This paper discusses our experience and assessment of the interdisciplinary computer and nuclear energy…
ERIC Educational Resources Information Center
Pritchard, Benjamin P.; Simpson, Scott; Zurek, Eva; Autschbach, Jochen
2014-01-01
A computational experiment investigating the ¹H and ¹³C nuclear magnetic resonance (NMR) chemical shifts of molecules with unpaired electrons has been developed and implemented. This experiment is appropriate for an upper-level undergraduate laboratory course in computational, physical, or inorganic chemistry. The…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jing Yanfei, E-mail: yanfeijing@uestc.edu.c; Huang Tingzhu, E-mail: tzhuang@uestc.edu.c; Duan Yong, E-mail: duanyong@yahoo.c
This study focuses mainly on iterative solutions with simple diagonal preconditioning to two complex-valued nonsymmetric systems of linear equations arising from a computational chemistry model problem proposed by Sherry Li of NERSC. Numerical experiments show the feasibility of iterative methods to some extent when applied to the problems and reveal the competitiveness of our recently proposed Lanczos biconjugate A-orthonormalization methods compared to other classic and popular iterative methods. The experimental results also indicate that application-specific preconditioners may be mandatory for accelerating convergence.
The application of digital techniques to the analysis of metallurgical experiments
NASA Technical Reports Server (NTRS)
Rathz, T. J.
1977-01-01
The application of a specific digital computer system (known as the Image Data Processing System) to the analysis of three NASA-sponsored metallurgical experiments is discussed in some detail. The basic hardware and software components of the Image Data Processing System are presented. Many figures are presented in the discussion of each experimental analysis in an attempt to show the accuracy and speed that the Image Data Processing System affords in analyzing photographic images dealing with metallurgy, and in particular with material processing.
Remote control system for high-perfomance computer simulation of crystal growth by the PFC method
NASA Astrophysics Data System (ADS)
Pavlyuk, Evgeny; Starodumov, Ilya; Osipov, Sergei
2017-04-01
Modeling of the crystallization process by the phase field crystal (PFC) method is one of the important directions of modern computational materials science. In this paper, the practical side of computer simulation of the crystallization process by the PFC method is investigated. To solve problems using this method, it is necessary to use high-performance computing clusters, data storage systems and other, often expensive, complex computer systems. Access to such resources is often limited, unstable and accompanied by various administrative problems. In addition, the variety of software and settings of different computing clusters sometimes does not allow researchers to use unified program code; the code has to be adapted for each configuration of the computing complex. The practical experience of the authors has shown that the creation of a special control system for computations, with the possibility of remote use, can greatly simplify the implementation of simulations and increase the performance of scientific research. In the current paper we show the principal idea of such a system and justify its efficiency.
SCEAPI: A unified Restful Web API for High-Performance Computing
NASA Astrophysics Data System (ADS)
Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi
2017-10-01
The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with HTTP or HTTPS protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer and job management for creating, submitting and monitoring jobs, and how to use SCEAPI in an easy-to-use way. Finally, we discuss how to quickly exploit more HPC resources for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI; our work shows that SCEAPI is an easy-to-use and effective solution for extending opportunistic HPC resources.
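Consuming such an API is straightforward from any HTTP client; the sketch below is illustrative only, since the concrete endpoint paths, payload fields and token scheme used here are assumptions rather than the published SCEAPI specification.

    import requests

    BASE = "https://sceapi.example.org"        # placeholder host

    def submit_job(token, script, n_cores):
        """Submit a job, mirroring the authentication and job-management
        functions the paper describes; all field names are hypothetical."""
        headers = {"Authorization": f"Bearer {token}"}
        payload = {"script": script, "cores": n_cores}
        r = requests.post(f"{BASE}/jobs", json=payload,
                          headers=headers, timeout=30)
        r.raise_for_status()
        return r.json()["job_id"]

    def job_status(token, job_id):
        """Poll a submitted job until the caller sees a terminal state."""
        headers = {"Authorization": f"Bearer {token}"}
        r = requests.get(f"{BASE}/jobs/{job_id}", headers=headers, timeout=30)
        r.raise_for_status()
        return r.json()["status"]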
Statistical Computations Underlying the Dynamics of Memory Updating
Gershman, Samuel J.; Radulescu, Angela; Norman, Kenneth A.; Niv, Yael
2014-01-01
Psychophysical and neurophysiological studies have suggested that memory is not simply a carbon copy of our experience: Memories are modified or new memories are formed depending on the dynamic structure of our experience, and specifically, on how gradually or abruptly the world changes. We present a statistical theory of memory formation in a dynamic environment, based on a nonparametric generalization of the switching Kalman filter. We show that this theory can qualitatively account for several psychophysical and neural phenomena, and present results of a new visual memory experiment aimed at testing the theory directly. Our experimental findings suggest that humans can use temporal discontinuities in the structure of the environment to determine when to form new memory traces. The statistical perspective we offer provides a coherent account of the conditions under which new experience is integrated into an old memory versus forming a new memory, and shows that memory formation depends on inferences about the underlying structure of our experience. PMID:25375816
Mobility-Aware Caching and Computation Offloading in 5G Ultra-Dense Cellular Networks
Chen, Min; Hao, Yixue; Qiu, Meikang; Song, Jeungeun; Wu, Di; Humar, Iztok
2016-01-01
Recent trends show that Internet traffic is increasingly dominated by content, which is accompanied by the exponential growth of traffic. To cope with this phenomenon, network caching is introduced to utilize the storage capacity of diverse network devices. In this paper, we first summarize four basic caching placement strategies, i.e., local caching, Device-to-Device (D2D) caching, Small cell Base Station (SBS) caching and Macrocell Base Station (MBS) caching. However, studies show that so far, much of the research has ignored the impact of user mobility. Therefore, taking the effect of user mobility into consideration, we propose a joint mobility-aware caching and SBS density placement scheme (MS caching). In addition, differences and relationships between caching and computation offloading are discussed. We present a design of hybrid computation offloading and support it with experimental results, which demonstrate improved performance in terms of energy cost. Finally, we discuss the design of an incentive mechanism by considering network dynamics, differentiated users' quality of experience (QoE) and the heterogeneity of mobile terminals in terms of caching and computing capabilities. PMID:27347975
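The offloading decision itself reduces to an energy comparison; the sketch below uses the textbook model (compute locally versus transmit and idle), not the paper's exact formulation, and every number in the example is hypothetical.

    def should_offload(cycles, f_local, p_compute, data_bits, rate,
                       p_tx, p_idle, f_server=5e9):
        """True if offloading costs less device energy than local execution:
        E_local = P_c * C / f_l versus E_off = P_tx * B/R + P_idle * C/f_s."""
        e_local = p_compute * cycles / f_local
        t_tx = data_bits / rate
        e_offload = p_tx * t_tx + p_idle * cycles / f_server
        return e_offload < e_local

    # hypothetical task: 2e9 cycles on a 1 GHz handset drawing 0.8 W,
    # with 4 Mb of input sent over a 10 Mb/s link
    print(should_offload(2e9, 1e9, 0.8, 4e6, 1e7, p_tx=1.2, p_idle=0.05))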
Mobility-Aware Caching and Computation Offloading in 5G Ultra-Dense Cellular Networks.
Chen, Min; Hao, Yixue; Qiu, Meikang; Song, Jeungeun; Wu, Di; Humar, Iztok
2016-06-25
Recent trends show that Internet traffic is increasingly dominated by content, which is accompanied by the exponential growth of traffic. To cope with this phenomenon, network caching is introduced to utilize the storage capacity of diverse network devices. In this paper, we first summarize four basic caching placement strategies, i.e., local caching, Device-to-Device (D2D) caching, Small cell Base Station (SBS) caching and Macrocell Base Station (MBS) caching. However, studies show that so far, much of the research has ignored the impact of user mobility. Therefore, taking the effect of user mobility into consideration, we propose a joint mobility-aware caching and SBS density placement scheme (MS caching). In addition, differences and relationships between caching and computation offloading are discussed. We present a design of hybrid computation offloading and support it with experimental results, which demonstrate improved performance in terms of energy cost. Finally, we discuss the design of an incentive mechanism by considering network dynamics, differentiated users' quality of experience (QoE) and the heterogeneity of mobile terminals in terms of caching and computing capabilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Rehim, A M; Stathopoulos, Andreas; Orginos, Kostas
2014-08-01
The technique that was used to build the EigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, similarly to the symmetric case, we can build an algorithm that is capable of computing a few smallest magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of EigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental EigBiCG followed by appropriately restarted and deflated BiCGStab provides a competitive method for systems with multiple right-hand sides.
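The deflation step for the remaining right-hand sides can be sketched as an oblique projection that seeds BiCGStab with a corrected initial guess; this illustrates the deflation idea only, not the incremental EigBiCG algorithm itself, and assumes the small projected matrix is well conditioned.

    import numpy as np
    from scipy.sparse.linalg import bicgstab

    def deflated_bicgstab(A, b, V, W):
        """V, W: approximate right/left eigenvectors (as columns) gathered
        while solving earlier systems. Solve the small projected system
        (W^H A V) y = W^H b, set x0 = V y, then let BiCGStab handle the
        remaining, better-conditioned part of the spectrum."""
        AV = A @ V
        y = np.linalg.solve(W.conj().T @ AV, W.conj().T @ b)
        x0 = V @ y
        return bicgstab(A, b, x0=x0)            # (solution, info flag)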
NASA Astrophysics Data System (ADS)
Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.
2012-12-01
Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provides an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.
Computational Study of Hypersonic Boundary Layer Stability on Cones
NASA Astrophysics Data System (ADS)
Gronvall, Joel Edwin
Due to the complex nature of boundary layer laminar-turbulent transition in hypersonic flows and the resultant effect on the design of re-entry vehicles, there remains considerable interest in developing a deeper understanding of the underlying physics. To that end, the use of experimental observations and computational analysis in a complementary manner will provide the greatest insights. It is the intent of this work to provide such an analysis for two ongoing experimental investigations. The first focuses on the hypersonic boundary layer transition experiments for a slender cone that are being conducted at JAXA's free-piston shock tunnel HIEST facility. Of particular interest are the measurements of disturbance frequencies associated with transition at high enthalpies. The computational analysis provided for these cases included two-dimensional CFD mean flow solutions for use in boundary layer stability analyses. The disturbances in the boundary layer were calculated using the linear parabolized stability equations. Estimates for transition locations, comparisons of measured disturbance frequencies and computed frequencies, and a determination of the type of disturbances present were made. It was found that, for the cases where the disturbances were measured at locations where the flow was still laminar but nearly transitional, the highly amplified disturbances showed reasonable agreement with the computations. Additionally, an investigation of the effects of finite-rate chemistry and vibrational excitation on flows over cones was conducted for a set of theoretical operational conditions at the HIEST facility. The second study focuses on transition in three-dimensional hypersonic boundary layers, and for this the cone at angle of attack experiments being conducted at the Boeing/AFOSR Mach-6 quiet tunnel at Purdue University were examined. Specifically, the effect of surface roughness on the development of the stationary crossflow instability is investigated in this work. One standard mean flow solution and two direct numerical simulations of a slender cone at an angle of attack were computed. The direct numerical simulations included a digitally-filtered, randomly distributed surface roughness and were performed using a high-order, low-dissipation numerical scheme on appropriately resolved grids. Comparisons with experimental observations showed excellent qualitative agreement. Comparisons with similar previous computational work were also made and showed agreement in the wavenumber range of the most unstable crossflow modes.
A Numerical Evaluation of Icing Effects on a Natural Laminar Flow Airfoil
NASA Technical Reports Server (NTRS)
Chung, James J.; Addy, Harold E., Jr.
2000-01-01
As a part of CFD code validation efforts within the Icing Branch of NASA Glenn Research Center, computations were performed for the natural laminar flow (NLF) airfoil NLF-0414, with 6 and 22.5 minute ice accretions. Both 3-D ice castings and 2-D machine-generated ice shapes were used in wind tunnel tests to study the effects of natural as well as simulated ice. They were mounted in the test section of the Low Turbulence Pressure Tunnel (LTPT) at NASA Langley so that the 2-dimensionality of the flow could be maintained. Aerodynamic properties predicted by the computations were compared to data obtained through the experiment by the authors at the LTPT. Computations were performed only in 2-D; in the case of 3-D ice, the digitized ice shape obtained at one spanwise location was used. The comparisons concentrated mainly on the lift characteristics over Reynolds numbers ranging from 3 to 10 million and Mach numbers ranging from 0.12 to 0.29. WIND code computations indicated that the predicted stall angles were in agreement with experiment within one or two degrees. The maximum lift values obtained by computations were in good agreement with those of the experiment for the 6 minute ice shapes and the 22.5 minute 3-D ice, but were somewhat lower in the case of the 22.5 minute 2-D ice. In general, variation of the Reynolds number did not cause much change in the lift values, while variation of the Mach number showed more change in the lift. The Spalart-Allmaras (S-A) turbulence model was the best performing model for the airfoil with the 22.5 minute ice, and the Shear Stress Transport (SST) turbulence model was the best for the airfoil with the 6 minute ice and also for the clean airfoil. The pressure distribution on the surface of the iced airfoil showed good agreement for the 6 minute ice. However, relatively poor agreement of the pressure distribution on the upper surface aft of the leading edge horn for the 22.5 minute ice suggests that improvements are needed in the grid or turbulence models.
A Fast Approach to Automatic Detection of Brain Lesions
Koley, Subhranil; Chakraborty, Chandan; Mainero, Caterina; Fischl, Bruce; Aganj, Iman
2017-01-01
Template matching is a popular approach to computer-aided detection of brain lesions from magnetic resonance (MR) images. The outcomes are often sufficient for localizing lesions and assisting clinicians in diagnosis. However, processing large MR volumes with three-dimensional (3D) templates is demanding in terms of computational resources, hence the importance of reducing the computational complexity of template matching, particularly in situations in which time is crucial (e.g. emergent stroke). In view of this, we make use of 3D Gaussian templates with varying radii and propose a new method to compute the normalized cross-correlation coefficient as a similarity metric between the MR volume and the template to detect brain lesions. Contrary to the conventional fast Fourier transform (FFT) based approach, whose runtime grows as O(N log N) with the number of voxels, the proposed method computes the cross-correlation in O(N). We show through our experiments that the proposed method outperforms the FFT approach in terms of computational time, and retains comparable accuracy. PMID:29082383
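As a concrete illustration of why template matching can be made linear-time, the sketch below computes a normalized cross-correlation map using box-filter running sums, so the per-voxel cost is constant for a fixed template size. This is a generic construction for illustration only, not the authors' algorithm; the function name and the constant-padding choice are assumptions.

    import numpy as np
    from scipy.ndimage import correlate, uniform_filter

    def ncc_map(volume, template):
        # Zero-mean template; its norm appears in the NCC denominator.
        t = template.astype(float) - template.mean()
        t_norm = np.sqrt((t ** 2).sum())
        n = template.size
        v = volume.astype(float)
        # Numerator: correlation of the volume with the zero-mean template.
        num = correlate(v, t, mode='constant')
        # Local mean and variance of the volume over the template window,
        # obtained from O(N) box filters rather than an FFT.
        mu = uniform_filter(v, size=template.shape, mode='constant')
        sq = uniform_filter(v * v, size=template.shape, mode='constant')
        var = np.maximum(sq - mu * mu, 0.0)
        den = np.sqrt(var * n) * t_norm
        return np.where(den > 1e-12, num / den, 0.0)

A 3D Gaussian template of a given radius can be evaluated on a small grid and passed in directly; thresholding the resulting map then yields candidate lesion locations.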
NASA Astrophysics Data System (ADS)
Wang, Lusheng; Yang, Yong; Lin, Guohui
Finding the closest object for a query in a database is a classical problem in computer science. For some modern biological applications, computing the similarity between two objects can be very time consuming; for example, it takes a long time to compute the edit distance between two whole chromosomes or the alignment cost of two 3D protein structures. In this paper, we study the nearest neighbor search problem in metric space, where the pairwise distance between two objects in the database is known and we want to minimize the number of distances computed on-line between the query and objects in the database in order to find the closest object. We have designed two randomized approaches for indexing metric space databases, where objects are purely described by their distances to each other. Analysis and experiments show that our approaches only need to compute distances to O(log n) objects in order to find the closest object, where n is the total number of objects in the database.
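The flavor of such distance-frugal search can be seen in the classical AESA-style pruning sketched below: every on-line distance, combined with the precomputed pairwise distances, yields triangle-inequality lower bounds for all remaining candidates. This is a minimal illustration of the problem setting, not the authors' randomized indexing scheme.

    def nearest_neighbor(query_dist, pairwise, n):
        # query_dist(i): expensive on-line distance d(query, object i).
        # pairwise[i][j]: precomputed d(object i, object j).
        lower = [0.0] * n          # lower bounds from the triangle inequality
        alive = set(range(n))
        best_i, best_d = -1, float('inf')
        while alive:
            i = min(alive, key=lambda j: lower[j])   # most promising candidate
            alive.discard(i)
            d = query_dist(i)      # the only expensive operation
            if d < best_d:
                best_i, best_d = i, d
            for j in list(alive):  # tighten bounds, prune hopeless objects
                lower[j] = max(lower[j], abs(d - pairwise[i][j]))
                if lower[j] >= best_d:
                    alive.discard(j)
        return best_i, best_d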
An integrated compact airborne multispectral imaging system using embedded computer
NASA Astrophysics Data System (ADS)
Zhang, Yuedong; Wang, Li; Zhang, Xuguo
2015-08-01
An integrated compact airborne multispectral imaging system using an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer has excellent universality and expansibility, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls camera parameter setting, filter wheel and stabilized platform operation, and image and POS data acquisition, and stores the images and data. The airborne multispectral imaging system can connect peripheral devices through the ports of the embedded computer, which makes system operation and management of the stored image data easy. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expansibility. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.
Experimental Identification of Non-Abelian Topological Orders on a Quantum Simulator.
Li, Keren; Wan, Yidun; Hung, Ling-Yan; Lan, Tian; Long, Guilu; Lu, Dawei; Zeng, Bei; Laflamme, Raymond
2017-02-24
Topological orders can be used as media for topological quantum computing, a promising quantum computation model due to its invulnerability against local errors. Conversely, a quantum simulator, often regarded as a quantum computing device for special purposes, also offers a way of characterizing topological orders. Here, we show how to identify distinct topological orders via measuring their modular S and T matrices. In particular, we employ a nuclear magnetic resonance quantum simulator to study the properties of three topologically ordered matter phases described by the string-net model with two string types, including the Z_{2} toric code, doubled semion, and doubled Fibonacci. The third, the non-Abelian doubled Fibonacci order, is notably expected to be the simplest candidate for universal topological quantum computing. Our experiment serves as the basic module, built on which one can simulate braiding of non-Abelian anyons and, ultimately, topological quantum computation via the braiding, and thus provides a new approach to investigating topological orders using quantum computers.
NASA Technical Reports Server (NTRS)
Kumar, A.; Rudy, D. H.; Drummond, J. P.; Harris, J. E.
1982-01-01
Several two- and three-dimensional external and internal flow problems solved on the STAR-100 and CYBER-203 vector processing computers are described. The flow field was described by the full Navier-Stokes equations, which were then solved by explicit finite-difference algorithms. Problem results and computer system requirements are presented. Program organization and data base structure for three-dimensional computer codes, which will eliminate or reduce page faulting, are discussed. Storage requirements for three-dimensional codes are reduced by calculating transformation metric data in each step. As a result, in-core grid points were increased in number by 50% to 150,000, with a 10% increase in execution time. An assessment of current and future machine requirements shows that even on the CYBER-205 computer only a few problems can be solved realistically. Estimates reveal that the present situation is more storage limited than compute-rate limited, but advancements in both storage and speed are essential for realistic calculation of three-dimensional flows.
Internet messenger based smart virtual class learning using ubiquitous computing
NASA Astrophysics Data System (ADS)
Umam, K.; Mardi, S. N. S.; Hariadi, M.
2017-06-01
Internet messenger (IM) has become an important educational technology component in college education. IM makes it possible for students to engage in learning and collaborating in smart virtual class learning (SVCL) using ubiquitous computing. However, the model of IM-based smart virtual class learning using ubiquitous computing, and empirical evidence that would favor a broad application to improve engagement and behavior, are still limited. In addition, the expectation that IM-based SVCL using ubiquitous computing could improve engagement and behavior in a smart class cannot be confirmed, because the majority of the reviewed studies followed instruction paradigms. This article aims to present the model of IM-based SVCL using ubiquitous computing and to show learners' experiences of improved engagement and behavior in learner-learner and learner-lecturer interactions. The method applied in this paper includes a design process and quantitative analysis techniques, with the purpose of identifying scenarios of ubiquitous computing and capturing the impressions of learners and lecturers about the engagement and behavior aspects and their contribution to learning.
Research on elastic resource management for multi-queue under cloud computing environment
NASA Astrophysics Data System (ADS)
CHENG, Zhenjing; LI, Haibo; HUANG, Qiulan; Cheng, Yaodong; CHEN, Gang
2017-10-01
As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments. However, this method cannot be well adapted to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system for a cloud computing environment has been designed. This system performs unified management of virtual computing nodes on the basis of the job queues in HTCondor, based on dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. Practical runs show that virtual computing resources dynamically expand or shrink as computing requirements change. Additionally, the CPU utilization ratio of computing resources increased significantly compared with traditional resource management. The system also performs well when there are multiple Condor schedulers and multiple job queues.
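The core reconciliation logic of such a system can be pictured as a periodic per-queue decision driven by two utilization thresholds and a quota, as in the sketch below. The threshold values, the function shape, and the omission of the two-stage pool are all simplifying assumptions, not the IHEP implementation.

    def reconcile(queue_len, n_vms, n_busy, quota, low=0.25, high=0.85):
        # One scaling decision for one HTCondor job queue.
        util = n_busy / n_vms if n_vms else 1.0
        if queue_len > 0 and util >= high and n_vms < quota:
            # Jobs are waiting and the pool is saturated: boot more VMs,
            # but never beyond the experiment's quota.
            return ('expand', min(queue_len, quota - n_vms))
        if queue_len == 0 and util <= low:
            # The queue is drained and most VMs are idle: retire them.
            return ('shrink', n_vms - n_busy)
        return ('hold', 0)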
Taylor, Gavin J; Paulk, Angelique C; Pearson, Thomas W J; Moore, Richard J D; Stacey, Jacqui A; Ball, David; van Swinderen, Bruno; Srinivasan, Mandyam V
2015-10-01
When using virtual-reality paradigms to study animal behaviour, careful attention must be paid to how the animal's actions are detected. This is particularly relevant in closed-loop experiments where the animal interacts with a stimulus. Many different sensor types have been used to measure aspects of behaviour, and although some sensors may be more accurate than others, few studies have examined whether, and how, such differences affect an animal's behaviour in a closed-loop experiment. To investigate this issue, we conducted experiments with tethered honeybees walking on an air-supported trackball and fixating a visual object in closed-loop. Bees walked faster and along straighter paths when the motion of the trackball was measured in the classical fashion - using optical motion sensors repurposed from computer mice - than when measured more accurately using a computer vision algorithm called 'FicTrac'. When computer mouse sensors were used to measure bees' behaviour, the bees modified their behaviour and achieved improved control of the stimulus. This behavioural change appears to be a response to a systematic error in the computer mouse sensor that reduces the sensitivity of this sensor system under certain conditions. Although the large perceived inertia and mass of the trackball relative to the honeybee is a limitation of tethered walking paradigms, observing differences depending on the sensor system used to measure bee behaviour was not expected. This study suggests that bees are capable of fine-tuning their motor control to improve the outcome of the task they are performing. Further, our findings show that caution is required when designing virtual-reality experiments, as animals can potentially respond to the artificial scenario in unexpected and unintended ways. © 2015. Published by The Company of Biologists Ltd.
Using Computer Games for Instruction: The Student Experience
ERIC Educational Resources Information Center
Grimley, Michael; Green, Richard; Nilsen, Trond; Thompson, David; Tomes, Russell
2011-01-01
Computer games are fun, exciting and motivational when used as leisure pursuits. But do they have similar attributes when utilized for educational purposes? This article investigates whether learning by computer game can improve student experiences compared with a more formal lecture approach and whether computer games have potential for improving…
Experience with a UNIX based batch computing facility for H1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhards, R.; Kruener-Marquis, U.; Szkutnik, Z.
1994-12-31
A UNIX based batch computing facility for the H1 experiment at DESY is described. The ultimate goal is to replace the DESY IBM mainframe by a multiprocessor SGI Challenge series computer, using the UNIX operating system, for most of the computing tasks in H1.
Predictors of Computer Anxiety and Performance in Information Systems.
ERIC Educational Resources Information Center
Anderson, Alastair A.
1996-01-01
Reports on the results of a study of business undergraduates in Australia that was conducted to determine whether or not perceived knowledge of software, microcomputer experience, overall knowledge of computers, programming experience, and gender were predictors of computer anxiety. Use of the Computer Anxiety Rating Scale is discussed.…
Emergent Leadership and Team Effectiveness on a Team Resource Allocation Task
1987-10-01
equivalent training and experience on this task, but they had different levels of experience with computers and video games. This differential experience...typed: that is, it is sex-typed to the extent that males spend more time on related instruments like computers and video games. However, the sex...perform better or worse than less talkative teams? Did teams with much computer and/or video game experience perform better than inexperienced teams
ERIC Educational Resources Information Center
Grimley, Michael; Green, Richard; Nilsen, Trond; Thompson, David
2012-01-01
Computer games are purported to be effective instructional tools that enhance motivation and improve engagement. The aim of this study was to investigate how tertiary student experiences change when instruction was computer game based compared to lecture based, and whether experiences differed between high and low achieving students. Participants…
Centrifuge in space fluid flow visualization experiment
NASA Technical Reports Server (NTRS)
Arnold, William A.; Wilcox, William R.; Regel, Liya L.; Dunbar, Bonnie J.
1993-01-01
A prototype flow visualization system is constructed to examine buoyancy driven flows during centrifugation in space. An axial density gradient is formed by imposing a thermal gradient between the two ends of the test cell. Numerical computations for this geometry showed that the Prandtl number plays a limited part in determining the flow.
Formal Abstraction in Engineering Education--Challenges and Technology Support
ERIC Educational Resources Information Center
Neuper, Walther A.
2017-01-01
This is a position paper in the field of Engineering Education, which is at its very beginning in Europe. It relates challenges in the new field to the emerging technology of (Computer) Theorem Proving (TP). Experience shows that "teaching" abstract models, for instance the wave equation in mechanical engineering and in electrical…
ERIC Educational Resources Information Center
Ozdemir, Selcuk
2010-01-01
Many countries around the world install millions of computers, printers, projectors, smartboards, and similar technologies in primary and secondary schools to equip new generations with the ability to effectively access and critically evaluate information and communication technologies. However, experiences from different countries show that…
An Exploration of Turnover Experience of IT Professionals in the District of Columbia
ERIC Educational Resources Information Center
Edeh, George
2016-01-01
Turnover among information technology professionals costs organizations revenue. According to Computer Economics (2008), the cost of replacing one information technology employee is $50,000. Reports from the United States Department of Commerce and Office of Technology Policy showed that the turnover rate in information technology has exceeded 20%…
Beyond Introductory Programming: Success Factors for Advanced Programming
ERIC Educational Resources Information Center
Hoskey, Arthur; Maurino, Paula San Millan
2011-01-01
Numerous studies document high drop-out and failure rates for students in computer programming classes. Studies show that even when some students pass programming classes, they still do not know how to program. Many factors have been considered to explain this problem including gender, age, prior programming experience, major, math background,…
Examining the Acquisition of Phonological Word Forms with Computational Experiments
ERIC Educational Resources Information Center
Vitevitch, Michael S.; Storkel, Holly L.
2013-01-01
It has been hypothesized that known words in the lexicon strengthen newly formed representations of novel words, resulting in words with dense neighborhoods being learned more quickly than words with sparse neighborhoods. Tests of this hypothesis in a connectionist network showed that words with dense neighborhoods were learned better than words…
Learning about Computer-Based Education in Adult Basic Education.
ERIC Educational Resources Information Center
Fahy, Patrick J.
In 1979 the adult basic education department at the Alberta Vocational Centre (AVC), Edmonton, began to use the Control Data PLATO system. Results of the first PLATO project showed students using PLATO learned at least as much as students in regular classes. Students learned faster and reported great satisfaction with PLATO experiences. Staff and…
Playing with Water Drops: From Wetting to Optics through Electrostatics
ERIC Educational Resources Information Center
Domps, A.; Roques-Carmes, T.
2011-01-01
We present a consistent series of activities, including experiments and basic computational studies, investigating the shape and optical properties of water drops in connection with novel technological devices. Most of the work can be carried out with simple teaching equipment and is well suited to undergraduate students. Firstly, we show how the…
Designing Effective Web Forms for Older Web Users
ERIC Educational Resources Information Center
Li, Hui; Rau, Pei-Luen Patrick; Fujimura, Kaori; Gao, Qin; Wang, Lin
2012-01-01
This research aims to provide insight for web form design for older users. The effects of task complexity and information structure of web forms on older users' performance were examined. Forty-eight older participants with abundant computer and web experience were recruited. The results showed significant differences in task time and error rate…
De-aliasing for signal restoration in Propeller MR imaging.
Chiu, Su-Chin; Chang, Hing-Chiu; Chu, Mei-Lan; Wu, Ming-Long; Chung, Hsiao-Wen; Lin, Yi-Ru
2017-02-01
Objects falling outside of the true elliptical field-of-view (FOV) in Propeller imaging show unique aliasing artifacts. This study proposes a de-aliasing approach to restore the signal intensities in Propeller images without extra data acquisition. Computer simulation was performed on the Shepp-Logan head phantom deliberately placed obliquely to examine the signal aliasing. In addition, phantom and human imaging experiments were performed using Propeller imaging with various readouts on a 3.0 Tesla MR scanner. De-aliasing using the proposed method was then performed, with the first low-resolution single-blade image used to find out the aliasing patterns in all the single-blade images, followed by standard Propeller reconstruction. The Propeller images without and with de-aliasing were compared. Computer simulations showed signal loss at the image corners along with aliasing artifacts distributed along directions corresponding to the rotational blades, consistent with clinical observations. The proposed de-aliasing operation successfully restored the correct images in both phantom and human experiments. The de-aliasing operation is an effective adjunct to Propeller MR image reconstruction for retrospective restoration of aliased signals. Copyright © 2016 Elsevier Inc. All rights reserved.
Opposition-Based Memetic Algorithm and Hybrid Approach for Sorting Permutations by Reversals.
Soncco-Álvarez, José Luis; Muñoz, Daniel M; Ayala-Rincón, Mauricio
2018-02-21
Sorting unsigned permutations by reversals is a difficult problem; indeed, it was proved to be NP-hard by Caprara (1997). Because of its high complexity, many approximation algorithms to compute the minimal reversal distance were proposed, until reaching the nowadays best-known theoretical ratio of 1.375. In this article, two memetic algorithms to compute the reversal distance are proposed. The first one uses the technique of opposition-based learning, leading to an opposition-based memetic algorithm (OBMA); the second one improves the previous algorithm by applying the heuristic of two-breakpoint elimination, leading to a hybrid approach (Hybrid-OBMA). Several experiments were performed with one hundred randomly generated permutations, single benchmark permutations, and biological permutations. Results of the experiments showed that the proposed OBMA and Hybrid-OBMA algorithms achieve the best results for practical cases, that is, for permutations of length up to 120. Also, Hybrid-OBMA was shown to improve the results of OBMA for permutations of length greater than or equal to 60. The applicability of our proposed algorithms was checked by processing permutations based on biological data, in which case OBMA gave the best average results for all instances.
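For readers unfamiliar with opposition-based learning on permutation spaces, the sketch below shows one common convention: the opposite of a permutation of {1..n} replaces each value v by n+1-v, and a population can be seeded with the fitter of each random individual and its opposite. Both the opposite operator and the breakpoint fitness shown here are standard textbook choices and are only assumed to resemble the paper's operators.

    def opposite_permutation(perm):
        # Opposite individual under the v -> n + 1 - v convention.
        n = len(perm)
        return [n + 1 - v for v in perm]

    def breakpoints(perm):
        # Breakpoints of an unsigned permutation: adjacent values that are
        # not consecutive, with sentinels 0 and n+1 at the ends. Fewer
        # breakpoints roughly means a smaller reversal distance.
        ext = [0] + list(perm) + [len(perm) + 1]
        return sum(1 for a, b in zip(ext, ext[1:]) if abs(a - b) != 1)

    def seed_population(random_perms):
        # Keep the fitter of each random individual and its opposite.
        return [min(p, opposite_permutation(p), key=breakpoints)
                for p in random_perms]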
The discursive self-construction of suicidal help seekers in computer-mediated discourse.
Kupferberg, Irit; Gilat, Izhak
2012-01-01
The study focuses on the discursive self-construction of suicidal help seekers in an open computer-mediated forum for mental help. Our theoretical framework is inspired by a functionalist approach to discourse, which emphasizes that language resources are self-displaying. It also espouses discursive psychology, which prioritizes the study of psychological and social phenomena in discursive processes. In addition, we adopt the Four World Approach to the analysis of positioning. Qualitative and quantitative analyses show that the density of 'irrealis' (i.e. negation, future and wishes) units and figurative forms was significantly higher in the suicidal messages compared with the messages of other troubled selves, who produced more 'realis' units (i.e. specific and generic stories) and information questions. We interpret these findings as showing that in their attempt to conceptualize conflict and pain, suicidal help-seekers shied away from the narration of past experience and focused instead on the construction of death. The other troubled help seekers used realis units and questions in order to describe their experience to guarantee that help would be provided.
Rattanatamrong, Prapaporn; Matsunaga, Andrea; Raiturkar, Pooja; Mesa, Diego; Zhao, Ming; Mahmoudi, Babak; Digiovanna, Jack; Principe, Jose; Figueiredo, Renato; Sanchez, Justin; Fortes, Jose
2010-01-01
The CyberWorkstation (CW) is an advanced cyber-infrastructure for Brain-Machine Interface (BMI) research. It allows the development, configuration and execution of BMI computational models using high-performance computing resources. The CW's concept is implemented using a software structure in which an "experiment engine" is used to coordinate all software modules needed to capture, communicate and process brain signals and motor-control commands. A generic BMI-model template, which specifies a common interface to the CW's experiment engine, and a common communication protocol enable easy addition, removal or replacement of models without disrupting system operation. This paper reviews the essential components of the CW and shows how templates can facilitate the processes of BMI model development, testing and incorporation into the CW. It also discusses the ongoing work towards making this process infrastructure independent.
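The template idea amounts to a fixed interface that the experiment engine programs against, so models can be swapped without touching the engine. A minimal sketch follows; the method names and the toy linear decoder are illustrative assumptions, not the CW's published API.

    class BMIModel:
        # Contract between the experiment engine and any computational model.
        def configure(self, params):
            raise NotImplementedError
        def update(self, neural_batch, kinematics):
            # Adapt the model from one batch of brain signals plus the
            # recorded motor kinematics (training phase).
            raise NotImplementedError
        def predict(self, neural_batch):
            # Map one batch of brain signals to a motor-control command.
            raise NotImplementedError

    class LinearDecoder(BMIModel):
        # Toy plug-in model: a fixed linear map from signals to commands.
        def configure(self, params):
            self.weights = params['weights']
        def update(self, neural_batch, kinematics):
            pass  # a real model would refit its weights here
        def predict(self, neural_batch):
            return [sum(w * x for w, x in zip(row, neural_batch))
                    for row in self.weights]

Because the engine only ever calls configure/update/predict, adding, removing or replacing a model never disrupts the rest of the system.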
NASA Astrophysics Data System (ADS)
Class, G.; Meyder, R.; Stratmanns, E.
1985-12-01
The large data base for validation and development of computer codes for two-phase flow, generated at the COSIMA facility, is reviewed. The aim of COSIMA is to simulate the hydraulic, thermal, and mechanical conditions in the subchannel and the cladding of fuel rods in pressurized water reactors during the blowout phase of a loss of coolant accident. In terms of fuel rod behavior, it is found that during blowout under realistic conditions only small strains are reached. For cladding rupture extremely high rod internal pressures are necessary. The behavior of fuel rod simulators and the effect of thermocouples attached to the cladding outer surface are clarified. Calculations performed with the codes RELAP and DRUFAN show satisfactory agreement with experiments. This can be improved by updating the phase separation models in the codes.
Geodesic Distance Algorithm for Extracting the Ascending Aorta from 3D CT Images
Jang, Yeonggul; Jung, Ho Yub; Hong, Youngtaek; Cho, Iksung; Shim, Hackjoon; Chang, Hyuk-Jae
2016-01-01
This paper presents a method for the automatic 3D segmentation of the ascending aorta from coronary computed tomography angiography (CCTA). The segmentation is performed in three steps. First, the initial seed points are selected by minimizing a newly proposed energy function across the Hough circles. Second, the ascending aorta is segmented by geodesic distance transformation. Third, the seed points are effectively transferred through the next axial slice by a novel transfer function. Experiments are performed using a database composed of 10 patients' CCTA images. For the experiment, the ground truths are annotated manually on the axial image slices by a medical expert. A comparative evaluation with state-of-the-art commercial aorta segmentation algorithms shows that our approach is computationally more efficient and accurate under the DSC (Dice Similarity Coefficient) measurements. PMID:26904151
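The second step can be pictured as a shortest-path computation in which crossing an intensity edge is expensive, so the front fills the bright, homogeneous lumen before leaking elsewhere. The sketch below runs Dijkstra on one 2D slice with a simple intensity-difference step cost; the cost weighting and the 4-neighborhood are assumptions, and the paper's seed-transfer function is not reproduced.

    import heapq
    import numpy as np

    def geodesic_distance(image, seeds, beta=1.0):
        # Geodesic distance from the seed set: each step pays 1 plus a
        # penalty proportional to the intensity change it crosses.
        h, w = image.shape
        dist = np.full((h, w), np.inf)
        heap = []
        for r, c in seeds:
            dist[r, c] = 0.0
            heapq.heappush(heap, (0.0, r, c))
        while heap:
            d, r, c = heapq.heappop(heap)
            if d > dist[r, c]:
                continue  # stale heap entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w:
                    step = 1.0 + beta * abs(float(image[nr, nc]) - float(image[r, c]))
                    if d + step < dist[nr, nc]:
                        dist[nr, nc] = d + step
                        heapq.heappush(heap, (d + step, nr, nc))
        return dist

Thresholding the returned map at a small value segments the connected lumen region around the seeds.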
Numerical Simulations of the Boundary Layer Transition Flight Experiment
NASA Technical Reports Server (NTRS)
Tang, Chun Y.; Trumble, Kerry A.; Campbell, Charles H.; Lessard, Victor R.; Wood, William A.
2010-01-01
Computational Fluid Dynamics (CFD) simulations were used to study the possible effects that the Boundary Layer Transition (BLT) Flight Experiments may have on the heating environment of the Space Shuttle during its entry to Earth. To investigate this issue, hypersonic calculations using the Data-Parallel Line Relaxation (DPLR) and Langley Aerothermodynamic Upwind Relaxation (LAURA) CFD codes were computed for a 0.75 tall protuberance at flight conditions of Mach 15 and 18. These initial results showed high surface heating on the BLT trip and the areas surrounding the protuberance. Since the predicted peak heating rates would exceed the thermal limits of the materials selected to construct the BLT trip, many changes to the geometry were attempted in order to reduce the surface heat flux. The following paper describes the various geometry revisions and the resulting heating environments predicted by the CFD codes.
Differential equations as a tool for community identification.
Krawczyk, Małgorzata J
2008-06-01
We consider the task of identification of a cluster structure in random networks. The results of two methods are presented: (i) the Newman algorithm [M. E. J. Newman and M. Girvan, Phys. Rev. E 69, 026113 (2004)]; and (ii) our method based on differential equations. A series of computer experiments is performed to check whether, in applying these methods, we are able to determine the structure of the network. The trial networks consist initially of well-defined clusters and are disturbed by introducing noise into their connectivity matrices. Further, we show that an improvement of the previous version of our method is possible by an appropriate choice of the threshold parameter β. With this change, the results obtained by the two methods above are similar, and our method works better, in all the computer experiments we have done.
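The trial-network setup described above is easy to reproduce: build a block-structured connectivity matrix with planted clusters, then flip a fraction of its entries as noise. The parameter names and default probabilities below are mine, chosen only to mirror the description.

    import numpy as np

    def noisy_cluster_network(sizes, p_in=0.9, p_out=0.05, noise=0.0, seed=None):
        # Planted clusters: dense within blocks, sparse between them.
        rng = np.random.default_rng(seed)
        n = sum(sizes)
        labels = np.repeat(np.arange(len(sizes)), sizes)
        same = labels[:, None] == labels[None, :]
        a = (rng.random((n, n)) < np.where(same, p_in, p_out)).astype(int)
        # Disturb the connectivity matrix by flipping a fraction of entries.
        flip = rng.random((n, n)) < noise
        a = np.where(flip, 1 - a, a)
        a = np.triu(a, 1)
        return a + a.T   # symmetric adjacency matrix, no self-loops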
Grecucci, Alessandro; Giorgetta, Cinzia; Bonini, Nicolao; Sanfey, Alan G.
2013-01-01
Emotion regulation is important for psychological well-being. Although it is known that alternative regulation strategies may have different emotional consequences, the effectiveness of such strategies for socially driven emotions remains unclear. In this study we investigated the efficacy of different forms of reappraisal on responses to the selfish and altruistic behavior of others in the Dictator Game. In Experiment 1, subjects mentalized the intentions of the other player in one condition, and took distance from the situation in the other. Emotion ratings were recorded after each offer. Compared with a baseline condition, mentalizing led subjects to experience their emotions more positively when receiving both selfish and altruistic proposals, whereas distancing decreased the valence when receiving altruistic offers, but did not affect the perception of selfish behavior. In Experiment 2, subjects played with both computer and human partners while reappraising the meaning of the player’s intentions (with a human partner) or the meaning of the situation (with a computer partner). Results showed that both contexts were effectively modulated by reappraisal, however a stronger effect was observed when the donor was a human partner, as compared to a computer partner. Taken together, these results demonstrate that socially driven emotions can be successfully modulated by reappraisal strategies that focus on the reinterpretation of others’ intentions. PMID:23349645
Integrated design, execution, and analysis of arrayed and pooled CRISPR genome-editing experiments.
Canver, Matthew C; Haeussler, Maximilian; Bauer, Daniel E; Orkin, Stuart H; Sanjana, Neville E; Shalem, Ophir; Yuan, Guo-Cheng; Zhang, Feng; Concordet, Jean-Paul; Pinello, Luca
2018-05-01
CRISPR (clustered regularly interspaced short palindromic repeats) genome-editing experiments offer enormous potential for the evaluation of genomic loci using arrayed single guide RNAs (sgRNAs) or pooled sgRNA libraries. Numerous computational tools are available to help design sgRNAs with optimal on-target efficiency and minimal off-target potential. In addition, computational tools have been developed to analyze deep-sequencing data resulting from genome-editing experiments. However, these tools are typically developed in isolation and oftentimes are not readily translatable into laboratory-based experiments. Here, we present a protocol that describes in detail both the computational and benchtop implementation of an arrayed and/or pooled CRISPR genome-editing experiment. This protocol provides instructions for sgRNA design with CRISPOR (computational tool for the design, evaluation, and cloning of sgRNA sequences), experimental implementation, and analysis of the resulting high-throughput sequencing data with CRISPResso (computational tool for analysis of genome-editing outcomes from deep-sequencing data). This protocol allows for design and execution of arrayed and pooled CRISPR experiments in 4-5 weeks by non-experts, as well as computational data analysis that can be performed in 1-2 d by both computational and noncomputational biologists alike using web-based and/or command-line versions.
Affective assessment of computer users based on processing the pupil diameter signal.
Ren, Peng; Barreto, Armando; Gao, Ying; Adjouadi, Malek
2011-01-01
Detecting affective changes of computer users is a current challenge in human-computer interaction which is being addressed with the help of biomedical engineering concepts. This article presents a new approach to recognizing the affective state ("relaxation" vs. "stress") of a computer user from analysis of his/her pupil diameter variations caused by sympathetic activation. Wavelet denoising and Kalman filtering methods are first used to remove abrupt changes in the raw pupil diameter (PD) signal. Then three features are extracted from the preprocessed PD signal for affective state classification. Finally, a random tree classifier is implemented, achieving an accuracy of 86.78%. In these experiments the eye blink frequency (EBF) is also recorded and used for affective state classification, but the results show that the PD is a more promising physiological signal for affective assessment.
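The Kalman-filtering stage can be illustrated with the scalar random-walk filter below, which suppresses abrupt jumps (such as blink-related spikes) in a pupil-diameter trace. The state model and the noise variances q and r are assumptions for illustration; the article's actual filter design may differ.

    def kalman_smooth(signal, q=1e-4, r=1e-2):
        # Scalar Kalman filter with a random-walk state model.
        x, p = signal[0], 1.0
        out = []
        for z in signal:
            p = p + q                # predict: state uncertainty grows
            k = p / (p + r)          # Kalman gain
            x = x + k * (z - x)      # update toward the new measurement
            p = (1.0 - k) * p
            out.append(x)
        return out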
An energy-efficient failure detector for vehicular cloud computing.
Liu, Jiaxi; Wu, Zhibo; Dong, Jian; Wu, Jin; Wen, Dongxin
2018-01-01
Failure detectors are one of the fundamental components for maintaining the high availability of vehicular cloud computing. In vehicular cloud computing, many RSUs are deployed along the road to improve connectivity. Many of them are equipped with solar batteries due to the unavailability or excess expense of wired electrical power, so it is important to reduce the battery consumption of RSUs. However, existing failure detection algorithms are not designed to save the battery consumption of RSUs. To solve this problem, a new energy-efficient failure detector, 2E-FD, has been proposed specifically for vehicular cloud computing. 2E-FD not only provides an acceptable failure detection service, but also saves the battery consumption of RSUs. Comparative experiments show that our failure detector has better performance in terms of speed, accuracy and battery consumption.
A scalable parallel black oil simulator on distributed memory parallel computers
NASA Astrophysics Data System (ADS)
Wang, Kun; Liu, Hui; Chen, Zhangxin
2015-11-01
This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
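Of the techniques listed, the inexact Newton method is the easiest to convey compactly: each nonlinear iteration solves the Jacobian system only to a loose, residual-dependent tolerance, since an exact solve is wasted effort far from the root. The sketch below uses an Eisenstat-Walker-style forcing term; the caller supplies J_solve (e.g. a preconditioned Krylov solver), and none of this is the simulator's production code.

    import numpy as np

    def inexact_newton(F, J_solve, x0, tol=1e-8, max_iter=50):
        # F(x): nonlinear residual; J_solve(x, rhs, rtol): approximate
        # solution of J(x) dx = rhs to relative tolerance rtol.
        x = np.asarray(x0, dtype=float)
        f = F(x)
        for _ in range(max_iter):
            res = np.linalg.norm(f)
            if res < tol:
                break
            eta = min(0.5, np.sqrt(res))     # forcing term: loose early,
            dx = J_solve(x, -f, eta)         # tight near convergence
            x = x + dx
            f = F(x)
        return x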
Saliency image of feature building for image quality assessment
NASA Astrophysics Data System (ADS)
Ju, Xinuo; Sun, Jiyin; Wang, Peng
2011-11-01
The purpose and method of image quality assessment are quite different for automatic target recognition (ATR) and traditional applications. Local invariant feature detectors, mainly including corner detectors, blob detectors and region detectors, are widely applied for ATR. A saliency model of features was proposed in this paper to evaluate the feasibility of ATR. The first step consisted of computing the first-order derivatives in the horizontal and vertical orientations, and computing DoG maps at different scales. Next, saliency images of the features were built based on the auto-correlation matrix at different scales. Then, the feature saliency images of the different scales were amalgamated. Experiments were performed on a large test set, including infrared images and optical images, and the results showed that the salient regions computed by this model were consistent with the real feature regions computed by most local invariant feature extraction algorithms.
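A schematic version of the per-scale construction is sketched below: first-order derivatives feed an auto-correlation (structure) matrix, whose Harris-style response serves as the saliency image at each scale, and the scales are then amalgamated. The fusion by pointwise maximum and the omission of the DoG channel are simplifications of the paper's pipeline, not its exact formulation.

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def saliency_scale(image, sigma, k=0.04):
        # Harris-style response of the auto-correlation matrix at one scale.
        img = image.astype(float)
        gx = sobel(img, axis=1)              # horizontal first derivative
        gy = sobel(img, axis=0)              # vertical first derivative
        ixx = gaussian_filter(gx * gx, sigma)
        ixy = gaussian_filter(gx * gy, sigma)
        iyy = gaussian_filter(gy * gy, sigma)
        det = ixx * iyy - ixy ** 2
        tr = ixx + iyy
        return det - k * tr ** 2

    def feature_saliency(image, sigmas=(1.0, 2.0, 4.0)):
        # Amalgamate the per-scale saliency images.
        return np.maximum.reduce([saliency_scale(image, s) for s in sigmas])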
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
Scale Space for Camera Invariant Features.
Puig, Luis; Guerrero, José J; Daniilidis, Kostas
2014-09-01
In this paper we propose a new approach to compute the scale space of any central projection system, such as catadioptric, fisheye or conventional cameras. Since these systems can be explained using a unified model, the single parameter that defines each type of system is used to automatically compute the corresponding Riemannian metric. This metric, combined with the partial differential equations framework on manifolds, allows us to compute the Laplace-Beltrami (LB) operator, enabling the computation of the scale space of any central projection system. Scale space is essential for intrinsic scale selection and neighborhood description in features like SIFT. We perform experiments with synthetic and real images to validate the generalization of our approach to any central projection system. We compare our approach with the best existing methods, showing competitive results for all types of cameras: catadioptric, fisheye, and perspective.
Implementation of Steiner point of fuzzy set.
Liang, Jiuzhen; Wang, Dejiang
2014-01-01
This paper deals with the implementation of the Steiner point of a fuzzy set. Some definitions and properties of the Steiner point are investigated and extended to fuzzy sets. The paper focuses on establishing efficient methods to compute the Steiner point of a fuzzy set, and two strategies are proposed. One is a linear combination of the Steiner points computed from a series of crisp α-cut sets of the fuzzy set. The other is an approximate method that tries to find the optimal α-cut set approximating the fuzzy set. Stability analysis of the Steiner point of a fuzzy set is also studied. Some experiments on image processing are given, in which the two methods are applied to implement the Steiner point of a fuzzy image, and both strategies show their own advantages in computing the Steiner point of a fuzzy set.
Computational models of basal-ganglia pathway functions: focus on functional neuroanatomy
Schroll, Henning; Hamker, Fred H.
2013-01-01
Over the past 15 years, computational models have had a considerable impact on basal-ganglia research. Most of these models implement multiple distinct basal-ganglia pathways and assume them to fulfill different functions. As there is now a multitude of different models, it has become complex to keep track of their various, sometimes just marginally different assumptions on pathway functions. Moreover, it has become a challenge to oversee to what extent individual assumptions are corroborated or challenged by empirical data. Focusing on computational, but also considering non-computational models, we review influential concepts of pathway functions and show to what extent they are compatible with or contradict each other. Moreover, we outline how empirical evidence favors or challenges specific model assumptions and propose experiments that allow testing assumptions against each other. PMID:24416002
Aeroelastic Calculations Using CFD for a Typical Business Jet Model
NASA Technical Reports Server (NTRS)
Gibbons, Michael D.
1996-01-01
Two time-accurate Computational Fluid Dynamics (CFD) codes were used to compute several flutter points for a typical business jet model. The model consisted of a rigid fuselage with a flexible semispan wing and was tested in the Transonic Dynamics Tunnel at NASA Langley Research Center, where experimental flutter data were obtained from M∞ = 0.628 to M∞ = 0.888. The computational results were computed using CFD codes based on the inviscid TSD equation (CAP-TSD) and the Euler/Navier-Stokes equations (CFL3D-AE). Comparisons are made between analytical results and with experiment where appropriate. The results presented here show that the Navier-Stokes method is required near the transonic dip due to the strong viscous effects, while the TSD and Euler methods used here provide good results at the lower Mach numbers.
NASA Astrophysics Data System (ADS)
Li, Zheng; Jiang, Yi-han; Duan, Lian; Zhu, Chao-zhe
2017-08-01
Objective. Functional near infra-red spectroscopy (fNIRS) is a promising brain imaging technology for brain-computer interfaces (BCI). Future clinical uses of fNIRS will likely require operation over long time spans, during which neural activation patterns may change. However, current decoders for fNIRS signals are not designed to handle changing activation patterns. The objective of this study is to test via simulations a new adaptive decoder for fNIRS signals, the Gaussian mixture model adaptive classifier (GMMAC). Approach. GMMAC can simultaneously classify and track activation pattern changes without the need for ground-truth labels. This adaptive classifier uses computationally efficient variational Bayesian inference to label new data points and update mixture model parameters, using the previous model parameters as priors. We test GMMAC in simulations in which neural activation patterns change over time and compare to static decoders and unsupervised adaptive linear discriminant analysis classifiers. Main results. Our simulation experiments show GMMAC can accurately decode under time-varying activation patterns: shifts of activation region, expansions of activation region, and combined contractions and shifts of activation region. Furthermore, the experiments show the proposed method can track the changing shape of the activation region. Compared to prior work, GMMAC performed significantly better than the other unsupervised adaptive classifiers on a difficult activation pattern change simulation: 99% versus <54% in two-choice classification accuracy. Significance. We believe GMMAC will be useful for clinical fNIRS-based brain-computer interfaces, including neurofeedback training systems, where operation over long time spans is required.
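Stripped of its variational machinery, the adaptive step pairs unsupervised labeling with a gentle pull of the mixture parameters toward each new point, so the decision boundary tracks slow drifts in the activation pattern. The update rule below is a deliberately simplified stand-in for GMMAC's variational Bayesian update; the learning rate and the means-only adaptation are assumptions.

    import numpy as np

    def gmm_adapt(x, means, covs, weights, lr=0.05):
        # Soft-assign the unlabeled feature vector x, then nudge each class
        # mean toward x in proportion to its responsibility.
        resp = []
        for m, c, w in zip(means, covs, weights):
            diff = x - m
            quad = diff @ np.linalg.solve(c, diff)
            norm = np.sqrt(np.linalg.det(2.0 * np.pi * c))
            resp.append(w * np.exp(-0.5 * quad) / norm)
        resp = np.array(resp)
        resp /= resp.sum() + 1e-300
        for k in range(len(means)):
            means[k] = means[k] + lr * resp[k] * (x - means[k])
        return int(np.argmax(resp))   # predicted class label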
A meta-analysis of outcomes from the use of computer-simulated experiments in science education
NASA Astrophysics Data System (ADS)
Lejeune, John Van
The purpose of this study was to synthesize the findings from existing research on the effects of computer simulated experiments on students in science education. Results from 40 reports were integrated by the process of meta-analysis to examine the effect of computer-simulated experiments and interactive videodisc simulations on student achievement and attitudes. Findings indicated significant positive differences in both low-level and high-level achievement of students who use computer-simulated experiments and interactive videodisc simulations as compared to students who used more traditional learning activities. No significant differences in retention, student attitudes toward the subject, or toward the educational method were found. Based on the findings of this study, computer-simulated experiments and interactive videodisc simulations should be used to enhance students' learning in science, especially in cases where the use of traditional laboratory activities are expensive, dangerous, or impractical.
Experimental violation of multipartite Bell inequalities with trapped ions.
Lanyon, B P; Zwerger, M; Jurcevic, P; Hempel, C; Dür, W; Briegel, H J; Blatt, R; Roos, C F
2014-03-14
We report on the experimental violation of multipartite Bell inequalities by entangled states of trapped ions. First, we consider resource states for measurement-based quantum computation of between 3 and 7 ions and show that all strongly violate a Bell-type inequality for graph states, where the criterion for violation is a sufficiently high fidelity. Second, we analyze Greenberger-Horne-Zeilinger states of up to 14 ions generated in a previous experiment using stronger Mermin-Klyshko inequalities, and show that in this case the violation of local realism increases exponentially with system size. These experiments represent a violation of multipartite Bell-type inequalities of deterministically prepared entangled states. In addition, the detection loophole is closed.
Software Design for Interactive Graphic Radiation Treatment Simulation Systems
Kalet, Ira J.; Sweeney, Christine; Jacky, Jonathan
1990-01-01
We examine issues in the design of interactive computer graphic simulation programs for radiation treatment planning (RTP), as well as expert system programs that automate parts of the RTP process, in light of ten years of experience at designing, building and using such programs. An experiment in object-oriented design using standard Pascal shows that while some advantage is gained from the design, it is still difficult to achieve modularity and to integrate expert system components. A new design based on the Common LISP Object System (CLOS) is described. This series of designs for RTP software shows that this application benefits in specific ways from object-oriented design methods and appropriate languages and tools.
Cognitive Architecture with Evolutionary Dynamics Solves Insight Problem.
Fedor, Anna; Zachar, István; Szilágyi, András; Öllinger, Michael; de Vladar, Harold P; Szathmáry, Eörs
2017-01-01
In this paper, we show that a neurally implemented cognitive architecture with evolutionary dynamics can solve the four-tree problem. Our model, called Darwinian Neurodynamics, assumes that the unconscious mechanism of problem solving during insight tasks is a Darwinian process. It is based on the evolution of patterns that represent candidate solutions to a problem, and are stored and reproduced by a population of attractor networks. In our first experiment, we used human data as a benchmark and showed that the model behaves comparably to humans: it shows an improvement in performance if it is pretrained and primed appropriately, just like human participants in Kershaw et al. (2013)'s experiment. In the second experiment, we further investigated the effects of pretraining and priming in a two-by-two design and found a beginner's luck type of effect: the solution rate was highest in the condition that was primed, but not pretrained, with patterns relevant for the task. In the third experiment, we showed that deficits in computational capacity and learning abilities decreased the performance of the model, as expected. We conclude that Darwinian Neurodynamics is a promising model of human problem solving that deserves further investigation.
Outreach programmes to attract girls into computing: how the best laid plans can sometimes fail
NASA Astrophysics Data System (ADS)
Lang, Catherine; Fisher, Julie; Craig, Annemieke; Forgasz, Helen
2015-07-01
This article presents a reflective analysis of an outreach programme called the Digital Divas Club. This curriculum-based programme was delivered in Australian schools with the aim of stimulating junior and middle school girls' interest in computing courses and careers. We believed that we had developed a strong intervention programme based on previous literature and our collective knowledge and experiences. While it was coordinated by university academics, the programme content was jointly created and modified by practicing school teachers. After four years, when the final data were compiled, they showed that our programme produced a significant change in student confidence with computing, but the hoped-for influence on students' desire to pursue a career path in computing did not fully eventuate. To gain a deeper insight into why this may be the case, data collected from two of the schools are interrogated in more detail in this article. These schools were at opposite ends of the expected programme outcomes. We found that despite designing a programme that delivered a multi-layered positive computing experience, factors beyond our control, such as school culture and teacher technical self-efficacy, help account for the unanticipated results. Despite our best laid plans, the expectation that this semester-long programme would influence students' longer term career outcomes may have been aspirational at best.
2014-01-01
Background Existing instruments for measuring problematic computer and console gaming and internet use are often lengthy and often based on a pathological perspective. The objective was to develop and present a new and short non-clinical measurement tool for perceived problems related to computer use and gaming among adolescents, and to study the association between screen time and perceived problems. Methods Cross-sectional school survey of 11-, 13-, and 15-year-old students in thirteen schools in the City of Aarhus, Denmark; participation rate 89%, n = 2100. The main exposure was time spent on weekdays on computer gaming, console gaming, and internet use for communication and surfing. The outcome measures were three indexes of perceived problems related to computer and console gaming and internet use. Results The three new indexes showed high face validity and acceptable internal consistency. Most schoolchildren with high screen time did not experience problems related to computer use. Still, there was a strong and graded association between time use and perceived problems related to computer gaming, console gaming (boys only) and internet use, with odds ratios ranging from 6.90 to 10.23. Conclusion The three new measures of perceived problems related to computer and console gaming and internet use among adolescents are appropriate, reliable and valid for use in non-clinical surveys about young people's everyday life and behaviour. These new measures do not assess Internet Gaming Disorder as it is listed in the DSM and therefore have no parity with DSM criteria. We found an increasing risk of perceived problems with increasing time spent on gaming and internet use. Nevertheless, most schoolchildren who spent much time on gaming and internet use did not experience problems. PMID:24731270
NASA Astrophysics Data System (ADS)
Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia
2018-03-01
Although process-based distributed hydrological models (PDHMs) have been evolving rapidly over the last few decades, their extensive application is still challenged by computational expense. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial-resolution PDHM, the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full version outperforms the semi version in terms of simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented into DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model were assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack can be reduced to several dozen minutes for a simulation period of three months (in contrast with one and a half days for the original DHSVM model) without noticeable sacrifice of accuracy. The MacCormack scheme proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs to either significantly improve their computational efficiency or make kinematic wave routing for high-resolution modeling computationally feasible.
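The scheme at the heart of the study is compact enough to state directly: MacCormack alternates a forward-difference predictor with a backward-difference corrector and averages the two. The sketch below applies the full version to the 1D kinematic wave equation dh/dt + dQ/dx = q with Q = alpha*h^m; boundary treatment, units, and DHSVM's 2D overland-flow coupling are simplified away, so this is a minimal illustration rather than the model's code.

    import numpy as np

    def maccormack_kinematic_wave(h, q_lat, dt, dx, alpha=1.0, m=5.0 / 3.0):
        # One full MacCormack step for dh/dt + dQ/dx = q_lat, Q = alpha*h^m.
        h = np.asarray(h, dtype=float)
        Q = alpha * h ** m
        # Predictor: forward difference in space.
        hp = h.copy()
        hp[:-1] = h[:-1] - dt / dx * (Q[1:] - Q[:-1]) + dt * q_lat[:-1]
        Qp = alpha * np.maximum(hp, 0.0) ** m
        # Corrector: backward difference on the predicted field, averaged
        # with the original state.
        new = h.copy()
        new[1:] = 0.5 * (h[1:] + hp[1:]
                         - dt / dx * (Qp[1:] - Qp[:-1]) + dt * q_lat[1:])
        return np.maximum(new, 0.0)   # depths stay non-negative

Stability requires the usual kinematic-wave CFL restriction on dt relative to dx and the wave celerity.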
Han, S; Humphreys, G W; Chen, L
1999-10-01
The role of perceptual grouping and the encoding of closure of local elements in the processing of hierarchical patterns was studied. Experiments 1 and 2 showed a global advantage over the local level for 2 tasks involving the discrimination of orientation and closure, but there was a local advantage for the closure discrimination task relative to the orientation discrimination task. Experiment 3 showed a local precedence effect for the closure discrimination task when local element grouping was weakened by embedding the stimuli from Experiment 1 in a background made up of cross patterns. Experiments 4A and 4B found that dissimilarity of closure between the local elements of hierarchical stimuli and the background figures could facilitate the grouping of closed local elements and enhanced the perception of global structure. Experiment 5 showed that the advantage for detecting the closure of local elements in hierarchical analysis also held under divided- and selective-attention conditions. Results are consistent with the idea that grouping between local elements takes place in parallel and competes with the computation of closure of local elements in determining the selection between global and local levels of hierarchical patterns for response.
ERIC Educational Resources Information Center
Cilesiz, Sebnem
2009-01-01
Computer use is a widespread leisure activity for adolescents. Leisure contexts, such as Internet cafes, constitute specific social environments for computer use and may hold significant educational potential. This article reports a phenomenological study of adolescents' experiences of educational computer use at Internet cafes in Turkey. The…
Computers in Science Education: Can They Go Far Enough? Have We Gone Too Far?
ERIC Educational Resources Information Center
Schrock, John Richard
1984-01-01
Indicates that although computers may churn out creative research, science is still dependent on science education, and that science education consists of increasing human experience. Also considers uses and misuses of computers in the science classroom, examining Edgar Dale's "cone of experience" related to laboratory computer and "extended…
AEC Experiment Establishes Computer Link Between California and Paris
The feasibility of a worldwide information retrieval system, which would tie a computer base of information to distant terminals, was demonstrated: a terminal in Paris could search a computer in California and display the results.
Ontological and Epistemological Issues Regarding Climate Models and Computer Experiments
NASA Astrophysics Data System (ADS)
Vezer, M. A.
2010-12-01
Recent philosophical discussions (Parker 2009; Frigg and Reiss 2009; Winsberg 2009; Morgan 2002, 2003, 2005; Gula 2002) about the ontology of computer simulation experiments and the epistemology of inferences drawn from them are of particular relevance to climate science as computer modeling and analysis are instrumental in understanding climatic systems. How do computer simulation experiments compare with traditional experiments? Is there an ontological difference between these two methods of inquiry? Are there epistemological considerations that result in one type of inference being more reliable than the other? What are the implications of these questions with respect to climate studies that rely on computer simulation analysis? In this paper, I examine these philosophical questions within the context of climate science, instantiating concerns in the philosophical literature with examples found in analysis of global climate change. I concentrate on Wendy Parker’s (2009) account of computer simulation studies, which offers a treatment of these and other questions relevant to investigations of climate change involving such modelling. Two theses at the center of Parker’s account will be the focus of this paper. The first is that computer simulation experiments ought to be regarded as straightforward material experiments; which is to say, there is no significant ontological difference between computer and traditional experimentation. Parker’s second thesis is that some of the emphasis on the epistemological importance of materiality has been misplaced. I examine both of these claims. First, I inquire as to whether viewing computer and traditional experiments as ontologically similar in the way she does implies that there is no proper distinction between abstract experiments (such as ‘thought experiments’ as well as computer experiments) and traditional ‘concrete’ ones. Second, I examine the notion of materiality (i.e., the material commonality between object and target systems) and some arguments for the claim that materiality entails some inferential advantage to traditional experimentation. I maintain that Parker’s account of the ontology of computer simulations has some interesting though potentially problematic implications regarding conventional distinctions between abstract and concrete methods of inquiry. With respect to her account of materiality, I outline and defend an alternative account, posited by Mary Morgan (2002, 2003, 2005), which holds that ontological similarity between target and object systems confers some epistemological advantage to traditional forms of experimental inquiry.
Fermilab computing at the Intensity Frontier
Group, Craig; Fuess, S.; Gutsche, O.; ...
2015-12-23
The Intensity Frontier refers to a diverse set of particle physics experiments using high-intensity beams. In this paper I will focus the discussion on the computing requirements and solutions of a set of neutrino and muon experiments in progress or planned to take place at the Fermi National Accelerator Laboratory located near Chicago, Illinois. The experiments face unique challenges, but also have overlapping computational needs. In principle, by exploiting the commonality and utilizing centralized computing tools and resources, requirements can be satisfied efficiently and scientists of individual experiments can focus more on the science and less on the development of tools and infrastructure.
NASA Astrophysics Data System (ADS)
Perez Montes, Diego A.; Añel Cabanelas, Juan A.; Wallom, David C. H.; Arribas, Alberto; Uhe, Peter; Caderno, Pablo V.; Pena, Tomas F.
2017-04-01
Cloud Computing is a technological option that offers great possibilities for modelling in geosciences. We have studied how two different climate models, HadAM3P-HadRM3P and CESM-WACCM, can be adapted in two different ways to run on Cloud Computing environments from three different vendors: Amazon, Google and Microsoft. We have also evaluated qualitatively how the use of Cloud Computing can affect the allocation of resources by funding bodies, as well as issues related to computing security, including scientific reproducibility. Our first experiments were developed using the well-known ClimatePrediction.net (CPDN), which uses BOINC, over the infrastructure of two cloud providers, namely Microsoft Azure and Amazon Web Services (hereafter AWS). For this comparison we ran a set of thirteen-month climate simulations for CPDN in Azure and AWS using a range of different virtual machines (VMs) for HadRM3P (50 km resolution over the South America CORDEX region) nested in the global atmosphere-only model HadAM3P. These simulations were run on a single processor and took between 3 and 5 days to compute depending on the VM type. The last part of our simulation experiments consisted of running WACCM on different VMs on the Google Compute Engine (GCE) and comparing the results with the supercomputer (SC) Finisterrae1 from the Centro de Supercomputacion de Galicia. GCE gives better performance than the SC for smaller numbers of cores/MPI tasks, but the model throughput shows clearly that the SC performs better beyond approximately 100 cores (related to differences in network speed and latency). From a cost point of view, Cloud Computing moves researchers from a traditional approach, where experiments are limited by the available hardware resources, to one limited by monetary resources (how many resources can be afforded). As there is an increasing movement and recommendation for budgeting HPC projects on this technology (budgets can be calculated in a more realistic way), we could see a shift in the trends over the next years to consolidate Cloud as the preferred solution.
WRF4SG: A Scientific Gateway for climate experiment workflows
NASA Astrophysics Data System (ADS)
Blanco, Carlos; Cofino, Antonio S.; Fernandez-Quiruelas, Valvanuz
2013-04-01
The Weather Research and Forecasting model (WRF) is a community-driven and public domain model widely used by the weather and climate communities. In contrast to other, application-oriented models, WRF provides a flexible and computationally efficient framework for solving a variety of problems on different time scales, from weather forecast to climate change projection. Furthermore, WRF is widely used as a research tool in modeling physics, dynamics, and data assimilation by the research community. Climate experiment workflows based on WRF are nowadays among the most demanding cutting-edge applications. These workflows are complex due to both the large storage required and the huge number of simulations executed. In order to manage this, we have developed a scientific gateway (SG) called WRF for Scientific Gateway (WRF4SG), based on the WS-PGRADE/gUSE and WRF4G frameworks, to meet WRF users' needs (see [1] and [2]). WRF4SG provides services for different use cases that describe the interactions between WRF users and the WRF4SG interface when running a climate experiment. As WS-PGRADE/gUSE uses portlets (see [1]) to interact with users, its portlets support these use cases. A typical experiment carried out by a WRF user consists of a high-resolution regional re-forecast. These re-forecasts are common experiments whose outputs are used as input data for wind power energy and natural hazard studies (wind and precipitation fields). In the cases below, the user is able to access different resources such as Grid, because WRF needs a huge amount of computing resources in order to generate useful simulations: * Resource configuration and user authentication: the first step is to authenticate on the users' Grid resources by virtual organization. After login, the user selects which virtual organization is going to be used by the experiment. * Data assimilation: in order to assimilate the data sources, the user selects them by browsing through the LFC Portlet. * Design of the experiment workflow: in order to configure the experiment, the user defines the type of experiment (e.g., re-forecast) and the attributes to simulate. In this case the main attributes are the field of interest (wind, precipitation, ...), the start and end dates of the simulation, and the requirements of the experiment. * Workflow monitoring: the user receives notification messages based on events, and the gateway displays the progress of the experiment. * Data storage: as in the data assimilation case, the user is able to browse and view the output data of the simulations using the LFC Portlet. The objectives of WRF4SG can be described in terms of two goals. The first is to show how WRF4SG facilitates executing, monitoring and managing climate workflows based on the WRF4G framework. The second is to help WRF users execute their experiment workflows concurrently using heterogeneous computing resources such as HPC and Grid. [1] Kacsuk, P.: P-GRADE portal family for grid infrastructures. Concurrency and Computation: Practice and Experience. 23, 235-245 (2011). [2] http://www.meteo.unican.es/software/wrf4g
A GPU-based mipmapping method for water surface visualization
NASA Astrophysics Data System (ADS)
Li, Hua; Quan, Wei; Xu, Chao; Wu, Yan
2018-03-01
Visualization of water surfaces is a hot topic in computer graphics. In this paper, we present a fast method to generate a wide range of water surface with good image quality both near and far from the viewpoint. The method uses a uniform mesh and fractal Perlin noise to model the water surface. Mipmapping is applied to the surface textures, adjusting their resolution with respect to the distance from the viewpoint and reducing the computing cost. Lighting effects are computed based on shadow mapping, Snell's law and the Fresnel term. The rendering pipeline utilizes a CPU-GPU shared memory structure, which improves rendering efficiency. Experimental results show that our approach visualizes water surfaces with good image quality at real-time frame rates.
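The distance-dependent resolution adjustment behind mipmapping can be illustrated with a small level-of-detail rule: the texture resolution is halved each time the viewing distance doubles. This is a hedged sketch of the general technique, not the paper's implementation; base_dist and max_level are assumed illustrative parameters.

```python
import math

def mip_level(distance, base_dist=10.0, max_level=8):
    """Select a mipmap level so texel density roughly matches screen density."""
    if distance <= base_dist:
        return 0                                 # full-resolution texture up close
    level = math.log2(distance / base_dist)      # one level per doubling of distance
    return min(max_level, int(level))

for d in (5, 10, 20, 40, 160, 1000):
    print(f"distance {d:5} -> mip level {mip_level(d)}")
```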
Multi-scale modeling in cell biology
Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick
2009-01-01
Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems, such as the scales of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808
Mobile computing device as tools for college student education: a case on flashcards application
NASA Astrophysics Data System (ADS)
Kang, Congying
2012-04-01
Traditionally, college students have used flash cards as a tool to memorize large amounts of knowledge, such as nomenclature, structures, and reactions in chemistry. Educational and information technology have enabled flashcards to be viewed on computers, in formats such as slides and PowerPoint, serving as channels for drill and feedback for learners. The current generation of students is more adept with information technology and mobile computing devices; for example, they use their mobile phones much more intensively every day. Trends in using the mobile phone as an educational tool are analyzed, and an educational technology initiative is proposed that uses mobile phone flashcard applications to help students learn biology and chemistry. Experiments show that users responded positively to these mobile flashcards.
Designing Serious Game Interventions for Individuals with Autism.
Whyte, Elisabeth M; Smyth, Joshua M; Scherf, K Suzanne
2015-12-01
The design of "Serious games" that use game components (e.g., storyline, long-term goals, rewards) to create engaging learning experiences has increased in recent years. We examine of the core principles of serious game design and examine the current use of these principles in computer-based interventions for individuals with autism. Participants who undergo these computer-based interventions often show little evidence of the ability to generalize such learning to novel, everyday social communicative interactions. This lack of generalized learning may result, in part, from the limited use of fundamental elements of serious game design that are known to maximize learning. We suggest that future computer-based interventions should consider the full range of serious game design principles that promote generalization of learning.
A computational study on oblique shock wave-turbulent boundary layer interaction
NASA Astrophysics Data System (ADS)
Joy, Md. Saddam Hossain; Rahman, Saeedur; Hasan, A. B. M. Toufique; Ali, M.; Mitsutake, Y.; Matsuo, S.; Setoguchi, T.
2016-07-01
A numerical computation of an oblique shock wave incident on a turbulent boundary layer was performed for a free stream flow of air at M∞ = 2.0 and Re1 = 10.5×10^6 m^-1. The oblique shock wave was generated by an 8° wedge. A Reynolds-averaged Navier-Stokes (RANS) simulation with the k-ω SST turbulence model was first utilized for the two-dimensional (2D) steady case, and the results were compared with experiments at the same flow conditions. Further, to capture the unsteadiness, a 2D large eddy simulation (LES) with the WMLES sub-grid scale model was performed, which revealed the unsteady effects. The frequency of the shock oscillation was computed and found to be comparable with the experimental measurement.
Three-dimensional Diffusive Strip Method
NASA Astrophysics Data System (ADS)
Martinez-Ruiz, Daniel; Meunier, Patrice; Duchemin, Laurent; Villermaux, Emmanuel
2016-11-01
The Diffusive Strip Method (DSM) is a near-exact numerical method developed for mixing computations at large Péclet number in two dimensions. The method, which consists in following stretched material lines to compute a posteriori the resulting scalar field, is extended here to three-dimensional flows, following surfaces. We describe its 3D peculiarities and show how it applies to a simple Taylor-Couette configuration with non-rotating boundary conditions at the top end, bottom and outer cylinder. This configuration produces an elaborate, although controlled, steady 3D flow relying on the Ekman pumping arising from the rotation of the inner cylinder; it is both studied experimentally and modeled numerically. A recurrent two-cell structure appears, formed by stream tubes shaped as nested tori. A scalar blob in the flow experiences Lagrangian oscillating dynamics with stretchings and compressions, driving the mixing process and yielding both rapidly-mixed and nearly pure-diffusive regions. A triangulated-surface method is developed to calculate the blob elongation and scalar concentration PDFs through a single variable computed along the advected blob surface, capturing the rich evolution observed in the experiments.
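To make the "following stretched material lines" idea concrete, here is a hedged 2-D toy, not the authors' 3-D triangulated-surface code: a discretized material line is advected through an assumed differentially rotating vortex, and its elongation (the quantity from which stretching rates and mixing are computed a posteriori) is tracked.

```python
import numpy as np

def velocity(p):
    """Model 2-D flow: differential rotation, so material lines wind up and stretch."""
    x, y = p[:, 0], p[:, 1]
    w = 1.0 / (1.0 + x**2 + y**2)        # angular velocity decays with radius
    return np.column_stack([-y * w, x * w])

# Discretize an initially straight segment into points and advect with RK2.
# (The real DSM also refines points as the line stretches; omitted here.)
pts = np.column_stack([np.linspace(0.5, 1.5, 200), np.zeros(200)])
dt = 0.01
L0 = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
for _ in range(2000):
    mid = pts + 0.5 * dt * velocity(pts)   # midpoint (RK2) advection step
    pts = pts + dt * velocity(mid)
L = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
print(f"elongation rho = L/L0 = {L / L0:.2f}")
```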
Disgust selectively modulates reciprocal fairness in economic interactions.
Moretti, Laura; di Pellegrino, Giuseppe
2010-04-01
We report two studies aimed at investigating the effects of distinct negative emotions on pairwise economic interactions. In the ultimatum game, a proposer offers a division of a sum of money to a responder who decides whether to accept the split, or reject and leave both players with nothing. In Experiment 1, we investigated whether induced disgust, as compared to sadness and neutral emotion, specifically influences responders' decisions to reject unfair proposals. In Experiment 2, we assessed whether the effects of disgust were selectively related to social contexts by contrasting interactions with a human partner with those involving a computer. Results showed that relative to being in a sad or neutral mood, induced feelings of disgust significantly increased rejection rates of unfair offers. Moreover, we found that when the partner was not responsible for the fairness violation, such as in the computer-offer condition, the disgust induction failed to affect participants' choices. We conclude by focusing on the hypothesis that disgust and social norm violations may share common computational components, both at a psychological and a neural level.
NASA Astrophysics Data System (ADS)
García, Isaías; Benavides, Carmen; Alaiz, Héctor; Alonso, Angel
2013-08-01
This paper describes research on the use of knowledge models (ontologies) for building computer-aided educational software in the field of control engineering. Ontologies are able to represent in the computer a very rich conceptual model of a given domain. This model can later be used for a number of purposes in different software applications. In this study, a domain ontology for the field of lead-lag compensator design was built and used for automatic exercise generation, graphical user interface population, and interaction with the user at any level of detail, including explanations of why things occur. An application called Onto-CELE (ontology-based control engineering learning environment) uses the ontology to implement a learning environment for self-directed and lifelong learning. The experience has shown that using knowledge models as the basis for educational software can show students the whole complexity of the analysis and design processes at any level of detail. A practical experience with postgraduate students confirmed these benefits and the possibilities of the approach.
NASA Astrophysics Data System (ADS)
Giovambattista, Nicolas; Starr, Francis W.; Poole, Peter H.
2017-07-01
Experiments and computer simulations of the transformations of amorphous ices display different behaviors depending on sample preparation methods and on the rates of change of temperature and pressure to which samples are subjected. In addition to these factors, simulation results also depend strongly on the chosen water model. Using computer simulations of the ST2 water model, we study how the sharpness of the compression-induced transition from low-density amorphous ice (LDA) to high-density amorphous ice (HDA) is influenced by the preparation of LDA. By studying LDA samples prepared using widely different procedures, we find that the sharpness of the LDA-to-HDA transformation is correlated with the depth of the initial LDA sample in the potential energy landscape (PEL), as characterized by the inherent structure energy. Our results show that the complex phenomenology of the amorphous ices reported in experiments and computer simulations can be understood and predicted in a unified way from knowledge of the PEL of the system.
Galaxy morphology - An unsupervised machine learning approach
NASA Astrophysics Data System (ADS)
Schutter, A.; Shamir, L.
2015-09-01
Structural properties carry valuable information about the formation and evolution of galaxies, and are important for understanding the past, present, and future universe. Here we use unsupervised machine learning methodology to analyze a network of similarities between galaxy morphological types and automatically deduce a morphological sequence of galaxies. Application of the method to the EFIGI catalog shows that the morphological scheme produced by the algorithm is largely in agreement with the De Vaucouleurs system, demonstrating the ability of computer vision and machine learning methods to automatically profile galaxy morphological sequences. The unsupervised analysis method is based on comprehensive computer vision techniques that compute the visual similarities between the different morphological types. Rather than relying on human cognition, the proposed system deduces the similarities between sets of galaxy images in an automatic manner, and is therefore not limited by the number of galaxies being analyzed. The source code of the method is publicly available, and the protocol of the experiment is included in the paper so that the experiment can be replicated and the method can be used to analyze user-defined datasets of galaxy images.
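The general shape of such a pipeline (not the authors' exact algorithm) is: compute pairwise dissimilarities between per-type image descriptors, then read a one-dimensional sequence off the resulting similarity network. Below is a hedged sketch using random stand-in features and classical 1-D multidimensional scaling as one simple linearization.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Stand-in visual descriptors: 10 morphological types x 40 features (random here;
# in the real method these come from computer vision analysis of galaxy images).
rng = np.random.default_rng(0)
features = rng.normal(size=(10, 40))
dist = squareform(pdist(features))       # dissimilarity network between types

# Linearize the network via classical MDS: order types along the first
# principal coordinate of the double-centered squared-distance matrix.
n = dist.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (dist**2) @ J
vals, vecs = np.linalg.eigh(B)           # eigenvalues ascending
sequence = np.argsort(vecs[:, -1])       # coordinates on the top eigenvector
print("deduced sequence of types:", sequence)
```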
Local rules simulation of the kinetics of virus capsid self-assembly.
Schwartz, R; Shor, P W; Prevelige, P E; Berger, B
1998-12-01
A computer model is described for studying the kinetics of the self-assembly of icosahedral viral capsids. Solution of this problem is crucial to an understanding of the viral life cycle, which currently cannot be adequately addressed through laboratory techniques. The abstract simulation model employed is based on the local rules theory (Proc. Natl. Acad. Sci. USA, 91:7732-7736). It is shown that the principle of local rules, generalized with a model of kinetics and other extensions, can be used to simulate complicated problems in self-assembly. This approach allows for a computationally tractable, molecular dynamics-like simulation of coat protein interactions while retaining many relevant features of capsid self-assembly. Three simple simulation experiments are presented to illustrate the use of this model. These show the dependence of growth and malformation rates on the energetics of binding interactions, the tolerance of errors in binding positions, and the concentration of subunits in the examples. These experiments demonstrate a tradeoff within the model between growth rate and fidelity of assembly for the three parameters. A detailed discussion of the computational model is also provided.
Using distributed partial memories to improve self-organizing collective movements.
Winder, Ransom; Reggia, James A
2004-08-01
Past self-organizing models of collectively moving "particles" (simulated bird flocks, fish schools, etc.) have typically been based on purely reflexive agents that have no significant memory of past movements. We hypothesized that giving such individual particles a limited distributed memory of past obstacles they encountered could lead to significantly faster travel between goal destinations. Systematic computational experiments using six terrains that had different arrangements of obstacles demonstrated that, at least in some domains, this conjecture is true. Furthermore, these experiments demonstrated that improved performance over time came not only from the avoidance of previously seen obstacles, but also (surprisingly) immediately after first encountering obstacles due to decreased delays in circumventing those obstacles. Simulations also showed that, of the four strategies we tested for removal of remembered obstacles when memory was full and a new obstacle was to be saved, none was better than random selection. These results may be useful in interpreting future experimental research on group movements in biological populations, and in improving existing methodologies for control of collective movements in computer graphics, robotic teams, particle swarm optimization, and computer games.
Clustering molecular dynamics trajectories for optimizing docking experiments.
De Paris, Renata; Quevedo, Christian V; Ruiz, Duncan D; Norberto de Souza, Osmar; Barros, Rodrigo C
2015-01-01
Molecular dynamics simulations of protein receptors have become an attractive tool for rational drug discovery. However, the high computational cost of employing molecular dynamics trajectories in the virtual screening of large repositories threatens the feasibility of this task. Computational intelligence techniques have been applied in this context, with the ultimate goal of reducing the overall computational cost so the task can become feasible. In particular, clustering algorithms have been widely used as a means to reduce the dimensionality of molecular dynamics trajectories. In this paper, we develop a novel methodology for clustering entire trajectories using structural features from the substrate-binding cavity of the receptor in order to optimize docking experiments in a cloud-based environment. The resulting partition was selected based on three clustering validity criteria, and was further validated by analyzing the interactions between 20 ligands and a fully flexible receptor (FFR) model containing a 20 ns molecular dynamics simulation trajectory. Our proposed methodology shows that taking features of the substrate-binding cavity as input for the k-means algorithm is a promising technique for accurately selecting ensembles of representative structures tailored to a specific ligand.
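A minimal sketch of the clustering step described above: represent each MD snapshot by features of the substrate-binding cavity and cluster the snapshots with k-means to pick representative receptor conformations for docking. The feature matrix here is synthetic; real input would come from the trajectory (e.g., cavity geometry descriptors), which is an assumption on our part.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
snapshots = rng.normal(size=(500, 12))   # 500 frames x 12 cavity features (synthetic)

km = KMeans(n_clusters=6, n_init=10, random_state=1).fit(snapshots)

# The frame closest to each centroid serves as that cluster's representative
# structure; docking then runs against these few frames instead of all 500.
reps = [int(np.argmin(np.linalg.norm(snapshots - c, axis=1)))
        for c in km.cluster_centers_]
print("representative frames for docking:", reps)
```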
Detailed Multidimensional Simulations of the Structure and Dynamics of Flames
NASA Technical Reports Server (NTRS)
Patnaik, G.; Kailasanath, K.
1999-01-01
Numerical simulations in which the various physical and chemical processes can be independently controlled can significantly advance our understanding of the structure, stability, dynamics and extinction of flames. Therefore, our approach has been to use detailed time-dependent, multidimensional, multispecies numerical models to perform carefully designed computational experiments of flames on Earth and in microgravity environments. Some of these computational experiments are complementary to physical experiments performed under the Microgravity Program while others provide a fundamental understanding that cannot be obtained from physical experiments alone. In this report, we provide a brief summary of our recent research highlighting the contributions since the previous microgravity combustion workshop. There are a number of mechanisms that can cause flame instabilities and result in the formation of dynamic multidimensional structures. In the past, we have used numerical simulations to show that it is the thermo-diffusive instability rather than an instability due to preferential diffusion that is the dominant mechanism for the formation of cellular flames in lean hydrogen-air mixtures. Other studies have explored the role of gravity on flame dynamics and extinguishment, multi-step kinetics and radiative losses on flame instabilities in rich hydrogen-air flames, and heat losses on burner-stabilized flames in microgravity. The recent emphasis of our work has been on exploring flame-vortex interactions and further investigating the structure and dynamics of lean hydrogen-air flames in microgravity. These topics are briefly discussed after a brief discussion of our computational approach for solving these problems.
Computer use changes generalization of movement learning.
Wei, Kunlin; Yan, Xiang; Kong, Gaiqing; Yin, Cong; Zhang, Fan; Wang, Qining; Kording, Konrad Paul
2014-01-06
Over the past few decades, one of the most salient lifestyle changes for us has been the use of computers. For many of us, manual interaction with a computer occupies a large portion of our working time. Through neural plasticity, this extensive movement training should change our representation of movements (e.g., [1-3]), just like search engines affect memory [4]. However, how computer use affects motor learning is largely understudied. Additionally, as virtually all participants in studies of perception and actions are computer users, a legitimate question is whether insights from these studies bear the signature of computer-use experience. We compared non-computer users with age- and education-matched computer users in standard motor learning experiments. We found that people learned equally fast but that non-computer users generalized significantly less across space, a difference negated by two weeks of intensive computer training. Our findings suggest that computer-use experience shaped our basic sensorimotor behaviors, and this influence should be considered whenever computer users are recruited as study participants.
Milles, J; van der Geest, R J; Jerosch-Herold, M; Reiber, J H C; Lelieveldt, B P F
2007-01-01
This paper presents a novel method for registration of cardiac perfusion MRI. The presented method successfully corrects for breathing motion without any manual interaction, using Independent Component Analysis to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of ICA, and used to compute the displacement caused by breathing for each frame. Qualitative and quantitative validation of the method is carried out using 46 clinical-quality, short-axis, perfusion MR datasets comprising 100 images each. Validation experiments showed a reduction of the average LV motion from 1.26±0.87 to 0.64±0.46 pixels. Time-intensity curves are also improved after registration, with an average error reduced from 2.65±7.89% to 0.87±3.88% between registered data and the manual gold standard. We conclude that this fully automatic ICA-based method shows excellent accuracy, robustness and computation speed, adequate for use in a clinical environment.
Li, Tiejun; Min, Bin; Wang, Zhiming
2013-03-14
The stochastic integral ensuring the Newton-Leibnitz chain rule is essential in stochastic energetics. The Marcus canonical integral has this property and can be understood as the Wong-Zakai type smoothing limit when the driving process is non-Gaussian. However, this important concept seems not to be well known to physicists. In this paper, we discuss the Marcus integral for non-Gaussian processes and its computation in the context of stochastic energetics. We give a comprehensive introduction to the Marcus integral and compare three equivalent definitions in the literature. We introduce the exact pathwise simulation algorithm and give an error analysis. We show how to compute thermodynamic quantities based on the pathwise simulation algorithm, and highlight the information hidden in the Marcus mapping, which plays the key role in determining thermodynamic quantities. We further propose a tau-leaping algorithm, which advances the process with deterministic time steps when the tau-leaping condition is satisfied. The numerical experiments and their efficiency analysis show that the approach is very promising.
Using Minimum-Surface Bodies for Iteration Space Partitioning
NASA Technical Reports Server (NTRS)
Frumlin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)
2001-01-01
A number of known techniques for improving cache performance in scientific computations involve reordering the iteration space. Some of these reorderings can be considered as coverings of the iteration space with sets having a good surface-to-volume ratio. Use of such sets reduces the number of cache misses in computations of local operators having the iteration space as a domain. We study coverings of iteration spaces represented by structured and unstructured grids. For structured grids we introduce a covering based on successive minima tiles of the interference lattice of the grid. We show that the covering has a good surface-to-volume ratio and present a computer experiment showing the actual reduction of cache misses achieved by using these tiles. For unstructured grids no cache-efficient covering can be guaranteed. We present a triangulation of a 3-dimensional cube such that any local operator on the corresponding grid has a significantly larger number of cache misses than a similar operator on a structured grid.
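The intuition behind surface-to-volume ratio can be shown with a toy tiled stencil sweep: each compact tile reuses the values it loads many times before they are evicted, and the misses scale with the tile's surface rather than its volume. This sketch uses simple square tiles, not the successive-minima lattice tiles of the paper, and the tile size is an illustrative assumption.

```python
import numpy as np

def stencil_tiled(a, tile=32):
    """Apply a 4-point averaging stencil, sweeping the iteration space tile by tile."""
    out = np.zeros_like(a)
    n = a.shape[0]
    for i0 in range(1, n - 1, tile):          # traverse one compact tile
        for j0 in range(1, n - 1, tile):      # at a time for cache reuse
            i1, j1 = min(i0 + tile, n - 1), min(j0 + tile, n - 1)
            out[i0:i1, j0:j1] = 0.25 * (
                a[i0-1:i1-1, j0:j1] + a[i0+1:i1+1, j0:j1] +
                a[i0:i1, j0-1:j1-1] + a[i0:i1, j0+1:j1+1])
    return out

a = np.random.rand(512, 512)
print(stencil_tiled(a).shape)
```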
NASA Astrophysics Data System (ADS)
Kassemi, Mohammad; Kartuzova, Olga; Hylton, Sonya
2018-01-01
This paper examines our computational ability to capture the transport and phase change phenomena that govern cryogenic storage tank pressurization and underscores our strengths and weaknesses in this area in terms of three computational-experimental validation case studies. In the first study, 1g pressurization of a simulant low-boiling point fluid in a small scale transparent tank is considered in the context of the Zero-Boil-Off Tank (ZBOT) Experiment to showcase the relatively strong capability that we have developed in modelling the coupling between the convective transport and stratification in the bulk phases with the interfacial evaporative and condensing heat and mass transfer that ultimately control self-pressurization in the storage tank. Here, we show that computational predictions exhibit excellent temporal and spatial fidelity under the moderate Ra number - high Bo number convective-phase distribution regimes. In the second example, we focus on 1g pressurization and pressure control of the large-scale K-site liquid hydrogen tank experiment where we show that by crossing fluid types and physical scales, we enter into high Bo number - high Ra number flow regimes that challenge our ability to predict turbulent heat and mass transfer and their impact on the tank pressurization correctly, especially, in the vapor domain. In the final example, we examine pressurization results from the small scale simulant fluid Tank Pressure Control Experiment (TCPE) performed in microgravity to underscore the fact that in crossing into a low Ra number - low Bo number regime in microgravity, the temporal evolution of the phase front as affected by the time-dependent residual gravity and impulse accelerations becomes an important consideration. In this case detailed acceleration data are needed to predict the correct rate of tank self-pressurization.
Belle II grid computing: An overview of the distributed data management system.
NASA Astrophysics Data System (ADS)
Bansal, Vikas; Schram, Malachi; Belle Collaboration, II
2017-01-01
The Belle II experiment at the SuperKEKB collider in Tsukuba, Japan, will start physics data taking in 2018 and will accumulate 50 ab^-1 of e+e- collision data, about 50 times larger than the data set of the Belle experiment. The computing requirements of Belle II are comparable to those of a Run I LHC experiment. Computing at this scale requires efficient use of the compute grids in North America, Asia and Europe, and will take advantage of upgrades to the high-speed global network. We present the architecture of data flow and data handling as a part of the Belle II computing infrastructure.
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2016-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
Dr. Peter Cavanaugh Explains the Need and Operation of the FOOT Experiment
NASA Technical Reports Server (NTRS)
2003-01-01
This video clip is an interview with Dr. Peter Cavanaugh, principal investigator for the FOOT experiment. He explains the reasoning behind the experiment and shows some video clips of the FOOT experiment being calibrated and conducted in orbit. The heart of the FOOT experiment is an instrumented suit called the Lower Extremity Monitoring Suit (LEMS). This customized garment is a pair of Lycra cycling tights incorporating 20 carefully placed sensors and the associated wiring control units, and amplifiers. LEMS enables the electrical activity of the muscles, the angular motions of the hip, knee, and ankle joints, and the force under both feet to be measured continuously. Measurements are also made on the arm muscles. Information from the sensors can be recorded up to 14 hours on a small, wearable computer.
Senior Surfing: Computer Use, Aging, and Formal Training
ERIC Educational Resources Information Center
Warren-Peace, Paula; Parrish, Elaine; Peace, C. Brian; Xu, Jianzhong
2008-01-01
In this article, we describe data from two case studies of seniors (one younger senior and one older senior) in learning to use computers. The study combined interviews, observations, and documents to take a close look at their experiences with computers, as well as the influences of aging and computer training on their experiences. The study…
Fluid/Structure Interaction Studies of Aircraft Using High Fidelity Equations on Parallel Computers
NASA Technical Reports Server (NTRS)
Guruswamy, Guru; VanDalsem, William (Technical Monitor)
1994-01-01
Aeroelasticity, which involves strong coupling of fluids, structures and controls, is an important element in designing an aircraft. Computational aeroelasticity using low-fidelity methods, such as the linear aerodynamic flow equations coupled with the modal structural equations, is well advanced. Though these low-fidelity approaches are computationally less intensive, they are not adequate for the analysis of modern aircraft such as the High Speed Civil Transport (HSCT) and Advanced Subsonic Transport (AST), which can experience complex flow/structure interactions. HSCT can experience vortex-induced aeroelastic oscillations, whereas AST can experience structural oscillations associated with transonic buffet. Both aircraft may experience a dip in the flutter speed in the transonic regime. For accurate aeroelastic computations in these complex fluid/structure interaction situations, high-fidelity equations such as the Navier-Stokes equations for fluids and finite elements for structures are needed. Computations using these high-fidelity equations require large computational resources in both memory and speed. Conventional supercomputers have reached their limitations in both memory and speed. As a result, parallel computers have evolved to overcome the limitations of conventional computers. This paper will address the transition that is taking place in computational aeroelasticity from conventional computers to parallel computers, including the special techniques needed to take advantage of the architecture of new parallel computers. Results will be illustrated from computations made on the iPSC/860 and IBM SP2 computers using the ENSAERO code, which directly couples the Euler/Navier-Stokes flow equations with high-resolution finite-element structural equations.
Progress in Machine Learning Studies for the CMS Computing Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo
Here, computing systems for LHC experiments have developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for LHC experiments and its recent evolutions can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of LHC experiments in Run-1 and Run-2 so far.
Progress in Machine Learning Studies for the CMS Computing Infrastructure
Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo; ...
2017-12-06
Here, computing systems for LHC experiments have developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for LHC experiments and its recent evolutions can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of LHC experiments in Run-1 and Run-2 so far.
NASA Astrophysics Data System (ADS)
Vasil'ev, V. I.; Kardashevsky, A. M.; Popov, V. V.; Prokopev, G. A.
2017-10-01
This article presents the results of a computational experiment carried out using a finite-difference method for solving the inverse Cauchy problem for a two-dimensional elliptic equation. The computational algorithm involves iteratively determining the missing boundary condition from the overdetermination condition using the conjugate gradient method. Calculations are presented for examples with exact solutions, as well as for cases where the additional condition is specified with random errors. The results show the high efficiency of the conjugate gradient method for the numerical solution of this inverse problem.
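For reference, this is the bare conjugate gradient iteration of the kind used in such algorithms, applied here to a generic symmetric positive definite system Ax = b standing in for the discretized operator; the test matrix is illustrative, not the paper's problem.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Solve Ax = b for symmetric positive definite A by conjugate gradients."""
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    p = r.copy()                       # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # new A-conjugate direction
        rs = rs_new
    return x

n = 50
M = np.random.rand(n, n)
A = M @ M.T + n * np.eye(n)            # symmetric positive definite test matrix
b = np.random.rand(n)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```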
Shortest path problem on a grid network with unordered intermediate points
NASA Astrophysics Data System (ADS)
Saw, Veekeong; Rahman, Amirah; Eng Ong, Wen
2017-10-01
We consider a shortest path problem with a single cost factor on a grid network with unordered intermediate points. A two-stage heuristic algorithm is proposed to find a feasible solution path within a reasonable amount of time. To evaluate the performance of the proposed algorithm, computational experiments are performed on grid maps of varying size and number of intermediate points. Preliminary results for the problem are reported. Numerical comparisons against brute-force search show that the proposed algorithm consistently yields solutions within 10% of the optimal solution using significantly less computation time.
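The abstract does not specify the two stages, so the following is only a plausible sketch of such a scheme under assumed stages: stage 1 greedily orders the unordered intermediate points by nearest neighbor, and stage 2 joins consecutive points with BFS shortest paths on the grid. The grid and waypoints are made up.

```python
from collections import deque
from itertools import pairwise   # Python 3.10+

def bfs_dist(grid, src, dst):
    """Shortest path length between two cells of a 4-connected grid (0 = free)."""
    q, seen = deque([(src, 0)]), {src}
    while q:
        (x, y), d = q.popleft()
        if (x, y) == dst:
            return d
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) \
               and grid[nx][ny] == 0 and (nx, ny) not in seen:
                seen.add((nx, ny))
                q.append(((nx, ny), d + 1))
    return float("inf")

grid = [[0] * 20 for _ in range(20)]        # obstacle-free 20x20 demo grid
start, points = (0, 0), [(5, 12), (17, 3), (9, 9)]

# Stage 1: greedy nearest-neighbor ordering of the unordered intermediate points.
order, cur, rest = [start], start, points[:]
while rest:
    cur = min(rest, key=lambda p: abs(p[0] - cur[0]) + abs(p[1] - cur[1]))
    rest.remove(cur)
    order.append(cur)

# Stage 2: stitch consecutive waypoints together with grid shortest paths.
total = sum(bfs_dist(grid, a, b) for a, b in pairwise(order))
print("visit order:", order, "total path length:", total)
```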
NASA Astrophysics Data System (ADS)
Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru
We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to a reduction in the computational time for finding the optimal solution. The method is further applied to the analysis of biological pathway data.
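The flavor of betweenness-based decomposition can be shown on a toy graph: score nodes by betweenness centrality and remove the top-ranked node to split the graph into subgraphs. The single-node removal and the barbell example are simplifying assumptions about the general scheme, not the authors' algorithm.

```python
import networkx as nx

G = nx.barbell_graph(8, 2)                 # two cliques joined by a short path
bc = nx.betweenness_centrality(G)
cut = max(bc, key=bc.get)                  # node lying on the most shortest paths

H = G.copy()
H.remove_node(cut)                         # removing it disconnects the graph
parts = list(nx.connected_components(H))
print(f"removed node {cut}; decomposition into {len(parts)} subgraphs:")
for p in parts:
    print(sorted(p))
```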
Monkey search algorithm for ECE components partitioning
NASA Astrophysics Data System (ADS)
Kuliev, Elmar; Kureichik, Vladimir; Kureichik, Vladimir, Jr.
2018-05-01
The paper considers one of the important design problems: the partitioning of electronic computer equipment (ECE) components (blocks). It belongs to the NP-hard class of problems and has a combinatorial and logical nature. The partitioning problem is formulated as the partition of a graph into parts. To solve the given problem, the authors suggest a bioinspired approach based on a monkey search algorithm. Computational experiments with the developed software show the algorithm's efficiency, as well as its recommended settings for obtaining more effective solutions in comparison with a genetic algorithm.
Solid harmonic wavelet scattering for predictions of molecule properties
NASA Astrophysics Data System (ADS)
Eickenberg, Michael; Exarchakis, Georgios; Hirn, Matthew; Mallat, Stéphane; Thiry, Louis
2018-06-01
We present a machine learning algorithm for the prediction of molecule properties inspired by ideas from density functional theory (DFT). Using Gaussian-type orbital functions, we create surrogate electronic densities of the molecule from which we compute invariant "solid harmonic scattering coefficients" that account for different types of interactions at different scales. Multilinear regressions of various physical properties of molecules are computed from these invariant coefficients. Numerical experiments show that these regressions have near state-of-the-art performance, even with relatively few training examples. Predictions over small sets of scattering coefficients can reach a DFT precision while being interpretable.
Economic irrationality is optimal during noisy decision making
Moreland, James; Chater, Nick; Usher, Marius; Summerfield, Christopher
2016-01-01
According to normative theories, reward-maximizing agents should have consistent preferences. Thus, when faced with alternatives A, B, and C, an individual preferring A to B and B to C should prefer A to C. However, it has been widely argued that humans can incur losses by violating this axiom of transitivity, despite strong evolutionary pressure for reward-maximizing choices. Here, adopting a biologically plausible computational framework, we show that intransitive (and thus economically irrational) choices paradoxically improve accuracy (and subsequent economic rewards) when decision formation is corrupted by internal neural noise. Over three experiments, we show that humans accumulate evidence over time using a “selective integration” policy that discards information about alternatives with momentarily lower value. This policy predicts violations of the axiom of transitivity when three equally valued alternatives differ circularly in their number of winning samples. We confirm this prediction in a fourth experiment reporting significant violations of weak stochastic transitivity in human observers. Crucially, we show that relying on selective integration protects choices against “late” noise that otherwise corrupts decision formation beyond the sensory stage. Indeed, we report that individuals with higher late noise relied more strongly on selective integration. These findings suggest that violations of rational choice theory reflect adaptive computations that have evolved in response to irreducible noise during neural information processing. PMID:26929353
Economic irrationality is optimal during noisy decision making.
Tsetsos, Konstantinos; Moran, Rani; Moreland, James; Chater, Nick; Usher, Marius; Summerfield, Christopher
2016-03-15
According to normative theories, reward-maximizing agents should have consistent preferences. Thus, when faced with alternatives A, B, and C, an individual preferring A to B and B to C should prefer A to C. However, it has been widely argued that humans can incur losses by violating this axiom of transitivity, despite strong evolutionary pressure for reward-maximizing choices. Here, adopting a biologically plausible computational framework, we show that intransitive (and thus economically irrational) choices paradoxically improve accuracy (and subsequent economic rewards) when decision formation is corrupted by internal neural noise. Over three experiments, we show that humans accumulate evidence over time using a "selective integration" policy that discards information about alternatives with momentarily lower value. This policy predicts violations of the axiom of transitivity when three equally valued alternatives differ circularly in their number of winning samples. We confirm this prediction in a fourth experiment reporting significant violations of weak stochastic transitivity in human observers. Crucially, we show that relying on selective integration protects choices against "late" noise that otherwise corrupts decision formation beyond the sensory stage. Indeed, we report that individuals with higher late noise relied more strongly on selective integration. These findings suggest that violations of rational choice theory reflect adaptive computations that have evolved in response to irreducible noise during neural information processing.
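Since the same abstract appears in both records above, a single illustration suffices: a toy simulation of the "selective integration" policy, in which the momentarily lowest-valued sample is attenuated before accumulation and "late" noise corrupts the accumulators. All parameters (suppression weight, noise levels, sample counts) are illustrative assumptions and are not calibrated to reproduce the paper's effect sizes.

```python
import numpy as np

def accuracy(w, late_noise=1.0, n_trials=5000, n_samples=20):
    """Fraction of trials in which the truly best of three options is chosen."""
    rng = np.random.default_rng(2)
    means = np.array([1.0, 0.9, 0.8])          # true values, option 0 is best
    wins = 0
    for _ in range(n_trials):
        acc = np.zeros(3)
        for _ in range(n_samples):
            x = means + rng.normal(0, 0.5, 3)  # momentary evidence samples
            x[np.argmin(x)] *= w               # attenuate the momentary loser
            acc += x + rng.normal(0, late_noise, 3)  # accumulate with late noise
        wins += acc.argmax() == 0
    return wins / n_trials

print("full integration   (w=1.0):", accuracy(1.0))
print("selective policy   (w=0.5):", accuracy(0.5))
```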
How deep cells feel: Mean-field Computations and Experiments
NASA Astrophysics Data System (ADS)
Buxboim, Amnon; Sen, Shamik; Discher, Dennis E.
2009-03-01
Most cells in solid tissues exert contractile forces that mechanically couple them to elastic surroundings and that significantly influence cell adhesion, cytoskeletal organization and differentiation. However, strains within the depths of matrices are often unclear and are likely relevant to thin matrices, such as basement membranes, relative to cell size as well as to defining how far cells can "feel." We present experimental results for cell spreading on thin, ligand-coated gels and for prestress in stem cells in relation to gel stiffness. Matrix thickness affects cell spread area, focal adhesions and cytoskeleton organization in stem cells, which we will compare to differentiated cells. We introduce a finite element computation to estimate the elastostatic deformations within the matrix on which a cell is placed. Interfacial strains between cell and matrix show large deviations only when soft matrices are a fraction of cell dimensions, proving consistent with experiments. 3-D cell morphologies that model stem cell-derived neurons, myoblasts, and osteoblasts show that a cylinder-shaped myoblast induces the highest strains, consistent with the prominent contractility of muscle. Groups of such cells show a weak crosstalk via matrix strains only when cells are much closer than a cell-width.
Shen, Wei; Qu, Qingqing; Tong, Xiuhong
2018-05-01
The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition, using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen: a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and their eye movements are recorded. In Experiment 1, phonological information was manipulated at full-phonological overlap; in Experiment 2, at partial-phonological overlap; and in Experiment 3, the phonological competitors were manipulated to share either full overlap or partial overlap with the targets directly. Results of the three experiments showed phonological competitor effects at both the full-phonological overlap and partial-phonological overlap conditions. That is, phonological competitors attracted more fixations than distractors, suggesting that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.
Experimental and computational investigations on severe slugging in a catenary riser
NASA Astrophysics Data System (ADS)
Duan, Jin-long; Chen, Ke; You, Yun-xiang; Gao, Song
2017-12-01
Severe slugging can occur in a pipeline-riser system at relatively low liquid and gas flow rates during gas-oil transportation, possibly causing unexpected damage to the production facilities. Experiments with air and water are conducted in a horizontal and downward inclined pipeline followed by a catenary riser in order to investigate the mechanism and characteristics of severe slugging. A theoretical model is introduced to compare with the experiments. The results show that the formation mechanism of severe slugging in a catenary riser is different from that in a vertical riser due to the riser geometry and five flow patterns are obtained and analyzed. A gas-liquid mixture slug stage is observed at the beginning of one cycle of severe slugging, which is seldom noticed in previous studies. Based on both experiments and computations, the time period and variation of pressure amplitude of severe slugging are found closely related to the superficial gas velocity, implying that the gas velocity significantly influences the flow patterns in our experiments. Moreover, good agreements between the experimental data and the numerical results are shown in the stability curve and flow regime map, which can be a possible reference for design in an offshore oil-production system.
Computer-Aided Experiment Planning toward Causal Discovery in Neuroscience.
Matiasz, Nicholas J; Wood, Justin; Wang, Wei; Silva, Alcino J; Hsu, William
2017-01-01
Computers help neuroscientists to analyze experimental results by automating the application of statistics; however, computer-aided experiment planning is far less common, due to a lack of similar quantitative formalisms for systematically assessing evidence and uncertainty. While ontologies and other Semantic Web resources help neuroscientists to assimilate required domain knowledge, experiment planning requires not only ontological but also epistemological (e.g., methodological) information regarding how knowledge was obtained. Here, we outline how epistemological principles and graphical representations of causality can be used to formalize experiment planning toward causal discovery. We outline two complementary approaches to experiment planning: one that quantifies evidence per the principles of convergence and consistency, and another that quantifies uncertainty using logical representations of constraints on causal structure. These approaches operationalize experiment planning as the search for an experiment that either maximizes evidence or minimizes uncertainty. Despite work in laboratory automation, humans must still plan experiments and will likely continue to do so for some time. There is thus a great need for experiment-planning frameworks that are not only amenable to machine computation but also useful as aids in human reasoning.
Does Cation Size Affect Occupancy and Electrostatic Screening of the Nucleic Acid Ion Atmosphere?
2016-01-01
Electrostatics are central to all aspects of nucleic acid behavior, including their folding, condensation, and binding to other molecules, and the energetics of these processes are profoundly influenced by the ion atmosphere that surrounds nucleic acids. Given the highly complex and dynamic nature of the ion atmosphere, understanding its properties and effects will require synergy between computational modeling and experiment. Prior computational models and experiments suggest that cation occupancy in the ion atmosphere depends on the size of the cation. However, the computational models have not been independently tested, and the experimentally observed effects were small. Here, we evaluate a computational model of ion size effects by experimentally testing a blind prediction made from that model, and we present additional experimental results that extend our understanding of the ion atmosphere. Giambasu et al. developed and implemented a three-dimensional reference interaction site (3D-RISM) model for monovalent cations surrounding DNA and RNA helices, and this model predicts that Na+ would outcompete Cs+ by 1.8–2.1-fold; i.e., with Cs+ in 2-fold excess of Na+ the ion atmosphere would contain an equal number of each cation (Nucleic Acids Res. 2015, 43, 8405). However, our ion counting experiments indicate that there is no significant preference for Na+ over Cs+. There is an ∼25% preferential occupancy of Li+ over larger cations in the ion atmosphere but, counter to general expectations from existing models, no size dependence for the other alkali metal ions. Further, we followed the folding of the P4–P6 RNA and showed that differences in folding with different alkali metal ions observed at high concentration arise from cation–anion interactions and not cation size effects. Overall, our results provide a critical test of a computational prediction, fundamental information about ion atmosphere properties, and parameters that will aid in the development of next-generation nucleic acid computational models. PMID:27479701
Modality-independent coding of spatial layout in the human brain
Wolbers, Thomas; Klatzky, Roberta L.; Loomis, Jack M.; Wutte, Magdalena G.; Giudice, Nicholas A.
2011-01-01
In many non-human species, neural computations of navigational information such as position and orientation are not tied to a specific sensory modality [1, 2]. Rather, spatial signals are integrated from multiple input sources, likely leading to abstract representations of space. In contrast, the potential for abstract spatial representations in humans is not known, as most neuroscientific experiments on human navigation have focused exclusively on visual cues. Here, we tested the modality independence hypothesis with two fMRI experiments that characterized computations in regions implicated in processing spatial layout [3]. According to the hypothesis, such regions should be recruited for spatial computation of 3-D geometric configuration, independent of a specific sensory modality. In support of this view, sighted participants showed strong activation of the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) for visual and haptic exploration of information-matched scenes but not objects. Functional connectivity analyses suggested that these effects were not related to visual recoding, which was further supported by a similar preference for haptic scenes found with blind participants. Taken together, these findings establish the PPA/RSC network as critical in modality-independent spatial computations and provide important evidence for a theory of high-level abstract spatial information processing in the human brain. PMID:21620708
Real-time simulation of biological soft tissues: a PGD approach.
Niroomandi, S; González, D; Alfaro, I; Bordeu, F; Leygue, A; Cueto, E; Chinesta, F
2013-05-01
We introduce here a novel approach for the numerical simulation of nonlinear, hyperelastic soft tissues at the kilohertz feedback rates necessary for haptic rendering. This approach is based upon the use of proper generalized decomposition (PGD) techniques, a generalization of proper orthogonal decomposition (POD). Proper generalized decomposition techniques can be considered a means of a priori model order reduction and provide a physics-based meta-model without the need for prior computer experiments. The suggested strategy is thus composed of an offline phase, in which a general meta-model is computed, and an online evaluation phase in which the results are obtained in real time. Results are provided that show the potential of the proposed technique, together with benchmark tests that show the accuracy of the method. Copyright © 2013 John Wiley & Sons, Ltd.
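The offline/online split described above can be illustrated with a minimal reduced-order-model sketch. The snippet below uses POD (which PGD generalizes) rather than a true PGD separated representation, and every size and parameter value is an illustrative assumption, not taken from the paper:

```python
import numpy as np

n = 200  # full-order degrees of freedom (illustrative)
A0 = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
f = np.ones(n)

def full_solve(mu):
    # full-order solve: diffusion stiffness plus a mu-dependent reaction term
    return np.linalg.solve(A0 + mu * np.eye(n), f)

# offline phase: sample the parameter, compress snapshots into a reduced basis
snapshots = np.column_stack([full_solve(mu) for mu in np.linspace(0.5, 2.0, 20)])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :5]  # keep 5 dominant modes

# online phase: cheap 5x5 solves for new parameter values
def reduced_solve(mu):
    Ar = V.T @ (A0 + mu * np.eye(n)) @ V
    return V @ np.linalg.solve(Ar, V.T @ f)

mu_new = 1.3
err = np.linalg.norm(reduced_solve(mu_new) - full_solve(mu_new))
print(f"reduced-order error: {err:.2e}")
```

The online solve touches only a 5x5 system, which is the property that makes kilohertz-rate evaluation plausible.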
NASA Astrophysics Data System (ADS)
Chen, Zhihao; Lau, Doreen; Teo, Ju Teng; Ng, Soon Huat; Yang, Xiufeng; Kei, Pin Lin
2014-05-01
We propose and demonstrate the feasibility of using a highly sensitive microbend multimode fiber optic sensor for simultaneous measurement of breathing rate (BR) and heart rate (HR). The sensing system consists of a transceiver, microbend multimode fiber, and a computer. The transceiver is comprised of an optical transmitter, an optical receiver, and circuits for data communication with the computer via Bluetooth. Comparative experiments conducted between the sensor and predicate commercial physiologic devices showed an accuracy of ±2 bpm for both BR and HR measurement. Our preliminary study of simultaneous measurement of BR and HR in a clinical trial conducted on 11 healthy subjects during magnetic resonance imaging (MRI) also showed very good agreement with measurements obtained from conventional MR-compatible devices.
A Laboratory Experiment on Coupled Non-Identical Pendulums
ERIC Educational Resources Information Center
Li, Ang; Zeng, Jingyi; Yang, Hujiang; Xiao, Jinghua
2011-01-01
In this paper, coupled pendulums with different lengths are studied. Through steel magnets, each pendulum is coupled with others, and a stepping motor is used to drive the whole system. To record the data automatically, we designed a data acquisition system with a CCD camera connected to a computer. The coupled system shows in-phase, locked-phase…
A control problem for Burgers' equation with bounded input/output
NASA Technical Reports Server (NTRS)
Burns, John A.; Kang, Sungkwon
1990-01-01
A stabilization problem for Burgers' equation is considered. Using linearization, various controllers are constructed which minimize certain weighted energy functionals. These controllers produce the desired degree of stability for the closed-loop nonlinear system. A numerical scheme for computing the feedback gain functional is developed and several numerical experiments are performed to show the theoretical results.
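As a rough illustration of the linearize-then-control strategy above, the following sketch computes an LQR feedback gain for a finite-difference discretization of the linearized (diffusion-only) Burgers operator; the grid size, viscosity, actuator profile, and weights are all illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

n, nu = 50, 0.05                 # interior grid points, viscosity (illustrative)
h = 1.0 / (n + 1)
A = nu / h**2 * (np.diag(-2.0 * np.ones(n))
                 + np.diag(np.ones(n - 1), 1)
                 + np.diag(np.ones(n - 1), -1))
x_grid = np.linspace(h, 1 - h, n)
B = np.exp(-((x_grid - 0.5) ** 2) / 0.01).reshape(-1, 1)  # distributed actuator shape

Q = h * np.eye(n)        # weighted energy functional (L2 penalty on the state)
R = np.array([[1.0]])    # penalty on the control effort

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # feedback gain functional, u = -K x

# closed-loop check: all eigenvalues should sit strictly in the left half-plane
print(np.max(np.linalg.eigvals(A - B @ K).real))
```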
Heat simulation via Scilab programming
NASA Astrophysics Data System (ADS)
Hasan, Mohammad Khatim; Sulaiman, Jumat; Karim, Samsul Arifin Abdul
2014-07-01
This paper discusses the use of open source software called Scilab to develop a heat simulator. The heat equation was used to simulate heat behavior in an object, and the simulator was developed using the finite difference method. Numerical experiment outputs show that Scilab can produce a good heat behavior simulation with marvellous visual output using only simple computer code.
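For readers without Scilab, a minimal explicit finite-difference heat simulation of the kind described can be sketched in Python; the material and grid values below are illustrative:

```python
import numpy as np

alpha, L, T = 1e-4, 1.0, 50.0     # diffusivity, rod length, total time (illustrative)
nx = 51
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha          # satisfies the FTCS stability limit dt <= dx^2/(2*alpha)

u = np.zeros(nx)
u[0] = 100.0                      # hot boundary; the rest of the rod starts cold

t = 0.0
while t < T:
    # forward-time, centered-space update of the interior nodes
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    t += dt

print(u[::10])                    # samples of the temperature profile
```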
ERIC Educational Resources Information Center
Ekmekci, Adem; Gulacar, Ozcan
2015-01-01
Science education reform emphasizes innovative and constructivist views of science teaching and learning that promotes active learning environments, dynamic instructions, and authentic science experiments. Technology-based and hands-on instructional designs are among innovative science teaching and learning methods. Research shows that these two…
Decision Making in Computer-Simulated Experiments.
ERIC Educational Resources Information Center
Suits, J. P.; Lagowski, J. J.
A set of interactive, computer-simulated experiments was designed to respond to the large range of individual differences in aptitude and reasoning ability generally exhibited by students enrolled in first-semester general chemistry. These experiments give students direct experience in the type of decision making needed in an experimental setting.…
Effect of Microgravity on Material Undergoing Melting and Freezing: the TES Experiment
NASA Technical Reports Server (NTRS)
Namkoong, David; Jacqmin, David; Szaniszlo, Andrew
1995-01-01
This experiment is the first to melt and freeze a high temperature thermal energy storage (TES) material under an extended duration of microgravity. It is one of a series to validate an analytical computer program that predicts void behavior of substances undergoing phase change under microgravity. Two flight experiments were launched in STS-62. The first, TES-1, containing lithium fluoride in an annular volume, performed flawlessly in the 22 hours of its operation. Results are reported in this paper. A software failure in TES-2 caused its shutdown after 4 seconds. A computer program, TESSIM, for thermal energy storage simulation is being developed to analyze the phenomena occurring within the TES containment vessel. The first order effects, particularly the surface tension forces, have been incorporated into TESSIM. TESSIM validation is based on two types of results. First is the temperature history of various points of the containment structure, and second, upon return from flight, the distribution of the TES material within the containment vessel following the last freeze cycle. The temperature data over the four cycles showed a repetition of results over the third and fourth cycles. This result is a confirmation that any initial conditions prior to the first cycle had been damped out by the third cycle. The TESSIM simulation showed a close comparison with the flight data. The solidified TES material distribution within the containment vessel was obtained by a tomography imaging process. The frozen material was concentrated toward the colder end of the annular volume. The TESSIM prediction showed the same pattern. With the general agreement of TESSIM and the data, a computerized visual representation can be shown which accurately shows the movement and behavior of the void during the entire freezing and melting cycles.
Effect of microgravity on material undergoing melting and freezing: The TES Experiment
NASA Astrophysics Data System (ADS)
Namkoong, David; Jacqmin, David; Szaniszlo, Andrew
1995-01-01
This experiment is the first to melt and freeze a high temperature thermal energy storage (TES) material under an extended duration of microgravity. It is one of a series to validate an analytical computer program that predicts void behavior of substances undergoing phase change under microgravity. Two flight experiments were launched in STS-62. The first, TES-1, containing lithium fluoride in an annular volume, performed flawlessly in the 22 hours of its operation. Results are reported in this paper. A software failure in TES-2 caused its shutdown after 4 seconds. A computer program, TESSIM, for thermal energy storage simulation is being developed to analyze the phenomena occurring within the TES containment vessel. The first order effects, particularly the surface tension forces, have been incorporated into TESSIM. TESSIM validation is based on two types of results. First is the temperature history of various points of the containment structure, and second, upon return from flight, the distribution of the TES material within the containment vessel following the last freeze cycle. The temperature data over the four cycles showed a repetition of results over the third and fourth cycles. This result is a confirmation that any initial conditions prior to the first cycle had been damped out by the third cycle. The TESSIM simulation showed a close comparison with the flight data. The solidified TES material distribution within the containment vessel was obtained by a tomography imaging process. The frozen material was concentrated toward the colder end of the annular volume. The TESSIM prediction showed the same pattern. With the general agreement of TESSIM and the data, a computerized visual representation can be shown which accurately shows the movement and behavior of the void during the entire freezing and melting cycles.
Reactor transient control in support of PFR/TREAT TUCOP experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burrows, D.R.; Larsen, G.R.; Harrison, L.J.
1984-01-01
Unique energy deposition and experiment control requirements posed by the PFR/TREAT series of transient undercooling/overpower (TUCOP) experiments resulted in equally unique TREAT reactor operations. New reactor control computer algorithms were written and used with the TREAT reactor control computer system to perform such functions as early power burst generation (based on test train flow conditions), burst generation produced by a step insertion of reactivity following a controlled power ramp, and shutdown (SCRAM) initiators based on both test train conditions and energy deposition. Specialized hardware was constructed to simulate test train inputs to the control computer system so that computer algorithms could be tested in real time without irradiating the experiment.
A microprocessor-based automation test system for the experiment of the multi-stage compressor
NASA Astrophysics Data System (ADS)
Zhang, Huisheng; Lin, Chongping
1991-08-01
An automation test system that is controlled by a microprocessor and used in multistage compressor experiments is described. Based on an analysis of compressor experiment performance requirements, a complete hardware system structure is set up. It is composed of an IBM PC/XT computer, a large-scale sampled data system, a three-direction traversing mechanism, scanners, digital instrumentation, and output devices. The structure of the real-time software system is described. Testing results show that this system can measure many parameters at the blade rows and in the boundary layer under different operating states. The degree of automation and the accuracy of the experiment are increased, and the experimental cost is reduced.
Testing and Analysis of Sensor Ports
NASA Technical Reports Server (NTRS)
Zhang, M.; Frendi, A.; Thompson, W.; Casiano, M. J.
2016-01-01
This Technical Publication summarizes the work focused on the testing and analysis of sensor ports. The tasks under this contract were divided into three areas: (1) Development of an Analytical Model, (2) Conducting a Set of Experiments, and (3) Obtaining Computational Solutions. Results from the experiment using both short and long sensor ports were obtained using harmonic, random, and frequency sweep plane acoustic waves. An amplification factor of the pressure signal between the port inlet and the back of the port is obtained and compared to models. Comparisons of model and experimental results showed very good agreement.
Roy, Tapta Kanchan; Carrington, Tucker; Gerber, R Benny
2014-08-21
Anharmonic vibrational spectroscopy calculations using MP2 and B3LYP computed potential surfaces are carried out for a series of molecules, and frequencies and intensities are compared with those from experiment. The vibrational self-consistent field with second-order perturbation correction (VSCF-PT2) is used in computing the spectra. The test calculations have been performed for the molecules HNO3, C2H4, C2H4O, H2SO4, CH3COOH, glycine, and alanine. Both MP2 and B3LYP give results in good accord with experimental frequencies, though, on the whole, MP2 gives very slightly better agreement. A statistical analysis of deviations in frequencies from experiment is carried out that gives interesting insights. The most probable percentage deviation from experimental frequencies is about -2% (to the red of the experiment) for B3LYP and +2% (to the blue of the experiment) for MP2. There is a higher probability for relatively large percentage deviations when B3LYP is used. The calculated intensities are also found to be in good accord with experiment, but the percentage deviations are much larger than those for frequencies. The results show that both MP2 and B3LYP potentials, used in VSCF-PT2 calculations, account well for anharmonic effects in the spectroscopy of molecules of the types considered.
Implementation of an object oriented track reconstruction model into multiple LHC experiments
NASA Astrophysics Data System (ADS)
Gaines, Irwin; Gonzalez, Saul; Qian, Sijin
2001-10-01
An Object Oriented (OO) model (Gaines et al., 1996; 1997; Gaines and Qian, 1998; 1999) for track reconstruction by the Kalman filtering method has been designed for high energy physics experiments at high luminosity hadron colliders. The model has been coded in the C++ programming language and has been successfully implemented into the OO computing environments of both the CMS (1994) and ATLAS (1994) experiments at the future Large Hadron Collider (LHC) at CERN. We shall report: how the OO model was adapted, with largely the same code, to different scenarios and serves different reconstruction aims in different experiments (i.e., the level-2 trigger software for ATLAS and the offline software for CMS); how the OO model has been incorporated into different OO environments with a similar integration structure (demonstrating the ease of re-use of OO programs); the OO model's performance, including execution time, memory usage, track finding efficiency, ghost rate, etc.; and additional physics performance based on use of the OO tracking model. We shall also mention the experience and lessons learned from implementing the OO model into the general OO software frameworks of the experiments. In summary, our practice shows that OO technology makes software development and integration straightforward and convenient; this may be particularly beneficial for physicists who are not computing professionals.
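A minimal sketch of the Kalman-filter track fit underlying the OO model may help. This toy version fits a straight 2D track to hits in equally spaced detector layers; the synthetic data and noise values are assumptions, and the production C++ code additionally handles material effects, curvature, and pattern recognition:

```python
import numpy as np

dz, sigma = 1.0, 0.1                      # layer spacing and hit resolution (illustrative)
F = np.array([[1.0, dz], [0.0, 1.0]])     # propagate state (position y, slope t) one layer
H = np.array([[1.0, 0.0]])                # each layer measures position only
R = np.array([[sigma**2]])                # measurement noise

rng = np.random.default_rng(0)
true_y0, true_t = 0.2, 0.05
hits = [true_y0 + true_t * dz * k + rng.normal(0.0, sigma) for k in range(1, 11)]

x = np.zeros(2)                           # initial state estimate
P = np.eye(2) * 1e3                       # large initial uncertainty
for m in hits:
    x, P = F @ x, F @ P @ F.T             # predict to the next layer
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (np.array([m]) - H @ x)   # update with the measured hit
    P = (np.eye(2) - K @ H) @ P

print(f"fitted y0={x[0]:.3f}, slope={x[1]:.3f}")
```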
Discovering chemistry with an ab initio nanoreactor
Wang, Lee-Ping; Titov, Alexey; McGibbon, Robert; ...
2014-11-02
Chemical understanding is driven by the experimental discovery of new compounds and reactivity, and is supported by theory and computation that provides detailed physical insight. While theoretical and computational studies have generally focused on specific processes or mechanistic hypotheses, recent methodological and computational advances herald the advent of their principal role in discovery. Here we report the development and application of the ab initio nanoreactor – a highly accelerated, first-principles molecular dynamics simulation of chemical reactions that discovers new molecules and mechanisms without preordained reaction coordinates or elementary steps. Using the nanoreactor we show new pathways for glycine synthesis from primitive compounds proposed to exist on the early Earth, providing new insight into the classic Urey-Miller experiment. Ultimately, these results highlight the emergence of theoretical and computational chemistry as a tool for discovery in addition to its traditional role of interpreting experimental findings.
Discovering chemistry with an ab initio nanoreactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Lee-Ping; Titov, Alexey; McGibbon, Robert
Chemical understanding is driven by the experimental discovery of new compounds and reactivity, and is supported by theory and computation that provides detailed physical insight. While theoretical and computational studies have generally focused on specific processes or mechanistic hypotheses, recent methodological and computational advances herald the advent of their principal role in discovery. Here we report the development and application of the ab initio nanoreactor – a highly accelerated, first-principles molecular dynamics simulation of chemical reactions that discovers new molecules and mechanisms without preordained reaction coordinates or elementary steps. Using the nanoreactor we show new pathways for glycine synthesis from primitive compounds proposed to exist on the early Earth, providing new insight into the classic Urey-Miller experiment. Ultimately, these results highlight the emergence of theoretical and computational chemistry as a tool for discovery in addition to its traditional role of interpreting experimental findings.
Xiong, Yonghua; Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua
2014-01-01
This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experimental results show that the strategy is effective and stable. PMID:24883353
Xiong, Yonghua; Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua
2014-01-01
This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experimental results show that the strategy is effective and stable.
An approach to computing direction relations between separated object groups
NASA Astrophysics Data System (ADS)
Yan, H.; Wang, Z.; Li, J.
2013-06-01
Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach firstly triangulates the two object groups; then it constructs the Voronoi diagram between the two groups using the triangular network; after this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed; finally, the quantitative direction relations are transformed into qualitative ones. The psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and the results are correct from the point of view of spatial cognition.
An approach to computing direction relations between separated object groups
NASA Astrophysics Data System (ADS)
Yan, H.; Wang, Z.; Li, J.
2013-09-01
Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach firstly triangulates the two object groups, and then it constructs the Voronoi diagram between the two groups using the triangular network. After this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed. Finally, the quantitative direction relations are transformed into qualitative ones. The psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and the results are correct from the point of view of spatial cognition.
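A rough sketch of the Voronoi-based computation both records describe, using SciPy: the Voronoi edges separating the two groups are identified, each edge's normal is taken as the vector between its two generating points, and the averaged normal is mapped to a qualitative direction. The point sets and the four-sector mapping are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(1)
group_a = rng.normal([0, 0], 0.5, (15, 2))   # reference group (illustrative)
group_b = rng.normal([4, 1], 0.5, (15, 2))   # target group (illustrative)

pts = np.vstack([group_a, group_b])
labels = np.array([0] * len(group_a) + [1] * len(group_b))
vor = Voronoi(pts)

normals = []
for i, j in vor.ridge_points:                # pairs of input points sharing a Voronoi edge
    if labels[i] != labels[j]:               # the edge lies between the two groups
        d = pts[j] - pts[i] if labels[i] == 0 else pts[i] - pts[j]
        normals.append(d / np.linalg.norm(d))

mean_dir = np.mean(normals, axis=0)
angle = np.degrees(np.arctan2(mean_dir[1], mean_dir[0])) % 360
sectors = ["east", "north", "west", "south"]
print(sectors[int(((angle + 45) % 360) // 90)])  # qualitative relation of B w.r.t. A
```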
Understanding Islamist political violence through computational social simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watkins, Jennifer H; Mackerrow, Edward P; Patelli, Paolo G
Understanding the process that enables political violence is of great value in reducing the future demand for and support of violent opposition groups. Methods are needed that allow alternative scenarios and counterfactuals to be scientifically researched. Computational social simulation shows promise in developing 'computer experiments' that would be unfeasible or unethical in the real world. Additionally, the process of modeling and simulation reveals and challenges assumptions that may not be noted in theories, exposes areas where data is not available, and provides a rigorous, repeatable, and transparent framework for analyzing the complex dynamics of political violence. This paper demonstrates the computational modeling process using two simulation techniques: system dynamics and agent-based modeling. The benefits and drawbacks of both techniques are discussed. In developing these social simulations, we discovered that the social science concepts and theories needed to accurately simulate the associated psychological and social phenomena were lacking.
Computational modeling of mediator oxidation by oxygen in an amperometric glucose biosensor.
Simelevičius, Dainius; Petrauskas, Karolis; Baronas, Romas; Razumienė, Julija
2014-02-07
In this paper, an amperometric glucose biosensor is modeled numerically. The model is based on non-stationary reaction-diffusion type equations. The model consists of four layers. An enzyme layer lies directly on a working electrode surface. The enzyme layer is attached to an electrode by a polyvinyl alcohol (PVA) coated terylene membrane. This membrane is modeled as a PVA layer and a terylene layer, which have different diffusivities. The fourth layer of the model is the diffusion layer, which is modeled using the Nernst approach. The system of partial differential equations is solved numerically using the finite difference technique. The operation of the biosensor was analyzed computationally with special emphasis on the biosensor response sensitivity to oxygen when the experiment was carried out in aerobic conditions. Particularly, numerical experiments show that the overall biosensor response sensitivity to oxygen is insignificant. The simulation results qualitatively explain and confirm the experimentally observed biosensor behavior.
Computational Modeling of Mediator Oxidation by Oxygen in an Amperometric Glucose Biosensor
Šimelevičius, Dainius; Petrauskas, Karolis; Baronas, Romas; Razumienė, Julija
2014-01-01
In this paper, an amperometric glucose biosensor is modeled numerically. The model is based on non-stationary reaction-diffusion type equations. The model consists of four layers. An enzyme layer lies directly on a working electrode surface. The enzyme layer is attached to an electrode by a polyvinyl alcohol (PVA) coated terylene membrane. This membrane is modeled as a PVA layer and a terylene layer, which have different diffusivities. The fourth layer of the model is the diffusion layer, which is modeled using the Nernst approach. The system of partial differential equations is solved numerically using the finite difference technique. The operation of the biosensor was analyzed computationally with special emphasis on the biosensor response sensitivity to oxygen when the experiment was carried out in aerobic conditions. Particularly, numerical experiments show that the overall biosensor response sensitivity to oxygen is insignificant. The simulation results qualitatively explain and confirm the experimentally observed biosensor behavior. PMID:24514882
NASA Technical Reports Server (NTRS)
Muffoletto, A. J.
1982-01-01
An aerodynamic computer code, capable of predicting unsteady C sub n and C sub m values for an airfoil undergoing dynamic stall, is used to predict the amplitudes and frequencies of a wing undergoing torsional stall flutter. The code, developed at United Technologies Research Corporation (UTRC), is an empirical prediction method designed to yield unsteady values of normal force and moment, given the airfoil's static coefficient characteristics and the unsteady aerodynamic values alpha, A, and B. In this experiment, conducted in the PSU 4' x 5' subsonic wind tunnel, the wing's elastic axis, torsional spring constant and initial angle of attack are varied, and the oscillation amplitudes and frequencies of the wing, while undergoing torsional stall flutter, are recorded. These experimental values show only fair agreement with the predicted responses. Predictions tend to be good at low velocities and rather poor at higher velocities.
Ontology based heterogeneous materials database integration and semantic query
NASA Astrophysics Data System (ADS)
Zhao, Shuai; Qian, Quan
2017-10-01
Materials digital data, high throughput experiments and high throughput computations are regarded as three key pillars of materials genome initiatives. With the fast growth of materials data, the integration and sharing of data have become urgent and have gradually become a hot topic in materials informatics. Due to the lack of semantic descriptions, it is difficult to integrate data deeply at the semantic level when adopting conventional heterogeneous database integration approaches such as federated databases or data warehouses. In this paper, a semantic integration method is proposed that creates a semantic ontology by extracting the database schema semi-automatically. Other heterogeneous databases are integrated into the ontology by means of relational algebra and the rooted graph. Based on the integrated ontology, semantic queries can be performed using SPARQL. In the experiments, two well-known first-principles computation databases, OQMD and Materials Project, are used as the integration targets, which shows the availability and effectiveness of our method.
Global spectral graph wavelet signature for surface analysis of carpal bones
NASA Astrophysics Data System (ADS)
Masoumi, Majid; Rezaei, Mahsa; Ben Hamza, A.
2018-02-01
Quantitative shape comparison is a fundamental problem in computer vision, geometry processing and medical imaging. In this paper, we present a spectral graph wavelet approach for shape analysis of carpal bones of the human wrist. We employ spectral graph wavelets to represent the cortical surface of a carpal bone via the spectral geometric analysis of the Laplace-Beltrami operator in the discrete domain. We propose a global spectral graph wavelet (GSGW) descriptor that is isometry invariant, efficient to compute, and combines the advantages of both low-pass and band-pass filters. We perform experiments on shapes of the carpal bones of ten women and ten men from a publicly-available database of wrist bones. Using one-way multivariate analysis of variance (MANOVA) and permutation testing, we show through extensive experiments that the proposed GSGW framework gives a much better performance compared to the global point signature embedding approach for comparing shapes of the carpal bones across populations.
Global spectral graph wavelet signature for surface analysis of carpal bones.
Masoumi, Majid; Rezaei, Mahsa; Ben Hamza, A
2018-02-05
Quantitative shape comparison is a fundamental problem in computer vision, geometry processing and medical imaging. In this paper, we present a spectral graph wavelet approach for shape analysis of carpal bones of the human wrist. We employ spectral graph wavelets to represent the cortical surface of a carpal bone via the spectral geometric analysis of the Laplace-Beltrami operator in the discrete domain. We propose a global spectral graph wavelet (GSGW) descriptor that is isometry invariant, efficient to compute, and combines the advantages of both low-pass and band-pass filters. We perform experiments on shapes of the carpal bones of ten women and ten men from a publicly-available database of wrist bones. Using one-way multivariate analysis of variance (MANOVA) and permutation testing, we show through extensive experiments that the proposed GSGW framework gives a much better performance compared to the global point signature embedding approach for comparing shapes of the carpal bones across populations.
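The per-vertex wavelet-energy idea behind the GSGW descriptor can be sketched as follows. This toy version uses a combinatorial graph Laplacian, a simple band-pass kernel, and histogram aggregation, all of which are stand-ins for the paper's Laplace-Beltrami setup on a bone surface mesh:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60                                             # vertices of a random test graph
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T                                        # symmetric adjacency matrix
L = np.diag(A.sum(1)) - A                          # combinatorial graph Laplacian

lam, phi = np.linalg.eigh(L)                       # spectral decomposition

def g(x):
    # simple band-pass wavelet kernel; the paper's kernel choice may differ
    return x * np.exp(1.0 - x)

scales = [0.5, 2.0, 8.0]                           # illustrative wavelet scales
signature = []
for s in scales:
    # per-vertex wavelet energy: sum_k g(s * lam_k) * phi_k(v)^2
    w = (phi**2) @ g(s * lam)
    hist, _ = np.histogram(w, bins=8)
    signature.extend(hist / hist.sum())            # scale-wise histogram blocks

print(np.round(signature, 3))                      # global, vertex-order-free descriptor
```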
Telehealth for "the digital illiterate"--elderly heart failure patients experiences.
Lind, Leili; Karlsson, Daniel
2014-01-01
Telehealth solutions should be available also for elderly patients with no interest in using, or capacity to use, computers and smartphones. Fourteen elderly, severely ill heart failure patients in home care participated in a telehealth study and used digital pens for daily reporting of their health state--a technology never used before by this patient group. After the study seven patients and two spouses were interviewed face-to-face. A qualitative content analysis of the interview material was performed. The informants had no experience of computers or the Internet and no interest in learning. Still, patients found the digital pen and the health diary form easy to use, thus effortlessly adopting to changes in care provision. They experienced an improved contact with the caregivers and had a sense of increased security despite a multimorbid state. Our study shows that, given that technologies are tailored to specific patient groups, even "the digital illiterate" may use the Internet.
Hoang, Tuan; Tran, Dat; Huang, Xu
2013-01-01
Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for 2-class BCI classification problems. Current extensions of this method to multiple classes, based on subspace union and covariance matrix similarity, do not provide high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a subspace that resembles the original subspaces; the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We perform experiments on Dataset 2a from BCI Competition IV to evaluate the proposed method. This dataset was designed for motor imagery classification with 4 classes. Preliminary experiments show that the proposed ACPC feature extraction method, when combined with Support Vector Machines, outperforms CSP-based feature extraction methods on the experimental dataset.
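For contrast with ACPC, the standard two-class CSP baseline reduces to a generalized eigendecomposition of the class covariance matrices; here is a minimal sketch with synthetic EEG-like trials (all data, sizes, and filter counts are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def cov(trial):
    # normalized spatial covariance of one trial (channels x samples)
    c = trial @ trial.T
    return c / np.trace(c)

# synthetic trials: class 2 carries extra variance on the first channels
trials1 = [rng.normal(size=(8, 200)) for _ in range(30)]
trials2 = [rng.normal(size=(8, 200)) * np.linspace(2, 1, 8)[:, None] for _ in range(30)]

C1 = np.mean([cov(t) for t in trials1], axis=0)
C2 = np.mean([cov(t) for t in trials2], axis=0)

# generalized eigenproblem C1 w = lambda (C1 + C2) w; the extreme eigenvectors
# maximize variance for one class while minimizing it for the other
vals, W = eigh(C1, C1 + C2)
filters = np.column_stack([W[:, :2], W[:, -2:]])           # 2 filters per class

features = np.log(np.var(filters.T @ trials1[0], axis=1))  # log-variance features
print(features)
```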
Spin-neurons: A possible path to energy-efficient neuromorphic computers
NASA Astrophysics Data System (ADS)
Sharad, Mrigank; Fan, Deliang; Roy, Kaushik
2013-12-01
Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that "spin-neurons" (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of future.
Liu, Mali; Lu, Chihao; Li, Haifeng; Liu, Xu
2018-02-19
We propose a bifocal computational near-eye light field display (bifocal computational display) and a structure parameters determination scheme (SPDS) for it that together achieve greater depth of field (DOF), high resolution, accommodation, and a compact form factor. Using a liquid varifocal lens, two single-focal computational light fields are superimposed by time multiplexing to reconstruct a virtual object's light field, avoiding the limitation of a high refresh rate. By minimizing the deviation between the reconstructed light field and the original light field, we propose a determination framework for the structure parameters of the bifocal computational light field display. When applied with different objectives, SPDS can achieve high average resolution or uniform resolution over the scene depth range. To analyze the advantages and limitations of our method, we conducted simulations and constructed a simple prototype comprising a liquid varifocal lens, dual-layer LCDs, and a uniform backlight. The simulation and experimental results show that the proposed system achieves the expected performance well. Owing to this performance, we expect the bifocal computational display and SPDS to contribute to daily-use and commercial virtual reality displays.
Spin-neurons: A possible path to energy-efficient neuromorphic computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharad, Mrigank; Fan, Deliang; Roy, Kaushik
Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that “spin-neurons” (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of future.
Development and Formative Evaluation of Computer Simulated College Chemistry Experiments.
ERIC Educational Resources Information Center
Cavin, Claudia S.; Cavin, E. D.
1978-01-01
This article describes the design, preparation, and initial evaluation of a set of computer-simulated chemistry experiments. The experiments entailed the use of an atomic emission spectroscope and a single-beam visible absorption spectrophotometer. (Author/IRT)
Thai Language Sentence Similarity Computation Based on Syntactic Structure and Semantic Vector
NASA Astrophysics Data System (ADS)
Wang, Hongbin; Feng, Yinhan; Cheng, Liang
2018-03-01
Sentence similarity computation plays an increasingly important role in text mining, Web page retrieval, machine translation, speech recognition and question answering systems. Thai is a resource-scarce language; unlike Chinese, it lacks resources such as HowNet and CiLin, so research on Thai sentence similarity faces particular challenges. To address this problem, this paper proposes a novel method to compute the similarity of Thai sentences based on syntactic structure and semantic vectors. The method first uses Part-of-Speech (POS) dependencies to calculate the syntactic structure similarity of two sentences, and then uses word vectors to calculate their semantic similarity. Finally, the two measures are combined to compute the overall similarity of two Thai sentences. The proposed method considers not only semantics but also sentence syntactic structure. Experimental results show that the method is feasible for Thai sentence similarity computation.
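A minimal sketch of this two-part combination: POS-sequence similarity for the syntactic term, cosine similarity of averaged word vectors for the semantic term, blended by a weight. The toy vocabulary, embeddings, tags, and weight below are assumptions, not the paper's Thai resources:

```python
import numpy as np
from difflib import SequenceMatcher

rng = np.random.default_rng(0)
# stand-in embeddings; a real system would load trained word vectors
vocab = {w: rng.normal(size=50) for w in ["dog", "cat", "runs", "sleeps", "the", "a"]}

def syntactic_sim(pos1, pos2):
    # similarity of the two POS tag sequences
    return SequenceMatcher(None, pos1, pos2).ratio()

def semantic_sim(words1, words2):
    # cosine similarity of averaged word vectors
    v1 = np.mean([vocab[w] for w in words1], axis=0)
    v2 = np.mean([vocab[w] for w in words2], axis=0)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def sentence_sim(words1, pos1, words2, pos2, alpha=0.4):
    # weighted blend of syntactic and semantic similarity
    return alpha * syntactic_sim(pos1, pos2) + (1 - alpha) * semantic_sim(words1, words2)

s1, p1 = ["the", "dog", "runs"], ["DET", "NOUN", "VERB"]
s2, p2 = ["a", "cat", "sleeps"], ["DET", "NOUN", "VERB"]
print(round(sentence_sim(s1, p1, s2, p2), 3))
```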
Experimental comparison of two quantum computing architectures
Linke, Norbert M.; Maslov, Dmitri; Roetteler, Martin; Debnath, Shantanu; Figgatt, Caroline; Landsman, Kevin A.; Wright, Kenneth; Monroe, Christopher
2017-01-01
We run a selection of algorithms on two state-of-the-art 5-qubit quantum computers that are based on different technology platforms. One is a publicly accessible superconducting transmon device (www.research.ibm.com/ibm-q) with limited connectivity, and the other is a fully connected trapped-ion system. Even though the two systems have different native quantum interactions, both can be programed in a way that is blind to the underlying hardware, thus allowing a comparison of identical quantum algorithms between different physical systems. We show that quantum algorithms and circuits that use more connectivity clearly benefit from a better-connected system of qubits. Although the quantum systems here are not yet large enough to eclipse classical computers, this experiment exposes critical factors of scaling quantum computers, such as qubit connectivity and gate expressivity. In addition, the results suggest that codesigning particular quantum applications with the hardware itself will be paramount in successfully using quantum computers in the future. PMID:28325879
A Fine-Grained and Privacy-Preserving Query Scheme for Fog Computing-Enhanced Location-Based Service
Yang, Xue; Yin, Fan; Tang, Xiaohu
2017-01-01
Location-based services (LBS), as one of the most popular location-awareness applications, have been further developed to achieve low latency with the assistance of fog computing. However, privacy issues remain a research challenge in the context of fog computing. Therefore, in this paper, we present a fine-grained and privacy-preserving query scheme for fog computing-enhanced location-based services, hereafter referred to as FGPQ. In particular, mobile users can obtain fine-grained search results satisfying not only the given spatial range but also the search content. Detailed privacy analysis shows that our proposed scheme indeed achieves privacy preservation for the LBS provider and mobile users. In addition, extensive performance analyses and experiments demonstrate that the FGPQ scheme can significantly reduce computational and communication overheads and ensure low latency, outperforming existing state-of-the-art schemes. Hence, our proposed scheme is more suitable for real-time LBS searching. PMID:28696395
Yang, Xue; Yin, Fan; Tang, Xiaohu
2017-07-11
Location-based services (LBS), as one of the most popular location-awareness applications, have been further developed to achieve low latency with the assistance of fog computing. However, privacy issues remain a research challenge in the context of fog computing. Therefore, in this paper, we present a fine-grained and privacy-preserving query scheme for fog computing-enhanced location-based services, hereafter referred to as FGPQ. In particular, mobile users can obtain fine-grained search results satisfying not only the given spatial range but also the search content. Detailed privacy analysis shows that our proposed scheme indeed achieves privacy preservation for the LBS provider and mobile users. In addition, extensive performance analyses and experiments demonstrate that the FGPQ scheme can significantly reduce computational and communication overheads and ensure low latency, outperforming existing state-of-the-art schemes. Hence, our proposed scheme is more suitable for real-time LBS searching.
Scheduling based on a dynamic resource connection
NASA Astrophysics Data System (ADS)
Nagiyev, A. E.; Botygin, I. A.; Shersntneva, A. I.; Konyaev, P. A.
2017-02-01
The practical use of distributed computing systems is associated with many problems, including organizing effective interaction between the agents located at the nodes of the system, configuring each node of the system to perform a certain task, distributing the available information and computational resources of the system effectively, and controlling the multithreading that implements the logic of solving research problems. The article describes a method of computing load balancing in distributed automatic systems, focused on multi-agent and multi-threaded data processing. A scheme for controlling the processing of requests from terminal devices is offered, providing effective dynamic scaling of computing power under peak load. The results of model experiments on the developed load scheduling algorithm are set out. These results show the effectiveness of the algorithm even with a significant expansion in the number of connected nodes and scaling of the distributed computing system architecture.
Oberauer, Klaus; Lewandowsky, Stephan
2016-11-01
The article reports four experiments with complex-span tasks in which encoding of memory items alternates with processing of distractors. The experiments test two assumptions of a computational model of complex span, SOB-CS: (1) distractor processing impairs memory because distractors are encoded into working memory, thereby interfering with memoranda; and (2) free time following distractors is used to remove them from working memory by unbinding their representations from list context. Experiment 1 shows that distractors are erroneously chosen for recall more often than not-presented stimuli, demonstrating that distractors are encoded into memory. Distractor intrusions declined with longer free time, as predicted by distractor removal. Experiment 2 shows these effects even when distractors precede the memory list, ruling out an account based on selective rehearsal of memoranda during free time. Experiments 3 and 4 test the notion that distractors decay over time. Both experiments show that, contrary to the notion of distractor decay, the chance of a distractor intruding at test does not decline with increasing time since encoding of that distractor. Experiment 4 provides additional evidence against the prediction from distractor decay that distractor intrusions decline over an unfilled retention interval. Taken together, the results support SOB-CS and rule out alternative explanations. Data and simulation code are available on Open Science Framework: osf.io/3ewh7. Copyright © 2016 Elsevier B.V. All rights reserved.
Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation
NASA Astrophysics Data System (ADS)
Anisenkov, A. V.
2018-03-01
In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the worldwide LHC computing grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and provide a high degree of their accessibility (hundreds of petabytes). The paper considers the ATLAS grid information system (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in the development of a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).
Student Engagement in a Computer Rich Science Classroom
NASA Astrophysics Data System (ADS)
Hunter, Jeffrey C.
The purpose of this study was to examine the student lived experience when using computers in a rural science classroom. The overarching question the project sought to examine was: How do rural students relate to computers as a learning tool in comparison to a traditional science classroom? Participant data were collected using a pre-study survey, Experience Sampling during class and post-study interviews. Students want to use computers in their classrooms. Students shared that they overwhelmingly (75%) preferred a computer rich classroom to a traditional classroom (25%). Students reported a higher level of engagement in classes that use technology/computers (83%) versus those that do not use computers (17%). A computer rich classroom increased student control and motivation as reflected by a participant who shared: "by using computers I was more motivated to get the work done" (Maggie, April 25, 2014, survey). The researcher explored a rural school environment. Rural populations represent a large number of students and appear to be underrepresented in current research. The participants, tenth grade Biology students, were sampled in a traditional teacher led class without computers for one week followed by a week using computers daily. Data supported that there is a new gap that separates students, a device divide. This divide separates those who have access to devices that are robust enough to do high level class work from those who do not. Although cellular phones have reduced the number of students who cannot access the Internet, they may have created a false feeling that access to a computer is no longer necessary at home. As this study shows, although most students have Internet access, fewer have access to a device that enables them to complete rigorous class work at home. Participants received little or no training at school in proper, safe use of a computer and the Internet. It is clear that the majority of students are self-taught or receive guidance from peers, resulting in lower self-confidence or misconceptions about their skill or ability.
Trogolo, Daniela; Mishra, Brijesh Kumar; Heeb, Michèle B; von Gunten, Urs; Arey, J Samuel
2015-04-07
During ozonation of drinking water, the fungicide metabolite N,N-dimethylsulfamide (DMS) can be transformed into a highly toxic product, N-nitrosodimethylamine (NDMA). We used quantum chemical computations and stopped-flow experiments to evaluate a chemical mechanism proposed previously to describe this transformation. Stopped-flow experiments indicate a pK(a) = 10.4 for DMS. Experiments show that hypobromous acid (HOBr), generated by ozone oxidation of naturally occurring bromide, brominates the deprotonated DMS(-) anion with a near-diffusion controlled rate constant (7.1 ± 0.6 × 10(8) M(-1) s(-1)), forming Br-DMS(-) anion. According to quantum chemical calculations, Br-DMS has a pK(a) ∼ 9.0 and thus remains partially deprotonated at neutral pH. The anionic Br-DMS(-) bromamine can react with ozone with a high rate constant (10(5 ± 2.5) M(-1) s(-1)), forming the reaction intermediate (BrNO)(SO2)N(CH3)2(-). This intermediate resembles a loosely bound complex between an electrophilic nitrosyl bromide (BrNO) molecule and an electron-rich dimethylaminosulfinate ((SO2)N(CH3)2(-)) fragment, based on inspection of computed natural charges and geometric parameters. This fragile complex undergoes immediate (10(10 ± 2.5) s(-1)) reaction by two branches: an exothermic channel that produces NDMA, and an entropy-driven channel giving non-NDMA products. Computational results bring new insights into the electronic nature, chemical equilibria, and kinetics of the elementary reactions of this pathway, enabled by computed energies of structures that are not possible to access experimentally.
Witzenburg, Colleen M.; Dhume, Rohit Y.; Shah, Sachin B.; Korenczuk, Christopher E.; Wagner, Hallie P.; Alford, Patrick W.; Barocas, Victor H.
2017-01-01
The ascending thoracic aorta is poorly understood mechanically, especially its risk of dissection. To make better predictions of dissection risk, more information about the multidimensional failure behavior of the tissue is needed, and this information must be incorporated into an appropriate theoretical/computational model. Toward the creation of such a model, uniaxial, equibiaxial, peel, and shear lap tests were performed on healthy porcine ascending aorta samples. Uniaxial and equibiaxial tests showed anisotropy with greater stiffness and strength in the circumferential direction. Shear lap tests showed catastrophic failure at shear stresses (150–200 kPa) much lower than uniaxial tests (750–2500 kPa), consistent with the low peel tension (∼60 mN/mm). A novel multiscale computational model, including both prefailure and failure mechanics of the aorta, was developed. The microstructural part of the model included contributions from a collagen-reinforced elastin sheet and interlamellar connections representing fibrillin and smooth muscle. Components were represented as nonlinear fibers that failed at a critical stretch. Multiscale simulations of the different experiments were performed, and the model, appropriately specified, agreed well with all experimental data, representing a uniquely complete structure-based description of aorta mechanics. In addition, our experiments and model demonstrate the very low strength of the aorta in radial shear, suggesting an important possible mechanism for aortic dissection. PMID:27893044
Emotion computing using Word Mover's Distance features based on Ren_CECps.
Ren, Fuji; Liu, Ning
2018-01-01
In this paper, we propose an emotion-separated method (SeTF·IDF) to assign sentence emotion labels with different values; it yields a better visual effect than TF·IDF values in the visualization of the multi-label Chinese emotion corpus Ren_CECps. Inspired by the marked improvement in the visualization map produced by the changed distances among sentences, we are, to our knowledge, the first group to utilize the Word Mover's Distance (WMD) algorithm as a feature representation in Chinese text emotion classification. Our experiments on Ren_CECps show that, in both the 80%/20% and 50%/50% train/test splits, WMD features achieve the best F1 scores and a greater improvement than feature vectors of the same dimension obtained by a dimension-reduced TF·IDF method. Comparison experiments on an English corpus also show the effectiveness of WMD features across languages.
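A hedged sketch of computing WMD between two documents, assuming the POT (Python Optimal Transport) package and stand-in random embeddings; in a feature-extraction setting like the one above, distances to reference documents or emotion anchors would serve as the feature values:

```python
import numpy as np
import ot  # POT: Python Optimal Transport (assumed installed)

rng = np.random.default_rng(0)
vocab = ["happy", "joy", "sad", "tears", "today"]       # toy vocabulary
emb = {w: rng.normal(size=50) for w in vocab}           # stand-in word vectors

def nbow(doc, vocab):
    # normalized bag-of-words weights over the vocabulary
    w = np.array([doc.count(t) for t in vocab], dtype=float)
    return w / w.sum()

doc1 = ["happy", "joy", "today"]
doc2 = ["sad", "tears", "today"]

a, b = nbow(doc1, vocab), nbow(doc2, vocab)
# ground cost: Euclidean distance between word embeddings
M = np.array([[np.linalg.norm(emb[u] - emb[v]) for v in vocab] for u in vocab])

wmd = ot.emd2(a, b, M)   # optimal transport cost = Word Mover's Distance
print(round(float(wmd), 3))
```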
NASA Astrophysics Data System (ADS)
Reuter, Matthew; Tschudi, Stephen
When investigating the electrical response properties of molecules, experiments often measure conductance whereas computation predicts transmission probabilities. Although the Landauer-Büttiker theory relates the two in the limit of coherent scattering through the molecule, a direct comparison between experiment and computation can still be difficult. Experimental data (specifically that from break junctions) is statistical and computational results are deterministic. Many studies compare the most probable experimental conductance with computation, but such an analysis discards almost all of the experimental statistics. In this work we develop tools to decipher the Landauer-Büttiker transmission function directly from experimental statistics and then apply them to enable a fairer comparison between experimental and computational results.
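The Landauer-Büttiker relation referred to above can be evaluated numerically once a transmission function is in hand; the sketch below integrates a stand-in Lorentzian T(E) against the thermal broadening function (all model values are illustrative):

```python
import numpy as np

kT = 0.0257            # eV, room temperature
G0 = 7.748e-5          # conductance quantum 2e^2/h, in siemens

E = np.linspace(-1.0, 1.0, 4001)                   # energies relative to E_F (eV)
T = 0.02**2 / ((E - 0.4) ** 2 + 0.02**2)           # Lorentzian resonance, peak T = 1

# derivative of the Fermi function f(E) = 1 / (1 + exp(E/kT))
df_dE = -np.exp(E / kT) / (kT * (1 + np.exp(E / kT)) ** 2)

# Landauer-Buttiker: G = G0 * integral T(E) * (-df/dE) dE
G = G0 * np.trapz(T * (-df_dE), E)
print(f"G = {G:.3e} S = {G / G0:.4f} G0")
```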
Efficient experimental design for uncertainty reduction in gene regulatory networks.
Dehghannasiri, Roozbeh; Yoon, Byung-Jun; Dougherty, Edward R
2015-01-01
An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. The authors have already proposed an optimal experimental design method based upon the objective for modeling gene regulatory networks, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one which leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has close performance to that of the optimal method but at lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. A MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/.
Efficient experimental design for uncertainty reduction in gene regulatory networks
2015-01-01
Background An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. Results The authors have already proposed an optimal experimental design method based upon the objective for modeling gene regulatory networks, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one which leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Conclusions Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has close performance to that of the optimal method but at lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. A MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/. PMID:26423515
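A toy numerical illustration of MOCU-based experimental design over a discrete uncertainty class; the probabilities, costs, and the simplified "experiment resolves one candidate model" assumption are all illustrative, not the paper's network setting:

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_actions = 5, 4
p = np.full(n_models, 1 / n_models)            # prior over the uncertainty class
cost = rng.uniform(0, 1, (n_models, n_actions))  # intervention cost per model/action

def mocu(p, cost):
    # expected increase of cost due to uncertainty: robust action vs per-model optimum
    robust = np.argmin(p @ cost)
    return float(p @ (cost[:, robust] - cost.min(axis=1)))

def expected_remaining_mocu(e):
    # toy experiment e: reveals whether candidate model e is the true one
    p_yes = p[e]
    p_no = np.delete(p, e) / (1 - p_yes)
    mocu_yes = 0.0                             # model fully identified, no uncertainty
    mocu_no = mocu(p_no, np.delete(cost, e, axis=0))
    return p_yes * mocu_yes + (1 - p_yes) * mocu_no

best = min(range(n_models), key=expected_remaining_mocu)
print("conduct experiment resolving model", best)
```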
Understanding the Conductance of Single-Molecule Junctions from First Principles
NASA Astrophysics Data System (ADS)
Quek, Su Ying
2008-03-01
Discovering the anatomy of single-molecule junctions, in order to exploit their transport behavior, poses fundamental challenges to nanoscience. First-principles calculations based on density-functional theory (DFT) can, together with experiment, provide detailed atomic-scale insights into the transport properties, and their relation to junction structure and electronic properties. Here, a DFT scattering state approach [1] is used to explore the single-molecule conductance of two prototypical junctions as a function of junction geometry, in the context of recent experiments. First, the computed conductance of 15 distinct benzene-diamine-Au junctions is compared to a large robust experimental data set [2]. The amine-gold bonding is shown to be highly selective, but flexible, resulting in a conductance that is insensitive to other details of the junction structure. The range of computed conductance corresponds well to the narrow distribution in experiment, although the average calculated conductance is approximately 7 times larger. This discrepancy is attributed to the absence of many-electron corrections in the DFT molecular orbital energies; a simple physically-motivated estimate for the self-energy corrections results in a conductance that is much closer to experiment [3]. Second, similar first-principles techniques are applied to a range of bipyridine-Au junctions. The extent to which Au-pyridine link bonding is affected by the constraints of forming bipyridine-Au junctions is investigated. In some contrast to the amine case, the computed conductance shows a strong sensitivity to the tilt of the bipyridine rings relative to the Au surfaces. Experiments probing the conductance of bipyridine-Au junctions are discussed in the context of these findings. [1] H. J. Choi et al, Phys Rev B, 76, 155420 (2007) [2] L. Venkataraman et al, Nano Lett 6, 458 (2006) [3] S. Y. Quek et al, Nano Lett. 7, 3477 (2007)
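The self-energy argument above can be illustrated with a toy single-level Landauer model: a Lorentzian transmission through one frontier orbital, where a correction that pushes the orbital away from the Fermi level lowers the zero-bias conductance. All level positions and couplings below are illustrative assumptions, not values from the calculations cited.

```python
import numpy as np

# Toy single-level Landauer model: a molecular orbital at energy eps (relative
# to the Fermi level) broadened by coupling gamma to the leads gives a
# Lorentzian transmission. All numbers are illustrative, not from the paper.

G0 = 7.748e-5  # conductance quantum 2e^2/h, in siemens

def transmission(E, eps, gamma):
    return gamma**2 / ((E - eps)**2 + gamma**2)

def conductance(eps, gamma):
    # Zero-temperature, linear-response conductance: G = G0 * T(E_F), E_F = 0.
    return G0 * transmission(0.0, eps, gamma)

# A DFT-like level position vs. a self-energy-corrected one pushed away from E_F.
print(conductance(eps=-1.0, gamma=0.1))  # uncorrected estimate
print(conductance(eps=-2.5, gamma=0.1))  # corrected level -> lower conductance
```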
Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brabec, Jiri; Lin, Lin; Shao, Meiyue
We present two iterative algorithms for approximating the absorption spectrum of molecules within the linear-response time-dependent density functional theory (TDDFT) framework. These methods do not attempt to compute eigenvalues or eigenvectors of the linear response matrix. They are designed to approximate the absorption spectrum as a function directly. They take advantage of the special structure of the linear response matrix. Neither method requires the linear response matrix to be constructed explicitly; they only require a procedure that performs the multiplication of the linear response matrix with a vector. These methods can also be easily modified to efficiently estimate the density of states (DOS) of the linear response matrix without computing its eigenvalues. We show by computational experiments that the methods proposed in this paper can be much more efficient than methods based on the exact diagonalization of the linear response matrix. We show that they can also be more efficient than real-time TDDFT simulations. We compare the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost.
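A minimal sketch of a matvec-only strategy in this spirit is the Lanczos procedure: a few iterations with the response matrix yield a small tridiagonal matrix whose eigenvalues, broadened by Lorentzians, approximate the spectrum without ever diagonalizing the full matrix. This is a generic illustration under the assumption of a symmetric operator, not the authors' specific algorithms.

```python
import numpy as np

# Matvec-only spectrum sketch: run k Lanczos steps with a symmetric A,
# diagonalize the small tridiagonal matrix, and broaden the resulting
# Ritz values into a smooth spectrum. Illustrative, not the paper's algorithm.

def lanczos_ritz(matvec, n, k, rng):
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    alphas, betas = [], []
    v_prev, beta = np.zeros(n), 0.0
    for _ in range(k):
        w = matvec(v) - beta * v_prev
        alpha = v @ w
        w -= alpha * v
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        v_prev, v = v, w / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)   # Ritz values approximating the spectrum

def broadened_spectrum(ritz, energies, eta=0.05):
    # Sum of Lorentzians centered at the Ritz values.
    return sum(eta / ((energies - r)**2 + eta**2) for r in ritz) / np.pi

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200)); A = (A + A.T) / 2   # symmetric test matrix
ritz = lanczos_ritz(lambda x: A @ x, 200, k=40, rng=rng)
grid = np.linspace(-25, 25, 500)
print(broadened_spectrum(ritz, grid).max())
```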
Theoretical Investigation of oxides for batteries and fuel cell applications
NASA Astrophysics Data System (ADS)
Ganesh, Panchapakesan; Lubimtsev, Andrew A.; Balachandran, Janakiraman
I will present theoretical studies of Li-ion- and proton-conducting oxides using a combination of theory and computations involving Density Functional Theory based atomistic modeling, cluster-expansion based studies, global optimization, high-throughput computations, and machine-learning-based investigation of ionic transport in oxide materials. In Li-ion intercalated oxides, we explain the experimentally observed (Nature Materials 12, 518-522 (2013)) 'intercalation pseudocapacitance' phenomenon and why Nb2O5 is special in showing this behavior when Li-ions are intercalated (J. Mater. Chem. A, 2013, 1, 14951-14956) but not when Na-ions are used. In addition, we explore Li-ion intercalation theoretically in the VO2 (B) phase, which is somewhat structurally similar to Nb2O5, and predict an interesting role of site-trapping on the voltage and capacity of the material, validated by ongoing experiments. Computations on proton-conducting oxides explain why Y-doped BaZrO3, one of the fastest proton-conducting oxides, shows a decrease in conductivity above 20% Y-doping. Further, using high-throughput computations and machine learning tools, we discover general principles to improve proton conductivity. Acknowledgements: LDRD at ORNL and CNMS at ORNL
CERN Computing in Commercial Clouds
NASA Astrophysics Data System (ADS)
Cordeiro, C.; Field, L.; Garrido Bear, B.; Giordano, D.; Jones, B.; Keeble, O.; Manzi, A.; Martelli, E.; McCance, G.; Moreno-García, D.; Traylen, S.
2017-10-01
By the end of 2016, more than 10 million core-hours of computing resources had been delivered by several commercial cloud providers to the four LHC experiments to run their production workloads, from simulation to full-chain processing. In this paper we describe the experience gained at CERN in procuring and exploiting commercial cloud resources for the computing needs of the LHC experiments. The mechanisms used for provisioning, monitoring, accounting, alarming and benchmarking will be discussed, as well as the involvement of the LHC collaborations in terms of managing the workflows of the experiments within a multicloud environment.
The influence of performance on action-effect integration in sense of agency.
Wen, Wen; Yamashita, Atsushi; Asama, Hajime
2017-08-01
Sense of agency refers to the subjective feeling of being able to control an outcome through one's own actions or will. Prior studies have shown that both sensory processing (e.g., comparisons between sensory feedback and predictions based on one's motor intentions) and high-level cognitive/constructive processes (e.g., inferences based on one's performance or the consequences of one's actions) contribute to judgments of sense of agency. However, it remains unclear how these two types of processes interact, which is important for clarifying the mechanisms underlying sense of agency. Thus, we examined whether performance-based inferences influence action-effect integration in sense of agency using a delay detection paradigm in two experiments. In both experiments, participants pressed left and right arrow keys to control the direction in which a moving dot was travelling. The dot's response delay was manipulated randomly across 7 levels (0-480 ms) between trials; for each trial, participants were asked to judge whether the dot response was delayed and to rate their level of agency over the dot. In Experiment 1, participants tried to direct the dot to reach a destination on the screen as quickly as possible. Furthermore, the computer assisted participants by ignoring erroneous commands for half of the trials (assisted condition), while in the other half, all of the participants' commands were executed (self-control condition). In Experiment 2, participants directed the dot as they pleased (without a specific goal), but, in half of the trials, the computer randomly ignored 32% of their commands (disturbed condition) rather than assisting them. The results from the two experiments showed that performance enhanced action-effect integration. Specifically, when task performance was improved through the computer's assistance in Experiment 1, delay detection was reduced in the 480-ms delay condition, despite the fact that 32% of participants' commands were ignored. Conversely, when no feedback on task performance was given (as in Experiment 2), the participants reported greater delay when some of their commands were randomly ignored. Furthermore, the results of a logistic regression analysis showed that the threshold of delay detection was greater in the assisted condition than in the self-control condition in Experiment 1, which suggests a wider time window for action-effect integration. A multivariate analysis also revealed that assistance was related to reduced delay detection via task performance, while reduced delay detection was directly correlated with a better sense of agency. These results indicate an association between the implicit and explicit aspects of sense of agency. Copyright © 2017 Elsevier Inc. All rights reserved.
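The threshold analysis mentioned above can be sketched as follows: fit a logistic model to binary delay-detection responses and read off the delay at which detection probability crosses 50%. The simulated responses and parameter values are hypothetical; the sketch only mirrors the kind of logistic regression the study describes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch: fit P(detect | delay) with logistic regression and report the
# 50%-detection threshold. Delays in ms; responses are simulated, not the
# study's data. sklearn's default regularization is left in place.

delays = np.repeat([0, 80, 160, 240, 320, 400, 480], 20)
p_true = 1 / (1 + np.exp(-(delays - 220) / 60))          # simulated observer
rng = np.random.default_rng(2)
detected = rng.random(delays.size) < p_true

model = LogisticRegression().fit(delays.reshape(-1, 1), detected)
b0 = model.intercept_[0]
b1 = model.coef_[0, 0]
threshold = -b0 / b1          # delay at which P(detect) = 0.5
print(f"estimated detection threshold: {threshold:.0f} ms")
```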
PC_Eyewitness and the sequential superiority effect: computer-based lineup administration.
MacLin, Otto H; Zimmerman, Laura A; Malpass, Roy S
2005-06-01
Computer technology has become an increasingly important tool for conducting eyewitness identifications. In the area of lineup identifications, computerized administration offers several advantages for researchers and law enforcement. PC_Eyewitness is designed specifically to administer lineups. To assess this new lineup technology, two studies were conducted in order to replicate the results of previous studies comparing simultaneous and sequential lineups. One hundred twenty university students participated in each experiment. Experiment 1 used traditional paper-and-pencil lineup administration methods to compare simultaneous to sequential lineups. Experiment 2 used PC_Eyewitness to administer simultaneous and sequential lineups. The results of these studies were compared to the meta-analytic results reported by N. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001). No differences were found between paper-and-pencil and PC_Eyewitness lineup administration methods. The core findings of the N. Steblay et al. (2001) meta-analysis were replicated by both administration procedures. These results show that computerized lineup administration using PC_Eyewitness is an effective means for gathering eyewitness identification data.
A Supersonic Argon/Air Coaxial Jet Experiment for Computational Fluid Dynamics Code Validation
NASA Technical Reports Server (NTRS)
Clifton, Chandler W.; Cutler, Andrew D.
2007-01-01
A non-reacting experiment is described in which data has been acquired for the validation of CFD codes used to design high-speed air-breathing engines. A coaxial jet-nozzle has been designed to produce pressure-matched exit flows of Mach 1.8 at 1 atm in both a center jet of argon and a coflow jet of air, creating a supersonic, incompressible mixing layer. The flowfield was surveyed using total temperature, gas composition, and Pitot probes. The data set was compared to CFD code predictions made using Vulcan, a structured grid Navier-Stokes code, as well as to data from a previous experiment in which a He-O2 mixture was used instead of argon in the center jet of the same coaxial jet assembly. Comparison of experimental data from the argon flowfield and its computational prediction shows that the CFD produces an accurate solution for most of the measured flowfield. However, the CFD prediction deviates from the experimental data in the region downstream of x/D = 4, underpredicting the mixing-layer growth rate.
Kinetic energy budgets in areas of convection
NASA Technical Reports Server (NTRS)
Fuelberg, H. E.
1979-01-01
Synoptic scale budgets of kinetic energy are computed using 3 and 6 h data from three of NASA's Atmospheric Variability Experiments (AVE's). Numerous areas of intense convection occurred during the three experiments. Large kinetic energy variability, with periods as short as 6 h, is observed in budgets computed over each entire experiment area and over limited volumes that barely enclose the convection and move with it. Kinetic energy generation and transport processes in the smaller volumes are often a maximum when the enclosed storms are near peak intensity, but the nature of the various energy processes differs between storm cases and seems closely related to the synoptic conditions. A commonly observed energy budget for peak storm intensity indicates that generation of kinetic energy by cross-contour flow is the major energy source while dissipation to subgrid scales is the major sink. Synoptic scale vertical motion transports kinetic energy from lower to upper levels of the atmosphere while low-level horizontal flux convergence and upper-level horizontal divergence also occur. Spatial fields of the energy budget terms show that the storm environment is a major center of energy activity for the entire area.
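For reference, a standard synoptic-scale kinetic energy budget in pressure coordinates contains exactly the terms discussed above: cross-contour generation, horizontal and vertical flux divergence, and dissipation to subgrid scales. The precise form used in the AVE analyses may differ in detail.

```latex
\frac{\partial K}{\partial t} =
  \underbrace{-\,\mathbf{V}\cdot\nabla\Phi}_{\text{cross-contour generation}}
  \;\underbrace{-\,\nabla\cdot(K\,\mathbf{V})}_{\text{horizontal flux divergence}}
  \;\underbrace{-\,\frac{\partial (K\omega)}{\partial p}}_{\text{vertical flux divergence}}
  \;\underbrace{-\,D}_{\text{dissipation to subgrid scales}}
```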
NASA Astrophysics Data System (ADS)
Medl'a, Matej; Mikula, Karol; Čunderlík, Róbert; Macák, Marek
2018-01-01
The paper presents a numerical solution of the oblique derivative boundary value problem on and above the Earth's topography using the finite volume method (FVM). It introduces a novel method for constructing non-uniform hexahedral 3D grids above the Earth's surface, based on an evolution by mean curvature of a surface that approximates the Earth's topography. To obtain optimal shapes of the non-uniform 3D grid, the proposed evolution is accompanied by a tangential redistribution of grid nodes. Afterwards, the Laplace equation is discretized using an FVM developed for such a non-uniform grid. The oblique derivative boundary condition is treated as a stationary advection equation, and we derive a new upwind-type discretization suitable for non-uniform 3D grids. The discretization of the Laplace equation together with the discretization of the oblique derivative boundary condition leads to a linear system of equations whose solution gives the disturbing potential in the whole computational domain, including the Earth's surface. Numerical experiments demonstrate the properties and efficiency of the developed FVM approach. The first experiments study the experimental order of convergence of the method. Then, a reconstruction of the harmonic function on the Earth's topography, generated from the EGM2008 or EIGEN-6C4 global geopotential model, is presented. The obtained FVM solutions show that refining the computational grid leads to more precise results. The last experiment deals with local gravity field modelling in Slovakia using terrestrial gravity data. The GNSS-levelling test shows the accuracy of the obtained local quasigeoid model.
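The upwind treatment of the oblique derivative condition can be sketched in miniature: approximate the directional derivative v . grad(u) with one-sided differences chosen against the direction of v. The uniform 2D grid, test field, and direction below are illustrative assumptions; the paper's discretization operates on non-uniform hexahedral 3D grids.

```python
import numpy as np

# Sketch: first-order upwind approximation of the directional derivative
# v . grad(u) at interior nodes of a uniform 2D grid, the way an oblique
# derivative boundary condition can be treated as a stationary advection
# equation. Grid, field, and direction are illustrative, not the paper's setup.

def upwind_directional_derivative(u, v, h):
    """First-order upwind estimate of (v . grad u) at interior nodes."""
    vx, vy = v
    # One-sided differences chosen against the direction of v (upwind).
    dudx = (u[1:-1, 1:-1] - u[1:-1, :-2]) / h if vx >= 0 else \
           (u[1:-1, 2:] - u[1:-1, 1:-1]) / h
    dudy = (u[1:-1, 1:-1] - u[:-2, 1:-1]) / h if vy >= 0 else \
           (u[2:, 1:-1] - u[1:-1, 1:-1]) / h
    return vx * dudx + vy * dudy

h = 0.01
x = np.arange(0, 1 + h, h)
X, Y = np.meshgrid(x, x)                       # X varies along axis 1
u = np.sin(X) * np.cos(Y)                      # smooth test field
v = (0.8, 0.6)                                 # oblique unit direction
approx = upwind_directional_derivative(u, v, h)
exact = 0.8 * np.cos(X) * np.cos(Y) - 0.6 * np.sin(X) * np.sin(Y)
print(np.abs(approx - exact[1:-1, 1:-1]).max())  # first-order error, O(h)
```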
DIRAC in Large Particle Physics Experiments
NASA Astrophysics Data System (ADS)
Stagni, F.; Tsaregorodtsev, A.; Arrabito, L.; Sailer, A.; Hara, T.; Zhang, X.; Consortium, DIRAC
2017-10-01
The DIRAC project is developing interware to build and operate distributed computing systems. It provides a development framework and a rich set of services for both Workload and Data Management tasks of large scientific communities. A number of High Energy Physics and Astrophysics collaborations have adopted DIRAC as the base for their computing models. DIRAC was initially developed for the LHCb experiment at LHC, CERN. Later, the Belle II, BES III and CTA experiments as well as the linear collider detector collaborations started using DIRAC for their computing systems. Some of the experiments built their DIRAC-based systems from scratch, while others migrated from previous solutions that were ad hoc or based on different middleware. Adaptation of DIRAC for a particular experiment was enabled through the creation of extensions to meet their specific requirements. Each experiment has a heterogeneous set of computing and storage resources at their disposal that were aggregated through DIRAC into a coherent pool. Users from different experiments can interact with the system in different ways depending on their specific tasks, expertise level and previous experience, using command line tools, python APIs or Web Portals. In this contribution we will summarize the experience of using DIRAC in particle physics collaborations. The problems of migration to DIRAC from previous systems and their solutions will be presented. An overview of specific DIRAC extensions will be given. We hope that this review will be useful for experiments considering an update, or for those designing their computing models.
NASA Astrophysics Data System (ADS)
Degtyarev, Alexander; Khramushin, Vasily
2016-02-01
The paper deals with the computer implementation of direct computational experiments in fluid mechanics, constructed on the basis of the approach developed by the authors. The proposed approach allows the use of an explicit numerical scheme, which is an important condition for increasing the efficiency of the developed algorithms through numerical procedures with natural parallelism. The paper examines the main objects and operations that allow one to manage computational experiments and monitor the status of the computation process. Special attention is given to a) realization of tensor representations of numerical schemes for direct simulation; b) realization of the representation of the motion of large particles of a continuous medium in two coordinate systems (global and mobile); c) computing operations in the projections of coordinate systems, and direct and inverse transformations between these systems. Particular attention is paid to the use of the hardware and software of modern computer systems.
NASA Astrophysics Data System (ADS)
Neff, John A.
1989-12-01
Experiments originating from Gestalt psychology have shown that representing information in a symbolic form provides a more effective means of understanding. Computer scientists have been struggling for the last two decades to determine how best to create, manipulate, and store collections of symbolic structures. In the past, much of this struggle led to software innovations because that was the path of least resistance. For example, the development of heuristics for organizing the searching through knowledge bases was much less expensive than building massively parallel machines that could search in parallel. That is now beginning to change with the emergence of parallel architectures which are showing the potential for handling symbolic structures. This paper will review the relationships between symbolic computing and parallel computing architectures, and will identify opportunities for optics to significantly impact the performance of such computing machines. Although neural networks are an exciting subset of massively parallel computing structures, this paper will not touch on this area since it is receiving a great deal of attention in the literature. That is, the concepts presented herein do not consider the distributed representation of knowledge.
A Computational Experiment on Single-Walled Carbon Nanotubes
ERIC Educational Resources Information Center
Simpson, Scott; Lonie, David C.; Chen, Jiechen; Zurek, Eva
2013-01-01
A computational experiment that investigates single-walled carbon nanotubes (SWNTs) has been developed and employed in an upper-level undergraduate physical chemistry laboratory course. Computations were carried out to determine the electronic structure, radial breathing modes, and the influence of the nanotube's diameter on the…
Computer-Based vs Paper-Based Examinations: Perceptions of University Teachers
ERIC Educational Resources Information Center
Jamil, Mubashrah; Tariq, R. H.; Shami, P. A.
2012-01-01
This research reported teachers' perceptions about computer-based (CB) vs. paper-based (PB) examinations. Teachers were divided into 7 major categories i.e., gender, departments, designations, qualifications, teaching experiences, computer training certifications and CB examination experiences, which were the key factors to be observed and…
Effects of LED-backlit computer screen and emotional self-regulation on human melatonin production.
Sroykham, Watchara; Wongsawat, Yodchanan
2013-01-01
Melatonin is a circadian hormone transmitted via the suprachiasmatic nucleus (SCN) in the hypothalamus and the sympathetic nervous system to the pineal gland. It is a hormone necessary for many human functions, including immune, cardiovascular, neuronal, and sleep/wake functions. Since melatonin enhancement or suppression is reported to be closely related to photic information from the retina, in this paper we study both lighting conditions and emotional self-regulation under different lighting conditions, together with their effects on human melatonin production. In this experiment, five participants were exposed to three LED-backlit computer screen lighting conditions (no light, red light (∼650 nm), and blue light (∼470 nm)) for 30 minutes (8:00-8:30 pm); saliva samples were collected both before and after each exposure. After each exposure, participants were also asked to complete the PANAS and BRUMS emotional self-regulation questionnaires for that lighting condition. The results show that the mean difference in PANAS positive mood between no light and red light was significant (p=0.001). BRUMS tension, depression, fatigue, confusion, and vigor did not change significantly, whereas anger did. Finally, blue light from the LED-backlit computer screen suppressed melatonin production significantly more (91%) than red light (78%) or no light (44%).
Localization of Non-Linearly Modeled Autonomous Mobile Robots Using Out-of-Sequence Measurements
Besada-Portas, Eva; Lopez-Orozco, Jose A.; Lanillos, Pablo; de la Cruz, Jesus M.
2012-01-01
This paper presents a state of the art of the estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of the delayed and OOS measurements provided by multiple sensors. In addition, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost. PMID:22736962
Localization of non-linearly modeled autonomous mobile robots using out-of-sequence measurements.
Besada-Portas, Eva; Lopez-Orozco, Jose A; Lanillos, Pablo; de la Cruz, Jesus M
2012-01-01
This paper presents a state of the art of the estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of the delayed and OOS measurements provided by multiple sensors. In addition, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost.
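One common family of approaches analyzed in such surveys handles an OOS measurement by checkpointing the filter and reprocessing from the measurement's timestamp. The sketch below does this with a toy 1D constant-velocity Kalman filter; the model, noise values, and rollback bookkeeping are illustrative assumptions, not the specific algorithm the authors selected.

```python
import numpy as np

# Sketch: handle an out-of-sequence (OOS) measurement by rolling the filter
# back to a stored checkpoint and re-applying all later measurements.
# Simple 1D constant-velocity Kalman filter; all parameters illustrative.

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (unit time step)
H = np.array([[1.0, 0.0]])               # position-only measurement
Q = 0.01 * np.eye(2)                     # process noise
R = np.array([[0.25]])                   # measurement noise

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# history[t] stores (x, P, z) after processing step t, enabling rollback.
history = {}
x, P = np.zeros(2), np.eye(2)
for t, z in enumerate([np.array([[0.9]]), np.array([[2.1]]), np.array([[2.9]])]):
    x, P = predict(x, P)
    x, P = update(x, P, z)
    history[t] = (x.copy(), P.copy(), z)

def reprocess(oos_t, oos_z):
    """Roll back to the checkpoint before oos_t and re-run the filter."""
    x, P = (np.zeros(2), np.eye(2)) if oos_t == 0 else history[oos_t - 1][:2]
    for t in range(oos_t, len(history)):
        x, P = predict(x, P)
        x, P = update(x, P, history[t][2])
        if t == oos_t:                    # apply the late measurement here
            x, P = update(x, P, oos_z)
        history[t] = (x.copy(), P.copy(), history[t][2])
    return x, P

# A delayed measurement for step t = 1 arrives after step 2 was processed.
x, P = reprocess(1, np.array([[1.8]]))
print(x)
```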
NASA Astrophysics Data System (ADS)
Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław
2018-02-01
In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of limited hardware resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper functioning. This is a very broad problem that poses many challenges in areas such as finance, transport, water and food, and health. We focus on computer systems, with attention paid to cache memory, and propose an analytical model that is able to connect the non-extensive entropy formalism, long-range dependencies, management of system resources, and queuing theory. The obtained analytical results are related to a practical experiment, showing interesting and valuable results.
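The non-extensive entropy formalism referred to above is built on the Tsallis entropy S_q = (1 - Σ_i p_i^q)/(q - 1), which recovers the Shannon entropy as q → 1. A minimal sketch follows; the distribution (e.g., cache-hit frequencies) is illustrative.

```python
import numpy as np

# Sketch: Tsallis (non-extensive) entropy S_q of a discrete distribution,
# the quantity underlying the thermostatistics referenced above. For q -> 1
# it recovers the Boltzmann-Gibbs-Shannon entropy. Distribution is illustrative.

def tsallis_entropy(p, q):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))           # Shannon limit
    return (1.0 - np.sum(p**q)) / (q - 1.0)

p = [0.5, 0.25, 0.125, 0.125]                   # e.g., cache-hit frequencies
for q in (0.5, 1.0, 2.0):
    print(q, tsallis_entropy(p, q))
```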
A Paperless Lab Manual - Lessons Learned
NASA Astrophysics Data System (ADS)
Hatten, Daniel L.; Hatten, Maggie W.
1999-10-01
Every freshman entering Rose-Hulman Institute of Technology is equipped with a laptop computer and a software package that allow classroom and laboratory instructors the freedom to make computer-based assignments, publish course materials in electronic form, etc. All introductory physics laboratories and many of our classrooms are networked, and students routinely take their laptop computers to class/lab. The introductory physics laboratory manual was converted to HTML in the summer of 1997 and was made available to students over the Internet, instead of printing a paper manual, during the 1998-99 school year. The aim was to reduce paper costs and allow timely updates of the laboratory experiments. A poll conducted at the end of the school year showed a generally positive student response to the online laboratory manual, with some reservations.
Singular value decomposition for collaborative filtering on a GPU
NASA Astrophysics Data System (ADS)
Kato, Kimikazu; Hosino, Tikara
2010-06-01
Collaborative filtering predicts customers' unknown preferences from known preferences. In collaborative filtering computations, a singular value decomposition (SVD) is needed to reduce the size of a large-scale matrix so that the burden of the next computation phase is decreased. In this application, SVD means a rough approximate factorization of a given matrix into smaller matrices. Webb (a.k.a. Simon Funk) presented an effective algorithm for computing this SVD as part of a solution to the open competition called the "Netflix Prize". The algorithm uses an iterative method in which the approximation error improves at each step of the iteration. We give a GPU version of Webb's algorithm, implemented in CUDA, and an experiment shows it to be efficient.
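The iterative method is the well-known Funk-style stochastic gradient descent over observed ratings only: each rating triggers a small update of the corresponding user and item factors, which is exactly the per-rating work a GPU implementation parallelizes. The sketch below is a plain CPU version with illustrative sizes and hyperparameters, not the authors' CUDA code.

```python
import numpy as np

# Sketch of the Webb/Funk iterative SVD: stochastic gradient descent on
# observed ratings only, factorizing R ~ P @ Q.T. Data and hyperparameters
# are illustrative placeholders.

rng = np.random.default_rng(3)
n_users, n_items, k = 50, 40, 5
ratings = [(rng.integers(n_users), rng.integers(n_items), rng.uniform(1, 5))
           for _ in range(500)]                 # (user, item, rating) triples

P = 0.1 * rng.standard_normal((n_users, k))     # user factors
Q = 0.1 * rng.standard_normal((n_items, k))     # item factors
lr, reg = 0.01, 0.02                            # learning rate, regularization

for epoch in range(30):
    for u, i, r in ratings:
        p, q = P[u].copy(), Q[i].copy()         # use pre-update factors
        err = r - p @ q                         # prediction error
        P[u] += lr * (err * q - reg * p)
        Q[i] += lr * (err * p - reg * q)

rmse = np.sqrt(np.mean([(r - P[u] @ Q[i])**2 for u, i, r in ratings]))
print(f"training RMSE after 30 epochs: {rmse:.3f}")
```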
Experimental magic state distillation for fault-tolerant quantum computing.
Souza, Alexandre M; Zhang, Jingfu; Ryan, Colm A; Laflamme, Raymond
2011-01-25
Any physical quantum device for quantum information processing (QIP) is subject to errors in implementation. In order to be reliable and efficient, quantum computers will need error-correcting or error-avoiding methods. Fault-tolerance achieved through quantum error correction will be an integral part of quantum computers. Of the many methods that have been discovered to implement it, a highly successful approach has been to use transversal gates and specific initial states. A critical element for its implementation is the availability of high-fidelity initial states, such as |0〉 and the 'magic state'. Here, we report an experiment, performed in a nuclear magnetic resonance (NMR) quantum processor, showing sufficient quantum control to improve the fidelity of imperfect initial magic states by distilling five of them into one with higher fidelity.
NASA Astrophysics Data System (ADS)
Chen, Chui-Zhen; Xie, Ying-Ming; Liu, Jie; Lee, Patrick A.; Law, K. T.
2018-03-01
Quantum anomalous Hall insulator/superconductor heterostructures have emerged as a competitive platform to realize topological superconductors with chiral Majorana edge states, as shown in recent experiments [He et al., Science 357, 294 (2017), 10.1126/science.aag2792]. However, chiral Majorana modes, being extended, cannot be used for topological quantum computation. In this work, we show that quasi-one-dimensional quantum anomalous Hall structures exhibit a large topological regime (much larger than in the two-dimensional case) which supports localized Majorana zero-energy modes. The non-Abelian properties of a cross-shaped quantum anomalous Hall junction are shown explicitly by time-dependent calculations. We believe that the proposed quasi-one-dimensional quantum anomalous Hall structures can be easily fabricated for scalable topological quantum computation.
NASA Astrophysics Data System (ADS)
Aishah Syed Ali, Sharifah
2017-09-01
This paper considers the economic lot-sizing problem in remanufacturing with separate setups (ELSRs), where remanufactured and new products are produced on dedicated production lines. Since this problem is NP-hard in general, straightforward approaches are computationally inefficient and yield low-quality solutions; we therefore present (a) a multicommodity formulation and (b) a strengthened formulation based on the a priori addition of valid inequalities in the space of the original variables, which are then compared with the Wagner-Whitin based formulation available in the literature. Computational experiments on a large number of test data sets are performed to evaluate the different approaches. The numerical results show that our strengthened formulation outperforms all the other tested approaches in terms of linear relaxation bounds. Finally, we conclude with future research directions.
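For orientation, the sketch below states the textbook single-item lot-sizing core that formulations of this kind strengthen: production, setup, and inventory variables tied together by inventory balance and big-M setup-forcing constraints. It uses PuLP with illustrative data and is not the paper's ELSRs multicommodity formulation.

```python
import pulp

# Sketch: the textbook uncapacitated lot-sizing core that stronger
# formulations (multicommodity, valid inequalities) reformulate.
# Production x_t, setup y_t, inventory s_t over T periods; data illustrative.

T = 4
demand = [20, 30, 25, 40]
setup_cost, hold_cost, big_M = 100.0, 1.0, sum(demand)

m = pulp.LpProblem("lot_sizing", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{t}", lowBound=0) for t in range(T)]
s = [pulp.LpVariable(f"s{t}", lowBound=0) for t in range(T)]
y = [pulp.LpVariable(f"y{t}", cat="Binary") for t in range(T)]

m += pulp.lpSum(setup_cost * y[t] + hold_cost * s[t] for t in range(T))
for t in range(T):
    prev = s[t - 1] if t > 0 else 0
    m += prev + x[t] == demand[t] + s[t]      # inventory balance
    m += x[t] <= big_M * y[t]                 # setup forcing (big-M link)

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([v.value() for v in x], pulp.value(m.objective))
```

The weak big-M link is precisely what makes plain formulations give poor linear relaxation bounds, which is why reformulations of the kind compared in the paper matter.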
An experiment for determining the Euler load by direct computation
NASA Technical Reports Server (NTRS)
Thurston, Gaylen A.; Stein, Peter A.
1986-01-01
A direct algorithm is presented for computing the Euler load of a column from experimental data. The method is based on exact inextensional theory for imperfect columns, which predicts two distinct deflected shapes at loads near the Euler load. The bending stiffness of the column appears in the expression for the Euler load along with the column length; therefore, the experimental data allow a direct computation of the bending stiffness. Experiments on graphite-epoxy columns of rectangular cross-section are reported in the paper. The bending stiffness of each composite column computed from experiment is compared with predictions from laminated plate theory.
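The last step rests on the classical Euler relation: for a pinned-pinned column, P_E = π²EI/L², so a measured Euler load and length give the bending stiffness EI directly. The sketch below assumes pinned-pinned boundary conditions and hypothetical numbers; the paper's algorithm extracts P_E from imperfect-column deflection data rather than assuming it.

```python
import math

# Sketch: the classical Euler relation P_E = pi^2 * EI / L^2 for a
# pinned-pinned column, rearranged so a measured Euler load gives the
# bending stiffness EI. Numbers are hypothetical.

def bending_stiffness(euler_load, length):
    """EI from a measured Euler load of a pinned-pinned column."""
    return euler_load * length**2 / math.pi**2

P_E = 1250.0   # measured Euler load, N (hypothetical)
L = 0.5        # column length, m
print(f"EI = {bending_stiffness(P_E, L):.2f} N*m^2")
```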
ERIC Educational Resources Information Center
Ursavas, Omer Faruk; Karal, Hasan
2009-01-01
In this study it is aimed to determine the level of pre-service teachers' computer phobia. Whether or not computer phobia meaningfully varies statistically according to gender and computer experience has been tested in the study. The study was performed on 430 pre-service teachers at the Education Faculty in Rize/Turkey. Data in the study were…