Body image dissatisfaction, physical activity and screen-time in Spanish adolescents.
Añez, Elizabeth; Fornieles-Deu, Albert; Fauquet-Ars, Jordi; López-Guimerà, Gemma; Puntí-Vidal, Joaquim; Sánchez-Carracedo, David
2018-01-01
This cross-sectional study contributes to the literature on whether body dissatisfaction is a barrier or facilitator to engaging in physical activity, and investigates the impact of mass-media messages received during computer time on body dissatisfaction. High-school students (N = 1501) reported their physical activity, computer time (homework/leisure) and body dissatisfaction. Researchers measured students' weight and height. Analyses revealed that body dissatisfaction was negatively associated with physical activity in both genders, whereas computer time was associated only with girls' body dissatisfaction. Specifically, as computer homework time increased, body dissatisfaction decreased; as computer leisure time increased, body dissatisfaction increased. Weight-related interventions should improve body image and physical activity simultaneously, while interventions on critical consumption of mass media should include a computer component.
Arranging computer architectures to create higher-performance controllers
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
1988-01-01
Techniques for integrating microprocessors, array processors, and other intelligent devices in control systems are reviewed, with an emphasis on the (re)arrangement of components to form distributed or parallel processing systems. Consideration is given to the selection of the host microprocessor, increasing the power and/or memory capacity of the host, multitasking software for the host, array processors to reduce computation time, the allocation of real-time and non-real-time events to different computer subsystems, intelligent devices to share the computational burden for real-time events, and intelligent interfaces to increase communication speeds. The case of a helicopter vibration-suppression and stabilization controller is analyzed as an example, and significant improvements in computation and throughput rates are demonstrated.
The impact of home computer use on children's activities and development.
Subrahmanyam, K; Kraut, R E; Greenfield, P M; Gross, E F
2000-01-01
The increasing amount of time children are spending on computers at home and school has raised questions about how the use of computer technology may make a difference in their lives--from helping with homework to causing depression to encouraging violent behavior. This article provides an overview of the limited research on the effects of home computer use on children's physical, cognitive, and social development. Initial research suggests, for example, that access to computers increases the total amount of time children spend in front of a television or computer screen at the expense of other activities, thereby putting them at risk for obesity. At the same time, cognitive research suggests that playing computer games can be an important building block to computer literacy because it enhances children's ability to read and visualize images in three-dimensional space and track multiple images simultaneously. The limited evidence available also indicates that home computer use is linked to slightly better academic performance. The research findings are more mixed, however, regarding the effects on children's social development. Although little evidence indicates that the moderate use of computers to play games has a negative impact on children's friendships and family relationships, recent survey data show that increased use of the Internet may be linked to increases in loneliness and depression. Of most concern are the findings that playing violent computer games may increase aggressiveness and desensitize a child to suffering, and that the use of computers may blur a child's ability to distinguish real life from simulation. The authors conclude that more systematic research is needed in these areas to help parents and policymakers maximize the positive effects and to minimize the negative effects of home computers in children's lives.
Organization of the secure distributed computing based on multi-agent system
NASA Astrophysics Data System (ADS)
Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera
2018-04-01
Developing methods for distributed computing currently receives much attention. One such method is the use of multi-agent systems. Distributed computing organized over conventional networked computers can be exposed to security threats from the computational processes themselves. The authors have developed a unified agent algorithm for a control system that manages the operation of computing network nodes, with networked PCs used as the computing nodes. The proposed multi-agent control system makes it possible to quickly harness the processing power of any existing computer network to solve large tasks by creating a distributed computing system. Agents deployed on the network can configure the distributed computing system, distribute the computational load among the computers they operate, and optimize the system according to the computing power of the computers on the network. The number of computers in the system can be increased by connecting new computers to the network, which raises the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computing. This organization reduces problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (a dynamically varying number of computers on the network). The developed multi-agent system detects cases of falsification of results in the distributed system, which could otherwise lead to wrong decisions; in addition, the system checks and corrects wrong results.
High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs
NASA Technical Reports Server (NTRS)
Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.
2014-01-01
This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) Runge-Kutta (RK) time-stepping schemes which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain, a significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burgers' equation and the 2D advection equation, respectively. The method uses two different summation-by-parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to the 1D Burgers' equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solving methods rather than the standard iterative methods currently used.
Kautiainen, S; Koivusilta, L; Lintonen, T; Virtanen, S M; Rimpelä, A
2005-08-01
The prevalence of overweight and obesity has increased among children and adolescents, as well as among adults, and television viewing has been suggested as one cause. Playing digital games (video, computer and console games) or using a computer may be other sedentary behaviors related to the development of overweight and obesity. To study the relationships of times spent on viewing television, playing digital games and using a computer to overweight among Finnish adolescents. Mailed cross-sectional survey. Nationally representative samples of 14-, 16-, and 18-year-olds (N=6515, response rate 70%) in 2001. Overweight and obesity were assessed by body mass index (BMI). The respondents reported times spent daily on viewing television, playing digital games (video, computer and console games) and using a computer (for e-mail, writing and surfing). Data on timing of biological maturation, intensity of weekly physical activity and the family's socioeconomic status were taken into account in the statistical analyses. Increased times spent on viewing television and using a computer were associated with increased prevalence of overweight (obesity inclusive) among girls: compared to girls viewing television <1 h daily, the adjusted odds ratio (OR) for being overweight was 1.4 when spending 1-3 h, and 2.0 when spending ≥4 h daily on viewing television. In girls using a computer ≥1 h daily, the OR for being overweight was 1.5 compared to girls using a computer <1 h daily. The results were similar in boys, although not statistically significant. Time spent on playing digital games was not associated with overweight. Overweight was associated with using information and communication technology (ICT), but only with certain forms of ICT. Increased use of ICT may be one factor explaining the increased prevalence of overweight and obesity at the population level, at least in girls. Playing digital games was not related to overweight, perhaps by virtue of game playing being less sedentary or related to a different lifestyle than viewing television and using a computer.
Explore the Future: Will Books Have a Place in the Computer Classroom?
ERIC Educational Resources Information Center
Jobe, Ronald A.
The question of the place of books in a classroom using computers appears to be simple, yet it is one of vital concern to teachers. The availability of programs (few of which focus on literary appreciation), the mesmerizing qualities of the computer, its distortion of time, the increasing power of computers over teacher time, and the computer's…
Minimization search method for data inversion
NASA Technical Reports Server (NTRS)
Fymat, A. L.
1975-01-01
A technique has been developed for determining values of selected subsets of independent variables in mathematical formulations. The required computation time increases with the first power of the number of variables, in contrast with classical minimization methods, for which computational time increases with the third power of the number of variables.
A multi-GPU real-time dose simulation software framework for lung radiotherapy.
Santhanam, A P; Min, Y; Neelakkantan, H; Papp, N; Meeks, S L; Kupelian, P A
2012-09-01
Medical simulation frameworks facilitate both the preoperative and postoperative analysis of the patient's pathophysical condition. Of particular importance is the simulation of radiation dose delivery for real-time radiotherapy monitoring and retrospective analyses of the patient's treatment. In this paper, a software framework tailored for the development of simulation-based real-time radiation dose monitoring medical applications is discussed. A multi-GPU-based computational framework coupled with inter-process communication methods is introduced for simulating the radiation dose delivery on a deformable 3D volumetric lung model and its real-time visualization. The model deformation and the corresponding dose calculation are allocated among the GPUs in a task-specific manner and performed in a pipelined fashion. Radiation dose calculations are computed on two different GPU hardware architectures. The integration of this computational framework with a front-end software layer and back-end patient database repository is also discussed. Real-time simulation of the delivered dose is achieved once every 120 ms using the proposed framework. With a linear increase in the number of GPU cores, the computational time of the simulation decreased linearly. The inter-process communication time also improved with an increase in the hardware memory. Variations in the delivered dose and computational speedup for variations in the data dimensions are investigated using D70 and D90 as well as gEUD as metrics for a set of 14 patients. Computational speed-up increased with an increase in the beam dimensions when compared with CPU-based commercial software, while the error in the dose calculation was <1%. Our analyses show that the framework applied to deformable lung model-based radiotherapy is an effective tool for performing both real-time and retrospective analyses.
Computational Methods for Stability and Control (COMSAC): The Time Has Come
NASA Technical Reports Server (NTRS)
Hall, Robert M.; Biedron, Robert T.; Ball, Douglas N.; Bogue, David R.; Chung, James; Green, Bradford E.; Grismer, Matthew J.; Brooks, Gregory P.; Chambers, Joseph R.
2005-01-01
Powerful computational fluid dynamics (CFD) tools have emerged that appear to offer significant benefits as an adjunct to the experimental methods used by the stability and control community to predict aerodynamic parameters. The decreasing cost and increasing availability of computing hours are making these applications increasingly viable. This paper summarizes the efforts of four organizations to utilize high-end CFD tools to address the challenges of the stability and control arena. General motivation and the backdrop for these efforts will be summarized, as well as examples of current applications.
An Upgrade of the Aeroheating Software ''MINIVER''
NASA Technical Reports Server (NTRS)
Louderback, Pierce
2013-01-01
Detailed computational modeling: CFD often used to create and execute computational domains. Increasing complexity when moving from 2D to 3D geometries. Computational time increases as finer grids are used (accuracy). Strong tool, but takes time to set up and run. MINIVER: Uses theoretical and empirical correlations. Orders of magnitude faster to set up and run. Not as accurate as CFD, but gives reasonable estimations. MINIVER's Drawbacks: Rigid command-line interface. Lackluster, unorganized documentation. No central control; multiple versions exist and have diverged.
NASA Astrophysics Data System (ADS)
Wang, Jinting; Lu, Liqiao; Zhu, Fei
2018-01-01
The finite element (FE) method is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases, which allows the scale of the numerical substructure model to increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes obvious as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even under a large time step and large time delay.
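As a point of reference for the comparison above, the central difference method advances M x'' + C x' + K x = f(t) fully explicitly; when the mass and damping matrices are diagonal, the effective matrix inverted at each step is diagonal as well, which is why CDM wins in that case. The following NumPy sketch is illustrative only (generic matrix and variable names assumed, not the authors' code):

```python
import numpy as np

def central_difference(M, C, K, f, x0, v0, dt, n_steps):
    """Explicit central-difference stepping for M x'' + C x' + K x = f(t).

    When M and C are diagonal, the effective matrix A is diagonal, so each
    step reduces to matrix-vector products plus an element-wise division --
    the property that makes CDM attractive for real-time hybrid simulation.
    """
    a0 = np.linalg.solve(M, f(0.0) - C @ v0 - K @ x0)      # initial acceleration
    x_prev = x0 - dt * v0 + 0.5 * dt**2 * a0               # fictitious step x_{-1}
    x_curr = x0.astype(float).copy()
    A = M / dt**2 + C / (2.0 * dt)                         # effective system matrix
    history = [x_curr.copy()]
    for n in range(n_steps):
        rhs = (f(n * dt)
               - (K - 2.0 * M / dt**2) @ x_curr
               - (M / dt**2 - C / (2.0 * dt)) @ x_prev)
        x_next = np.linalg.solve(A, rhs)
        x_prev, x_curr = x_curr, x_next
        history.append(x_curr.copy())
    return np.array(history)
```

With a non-diagonal damping matrix the per-step solve is no longer trivial, which is consistent with the finding above that the Gui-λ method becomes preferable in that case.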
Influence of computer work under time pressure on cardiac activity.
Shi, Ping; Hu, Sijung; Yu, Hongliu
2015-03-01
Computer users are often under stress when required to complete computer work within a prescribed time. Work stress has repeatedly been associated with an increased risk for cardiovascular disease. The present study examined the effects of time-pressure workload during computer tasks on cardiac activity in 20 healthy subjects. Heart rate, time-domain and frequency-domain indices of heart rate variability (HRV) and Poincaré plot parameters were compared among five computer tasks and two rest periods. Faster heart rate and decreased standard deviation of the R-R interval were noted in response to computer tasks under time pressure. The Poincaré plot parameters showed significant differences between different levels of time-pressure workload during computer tasks, and between computer tasks and the rest periods. In contrast, no significant differences were identified for the frequency-domain indices of HRV. The results suggest that the quantitative Poincaré plot analysis used in this study was able to reveal the intrinsic nonlinear nature of the autonomically regulated cardiac rhythm. Specifically, heightened vagal tone occurred during the relaxation computer tasks without time pressure. In contrast, the stressful computer tasks with added time pressure stimulated cardiac sympathetic activity.
Daily computer usage correlated with undergraduate students' musculoskeletal symptoms.
Chang, Che-Hsu Joe; Amick, Benjamin C; Menendez, Cammie Chaumont; Katz, Jeffrey N; Johnson, Peter W; Robertson, Michelle; Dennerlein, Jack Tigh
2007-06-01
A pilot prospective study was performed to examine the relationships between daily computer usage time and musculoskeletal symptoms in undergraduate students. For three separate 1-week study periods distributed over a semester, 27 students reported body part-specific musculoskeletal symptoms three to five times daily. Daily computer usage time for the 24-hr period preceding each symptom report was calculated from computer input device activities measured directly by software loaded on each participant's primary computer. Generalized Estimating Equation models tested the relationships between daily computer usage and symptom reporting. Daily computer usage longer than 3 hr was significantly associated with an odds ratio of 1.50 (1.01-2.25) for reporting symptoms. Odds of reporting symptoms also increased with quartiles of daily exposure. These data suggest a potential dose-response relationship between daily computer usage time and musculoskeletal symptoms.
Symplectic molecular dynamics simulations on specially designed parallel computers.
Borstnik, Urban; Janezic, Dusanka
2005-01-01
We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches means that less time is required per step and fewer steps are needed, enabling fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
20 CFR 225.35 - When a PIA used in computing a retirement annuity can be increased for DRC's.
Code of Federal Regulations, 2010 CFR
2010-04-01
§ 225.35 When a PIA used in computing a retirement annuity can be increased for DRC's. Delayed retirement credits earned at different times are added to the PIA used in computing a retirement annuity as...
Machine vision for real time orbital operations
NASA Technical Reports Server (NTRS)
Vinz, Frank L.
1988-01-01
Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputting visual data to a computer so that image processing may be achieved for real-time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed such that it has the potential for development into a real-time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).
Sung, Jung Hye; Lee, Ji-Young; Lee, Jae Eun
2017-01-01
Introduction Although numerous studies have examined the association between playing video games and cognitive skills, aggression, and depression, few studies have examined how these associations differ by sex. The objective of our study was to determine differences by sex in association between video gaming or other nonacademic computer use and depressive symptoms, suicidal behavior, and being bullied among adolescents in the United States. Methods We used data from the 2015 Youth Risk Behavior Survey on 15,624 US high school students. Rao–Scott χ2 tests, which were adjusted for the complex sampling design, were conducted to assess differences by sex in the association of mental health with video gaming or other nonacademic computer use. Results Approximately one-fifth (19.4%) of adolescents spent 5 or more hours daily on video gaming or other nonacademic computer use, and 17.9% did not spend any time in those activities. A greater percentage of female adolescents than male adolescents reported spending no time (22.1% and 14.0%, respectively) or 5 hours or more (21.3% and 17.5%, respectively) in gaming and other nonacademic computer use (P < .001). The association between mental problems and video gaming or other nonacademic computer use differed by sex. Among female adolescents, prevalence of mental problems increased steadily in association with increased time spent, whereas the pattern for male adolescents followed a J-shaped curve, decreasing initially, increasing slowly, and then increasing rapidly beginning at 4 hours or more. Conclusion Female adolescents were more likely to have all 3 mental health problems than male adolescents were. Spending no time or 5 hours or more daily on video gaming or other nonacademic computer use was associated with increased mental problems among both sexes. As suggested by the J-shaped relationship, 1 hour or less spent on video gaming or other nonacademic computer use may reduce depressive symptoms, suicidal behavior, and being bullied compared with no use or excessive use. PMID:29166250
Lee, Hogan H; Sung, Jung Hye; Lee, Ji-Young; Lee, Jae Eun
2017-11-22
Although numerous studies have examined the association between playing video games and cognitive skills, aggression, and depression, few studies have examined how these associations differ by sex. The objective of our study was to determine differences by sex in association between video gaming or other nonacademic computer use and depressive symptoms, suicidal behavior, and being bullied among adolescents in the United States. We used data from the 2015 Youth Risk Behavior Survey on 15,624 US high school students. Rao-Scott χ 2 tests, which were adjusted for the complex sampling design, were conducted to assess differences by sex in the association of mental health with video gaming or other nonacademic computer use. Approximately one-fifth (19.4%) of adolescents spent 5 or more hours daily on video gaming or other nonacademic computer use, and 17.9% did not spend any time in those activities. A greater percentage of female adolescents than male adolescents reported spending no time (22.1% and 14.0%, respectively) or 5 hours or more (21.3% and 17.5%, respectively) in gaming and other nonacademic computer use (P < .001). The association between mental problems and video gaming or other nonacademic computer use differed by sex. Among female adolescents, prevalence of mental problems increased steadily in association with increased time spent, whereas the pattern for male adolescents followed a J-shaped curve, decreasing initially, increasing slowly, and then increasing rapidly beginning at 4 hours or more. Female adolescents were more likely to have all 3 mental health problems than male adolescents were. Spending no time or 5 hours or more daily on video gaming or other nonacademic computer use was associated with increased mental problems among both sexes. As suggested by the J-shaped relationship, 1 hour or less spent on video gaming or other nonacademic computer use may reduce depressive symptoms, suicidal behavior, and being bullied compared with no use or excessive use.
Incorporating the gas analyzer response time in gas exchange computations.
Mitchell, R R
1979-11-01
A simple method for including the gas analyzer response time in the breath-by-breath computation of gas exchange rates is described. The method uses a difference equation form of a model for the gas analyzer in the computation of oxygen uptake and carbon dioxide production and avoids a numerical differentiation required to correct the gas fraction wave forms. The effect of not accounting for analyzer response time is shown to be a 20% underestimation in gas exchange rate. The present method accurately measures gas exchange rate, is relatively insensitive to measurement errors in the analyzer time constant, and does not significantly increase the computation time.
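The analyzer model referred to above is commonly a first-order lag with time constant tau; written as a difference equation, it can be folded directly into the breath-by-breath uptake computation instead of differentiating the measured gas fractions. The sketch below is illustrative only (the variable names and the simple lag/deconvolution pair are assumptions, not the paper's code):

```python
import numpy as np

def analyzer_lag(f_true, dt, tau):
    """First-order analyzer model dF/dt = (F_true - F)/tau as a difference equation."""
    f_meas = np.empty_like(f_true, dtype=float)
    f_meas[0] = f_true[0]
    alpha = dt / (tau + dt)                     # discrete lag coefficient
    for k in range(1, len(f_true)):
        f_meas[k] = f_meas[k - 1] + alpha * (f_true[k] - f_meas[k - 1])
    return f_meas

def deconvolve_by_differentiation(f_meas, dt, tau):
    """Naive correction of the measured fraction; the numerical derivative it
    needs is exactly what embedding the lag model in the uptake computation avoids."""
    df = np.gradient(f_meas, dt)
    return f_meas + tau * df
```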
Mikkelsen, Sigurd; Vilstrup, Imogen; Lassen, Christina Funch; Kryger, Ann Isabel; Thomsen, Jane Frølund; Andersen, Johan Hviid
2007-01-01
Objective To examine the validity and potential biases in self‐reports of computer, mouse and keyboard usage times, compared with objective recordings. Methods A study population of 1211 people was asked in a questionnaire to estimate the average time they had worked with computer, mouse and keyboard during the past four working weeks. During the same period, a software program recorded these activities objectively. The study was part of a one‐year follow‐up study from 2000–1 of musculoskeletal outcomes among Danish computer workers. Results Self‐reports on computer, mouse and keyboard usage times were positively associated with objectively measured activity, but the validity was low. Self‐reports explained only between a quarter and a third of the variance of objectively measured activity, and were even lower for one measure (keyboard time). Self‐reports overestimated usage times. Overestimation was large at low levels and declined with increasing levels of objectively measured activity. Mouse usage time proportion was an exception with a near 1:1 relation. Variability in objectively measured activity, arm pain, gender and age influenced self‐reports in a systematic way, but the effects were modest and sometimes in different directions. Conclusion Self‐reported durations of computer activities are positively associated with objective measures but they are quite inaccurate. Studies using self‐reports to establish relations between computer work times and musculoskeletal pain could be biased and lead to falsely increased or decreased risk estimates. PMID:17387136
Mikkelsen, Sigurd; Vilstrup, Imogen; Lassen, Christina Funch; Kryger, Ann Isabel; Thomsen, Jane Frølund; Andersen, Johan Hviid
2007-08-01
To examine the validity and potential biases in self-reports of computer, mouse and keyboard usage times, compared with objective recordings. A study population of 1211 people was asked in a questionnaire to estimate the average time they had worked with computer, mouse and keyboard during the past four working weeks. During the same period, a software program recorded these activities objectively. The study was part of a one-year follow-up study from 2000-1 of musculoskeletal outcomes among Danish computer workers. Self-reports on computer, mouse and keyboard usage times were positively associated with objectively measured activity, but the validity was low. Self-reports explained only between a quarter and a third of the variance of objectively measured activity, and were even lower for one measure (keyboard time). Self-reports overestimated usage times. Overestimation was large at low levels and declined with increasing levels of objectively measured activity. Mouse usage time proportion was an exception with a near 1:1 relation. Variability in objectively measured activity, arm pain, gender and age influenced self-reports in a systematic way, but the effects were modest and sometimes in different directions. Self-reported durations of computer activities are positively associated with objective measures but they are quite inaccurate. Studies using self-reports to establish relations between computer work times and musculoskeletal pain could be biased and lead to falsely increased or decreased risk estimates.
Gilson, Nicholas D; Ng, Norman; Pavey, Toby G; Ryde, Gemma C; Straker, Leon; Brown, Wendy J
2016-11-01
This efficacy study assessed the added impact real-time computer prompts had on a participatory approach to reduce occupational sedentary exposure and increase physical activity. Quasi-experimental. 57 Australian office workers (mean [SD]; age=47 [11] years; BMI=28 [5] kg/m²; 46 men) generated a menu of 20 occupational 'sit less and move more' strategies through participatory workshops, and were then tasked with implementing strategies for five months (July-November 2014). During implementation, a sub-sample of workers (n=24) used a chair sensor/software package (Sitting Pad) that gave real time prompts to interrupt desk sitting. Baseline and intervention sedentary behaviour and physical activity (GENEActiv accelerometer; mean work time percentages), and minutes spent sitting at desks (Sitting Pad; mean total time and longest bout) were compared between non-prompt and prompt workers using a two-way ANOVA. Workers spent close to three quarters of their work time sedentary, mostly sitting at desks (mean [SD]; total desk sitting time=371 [71] min/day; longest bout spent desk sitting=104 [43] min/day). Intervention effects were four times greater in workers who used real time computer prompts (8% decrease in work time sedentary behaviour and increase in light intensity physical activity; p<0.01). Respective mean differences between baseline and intervention total time spent sitting at desks, and the longest bout spent desk sitting, were 23 and 32 min/day lower in prompt than in non-prompt workers (p<0.01). In this sample of office workers, real time computer prompts facilitated the impact of a participatory approach on reductions in occupational sedentary exposure, and increases in physical activity.
Computer Games and Instruction
ERIC Educational Resources Information Center
Tobias, Sigmund, Ed.; Fletcher, J. D., Ed.
2011-01-01
There is intense interest in computer games. A total of 65 percent of all American households play computer games, and sales of such games increased 22.9 percent last year. The average amount of game playing time was found to be 13.2 hours per week. The popularity and market success of games is evident from both the increased earnings from games,…
Development of soft-sphere contact models for thermal heat conduction in granular flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, A. B.; Pannala, S.; Ma, Z.
2016-06-08
Conductive heat transfer to flowing particles occurs when two particles (or a particle and wall) come into contact. The direct conduction between the two bodies depends on the collision dynamics, namely the size of the contact area and the duration of contact. For soft-sphere discrete-particle simulations, it is computationally expensive to resolve the true collision time because doing so would require a restrictively small numerical time step. To improve the computational speed, it is common to increase the 'softness' of the material to artificially increase the collision time, but doing so affects the heat transfer. In this work, two physically-based correction terms are derived to compensate for the increased contact area and time stemming from artificial particle softening. By including both correction terms, the impact that artificial softening has on the conductive heat transfer is removed, thus enabling simulations at greatly reduced computational times without sacrificing physical accuracy.
The expanded role of computers in Space Station Freedom real-time operations
NASA Technical Reports Server (NTRS)
Crawford, R. Paul; Cannon, Kathleen V.
1990-01-01
The challenges that NASA and its international partners face in their real-time operation of the Space Station Freedom necessitate an increased role on the part of computers. In building the operational concepts concerning the role of the computer, the Space Station program is using lessons learned from past programs, knowledge of the needs of future space programs, and technical advances in the computer industry. The computer is expected to contribute most significantly in real-time operations by forming a versatile operating architecture, a responsive operations tool set, and an environment that promotes effective and efficient utilization of Space Station Freedom resources.
Your Career in Computer Programming.
ERIC Educational Resources Information Center
Seligsohn, I. J.
This book offers the career-minded young reader insight into computers and computer programming by describing the nature of the work, the actual workings of the machines, the language of computers, their history, and their far-reaching and increasing applications in business, industry, science, education, defense, and government. At the same time,…
NASA Technical Reports Server (NTRS)
Fijany, A.; Roberts, J. A.; Jain, A.; Man, G. K.
1993-01-01
Part 1 of this paper presented the requirements for the real-time simulation of the Cassini spacecraft along with some discussion of the DARTS algorithm. Here, in Part 2, we discuss the development and implementation of the parallel/vectorized DARTS algorithm and architecture for real-time simulation. Development of fast algorithms and architectures for real-time hardware-in-the-loop simulation of spacecraft dynamics is motivated by the fact that it represents a hard real-time problem, in the sense that the correctness of the simulation depends on both the numerical accuracy and the exact timing of the computation. For a given model fidelity, the computation must be completed within a predefined time period. Further reduction in computation time allows increasing the fidelity of the model (i.e., inclusion of more flexible modes) and of the integration routine.
2014-01-01
Background Existing instruments for measuring problematic computer and console gaming and internet use are often lengthy and often based on a pathological perspective. The objective was to develop and present a new and short non-clinical measurement tool for perceived problems related to computer use and gaming among adolescents and to study the association between screen time and perceived problems. Methods Cross-sectional school-survey of 11-, 13-, and 15-year-old students in thirteen schools in the City of Aarhus, Denmark, participation rate 89%, n = 2100. The main exposure was time spent on weekdays on computer- and console-gaming and internet use for communication and surfing. The outcome measures were three indexes on perceived problems related to computer and console gaming and internet use. Results The three new indexes showed high face validity and acceptable internal consistency. Most schoolchildren with high screen time did not experience problems related to computer use. Still, there was a strong and graded association between time use and perceived problems related to computer gaming, console gaming (only boys) and internet use, odds ratios ranging from 6.90 to 10.23. Conclusion The three new measures of perceived problems related to computer and console gaming and internet use among adolescents are appropriate, reliable and valid for use in non-clinical surveys about young people's everyday life and behaviour. These new measures do not assess Internet Gaming Disorder as it is listed in the DSM and therefore have no parity with DSM criteria. We found an increasing risk of perceived problems with increasing time spent with gaming and internet use. Nevertheless, most schoolchildren who spent much time with gaming and internet use did not experience problems. PMID:24731270
Veldhuis, Lydian; van Grieken, Amy; Renders, Carry M; Hirasing, Remy A; Raat, Hein
2014-01-01
The global increase in childhood overweight and obesity has been ascribed partly to increases in children's screen time. Parents have a large influence on their children's screen time. Studies investigating parenting and early childhood screen time are limited. In this study, we investigated associations of parenting style and the social and physical home environment on watching TV and using computers or game consoles among 5-year-old children. This study uses baseline data concerning 5-year-old children (n = 3067) collected for the 'Be active, eat right' study. Children of parents with a higher score on the parenting style dimension involvement, were more likely to spend >30 min/day on computers or game consoles. Overall, families with an authoritative or authoritarian parenting style had lower percentages of children's screen time compared to families with an indulgent or neglectful style, but no significant difference in OR was found. In families with rules about screen time, children were less likely to watch TV>2 hrs/day and more likely to spend >30 min/day on computers or game consoles. The number of TVs and computers or game consoles in the household was positively associated with screen time, and children with a TV or computer or game console in their bedroom were more likely to watch TV>2 hrs/day or spend >30 min/day on computers or game consoles. The magnitude of the association between parenting style and screen time of 5-year-olds was found to be relatively modest. The associations found between the social and physical environment and children's screen time are independent of parenting style. Interventions to reduce children's screen time might be most effective when they support parents specifically with introducing family rules related to screen time and prevent the presence of a TV or computer or game console in the child's room.
Veldhuis, Lydian; van Grieken, Amy; Renders, Carry M.; HiraSing, Remy A.; Raat, Hein
2014-01-01
Introduction The global increase in childhood overweight and obesity has been ascribed partly to increases in children's screen time. Parents have a large influence on their children's screen time. Studies investigating parenting and early childhood screen time are limited. In this study, we investigated associations of parenting style and the social and physical home environment on watching TV and using computers or game consoles among 5-year-old children. Methods This study uses baseline data concerning 5-year-old children (n = 3067) collected for the ‘Be active, eat right’ study. Results Children of parents with a higher score on the parenting style dimension involvement, were more likely to spend >30 min/day on computers or game consoles. Overall, families with an authoritative or authoritarian parenting style had lower percentages of children's screen time compared to families with an indulgent or neglectful style, but no significant difference in OR was found. In families with rules about screen time, children were less likely to watch TV>2 hrs/day and more likely to spend >30 min/day on computers or game consoles. The number of TVs and computers or game consoles in the household was positively associated with screen time, and children with a TV or computer or game console in their bedroom were more likely to watch TV>2 hrs/day or spend >30 min/day on computers or game consoles. Conclusion The magnitude of the association between parenting style and screen time of 5-year-olds was found to be relatively modest. The associations found between the social and physical environment and children's screen time are independent of parenting style. Interventions to reduce children's screen time might be most effective when they support parents specifically with introducing family rules related to screen time and prevent the presence of a TV or computer or game console in the child's room. PMID:24533092
Is Computer Science Compatible with Technological Literacy?
ERIC Educational Resources Information Center
Buckler, Chris; Koperski, Kevin; Loveland, Thomas R.
2018-01-01
Although technology education has evolved over time, and pressure has increased to infuse more engineering principles and increase links to STEM (science, technology, engineering, and mathematics) initiatives, there has never been an official alignment between technology and engineering education and computer science. There is movement at the federal level…
Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Brown, Douglas L.
1994-01-01
In order to decrease the overall computational time requirements of a spatially-marching parabolized Navier-Stokes finite-difference computer code when applied to turbulent fluid flow, a wall-function methodology, originally proposed by R. Barnwell, was implemented. This numerical effort increases computational speed and calculates reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is analytically determined from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar-viscous sublayer. Consequently, a substantially increased computational integration step size is achieved, resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat plate test cases from Mach 2 to Mach 8. These test cases are analytically verified employing: (1) Eckert reference method solutions, (2) experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar-viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.
Use of high performance networks and supercomputers for real-time flight simulation
NASA Technical Reports Server (NTRS)
Cleveland, Jeff I., II
1993-01-01
In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.
Parallel approach in RDF query processing
NASA Astrophysics Data System (ADS)
Vajgl, Marek; Parenica, Jan
2017-07-01
A parallel approach is nowadays a very cheap way to increase computational power, owing to the availability of multithreaded computational units. Such hardware has become a typical part of today's personal computers and notebooks and is widely spread. This contribution presents experiments on how the evaluation of a computationally complex inference algorithm over RDF data can be parallelized on graphics cards to decrease computational time.
High performance real-time flight simulation at NASA Langley
NASA Technical Reports Server (NTRS)
Cleveland, Jeff I., II
1994-01-01
In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be deterministic and be completed in as short a time as possible. This includes simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, personnel at NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to a standard input/output system to provide for high bandwidth, low latency data acquisition and distribution. The Computer Automated Measurement and Control technology (IEEE standard 595) was extended to meet the performance requirements for real-time simulation. This technology extension increased the effective bandwidth by a factor of ten and increased the performance of modules necessary for simulator communications. This technology is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications of this technology are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC have completed the development of the use of supercomputers for simulation mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and the development of specialized software and hardware for the CAMAC simulator network. This work, coupled with the use of an open systems software architecture, has advanced the state of the art in real-time flight simulation. The data acquisition technology innovation and experience with recent developments in this technology are described.
Optimized Laplacian image sharpening algorithm based on graphic processing unit
NASA Astrophysics Data System (ADS)
Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah
2014-12-01
In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening performed on a CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of picture size on performance and the relationship between data transfer time and parallel computing time. Further, according to the different features of different memory types, an improved scheme of our method is developed, which exploits shared memory on the GPU instead of global memory and further increases the efficiency. Experimental results show that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
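For orientation, a serial reference implementation of the sharpening step is sketched below in NumPy (illustrative only, not the authors' CUDA code; the kernel variant and names are assumptions). Because each output pixel depends only on its 3x3 neighbourhood, the double loop maps naturally onto one GPU thread per pixel, and the shared-memory scheme amounts to each thread block staging its tile plus a one-pixel halo before computing.

```python
import numpy as np

SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=np.float32)   # identity minus the 4-neighbour Laplacian

def laplacian_sharpen(img):
    """Serial Laplacian sharpening: out(y, x) = img(y, x) - laplacian(img)(y, x)."""
    h, w = img.shape
    padded = np.pad(img.astype(np.float32), 1, mode="edge")
    out = np.empty((h, w), dtype=np.float32)
    for y in range(h):                       # on a GPU: one thread per (y, x)
        for x in range(w):
            out[y, x] = np.sum(padded[y:y + 3, x:x + 3] * SHARPEN)
    return np.clip(out, 0, 255).astype(img.dtype)
```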
Home Media and Children’s Achievement and Behavior
Hofferth, Sandra L.
2010-01-01
This study provides a national picture of the time American 6–12 year olds spent playing video games, using the computer, and watching television at home in 1997 and 2003 and the association of early use with their achievement and behavior as adolescents. Girls benefited from computers more than boys and Black children’s achievement benefited more from greater computer use than did that of White children. Greater computer use in middle childhood was associated with increased achievement for White and Black girls and Black boys, but not White boys. Greater computer play was also associated with a lower risk of becoming socially isolated among girls. Computer use does not crowd out positive learning-related activities, whereas video game playing does. Consequently, increased video game play had both positive and negative associations with the achievement of girls but not boys. For boys, increased video game play was linked to increased aggressive behavior problems. PMID:20840243
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Owen, Jeffrey E.
1988-01-01
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations onto a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high-level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer, without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.
Hollingworth, William; Devine, Emily Beth; Hansen, Ryan N; Lawless, Nathan M; Comstock, Bryan A; Wilson-Norton, Jennifer L; Tharp, Kathleen L; Sullivan, Sean D
2007-01-01
Electronic prescribing has improved the quality and safety of care. One barrier preventing widespread adoption is the potential detrimental impact on workflow. We used time-motion techniques to compare prescribing times at three ambulatory care sites that used paper-based prescribing, desktop, or laptop e-prescribing. An observer timed all prescriber (n = 27) and staff (n = 42) tasks performed during a 4-hour period. At the sites with optional e-prescribing >75% of prescription-related events were performed electronically. Prescribers at e-prescribing sites spent less time writing, but time-savings were offset by increased computer tasks. After adjusting for site, prescriber and prescription type, e-prescribing tasks took marginally longer than hand written prescriptions (12.0 seconds; -1.6, 25.6 CI). Nursing staff at the e-prescribing sites spent longer on computer tasks (5.4 minutes/hour; 0.0, 10.7 CI). E-prescribing was not associated with an increase in combined computer and writing time for prescribers. If carefully implemented, e-prescribing will not greatly disrupt workflow.
Enhancing PC Cluster-Based Parallel Branch-and-Bound Algorithms for the Graph Coloring Problem
NASA Astrophysics Data System (ADS)
Taoka, Satoshi; Takafuji, Daisuke; Watanabe, Toshimasa
A branch-and-bound algorithm (BB for short) is the most general technique for dealing with various combinatorial optimization problems. Even so, computation time is likely to increase exponentially, so we consider parallelization of BB to reduce it. It has been reported that the computation time of a parallel BB heavily depends upon node-variable selection strategies. In the case of a parallel BB, it is also necessary to prevent an increase in communication time, so it is important to pay attention to how many and what kind of nodes are to be transferred (called the sending-node selection strategy). In this paper, for the graph coloring problem, we propose some sending-node selection strategies for a parallel BB algorithm, adopting MPI for parallelization, and experimentally evaluate how these strategies affect the computation time of a parallel BB on a PC cluster network.
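To make the terminology concrete, the sketch below shows a minimal serial branch-and-bound for graph colouring; each partial colour assignment is one search "node", and in the parallel setting these are the objects a sending-node selection strategy chooses to transfer between workers. This is an illustrative sketch (generic data structures, no MPI), not the authors' implementation.

```python
def colour_bb(adj):
    """Serial branch-and-bound for graph colouring on an adjacency dict.

    A search node is a partial assignment {vertex: colour}; branching tries
    every feasible existing colour plus one new colour, and a branch is
    pruned when it already uses at least as many colours as the incumbent.
    """
    vertices = sorted(adj, key=lambda v: -len(adj[v]))    # high-degree vertices first
    best = {"k": len(vertices) + 1, "colouring": None}

    def branch(i, assignment):
        used_count = len(set(assignment.values()))
        if used_count >= best["k"]:
            return                                        # bound: cannot improve incumbent
        if i == len(vertices):
            best["k"], best["colouring"] = used_count, dict(assignment)
            return
        v = vertices[i]
        forbidden = {assignment[u] for u in adj[v] if u in assignment}
        for c in range(used_count + 1):                   # existing colours + one fresh colour
            if c not in forbidden:
                assignment[v] = c
                branch(i + 1, assignment)
                del assignment[v]

    branch(0, {})
    return best["k"], best["colouring"]

# Example: a triangle plus a pendant vertex needs 3 colours.
print(colour_bb({"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}))
```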
The electromagnetic modeling of thin apertures using the finite-difference time-domain technique
NASA Technical Reports Server (NTRS)
Demarest, Kenneth R.
1987-01-01
A technique which computes transient electromagnetic responses of narrow apertures in complex conducting scatterers was implemented as an extension of previously developed Finite-Difference Time-Domain (FDTD) computer codes. Although these apertures are narrow with respect to the wavelengths contained within the power spectrum of excitation, this technique does not require significantly more computer resources to attain the increased resolution at the apertures. In the report, an analytical technique which utilizes Babinet's principle to model the apertures is developed, and an FDTD computer code which utilizes this technique is described.
Raptou, Elena; Papastefanou, Georgios; Mattas, Konstadinos
2017-01-01
The present study explored the influence of eating habits, body weight and television programme preference on television viewing time and domestic computer usage, after adjusting for sociodemographic characteristics and home media environment indicators. In addition, potential substitution or complementarity in screen time was investigated. Individual level data were collected via questionnaires that were administered to a random sample of 2,946 Germans. The econometric analysis employed a seemingly unrelated bivariate ordered probit model to conjointly estimate television viewing time and time engaged in domestic computer usage. Television viewing and domestic computer usage represent two independent behaviours in both genders and across all age groups. Dietary habits have a significant impact on television watching with less healthy food choices associated with increasing television viewing time. Body weight is found to be positively correlated with television screen time in both men and women, and overweight individuals have a higher propensity for heavy television viewing. Similar results were obtained for age groups where an increasing body mass index (BMI) in adults over 24 years old is more likely to be positively associated with a higher duration of television watching. With respect to dietary habits of domestic computer users, participants aged over 24 years of both genders seem to adopt more healthy dietary patterns. A downward trend in the BMI of domestic computer users was observed in women and adults aged 25-60 years. On the contrary, young domestic computer users 18-24 years old have a higher body weight than non-users. Television programme preferences also affect television screen time with clear differences to be observed between genders and across different age groups. In order to reduce total screen time, health interventions should target different types of screen viewing audiences separately.
Finite-Element Methods for Real-Time Simulation of Surgery
NASA Technical Reports Server (NTRS)
Basdogan, Cagatay
2003-01-01
Two finite-element methods have been developed for mathematical modeling of the time-dependent behaviors of deformable objects and, more specifically, the mechanical responses of soft tissues and organs in contact with surgical tools. These methods may afford the computational efficiency needed to satisfy the requirement to obtain computational results in real time for simulating surgical procedures as described in Simulation System for Training in Laparoscopic Surgery (NPO-21192) on page 31 in this issue of NASA Tech Briefs. Simulation of the behavior of soft tissue in real time is a challenging problem because of the complexity of soft-tissue mechanics. The responses of soft tissues are characterized by nonlinearities and by spatial inhomogeneities and rate and time dependences of material properties. Finite-element methods seem promising for integrating these characteristics of tissues into computational models of organs, but they demand much central-processing-unit (CPU) time and memory, and the demand increases with the number of nodes and degrees of freedom in a given finite-element model. Hence, as finite-element models become more realistic, it becomes more difficult to compute solutions in real time. In both of the present methods, one uses approximate mathematical models trading some accuracy for computational efficiency and thereby increasing the feasibility of attaining real-time update rates. The first of these methods is based on modal analysis. In this method, one reduces the number of differential equations by selecting only the most significant vibration modes of an object (typically, a suitable number of the lowest-frequency modes) for computing deformations of the object in response to applied forces.
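A minimal sketch of the modal-analysis reduction described above follows (illustrative, with assumed matrix names and an undamped explicit update; not the authors' implementation). The generalized eigenproblem K phi = w^2 M phi is solved once offline, only the lowest-frequency modes are kept, and at run time each retained mode evolves as an independent scalar oscillator, which is what makes real-time update rates feasible.

```python
import numpy as np
from scipy.linalg import eigh

def modal_reduction(M, K, n_modes):
    """Solve K phi = w^2 M phi and keep the n_modes lowest-frequency modes."""
    w2, Phi = eigh(K, M)                  # eigenvalues ascending, mass-normalised modes
    return Phi[:, :n_modes], w2[:n_modes]

def step_reduced(Phi, w2, q, q_dot, f, dt):
    """Advance the decoupled modal oscillators q'' + w^2 q = Phi^T f by one step."""
    q_ddot = Phi.T @ f - w2 * q           # one scalar equation per retained mode
    q_dot = q_dot + dt * q_ddot
    q = q + dt * q_dot                    # semi-implicit Euler, cheap enough for real time
    u = Phi @ q                           # nodal displacements for rendering/haptics
    return q, q_dot, u
```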
Zhang, Yeqing; Wang, Meiling; Li, Yafeng
2018-01-01
To substantially decrease the computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and a variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operational flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, the computational complexity of signal acquisition is formulated in terms of the numbers of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with only a slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition increases by about 2.7–5.6% per millisecond, with most satellites acquired successfully. PMID:29495301
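The ratio-based acquisition threshold described above can be pictured with a short sketch. This is a minimal illustration, not code from the cited work: the correlation grid, the peak-exclusion window and the threshold value are assumptions chosen for clarity.

import numpy as np

def acquisition_metric(corr, exclude=2):
    """Peak-to-second-peak ratio over a Doppler-bin x code-phase correlation grid.

    corr    : 2-D array of correlation magnitudes (frequency bins x code phases)
    exclude : half-width (in cells) of the neighbourhood masked around the main peak
    """
    corr = np.asarray(corr, dtype=float)
    f0, c0 = np.unravel_index(np.argmax(corr), corr.shape)   # main peak location
    peak = corr[f0, c0]

    masked = corr.copy()
    masked[max(0, f0 - exclude):f0 + exclude + 1,
           max(0, c0 - exclude):c0 + exclude + 1] = -np.inf   # hide the main lobe
    second = masked.max()
    return peak / second, (f0, c0)

# Toy usage: a noise grid with one injected "signal" cell.
rng = np.random.default_rng(0)
grid = rng.rayleigh(1.0, size=(41, 1023))
grid[20, 511] = 12.0                      # simulated correlation peak
ratio, (fbin, phase) = acquisition_metric(grid)
acquired = ratio > 2.5                    # illustrative threshold, not the paper's value
print(f"ratio={ratio:.2f}, Doppler bin={fbin}, code phase={phase}, acquired={acquired}")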
Adaptive time steps in trajectory surface hopping simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spörkel, Lasse, E-mail: spoerkel@kofo.mpg.de; Thiel, Walter, E-mail: thiel@kofo.mpg.de
2016-05-21
Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.
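As a rough illustration of the adaptive-time-step idea (shrinking the step only in critical regions and restoring it afterwards), the following sketch is a generic controller, not the authors' protocol; the in_critical_region test, the step sizes and the propagator are placeholders.

def run_trajectory(state, propagate, in_critical_region, t_end,
                   dt_default=0.5, dt_min=0.05):
    """Propagate a trajectory, shrinking the time step in critical regions.

    propagate(state, dt)      -> new state after one step of length dt
    in_critical_region(state) -> True when, e.g., active/inactive orbitals mix strongly
    """
    t = 0.0
    history = [(t, state)]
    while t < t_end:
        # Use a smaller step only where the electronic structure is problematic.
        dt = max(dt_min, dt_default / 8) if in_critical_region(state) else dt_default
        state = propagate(state, dt)
        t += dt
        history.append((t, state))
    return history

# Toy usage with a scalar "state" and a made-up critical-region test.
traj = run_trajectory(0.0,
                      propagate=lambda s, dt: s + dt,
                      in_critical_region=lambda s: 4.0 < s < 5.0,
                      t_end=10.0)
print(f"{len(traj)} steps taken")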
A transient response analysis of the space shuttle vehicle during liftoff
NASA Technical Reports Server (NTRS)
Brunty, J. A.
1990-01-01
A proposed transient response method is formulated for the liftoff analysis of the space shuttle vehicles. It uses a power series approximation with unknown coefficients for the interface forces between the space shuttle and mobile launch platform. This allows the equations of motion of the two structures to be solved separately, with the unknown coefficients determined at the end of each step. These coefficients are obtained by enforcing the interface compatibility conditions between the two structures. Once the unknown coefficients are determined, the total response is computed for that time step. The method is validated by a numerical example of a cantilevered beam and by the liftoff analysis of the space shuttle vehicles. The proposed method is compared to an iterative transient response analysis method used by Martin Marietta for their space shuttle liftoff analysis. It is shown that the proposed method uses less computer time than the iterative method and does not require as small a time step for integration. The space shuttle vehicle model is reduced using two different types of component mode synthesis (CMS) methods, the Lanczos method and the Craig and Bampton CMS method. By varying the cutoff frequency in the Craig and Bampton method it was shown that the space shuttle interface loads can be computed with reasonable accuracy. Both the Lanczos CMS method and Craig and Bampton CMS method give similar results. A substantial amount of computer time is saved using the Lanczos CMS method over that of the Craig and Bampton method. However, when trying to compute a large number of Lanczos vectors, input/output time grew and increased the overall computer time. The application of several liftoff release mechanisms that can be adapted to the proposed method is discussed.
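In schematic form (with notation chosen here for illustration rather than taken from the report), the approach approximates the interface force over a step by a power series and closes the system with compatibility conditions:

\[ F(t) \approx \sum_{k=0}^{n} a_k \,(t - t_0)^k, \qquad t \in [t_0,\; t_0 + \Delta t] \]
\[ M_s \ddot{x}_s + K_s x_s = f_s(t) + B_s F(t), \qquad M_p \ddot{x}_p + K_p x_p = f_p(t) - B_p F(t) \]

Here the subscripts s and p denote the shuttle and the mobile launch platform, and B_s, B_p map the interface force into each substructure. Each set of equations is integrated separately over the step, and the coefficients a_k follow from enforcing interface compatibility, for example equal interface displacements B_s^T x_s(t_j) = B_p^T x_p(t_j) at a set of collocation times t_j within the step, which yields a small linear system for the a_k.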
Liu, Gangjun; Zhang, Jun; Yu, Lingfeng; Xie, Tuqiang; Chen, Zhongping
2010-01-01
With the increase of the A-line speed of optical coherence tomography (OCT) systems, real-time processing of acquired data has become a bottleneck. The shared-memory parallel computing technique is used to process OCT data in real time. The real-time processing power of a quad-core personal computer (PC) is analyzed. It is shown that the quad-core PC could provide real-time OCT data processing ability of more than 80K A-lines per second. A real-time, fiber-based, swept source polarization-sensitive OCT system with 20K A-line speed is demonstrated with this technique. The real-time 2D and 3D polarization-sensitive imaging of chicken muscle and pig tendon is also demonstrated. PMID:19904337
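A minimal sketch of the idea of parallelizing spectral-domain OCT line processing across CPU cores is given below; it uses Python's multiprocessing pool rather than the authors' implementation, and the data sizes, windowing and number of workers are illustrative assumptions.

import numpy as np
from multiprocessing import Pool

def process_chunk(chunk):
    """Convert a block of raw spectral A-lines to depth profiles (FFT magnitude)."""
    windowed = chunk * np.hanning(chunk.shape[1])      # simple apodization
    return np.abs(np.fft.fft(windowed, axis=1))        # one FFT per A-line

if __name__ == "__main__":
    n_alines, samples = 20_000, 2048                   # e.g. one second at 20K A-lines/s
    frames = np.random.rand(n_alines, samples)

    workers = 4                                        # quad-core assumption
    chunks = np.array_split(frames, workers, axis=0)
    with Pool(processes=workers) as pool:
        depth_profiles = np.vstack(pool.map(process_chunk, chunks))
    print(depth_profiles.shape)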
NASA Astrophysics Data System (ADS)
Lehman, Donald Clifford
Today's medical laboratories are dealing with cost containment health care policies and unfilled laboratory positions. Because there may be fewer experienced clinical laboratory scientists, students graduating from clinical laboratory science (CLS) programs are expected by their employers to perform accurately in entry-level positions with minimal training. Information in the CLS field is increasing at a dramatic rate, and instructors are expected to teach more content in the same amount of time with the same resources. With this increase in teaching obligations, instructors could use a tool to facilitate grading. The research question was, "Can computer-assisted assessment evaluate students in an accurate and time efficient way?" A computer program was developed to assess CLS students' ability to evaluate peripheral blood smears. Automated grading permits students to get results quicker and allows the laboratory instructor to devote less time to grading. This computer program could improve instruction by providing more time to students and instructors for other activities. To be valuable, the program should provide the same quality of grading as the instructor. These benefits must outweigh potential problems such as the time necessary to develop and maintain the program, monitoring of student progress by the instructor, and the financial cost of the computer software and hardware. In this study, surveys of students and an interview with the laboratory instructor were performed to provide a formative evaluation of the computer program. In addition, the grading accuracy of the computer program was examined. These results will be used to improve the program for use in future courses.
Adaptive-optics optical coherence tomography processing using a graphics processing unit.
Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T
2014-01-01
Graphics processing units are increasingly being used for scientific computing because of their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability.
Multiscale Space-Time Computational Methods for Fluid-Structure Interactions
2015-09-13
prescribed fully or partially, is from an actual locust, extracted from high-speed, multi-camera video recordings of the locust in a wind tunnel. We use...With creative methods for coupling the fluid and structure, we can increase the scope and efficiency of the FSI modeling. Multiscale methods, which now...play an important role in computational mathematics, can also increase the accuracy and efficiency of the computer modeling techniques. The main
Design and implementation of a UNIX based distributed computing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Love, J.S.; Michael, M.W.
1994-12-31
We have designed, implemented, and are running a corporate-wide distributed processing batch queue on a large number of networked workstations using the UNIX® operating system. Atlas Wireline researchers and scientists have used the system for over a year. The large increase in available computer power has greatly reduced the time required for nuclear and electromagnetic tool modeling. Use of remote distributed computing has simultaneously reduced computation costs and increased usable computer time. The system integrates equipment from different manufacturers, using various CPU architectures, distinct operating system revisions, and even multiple processors per machine. Various differences between the machines have to be accounted for in the master scheduler. These differences include shells, command sets, swap spaces, memory sizes, CPU sizes, and OS revision levels. Remote processing across a network must be performed in a manner that is seamless from the users' perspective. The system currently uses IBM RISC System/6000®, SPARCstation™, HP9000s700, HP9000s800, and DEC Alpha AXP™ machines. Each CPU in the network has its own speed rating, allowed working hours, and workload parameters. The system is designed so that all of the computers in the network can be optimally scheduled without adversely impacting the primary users of the machines. The increase in the total usable computational capacity by means of distributed batch computing can change corporate computing strategy. The integration of disparate computer platforms eliminates the need to buy one type of computer for computations, another for graphics, and yet another for day-to-day operations. It might be possible, for example, to meet all research and engineering computing needs with existing networked computers.
Parallelization of a hydrological model using the message passing interface
Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji
2013-01-01
With the increasing knowledge about natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex, with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further hinder rapid modeling and analysis. Using the widely applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology, the Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a workstation. Our study indicates that model execution time can be reduced by 42%–70% (or a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme (between the master and slave processes). Although the computation time becomes lower with an increasing number of processes (from two to five), this gain diminishes because of the accompanying increase in message-passing procedures between the master and all slave processes. Our case study demonstrates that P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of project size). Overall, P-SWAT can help reduce the computation time substantially for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
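The master-worker task distribution described above can be sketched with mpi4py; this is a generic skeleton rather than P-SWAT's code, and the task payloads, tags and the simulate_subbasin function are placeholder assumptions.

from mpi4py import MPI

def simulate_subbasin(task):
    # Placeholder for the per-subbasin hydrological computation.
    return sum(range(task * 1000))

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
TASK_TAG, STOP_TAG = 1, 2

if rank == 0:                                   # master: hand out work, collect results
    tasks = list(range(100))                    # e.g. one task per subbasin
    n_tasks = len(tasks)
    results, status = [], MPI.Status()
    for worker in range(1, size):               # prime every worker with one task
        if tasks:
            comm.send(tasks.pop(), dest=worker, tag=TASK_TAG)
        else:
            comm.send(None, dest=worker, tag=STOP_TAG)
    while len(results) < n_tasks:
        res = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        results.append(res)
        if tasks:
            comm.send(tasks.pop(), dest=status.Get_source(), tag=TASK_TAG)
        else:
            comm.send(None, dest=status.Get_source(), tag=STOP_TAG)
    print("collected", len(results), "results")
else:                                           # workers: compute until told to stop
    status = MPI.Status()
    while True:
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == STOP_TAG:
            break
        comm.send(simulate_subbasin(task), dest=0, tag=TASK_TAG)

Run with, for example, mpiexec -n 5 python pswat_sketch.py (the file name is hypothetical); one rank acts as master and the rest as workers.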
NASA Technical Reports Server (NTRS)
Weinberg, B. C.; Mcdonald, H.
1982-01-01
A numerical scheme is developed for solving the time-dependent, three-dimensional compressible viscous flow equations to be used as an aid in the design of helicopter rotors. In order to further investigate the numerical procedure, the computer code developed to solve an approximate form of the three-dimensional unsteady Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is tested. Results of calculations are presented for several two-dimensional boundary layer flows including steady turbulent and unsteady laminar cases. A comparison of fourth-order and second-order solutions indicates that increased accuracy can be obtained without any significant increases in cost (run time). The results of the computations also indicate that the computer code can be applied to more complex flows such as those encountered on rotating airfoils. The geometry of a symmetric NACA four-digit airfoil is considered and the appropriate geometrical properties are computed.
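For reference, the geometry of a symmetric NACA four-digit airfoil follows the standard published thickness distribution; the short sketch below evaluates it. The choice of NACA 0012 and the number of stations are illustrative, and the trailing-edge coefficient -0.1015 gives the classical open trailing edge.

import numpy as np

def naca_symmetric_thickness(x, t=0.12):
    """Half-thickness y_t/c of a NACA 00xx airfoil at chordwise stations x = x/c in [0, 1]."""
    x = np.asarray(x, dtype=float)
    return 5.0 * t * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                      + 0.2843 * x**3 - 0.1015 * x**4)

x = np.linspace(0.0, 1.0, 11)                 # chordwise stations
yt = naca_symmetric_thickness(x, t=0.12)      # NACA 0012: 12% maximum thickness
for xi, yi in zip(x, yt):
    print(f"x/c = {xi:4.2f}   upper surface y/c = {yi:7.4f}")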
RighTime: A real time clock correcting program for MS-DOS-based computer systems
NASA Technical Reports Server (NTRS)
Becker, G. Thomas
1993-01-01
A computer program is described which effectively eliminates the shortcomings of the DOS system clock in PC/AT-class computers. RighTime is a small, sophisticated memory-resident program that automatically corrects both the DOS system clock and the hardware 'CMOS' real time clock (RTC) in real time. RighTime learns what corrections are required without operator interaction beyond the occasional accurate time set. Both warm (power on) and cool (power off) errors are corrected, usually yielding better than one part per million accuracy in the typical desktop computer with no additional hardware, and RighTime increases the system clock resolution from approximately 0.0549 second to 0.01 second. Program tools are also available which allow visualization of RighTime's actions, verification of its performance, display of its history log, and which provide data for graphing of the system clock behavior. The program has found application in a wide variety of industries, including astronomy, satellite tracking, communications, broadcasting, transportation, public utilities, manufacturing, medicine, and the military.
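The core correction can be pictured as estimating a drift rate from two accurate time sets and then removing the accumulated error; the sketch below is a simplification of that idea with made-up numbers, not the program's actual algorithm.

def drift_rate_ppm(elapsed_clock_s, accumulated_error_s):
    """Drift rate in parts per million: error accumulated per second of clock time."""
    return accumulated_error_s / elapsed_clock_s * 1e6

def corrected_time(raw_clock_s, t_ref_s, rate_ppm):
    """Remove linear drift accumulated since the last accurate time set at t_ref_s."""
    return raw_clock_s - (raw_clock_s - t_ref_s) * rate_ppm / 1e6

# Example: the clock gained 1.3 s over one day between two accurate time sets.
rate = drift_rate_ppm(86_400.0, 1.3)            # about 15 ppm
print(f"estimated drift: {rate:.1f} ppm")
print(f"corrected reading: {corrected_time(90_000.0, 86_400.0, rate):.3f} s")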
The Increasing Effects of Computers on Education.
ERIC Educational Resources Information Center
Gannon, John F.
Predicting that the teaching-learning process in American higher education is about to change drastically because of continuing innovations in computer-assisted technology, this paper argues that this change will be driven by inexpensive but powerful computer technology, and that it will manifest itself by reducing the traditional timing of…
Computer Academy. Western Michigan University: Summer 1985-Present.
ERIC Educational Resources Information Center
Kramer, Jane E.
The Computer Academy at Western Michigan University (Kalamazoo) is a series of intensive, one-credit-hour workshops to assist professionals in increasing their level of computer competence. At the time they were initiated, in 1985, the workshops targeted elementary and secondary school teachers and administrators, were offered on Apple IIe…
Using Computer-Assisted Instruction to Enhance Achievement of English Language Learners
ERIC Educational Resources Information Center
Keengwe, Jared; Hussein, Farhan
2014-01-01
Computer-assisted instruction (CAI) in English-language environments offers practice time, motivates students, enhances student learning, increases the authentic materials that students can study, and has the potential to encourage teamwork between students. The findings from this particular study suggested that students who used computer assisted…
ERIC Educational Resources Information Center
Gaffey, Adam John
2014-01-01
As computing technology continued to grow in the lives of secondary students from 2002 to 2006, researchers failed to identify the influence using computers would have on the highest level of education students attempted. During the early part of the century schools moved towards increasing the usage of computers. Numerous stakeholders were unsure…
Addressing the computational cost of large EIT solutions.
Boyle, Alistair; Borsic, Andrea; Adler, Andy
2012-05-01
Electrical impedance tomography (EIT) is a soft field tomography modality based on the application of electric current to a body and measurement of voltages through electrodes at the boundary. The interior conductivity is reconstructed on a discrete representation of the domain using a finite-element method (FEM) mesh and a parametrization of that domain. The reconstruction requires a sequence of numerically intensive calculations. There is strong interest in reducing the cost of these calculations. An improvement in the compute time for current problems would encourage further exploration of computationally challenging problems such as the incorporation of time series data, wide-spread adoption of three-dimensional simulations and correlation of other modalities such as CT and ultrasound. Multicore processors offer an opportunity to reduce EIT computation times but may require some restructuring of the underlying algorithms to maximize the use of available resources. This work profiles two EIT software packages (EIDORS and NDRM) to experimentally determine where the computational costs arise in EIT as problems scale. Sparse matrix solvers, a key component for the FEM forward problem and sensitivity estimates in the inverse problem, are shown to take a considerable portion of the total compute time in these packages. A sparse matrix solver performance measurement tool, Meagre-Crowd, is developed to interface with a variety of solvers and compare their performance over a range of two- and three-dimensional problems of increasing node density. Results show that distributed sparse matrix solvers that operate on multiple cores are advantageous up to a limit that increases as the node density increases. We recommend a selection procedure to find a solver and hardware arrangement matched to the problem and provide guidance and tools to perform that selection.
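The kind of sparse-solve profiling the paper describes can be mocked up with SciPy; the 2-D Laplacian below is only a stand-in for an EIT finite-element system, and the grid sizes and the comparison of a direct solver against conjugate gradients are illustrative choices, not the Meagre-Crowd tool.

import time
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_2d(n):
    """Sparse 5-point Laplacian on an n x n grid (stand-in for an FEM system matrix)."""
    main = 4.0 * np.ones(n * n)
    side = -1.0 * np.ones(n * n - 1)
    side[np.arange(1, n * n) % n == 0] = 0.0      # no coupling across row boundaries
    updown = -1.0 * np.ones(n * n - n)
    return sp.diags([main, side, side, updown, updown],
                    [0, -1, 1, -n, n], format="csr")

for n in (50, 100, 200):                          # increasing node density
    A = laplacian_2d(n)
    b = np.ones(A.shape[0])

    t0 = time.perf_counter()
    x_direct = spla.spsolve(A.tocsc(), b)         # direct sparse factorization
    t_direct = time.perf_counter() - t0

    t0 = time.perf_counter()
    x_cg, info = spla.cg(A, b)                    # iterative solver (matrix is SPD)
    t_cg = time.perf_counter() - t0

    print(f"{n*n:6d} unknowns  direct {t_direct:.3f}s  cg {t_cg:.3f}s  cg converged={info == 0}")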
Social correlates of leisure-time sedentary behaviours in Canadian adults.
Huffman, S; Szafron, M
2017-03-01
Research on the correlates of sedentary behaviour among adults is needed to design health interventions to modify this behaviour. This study explored the associations of social correlates with leisure-time sedentary behaviour of Canadian adults, and whether these associations differ between different types of sedentary behaviour. A sample of 12,021 Canadian adults was drawn from the 2012 Canadian Community Health Survey, and analyzed using binary logistic regression to model the relationships that marital status, the presence of children in the household, and social support have with overall time spent sitting, using a computer, playing video games, watching television, and reading during leisure time. Covariates included gender, age, education, income, employment status, perceived health, physical activity level, body mass index (BMI), and province or territory of residence. Extensive computer time was primarily negatively related to being in a common law relationship, and primarily positively related to being single/never married. Being single/never married was positively associated with extensive sitting time in men only. Having children under 12 in the household was protective against extensive video game and reading times. Increasing social support was negatively associated with extensive computer time in men and women, while among men increasing social support was positively associated with extensive sitting time. Computer, video game, television, and reading time have unique correlates among Canadian adults. Marital status, the presence of children in the household, and social support should be considered in future analyses of sedentary activities in adults.
Aspects of GPU performance in algorithms with random memory access
NASA Astrophysics Data System (ADS)
Kashkovsky, Alexander V.; Shershnev, Anton A.; Vashchenkov, Pavel V.
2017-10-01
The numerical code for solving the Boltzmann equation on a hybrid computational cluster using the Direct Simulation Monte Carlo (DSMC) method showed that on Tesla K40 accelerators computational performance drops dramatically as the percentage of occupied GPU memory increases. Testing revealed that memory access time increases tens of times after a certain critical percentage of memory is occupied. Moreover, this appears to be a problem common to all NVIDIA GPUs, arising from their architecture. A few modifications of the numerical algorithm were suggested to overcome this problem. One of them, based on splitting the memory into "virtual" blocks, resulted in a 2.5-times speedup.
SU-E-J-91: FFT Based Medical Image Registration Using a Graphics Processing Unit (GPU).
Luce, J; Hoggarth, M; Lin, J; Block, A; Roeske, J
2012-06-01
To evaluate the efficiency gains obtained from using a Graphics Processing Unit (GPU) to perform a Fourier Transform (FT) based image registration. Fourier-based image registration involves obtaining the FT of the component images, and analyzing them in Fourier space to determine the translations and rotations of one image set relative to another. An important property of FT registration is that by enlarging the images (adding additional pixels), one can obtain translations and rotations with sub-pixel resolution. The expense, however, is an increased computational time. GPUs may decrease the computational time associated with FT image registration by taking advantage of their parallel architecture to perform matrix computations much more efficiently than a Central Processor Unit (CPU). In order to evaluate the computational gains produced by a GPU, images with known translational shifts were utilized. A program was written in the Interactive Data Language (IDL; Exelis, Boulder, CO) to perform CPU-based calculations. Subsequently, the program was modified using GPU bindings (Tech-X, Boulder, CO) to perform GPU-based computation on the same system. Multiple image sizes were used, ranging from 256×256 to 2304×2304. The times required to complete the full algorithm on the CPU and on the GPU were benchmarked, and the speed increase was defined as the ratio of the CPU-to-GPU computational time. The ratio of the CPU-to-GPU time was greater than 1.0 for all images, which indicates the GPU is performing the algorithm faster than the CPU. The smallest improvement, a 1.21 ratio, was found with the smallest image size of 256×256, and the largest speedup, a 4.25 ratio, was observed with the largest image size of 2304×2304. GPU programming resulted in a significant decrease in computational time associated with an FT image registration algorithm. The inclusion of the GPU may provide near real-time, sub-pixel registration capability. © 2012 American Association of Physicists in Medicine.
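One standard FT-based way to recover a translation, consistent with the registration described above though not necessarily the authors' exact formulation, is phase correlation; the sketch below recovers an integer-pixel shift, and, as the abstract notes, enlarging (zero-padding) the images would push this toward sub-pixel resolution.

import numpy as np

def phase_correlation_shift(fixed, moving):
    """Estimate the (row, col) translation that maps `moving` onto `fixed`."""
    F1 = np.fft.fft2(fixed)
    F2 = np.fft.fft2(moving)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12          # keep phase only
    corr = np.abs(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Interpret peaks beyond the halfway point as negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Toy check: shift an image by (7, -12) and recover the translation.
rng = np.random.default_rng(1)
img = rng.random((256, 256))
shifted = np.roll(img, shift=(7, -12), axis=(0, 1))
print(phase_correlation_shift(shifted, img))             # expected (7, -12)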
GPU-accelerated computing for Lagrangian coherent structures of multi-body gravitational regimes
NASA Astrophysics Data System (ADS)
Lin, Mingpei; Xu, Ming; Fu, Xiaoyu
2017-04-01
Based on a well-established theoretical foundation, Lagrangian Coherent Structures (LCSs) have elicited widespread research on the intrinsic structures of dynamical systems in many fields, including the field of astrodynamics. Although the application of LCSs in dynamical problems seems straightforward theoretically, its associated computational cost is prohibitive. We propose a block decomposition algorithm developed on Compute Unified Device Architecture (CUDA) platform for the computation of the LCSs of multi-body gravitational regimes. In order to take advantage of GPU's outstanding computing properties, such as Shared Memory, Constant Memory, and Zero-Copy, the algorithm utilizes a block decomposition strategy to facilitate computation of finite-time Lyapunov exponent (FTLE) fields of arbitrary size and timespan. Simulation results demonstrate that this GPU-based algorithm can satisfy double-precision accuracy requirements and greatly decrease the time needed to calculate final results, increasing speed by approximately 13 times. Additionally, this algorithm can be generalized to various large-scale computing problems, such as particle filters, constellation design, and Monte-Carlo simulation.
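The FTLE computation at the heart of an LCS analysis can be illustrated on the CPU with a standard analytic test flow; the double-gyre field, grid resolution, integration horizon and RK2 integrator below are illustrative assumptions and do not reproduce the paper's CUDA block decomposition or its multi-body gravitational dynamics.

import numpy as np

# Double-gyre test flow (a common analytic benchmark, not the multi-body problem).
A, EPS, OMEGA = 0.1, 0.25, 2 * np.pi / 10

def velocity(x, y, t):
    f = EPS * np.sin(OMEGA * t) * x**2 + (1 - 2 * EPS * np.sin(OMEGA * t)) * x
    dfdx = 2 * EPS * np.sin(OMEGA * t) * x + (1 - 2 * EPS * np.sin(OMEGA * t))
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def flow_map(x, y, t0, T, n_steps=200):
    """Advect a grid of particles from t0 to t0+T with simple midpoint (RK2) steps."""
    dt = T / n_steps
    t = t0
    for _ in range(n_steps):
        u1, v1 = velocity(x, y, t)
        u2, v2 = velocity(x + 0.5 * dt * u1, y + 0.5 * dt * v1, t + 0.5 * dt)
        x, y, t = x + dt * u2, y + dt * v2, t + dt
    return x, y

# Seed particles on a grid, advect them, and form the FTLE field.
nx, ny, T = 201, 101, 10.0
x0, y0 = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
xT, yT = flow_map(x0, y0, 0.0, T)

dx, dy = x0[0, 1] - x0[0, 0], y0[1, 0] - y0[0, 0]
dxT_dy, dxT_dx = np.gradient(xT, dy, dx)     # np.gradient returns d/d(row), d/d(col)
dyT_dy, dyT_dx = np.gradient(yT, dy, dx)

ftle = np.zeros_like(xT)
for i in range(ny):
    for j in range(nx):
        F = np.array([[dxT_dx[i, j], dxT_dy[i, j]],
                      [dyT_dx[i, j], dyT_dy[i, j]]])
        lam_max = np.linalg.eigvalsh(F.T @ F).max()      # Cauchy-Green tensor
        ftle[i, j] = np.log(lam_max) / (2.0 * abs(T))
print("FTLE range:", ftle.min(), ftle.max())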
Accelerating the discovery of space-time patterns of infectious diseases using parallel computing.
Hohl, Alexander; Delmelle, Eric; Tang, Wenwu; Casas, Irene
2016-11-01
Infectious diseases have complex transmission cycles, and effective public health responses require the ability to monitor outbreaks in a timely manner. Space-time statistics facilitate the discovery of disease dynamics including rate of spread and seasonal cyclic patterns, but are computationally demanding, especially for datasets of increasing size, diversity and availability. High-performance computing reduces the effort required to identify these patterns, however heterogeneity in the data must be accounted for. We develop an adaptive space-time domain decomposition approach for parallel computation of the space-time kernel density. We apply our methodology to individual reported dengue cases from 2010 to 2011 in the city of Cali, Colombia. The parallel implementation reaches significant speedup compared to sequential counterparts. Density values are visualized in an interactive 3D environment, which facilitates the identification and communication of uneven space-time distribution of disease events. Our framework has the potential to enhance the timely monitoring of infectious diseases. Copyright © 2016 Elsevier Ltd. All rights reserved.
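A serial version of a space-time kernel density estimate (the quantity being parallelized above) can be written compactly; the kernel choice, bandwidths and lack of edge correction here are simplifying assumptions, and the adaptive domain decomposition and parallelism of the paper are omitted.

import numpy as np

def st_kernel_density(events, grid_xy, grid_t, hs=500.0, ht=7.0):
    """Space-time kernel density at grid points.

    events  : (n, 3) array of (x, y, t) event locations (e.g. metres, days)
    grid_xy : (m, 2) array of spatial evaluation points
    grid_t  : (k,) array of temporal evaluation points
    hs, ht  : spatial and temporal bandwidths (assumed, not the paper's values)
    """
    ex, ey, et = events[:, 0], events[:, 1], events[:, 2]
    density = np.zeros((len(grid_t), len(grid_xy)))
    for ti, t in enumerate(grid_t):
        ut = (t - et) / ht
        kt = np.where(np.abs(ut) <= 1, 0.75 * (1 - ut**2), 0.0)       # Epanechnikov in time
        for gi, (gx, gy) in enumerate(grid_xy):
            us = np.hypot(gx - ex, gy - ey) / hs
            ks = np.where(us <= 1, 2 / np.pi * (1 - us**2), 0.0)      # Epanechnikov-type in space
            density[ti, gi] = np.sum(ks * kt) / (len(events) * hs**2 * ht)
    return density

# Toy usage: synthetic events in a 2 km x 2 km area over 60 days.
rng = np.random.default_rng(2)
events = np.column_stack([rng.uniform(0, 2000, 300),
                          rng.uniform(0, 2000, 300),
                          rng.uniform(0, 60, 300)])
grid_xy = np.column_stack([np.repeat(np.linspace(0, 2000, 21), 21),
                           np.tile(np.linspace(0, 2000, 21), 21)])
d = st_kernel_density(events, grid_xy, np.linspace(0, 60, 13))
print(d.shape, d.max())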
Recent Scientific Evidence and Technical Developments in Cardiovascular Computed Tomography.
Marcus, Roy; Ruff, Christer; Burgstahler, Christof; Notohamiprodjo, Mike; Nikolaou, Konstantin; Geisler, Tobias; Schroeder, Stephen; Bamberg, Fabian
2016-05-01
In recent years, coronary computed tomography angiography has become an increasingly safe and noninvasive modality for the evaluation of the anatomical structure of the coronary artery tree with diagnostic benefits especially in patients with a low-to-intermediate pretest probability of disease. Currently, increasing evidence from large randomized diagnostic trials is accumulating on the diagnostic impact of computed tomography angiography for the management of patients with acute and stable chest pain syndrome. At the same time, technical advances have substantially reduced adverse effects and limiting factors, such as radiation exposure, the amount of iodinated contrast agent, and scanning time, rendering the technique appropriate for broader clinical applications. In this work, we review the latest developments in computed tomography technology and describe the scientific evidence on the use of cardiac computed tomography angiography to evaluate patients with acute and stable chest pain syndrome. Copyright © 2016 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
NASA Technical Reports Server (NTRS)
Gorospe, George E., Jr.; Daigle, Matthew J.; Sankararaman, Shankar; Kulkarni, Chetan S.; Ng, Eley
2017-01-01
Prognostic methods enable operators and maintainers to predict the future performance for critical systems. However, these methods can be computationally expensive and may need to be performed each time new information about the system becomes available. In light of these computational requirements, we have investigated the application of graphics processing units (GPUs) as a computational platform for real-time prognostics. Recent advances in GPU technology have reduced cost and increased the computational capability of these highly parallel processing units, making them more attractive for the deployment of prognostic software. We present a survey of model-based prognostic algorithms with considerations for leveraging the parallel architecture of the GPU and a case study of GPU-accelerated battery prognostics with computational performance results.
Basic BASIC; An Introduction to Computer Programming in BASIC Language.
ERIC Educational Resources Information Center
Coan, James S.
With the increasing availability of computer access through remote terminals and time sharing, more and more schools and colleges are able to introduce programing to substantial numbers of students. This book is an attempt to incorporate computer programming, using BASIC language, and the teaching of mathematics. The general approach of the book…
A Study of Young Children's Metaknowing Talk: Learning Experiences with Computers
ERIC Educational Resources Information Center
Choi, Ji-Young
2010-01-01
This research project was undertaken in a time of increasing emphasis on the exploration of young children's learning and thinking at computers. The purpose of this study was to describe and interpret the characteristics of metaknowing talk that occurred during learning experiences with computers in a kindergarten community of learners. This…
Constructing Precisely Computing Networks with Biophysical Spiking Neurons.
Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T
2015-07-15
While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks including irregular, Poisson-like spike times, and a tight balance between excitation and inhibition. These results significantly increase the biological plausibility of the spike-based approach to network computation, and uncover how several components of biological networks may work together to efficiently carry out computation. Copyright © 2015 the authors 0270-6474/15/3510112-23$15.00/0.
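To make the "fire only if it reduces the readout error" idea concrete, here is a toy spike-based encoder of a time-varying signal; it follows the general spirit of the spike-coding framework cited above but is a deliberate simplification (instantaneous greedy firing, leaky linear readout), not the authors' conductance-based network, and all sizes and constants are made up.

import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_steps, dt, tau = 40, 2000, 0.001, 0.05
decoder = rng.normal(0, 0.1, size=(2, n_neurons))        # each neuron's readout kick

t = np.arange(n_steps) * dt
target = np.stack([np.sin(2 * np.pi * 2 * t), np.cos(2 * np.pi * 3 * t)])  # 2-D signal

x_hat = np.zeros(2)
spikes = np.zeros((n_neurons, n_steps), dtype=bool)
for k in range(n_steps):
    x_hat *= 1 - dt / tau                                 # leaky readout decay
    err = target[:, k] - x_hat
    # A neuron fires only if adding its decoder vector reduces the readout error.
    gains = np.array([np.linalg.norm(err) - np.linalg.norm(err - decoder[:, i])
                      for i in range(n_neurons)])
    best = int(np.argmax(gains))
    if gains[best] > 0:
        spikes[best, k] = True
        x_hat += decoder[:, best]

readout_err = np.linalg.norm(target[:, -1] - x_hat)
print(f"spikes fired: {spikes.sum()}, final readout error: {readout_err:.3f}")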
Space-time VMS computation of wind-turbine rotor and tower aerodynamics
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; McIntyre, Spenser; Kostov, Nikolay; Kolesar, Ryan; Habluetzel, Casey
2014-01-01
We present the space-time variational multiscale (ST-VMS) computation of wind-turbine rotor and tower aerodynamics. The rotor geometry is that of the NREL 5MW offshore baseline wind turbine. We compute with a given wind speed and a specified rotor speed. The computation is challenging because of the large Reynolds numbers and rotating turbulent flows, and computing the correct torque requires an accurate and meticulous numerical approach. The presence of the tower increases the computational challenge because of the fast, rotational relative motion between the rotor and tower. The ST-VMS method is the residual-based VMS version of the Deforming-Spatial-Domain/Stabilized ST (DSD/SST) method, and is also called "DSD/SST-VMST" method (i.e., the version with the VMS turbulence model). In calculating the stabilization parameters embedded in the method, we are using a new element length definition for the diffusion-dominated limit. The DSD/SST method, which was introduced as a general-purpose moving-mesh method for computation of flows with moving interfaces, requires a mesh update method. Mesh update typically consists of moving the mesh for as long as possible and remeshing as needed. In the computations reported here, NURBS basis functions are used for the temporal representation of the rotor motion, enabling us to represent the circular paths associated with that motion exactly and specify a constant angular velocity corresponding to the invariant speeds along those paths. In addition, temporal NURBS basis functions are used in representation of the motion and deformation of the volume meshes computed and also in remeshing. We name this "ST/NURBS Mesh Update Method (STNMUM)." The STNMUM increases computational efficiency in terms of computer time and storage, and computational flexibility in terms of being able to change the time-step size of the computation. We use layers of thin elements near the blade surfaces, which undergo rigid-body motion with the rotor. We compare the results from computations with and without tower, and we also compare using NURBS and linear finite element basis functions in temporal representation of the mesh motion.
Space-Time VMS Computation of Wind-Turbine Rotor and Tower Aerodynamics
NASA Astrophysics Data System (ADS)
McIntyre, Spenser W.
This thesis is on the space-time variational multiscale (ST-VMS) computation of wind-turbine rotor and tower aerodynamics. The rotor geometry is that of the NREL 5MW offshore baseline wind turbine. We compute with a given wind speed and a specified rotor speed. The computation is challenging because of the large Reynolds numbers and rotating turbulent flows, and computing the correct torque requires an accurate and meticulous numerical approach. The presence of the tower increases the computational challenge because of the fast, rotational relative motion between the rotor and tower. The ST-VMS method is the residual-based VMS version of the Deforming-Spatial-Domain/Stabilized ST (DSD/SST) method, and is also called "DSD/SST-VMST" method (i.e., the version with the VMS turbulence model). In calculating the stabilization parameters embedded in the method, we are using a new element length definition for the diffusion-dominated limit. The DSD/SST method, which was introduced as a general-purpose moving-mesh method for computation of flows with moving interfaces, requires a mesh update method. Mesh update typically consists of moving the mesh for as long as possible and remeshing as needed. In the computations reported here, NURBS basis functions are used for the temporal representation of the rotor motion, enabling us to represent the circular paths associated with that motion exactly and specify a constant angular velocity corresponding to the invariant speeds along those paths. In addition, temporal NURBS basis functions are used in representation of the motion and deformation of the volume meshes computed and also in remeshing. We name this "ST/NURBS Mesh Update Method (STNMUM)." The STNMUM increases computational efficiency in terms of computer time and storage, and computational flexibility in terms of being able to change the time-step size of the computation. We use layers of thin elements near the blade surfaces, which undergo rigid-body motion with the rotor. We compare the results from computations with and without tower, and we also compare using NURBS and linear finite element basis functions in temporal representation of the mesh motion.
Software Accelerates Computing Time for Complex Math
NASA Technical Reports Server (NTRS)
2014-01-01
Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology, traditionally used for computer video games, to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.
Mamykina, Lena; Vawdrey, David K.; Hripcsak, George
2016-01-01
Purpose To understand how much time residents spend using computers as compared with other activities, and what residents use computers for. Method This time and motion study was conducted in June and July 2010 at NewYork-Presbyterian/Columbia University Medical Center with seven residents (first-, second-, and third-year) on the general medicine service. An experienced observer shadowed residents during a single day shift, captured all their activities using an iPad application, and took field notes. The activities were captured using a validated taxonomy of clinical activities, expanded to describe computer-based activities with a greater level of detail. Results Residents spent 364.5 minutes (50.6%) of their shift time using computers, compared with 67.8 minutes (9.4%) interacting with patients. In addition, they spent 292.3 minutes (40.6%) talking with others in person, 186.0 minutes (25.8%) handling paper notes, 79.7 minutes (11.1%) in rounds, 80.0 minutes (11.1%) walking or waiting, and 54.0 minutes (7.5%) talking on the phone. Residents spent 685 minutes (59.6%) multitasking. Computer-based documentation activities amounted to 189.9 minutes (52.1%) of all computer-based activities time, with 128.7 minutes (35.3%) spent writing notes and 27.3 minutes (7.5%) reading notes composed by others. Conclusions The study showed residents spent considerably more time interacting with computers (over 50% of their shift time), than in direct contact with patients (less than 10% of their shift time). Some of this may be due to an increasing reliance on computing systems for access to patient data, further exacerbated by inefficiencies in the design of the electronic health record. PMID:27028026
Mamykina, Lena; Vawdrey, David K; Hripcsak, George
2016-06-01
To understand how much time residents spend using computers compared with other activities, and what residents use computers for. This time and motion study was conducted in June and July 2010 at NewYork-Presbyterian/Columbia University Medical Center with seven residents (first-, second-, and third-year) on the general medicine service. An experienced observer shadowed residents during a single day shift, captured all their activities using an iPad application, and took field notes. The activities were captured using a validated taxonomy of clinical activities, expanded to describe computer-based activities with a greater level of detail. Residents spent 364.5 minutes (50.6%) of their shift time using computers, compared with 67.8 minutes (9.4%) interacting with patients. In addition, they spent 292.3 minutes (40.6%) talking with others in person, 186.0 minutes (25.8%) handling paper notes, 79.7 minutes (11.1%) in rounds, 80.0 minutes (11.1%) walking or waiting, and 54.0 minutes (7.5%) talking on the phone. Residents spent 685 minutes (59.6%) multitasking. Computer-based documentation activities amounted to 189.9 minutes (52.1%) of all computer-based activities time, with 128.7 minutes (35.3%) spent writing notes and 27.3 minutes (7.5%) reading notes composed by others. The study showed that residents spent considerably more time interacting with computers (over 50% of their shift time) than in direct contact with patients (less than 10% of their shift time). Some of this may be due to an increasing reliance on computing systems for access to patient data, further exacerbated by inefficiencies in the design of the electronic health record.
Speedup computation of HD-sEMG signals using a motor unit-specific electrical source model.
Carriou, Vincent; Boudaoud, Sofiane; Laforet, Jeremy
2018-01-23
Nowadays, bio-reliable modeling of muscle contraction is becoming more accurate and more complex. This increasing complexity induces a significant increase in computation time, which prevents the model from being used in certain applications and studies. Accordingly, the aim of this work is to significantly reduce the computation time of high-density surface electromyogram (HD-sEMG) generation. This is done through a new model of motor unit (MU)-specific electrical source based on the fibers composing the MU. In order to assess the efficiency of this approach, we computed the normalized root mean square error (NRMSE) between several simulations on a single generated MU action potential (MUAP) using the usual fiber electrical sources and the MU-specific electrical source. This NRMSE was computed for five different simulation sets wherein hundreds of MUAPs are generated and summed into HD-sEMG signals. The obtained results show less than 2% error on the generated signals compared to the same signals generated with fiber electrical sources. Moreover, the computation time of the HD-sEMG signal generation model is reduced by about 90% compared to the fiber electrical source model. Using this model with MU electrical sources, we can simulate HD-sEMG signals of a physiological muscle (hundreds of MUs) in less than an hour on a classical workstation. Graphical Abstract: Overview of the simulation of HD-sEMG signals using the fiber scale and the MU scale. Upscaling the electrical source to the MU scale reduces the computation time by 90% while inducing only small deviations in the simulated HD-sEMG signals.
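For reference, an NRMSE between a reference signal and an approximation can be computed as below; normalizing by the reference's peak-to-peak range is an assumption, since the abstract does not state the exact normalization used.

import numpy as np

def nrmse(reference, approximation):
    """Root-mean-square error normalized by the reference signal's peak-to-peak range."""
    reference = np.asarray(reference, dtype=float)
    approximation = np.asarray(approximation, dtype=float)
    rmse = np.sqrt(np.mean((reference - approximation) ** 2))
    return rmse / (reference.max() - reference.min())

# Toy check: a signal and a slightly perturbed copy.
t = np.linspace(0, 1, 1000)
ref = np.sin(2 * np.pi * 5 * t)
approx = ref + 0.01 * np.random.default_rng(4).normal(size=t.size)
print(f"NRMSE = {100 * nrmse(ref, approx):.2f}%")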
Fast neuromimetic object recognition using FPGA outperforms GPU implementations.
Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph
2013-08-01
Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array, specifically the Xilinx Virtex 6 ML605 evaluation board with XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.
1994-09-01
Issue Computers, information systems, and communication systems are being increasingly used in transportation, warehousing, order processing, materials...inventory levels, reduced order processing times, reduced order processing costs, and increased customer satisfaction. While purchasing and transportation...process, the speed with which orders are processed would increase significantly. Lowering the order processing time in turn lowers the lead time, which in
Experience using radio frequency laptops to access the electronic medical record in exam rooms.
Dworkin, L. A.; Krall, M.; Chin, H.; Robertson, N.; Harris, J.; Hughes, J.
1999-01-01
Kaiser Permanente, Northwest, evaluated the use of laptop computers to access our existing comprehensive Electronic Medical Record in exam rooms via a wireless radiofrequency (RF) network. Eleven of 22 clinicians who were offered the laptops successfully adopted their use in the exam room. These clinicians were able to increase their exam room time with the patient by almost 4 minutes (25%), apparently without lengthening their overall work day. Patient response to exam room computing was overwhelmingly positive. The RF network response time was similar to the hardwired network. Problems cited by some laptop users and many of the eleven non-adopters included battery issues, different equipment layout and function, and inadequate training. IT support needs for the RF laptops were two to four times greater than for hardwired desktops. Addressing the reliability and training issues should increase clinician acceptance, making a successful general roll-out for exam room computing more likely. PMID:10566458
Performance comparison analysis library communication cluster system using merge sort
NASA Astrophysics Data System (ADS)
Wulandari, D. A. R.; Ramadhan, M. E.
2018-04-01
Computing began with single processors; to increase computation speed, multi-processor systems were introduced. This second paradigm is known as parallel computing, for example on a cluster. A cluster must have a communication protocol for processing; one such protocol is the Message Passing Interface (MPI). MPI has several library implementations, among them OpenMPI and MPICH2. The performance of a cluster machine depends on how well the performance characteristics of the communication library match the characteristics of the problem, so this study aims to analyze the comparative performance of these libraries in handling a parallel computing process. The cases studied in this research are MPICH2 and OpenMPI. The study executes a sorting problem, using the merge sort method, to gauge the performance of the cluster system. The research method is to implement OpenMPI and MPICH2 on a Linux-based cluster of five virtual computers and then analyze system performance under different test scenarios using three parameters: execution time, speedup, and efficiency. The results of this study show that, as data size grows, OpenMPI and MPICH2 exhibit average speedup and efficiency that tend to increase but then decrease at large data sizes. An increased data size does not necessarily increase speedup and efficiency, only execution time, for example at a data size of 100,000. Execution times also differ between the libraries: for example, at a data size of 1,000 the average execution time was 0.009721 with MPICH2 and 0.003895 with OpenMPI, and OpenMPI can be tuned to its communication needs.
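The three performance parameters named above are commonly defined as follows; the timings in the example are made up, not the study's measurements.

def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_processes):
    return speedup(t_serial, t_parallel) / n_processes

# Hypothetical merge-sort timings on 1 versus 5 processes.
t1, t5, p = 12.0, 3.6, 5
print(f"speedup = {speedup(t1, t5):.2f}, efficiency = {efficiency(t1, t5, p):.2f}")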
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry
The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase. Tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics, such as the highest and lowest voltage occurring on the feeder, number of voltage regulator tap operations, and total amount of losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91 percent reduction with less than 5 percent error when predicting voltage regulator operations.
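The essence of a variable-time-step QSTS loop can be sketched as follows; the power-flow call, the refinement criterion based on the change in the input profile, and the step sizes are placeholder assumptions rather than the solver described above.

import numpy as np

def run_qsts(profile, solve_power_flow, dt_max=60, dt_min=1, tol=0.02):
    """March through a 1-s resolution input profile, enlarging the step when it changes slowly.

    profile          : 1-D array of per-second loading/irradiance multipliers
    solve_power_flow : function(multiplier) -> feeder metric of interest (stand-in here)
    """
    results, t = [], 0
    while t < len(profile) - 1:
        dt = dt_max
        # Refine the step until the input barely changes across it.
        while dt > dt_min and abs(profile[min(t + dt, len(profile) - 1)] - profile[t]) > tol:
            dt //= 2
        results.append((t, solve_power_flow(profile[t])))
        t += dt
    return results

# Toy usage: a noisy irradiance profile and a stand-in power-flow evaluation.
rng = np.random.default_rng(5)
profile = 0.8 + 0.2 * np.cumsum(rng.normal(0, 0.002, 6 * 3600))   # six hours at 1-s resolution
snapshots = run_qsts(profile, solve_power_flow=lambda m: 1.0 + 0.05 * m)
print(f"{len(snapshots)} power flows instead of {len(profile)}")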
Media use as a reason for meal skipping and fast eating in secondary school children.
Van den Bulck, J; Eggermont, S
2006-04-01
This study examined self-reported meal skipping and eating faster than usual with the goal of watching television or playing computer games. Respondents reported their media use and indicated how often they skipped a meal to watch a favourite television programme or to play a computer game, and how often they ate faster than usual in order to watch television or play a computer game. Respondents were 2546 adolescents aged 13 (first year of secondary school) and 16 years (fourth year of secondary school). About one respondent in 10 skipped at least one meal every week for either television viewing or computer game playing. Weekly meal skipping for television viewing occurs more regularly in boys and first-year students, but particularly in teenagers who view 5 h or more daily (15% of the sample). The category of teenagers who play computer games four times a week or more (25.3% of the sample) is at increased risk of meal skipping; those who play more than four times a week are 10 times more likely to skip a meal in a given week. A quarter of the adolescents eat faster at least once a week to be able to watch television or play a computer game. Regardless of gender and school year, teenagers' risk of eating faster progressively increases with their media use. Those who watch 4 h or more daily are about seven times more likely to skip a meal for television, and those who play computer games at least four times a week are nine times more likely to skip a meal in a given week. Unhealthy eating habits can be a side effect of heavy or excessive media use. Teenagers' use of television or game computers during nonworking or out-of-school hours partly displaces the amount of time that needs to be spent at meals. Practitioners and educators may try to encourage or restore a pattern of healthful meal consumption habits by reducing the amount of media use, and by supporting parental rule-making regarding children's eating habits and media use.
Enabling Wide-Scale Computer Science Education through Improved Automated Assessment Tools
NASA Astrophysics Data System (ADS)
Boe, Bryce A.
There is a proliferating demand for newly trained computer scientists as the number of computer science related jobs continues to increase. University programs will only be able to train enough new computer scientists to meet this demand when two things happen: when there are more primary and secondary school students interested in computer science, and when university departments have the resources to handle the resulting increase in enrollment. To meet these goals, significant effort is being made to both incorporate computational thinking into existing primary school education, and to support larger university computer science class sizes. We contribute to this effort through the creation and use of improved automated assessment tools. To enable wide-scale computer science education we do two things. First, we create a framework called Hairball to support the static analysis of Scratch programs targeted for fourth, fifth, and sixth grade students. Scratch is a popular building-block language utilized to pique interest in and teach the basics of computer science. We observe that Hairball allows for rapid curriculum alterations and thus contributes to wide-scale deployment of computer science curriculum. Second, we create a real-time feedback and assessment system utilized in university computer science classes to provide better feedback to students while reducing assessment time. Insights from our analysis of student submission data show that modifications to the system configuration support the way students learn and progress through course material, making it possible for instructors to tailor assignments to optimize learning in growing computer science classes.
Does a Computer Have an Arrow of Time?
NASA Astrophysics Data System (ADS)
Maroney, Owen J. E.
2010-02-01
Schulman (Entropy 7(4):221-233, 2005) has argued that Boltzmann’s intuition, that the psychological arrow of time is necessarily aligned with the thermodynamic arrow, is correct. Schulman gives an explicit physical mechanism for this connection, based on the brain being representable as a computer, together with certain thermodynamic properties of computational processes. Hawking (Physical Origins of Time Asymmetry, Cambridge University Press, Cambridge, 1994) presents similar, if briefer, arguments. The purpose of this paper is to critically examine the support for the link between thermodynamics and an arrow of time for computers. The principal arguments put forward by Schulman and Hawking will be shown to fail. It will be shown that any computational process that can take place in an entropy increasing universe, can equally take place in an entropy decreasing universe. This conclusion does not automatically imply a psychological arrow can run counter to the thermodynamic arrow. Some alternative possible explanations for the alignment of the two arrows will be briefly discussed.
Real-time emergency forecasting technique for situation management systems
NASA Astrophysics Data System (ADS)
Kopytov, V. V.; Kharechkin, P. V.; Naumenko, V. V.; Tretyak, R. S.; Tebueva, F. B.
2018-05-01
The article describes a real-time emergency forecasting technique that increases the accuracy and reliability of the forecasting results of any emergency computational model applied to decision making in situation management systems. Computational models are improved by an Improved Brown's method that applies fractal dimension to forecast the short time series received from sensors and control systems. Reliability of the emergency forecasting results is ensured by filtering invalid sensed data using correlation analysis.
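The improved Brown's method with fractal dimension is not specified in this record; as a baseline illustration only, the sketch below implements classical Brown double (linear) exponential smoothing for a short, hypothetical sensor series.

```python
def brown_forecast(series, alpha, horizon):
    """Classical Brown double (linear) exponential smoothing forecast."""
    s1 = s2 = series[0]
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1          # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2         # second smoothing
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return [level + (m + 1) * trend for m in range(horizon)]

sensor_readings = [21.0, 21.4, 21.9, 22.1, 22.8, 23.0, 23.5]   # hypothetical short series
print(brown_forecast(sensor_readings, alpha=0.4, horizon=3))
```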
Change Detection Algorithms for Information Assurance of Computer Networks
2002-01-01
The number of computer attacks increases steadily per year. At the time of this writing the Internet Security Systems' baseline assessment is that a new... across a network by exploiting security flaws in widely-used services offered by vulnerable computers. In order to locate the vulnerable computers, the...
Comparability of Computer Delivered versus Traditional Paper and Pencil Testing
ERIC Educational Resources Information Center
Strader, Douglas A.
2012-01-01
There are many advantages supporting the use of computers as an alternate mode of delivery for high stakes testing: cost savings, increased test security, flexibility in test administrations, innovations in items, and reduced scoring time. The purpose of this study was to determine if the use of computers as the mode of delivery had any…
Inclusion of Mobility-Impaired Children in the One-to-One Computing Era: A Case Study
ERIC Educational Resources Information Center
Mangiatordi, Andrea
2012-01-01
In recent times many developing countries have adopted a one-to-one model for distributing computers in classrooms. Among the various effects that such an approach could imply, it surely increases the availability of computer-related Assistive Technology at school and provides higher resources for empowering disabled children in their learning and…
ERIC Educational Resources Information Center
Kautz, Karlheinz; Kofoed, Uffe
2004-01-01
Teachers at universities are facing an increasing disparity in students' prior IT knowledge and, at the same time, experience a growing disengagement of the students with regard to involvement in study activities. As computer science teachers in a joint programme in computer science and business administration, we made a number of similar…
Impact of increasing social media use on sitting time and body mass index.
Alley, Stephanie; Wellens, Pauline; Schoeppe, Stephanie; de Vries, Hein; Rebar, Amanda L; Short, Camille E; Duncan, Mitch J; Vandelanotte, Corneel
2017-08-01
Issue addressed: Sedentary behaviours, in particular sitting, increase the risk of cardiovascular disease, type 2 diabetes, obesity and poorer mental health status. In Australia, 70% of adults sit for more than 8 h per day. The use of social media applications (e.g. Facebook, Twitter, and Instagram) is on the rise; however, no studies have explored the association of social media use with sitting time and body mass index (BMI). Methods: Cross-sectional self-report data on demographics, BMI and sitting time were collected from 1140 participants in the 2013 Queensland Social Survey. Generalised linear models were used to estimate associations of a social media score calculated from social media use, perceived importance of social media, and number of social media contacts with sitting time and BMI. Results: Participants with a high social media score had significantly greater sitting times while using a computer in leisure time and significantly greater total sitting time on non-workdays. However, no associations were found between social media score and sitting to view TV, use motorised transport, work or participate in other leisure activities; or total workday, total sitting time or BMI. Conclusions: These results indicate that social media use is associated with increased sitting time while using a computer, and total sitting time on non-workdays. So what? The rise in social media use may have a negative impact on health by contributing to computer sitting and total sitting time on non-workdays. Future longitudinal research with a representative sample and objective sitting measures is needed to confirm findings.
Polynomial complexity despite the fermionic sign
NASA Astrophysics Data System (ADS)
Rossi, R.; Prokof'ev, N.; Svistunov, B.; Van Houcke, K.; Werner, F.
2017-04-01
It is commonly believed that in unbiased quantum Monte Carlo approaches to fermionic many-body problems, the infamous sign problem generically implies prohibitively large computational times for obtaining thermodynamic-limit quantities. We point out that for convergent Feynman diagrammatic series evaluated with a recently introduced Monte Carlo algorithm (see Rossi R., arXiv:1612.05184), the computational time increases only polynomially with the inverse error on thermodynamic-limit quantities.
Fault-tolerant arithmetic via time-shared TMR
NASA Astrophysics Data System (ADS)
Swartzlander, Earl E.
1999-11-01
Fault tolerance is increasingly important as society has come to depend on computers for more and more aspects of daily life. The current concern about the Y2K problems indicates just how much we depend on accurate computers. This paper describes work on time-shared TMR, a technique which is used to provide arithmetic operations that produce correct results in spite of circuit faults.
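The time-shared scheme itself is not detailed in this record; the sketch below only illustrates the basic TMR idea of a bitwise majority vote over three redundant results, with an artificially injected fault in one copy.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote over three redundant integer results."""
    return (a & b) | (a & c) | (b & c)

def redundant_add(x: int, y: int, inject_fault: bool = False) -> int:
    r1 = x + y
    r2 = x + y
    r3 = (x + y) ^ 0x4 if inject_fault else x + y   # flip one bit in one copy
    return tmr_vote(r1, r2, r3)

print(redundant_add(1234, 5678, inject_fault=True))   # still prints 6912 despite the fault
```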
Efficient Computation Of Manipulator Inertia Matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1991-01-01
Improved method for computation of manipulator inertia matrix developed, based on concept of spatial inertia of composite rigid body. Required for implementation of advanced dynamic-control schemes as well as dynamic simulation of manipulator motion. Motivated by increasing demand for fast algorithms to provide real-time control and simulation capability and, particularly, need for faster-than-real-time simulation capability, required in many anticipated space teleoperation applications.
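The composite-rigid-body formulation is not reproduced in this record; for orientation, the sketch below computes the inertia matrix of a planar point-mass chain by the straightforward Jacobian summation M(q) = sum_k m_k J_k^T J_k, the kind of direct computation that composite-rigid-body methods are designed to outperform.

```python
import numpy as np

def inertia_matrix_planar(q, lengths, masses):
    """Joint-space inertia matrix M(q) = sum_k m_k J_k^T J_k for a planar
    serial arm with a point mass at the tip of each link (baseline method)."""
    n = len(q)
    cum = np.cumsum(q)                      # absolute link angles
    M = np.zeros((n, n))
    for k in range(n):
        J = np.zeros((2, n))                # Jacobian of point mass k
        for i in range(k + 1):
            for j in range(i, k + 1):       # d p_k / d q_i sums links i..k
                J[0, i] += -lengths[j] * np.sin(cum[j])
                J[1, i] += lengths[j] * np.cos(cum[j])
        M += masses[k] * J.T @ J
    return M

print(inertia_matrix_planar([0.3, -0.5, 0.2], [1.0, 0.8, 0.5], [2.0, 1.5, 0.8]))
```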
Enabling fast, stable and accurate peridynamic computations using multi-time-step integration
Lindsay, P.; Parks, M. L.; Prakash, A.
2016-04-13
Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
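The peridynamic multi-time-step algorithm itself is not reproduced here; the sketch below only illustrates the subcycling idea on a toy 1-D mass-spring chain, integrating one subdomain with a coarse step and the other with several smaller substeps while the interface values from the coarse subdomain are held fixed.

```python
import numpy as np

# A 1-D chain of masses and springs split into a coarse subdomain (left) and a
# fine subdomain (right) that is subcycled with a smaller time step.
n_left, n_right, k, m = 8, 8, 100.0, 1.0
n = n_left + n_right
u = np.zeros(n)
v = np.zeros(n)
u[0] = 0.01                                   # initial disturbance

def accel(u):
    a = np.zeros_like(u)
    for i in range(n):
        if i > 0:
            a[i] += k * (u[i - 1] - u[i]) / m
        if i < n - 1:
            a[i] += k * (u[i + 1] - u[i]) / m
    return a

DT, substeps = 1e-3, 4                        # coarse step; fine step = DT / substeps
for step in range(1000):
    a = accel(u)
    # coarse subdomain: one semi-implicit Euler step of size DT
    v[:n_left] += DT * a[:n_left]
    u[:n_left] += DT * v[:n_left]
    # fine subdomain: several smaller steps; the coarse-side displacements
    # (including the interface node) are held fixed during the substeps
    for _ in range(substeps):
        a = accel(u)
        v[n_left:] += (DT / substeps) * a[n_left:]
        u[n_left:] += (DT / substeps) * v[n_left:]

print(u[:4], u[-4:])
```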
Umari, A.M.; Gorelick, S.M.
1986-01-01
It is possible to obtain analytic solutions to the groundwater flow and solute transport equations if space variables are discretized but time is left continuous. From these solutions, hydraulic head and concentration fields for any future time can be obtained without 'marching' through intermediate time steps. This analytical approach involves matrix exponentiation and is referred to as the Matrix Exponential Time Advancement (META) method. Two algorithms are presented for the META method, one for symmetric and the other for non-symmetric exponent matrices. A numerical accuracy indicator, referred to as the matrix condition number, was defined and used to determine the maximum number of significant figures that may be lost in the META method computations. The relative computational and storage requirements of the META method with respect to the time-marching method increase with the number of nodes in the discretized problem. The potentially greater accuracy of the META method, and the associated greater reliability through use of the matrix condition number, have to be weighed against the increased relative computational and storage requirements of this approach as the number of nodes becomes large. For a particular number of nodes, the META method may be computationally more efficient than the time-marching method, depending on the size of time steps used in the latter. A numerical example illustrates application of the META method to a sample ground-water-flow problem. (Author's abstract)
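As an illustration of advancing directly to a future time with a matrix exponential (on a toy spatially discretized 1-D diffusion system, not the paper's groundwater equations), the following sketch compares the single-jump solution with implicit-Euler time marching.

```python
import numpy as np
from scipy.linalg import expm

# Toy 1-D diffusion discretized in space only: du/dt = A u
n = 50
dx = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2
u0 = np.sin(np.pi * np.linspace(dx, 1 - dx, n))    # initial head/concentration profile

t_final = 0.01
u_meta = expm(A * t_final) @ u0                    # jump directly to t_final, no time marching

# Reference: implicit-Euler time marching with many small steps
dt, u = 1e-5, u0.copy()
I = np.eye(n)
for _ in range(int(t_final / dt)):
    u = np.linalg.solve(I - dt * A, u)

print(np.max(np.abs(u - u_meta)))                  # the two answers agree closely
```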
Yang, Su-Jin; Stewart, Robert; Lee, Ju-Yeon; Kim, Jae-Min; Kim, Sung-Wan; Shin, Il-Seon; Yoon, Jin-Sang
2014-01-01
To measure the prevalence of and factors associated with online inappropriate sexual exposure, cyber-bullying victimisation, and computer-using time in early adolescence, a two-year, prospective school survey was performed with 1,173 children aged 13 at baseline. Data collected included demographic factors, bullying experience, depression, anxiety, coping strategies, self-esteem, psychopathology, attention-deficit hyperactivity disorder symptoms, and school performance. These factors were investigated in relation to problematic Internet experiences and computer-using time at age 15. The prevalence of online inappropriate sexual exposure, cyber-bullying victimisation, academic-purpose computer overuse, and game-purpose computer overuse was 31.6%, 19.2%, 8.5%, and 21.8%, respectively, at age 15. Having older siblings, more weekly pocket money, depressive symptoms, anxiety symptoms, and a passive coping strategy were associated with reported online sexual harassment. Male gender, depressive symptoms, and anxiety symptoms were associated with reported cyber-bullying victimisation. Female gender was associated with academic-purpose computer overuse, while male gender, lower academic level, increased height, and having older siblings were associated with game-purpose computer overuse. Different environmental and psychological factors predicted different aspects of problematic Internet experiences and computer-using time. This knowledge is important for framing public health interventions to educate adolescents about, and prevent, internet-derived problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Shujia; Duffy, Daniel; Clune, Thomas
The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive to fulfill this requirement. However, the Cell's characteristics, 256KB local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity: the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25% of total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.
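The Cell-specific SIMD and unrolling work cannot be shown from this record; the toy sketch below merely illustrates why embarrassingly parallel column physics lends itself to processing several columns at once, by computing a simple per-column absorption profile first one column at a time and then for all columns simultaneously (numpy vectorization stands in for the Cell's manual SIMD).

```python
import numpy as np
import time

def column_absorption_loop(tau):
    """Toy column physics: transmitted flux per layer, one column at a time."""
    out = np.empty_like(tau)
    for c in range(tau.shape[1]):                 # independent columns
        flux = 1.0
        for lev in range(tau.shape[0]):           # top-of-atmosphere downward
            flux *= np.exp(-tau[lev, c])
            out[lev, c] = flux
    return out

def column_absorption_vectorized(tau):
    """Same computation for all columns at once (SIMD-friendly data layout)."""
    return np.exp(-np.cumsum(tau, axis=0))

tau = np.random.default_rng(1).uniform(0.0, 0.1, size=(72, 10000))  # levels x columns
t0 = time.perf_counter()
a = column_absorption_loop(tau)
t1 = time.perf_counter()
b = column_absorption_vectorized(tau)
t2 = time.perf_counter()
print(np.allclose(a, b), t1 - t0, t2 - t1)
```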
Scientific Discovery through Advanced Computing in Plasma Science
NASA Astrophysics Data System (ADS)
Tang, William
2005-03-01
Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st Century. For example, the Department of Energy's ``Scientific Discovery through Advanced Computing'' (SciDAC) Program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by utilizing the combination of the rapid advances in super-computing technology together with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range in time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of plasma turbulence in magnetically-confined high temperature plasmas. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to the computational science area.
NASA Technical Reports Server (NTRS)
Sen, Syamal K.; AliShaykhian, Gholam
2010-01-01
We present a simple multi-dimensional exhaustive search method to obtain, in a reasonable time, the optimal solution of a nonlinear programming problem. It is more relevant in the present-day non-mainframe computing scenario, where an estimated 95% of computing resources remain unutilized and computing speed touches petaflops. Processor speed is doubling every 18 months, bandwidth is doubling every 12 months, and hard disk space is doubling every 9 months. A randomized search algorithm or, equivalently, an evolutionary search method is often used instead of an exhaustive search algorithm. The reason is that a randomized approach is usually polynomial-time, i.e., fast, while an exhaustive search method is exponential-time, i.e., slow. We discuss the increasing importance of exhaustive search in optimization with the steady increase of computing power for solving many real-world problems of reasonable size. We also discuss the computational error and complexity of the search algorithm, focusing on the fact that no measuring device can usually measure a quantity with an accuracy greater than 0.005%. We stress the fact that the quality of solution of the exhaustive search - a deterministic method - is better than that of randomized search. In the 21st-century computing environment, exhaustive search cannot be left aside as an untouchable, and it is not always exponential. We also describe a possible application of these algorithms in improving the efficiency of solar cells - a real hot topic - in the current energy crisis. These algorithms could be excellent tools in the hands of experimentalists and could save not only a large amount of time needed for experiments but could also validate the theory against experimental results quickly.
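A minimal sketch of such a multi-dimensional exhaustive (grid) search is shown below; the objective function, bounds and grid resolution are made up for illustration and are not taken from the paper.

```python
import numpy as np
from itertools import product

def exhaustive_search(f, bounds, points_per_dim=101):
    """Exhaustive (grid) search for the minimum of f over a box."""
    axes = [np.linspace(lo, hi, points_per_dim) for lo, hi in bounds]
    best_x, best_val = None, np.inf
    for x in product(*axes):                    # visit every grid point
        val = f(np.array(x))
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Example: a simple nonconvex objective on [-2, 2]^2
f = lambda x: (x[0]**2 - 1.0)**2 + (x[1] - 0.5)**2 + 0.1 * np.sin(5 * x[0])
print(exhaustive_search(f, [(-2, 2), (-2, 2)]))
```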
NASA Astrophysics Data System (ADS)
Siddiquee, Abu Nayem Md. Asraf
A parametric modeling study has been carried out to assess the impact of changes in operating parameters on the performance of a Vanadium Redox Flow Battery (VRFB). The objective of this research is to develop a computer program to predict the dynamic behavior of a VRFB by combining fluid mechanics, reaction kinetics, and an electric circuit model. The computer program was developed using Maple 2015 and calculations were made at different operating parameters. Modeling results show that the discharging time increases from 2.2 hours to 6.7 hours when the concentration of V2+ in the electrolytes increases from 1M to 3M. The operation time during the charging cycle decreases from 6.9 hours to 3.3 hours with the increase of applied current from 1.85A to 3.85A. The modeling results also show that the charging and discharging times increase from 4.5 hours to 8.2 hours with the increase in tank-to-cell ratio from 5:1 to 10:1.
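The thesis model couples fluid mechanics, kinetics and an electric circuit; the sketch below is only a back-of-the-envelope capacity estimate, with tank volume, current and utilization chosen as assumed values, that reproduces the qualitative trends of longer discharge at higher concentration and shorter operation at higher current.

```python
F = 96485.0          # C/mol, Faraday constant

def discharge_time_hours(concentration_mol_per_L, tank_volume_L, current_A, utilization=1.0):
    """Ideal single-electron discharge time: available charge divided by drawn current."""
    charge_coulombs = concentration_mol_per_L * tank_volume_L * F * utilization
    return charge_coulombs / current_A / 3600.0

# Hypothetical operating point (tank volume and current are assumptions, not thesis values):
for c in (1.0, 2.0, 3.0):
    print(c, "M ->", round(discharge_time_hours(c, tank_volume_L=0.25, current_A=2.85), 1), "h")
```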
Fast Dynamic Simulation-Based Small Signal Stability Assessment and Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acharya, Naresh; Baone, Chaitanya; Veda, Santosh
2014-12-31
Power grid planning and operation decisions are made based on simulation of the dynamic behavior of the system. Enabling substantial energy savings while increasing the reliability of the aging North American power grid through improved utilization of existing transmission assets hinges on the adoption of wide-area measurement systems (WAMS) for power system stabilization. However, adoption of WAMS alone will not suffice if the power system is to reach its full entitlement in stability and reliability. It is necessary to enhance predictability with "faster than real-time" dynamic simulations that will enable prediction of dynamic stability margins, proactive real-time control, and improved grid resiliency to fast time-scale phenomena such as cascading network failures. Present-day dynamic simulations are performed only during offline planning studies, considering only worst-case conditions such as summer peak, winter peak days, etc. With widespread deployment of renewable generation, controllable loads, energy storage devices and plug-in hybrid electric vehicles expected in the near future and greater integration of cyber infrastructure (communications, computation and control), monitoring and controlling the dynamic performance of the grid in real-time would become increasingly important. The state-of-the-art dynamic simulation tools have limited computational speed and are not suitable for real-time applications, given the large set of contingency conditions to be evaluated. These tools are optimized for best performance on single-processor computers, but the simulation is still several times slower than real-time due to its computational complexity. With recent significant advances in numerical methods and computational hardware, the expectations have been rising towards more efficient and faster techniques to be implemented in power system simulators. This is a natural expectation, given that the core solution algorithms of most commercial simulators were developed decades ago, when High Performance Computing (HPC) resources were not commonly available.
An efficient and accurate approach to MTE-MART for time-resolved tomographic PIV
NASA Astrophysics Data System (ADS)
Lynch, K. P.; Scarano, F.
2015-03-01
The motion-tracking-enhanced MART (MTE-MART; Novara et al. in Meas Sci Technol 21:035401, 2010) has demonstrated the potential to increase the accuracy of tomographic PIV by the combined use of a short sequence of non-simultaneous recordings. A clear bottleneck of the MTE-MART technique has been its computational cost. For large datasets comprising time-resolved sequences, MTE-MART becomes unaffordable and has been barely applied even for the analysis of densely seeded tomographic PIV datasets. A novel implementation is proposed for tomographic PIV image sequences, which strongly reduces the computational burden of MTE-MART, possibly below that of regular MART. The method is a sequential algorithm that produces a time-marching estimation of the object intensity field based on an enhanced guess, which is built upon the object reconstructed at the previous time instant. As the method becomes effective after a number of snapshots (typically 5-10), the sequential MTE-MART (SMTE) is most suited for time-resolved sequences. The computational cost reduction due to SMTE simply stems from the fewer MART iterations required for each time instant. Moreover, the method yields superior reconstruction quality and higher velocity field measurement precision when compared with both MART and MTE-MART. The working principle is assessed in terms of computational effort, reconstruction quality and velocity field accuracy with both synthetic time-resolved tomographic images of a turbulent boundary layer and two experimental databases documented in the literature. The first is the time-resolved data of flow past an airfoil trailing edge used in the study of Novara and Scarano (Exp Fluids 52:1027-1041, 2012); the second is a swirling jet in a water flow. In both cases, the effective elimination of ghost particles is demonstrated in number and intensity within a short temporal transient of 5-10 frames, depending on the seeding density. The increased value of the velocity space-time correlation coefficient demonstrates the increased velocity field accuracy of SMTE compared with MART.
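The sketch below illustrates only the warm-start idea behind SMTE, seeding a MART reconstruction with the object from the previous time instant, on a toy random projection system rather than the authors' tomographic PIV setup; the MART exponent, system size and noise model are assumptions for illustration.

```python
import numpy as np

def mart(W, I, E0, n_sweeps=3, mu=1.0, eps=1e-12):
    """Multiplicative ART: each sweep visits every projection equation once."""
    E = E0.copy()
    for _ in range(n_sweeps):
        for i in range(W.shape[0]):
            ratio = I[i] / (W[i] @ E + eps)
            E *= ratio ** (mu * W[i])          # update only voxels weighted by ray i
    return E

rng = np.random.default_rng(2)
n_vox, n_rays = 200, 150
W = rng.uniform(0, 1, (n_rays, n_vox)) * (rng.random((n_rays, n_vox)) < 0.1)  # sparse ray weights
E_prev = rng.uniform(0.1, 1.0, n_vox)                                  # object at time t-1
E_now = np.clip(E_prev + 0.05 * rng.standard_normal(n_vox), 0.05, None)  # object moved slightly
I_now = W @ E_now                                                      # projections recorded at time t

E_cold = mart(W, I_now, np.ones(n_vox))   # uniform first guess (plain MART)
E_warm = mart(W, I_now, E_prev)           # first guess from the previous reconstruction (SMTE idea)
print(np.linalg.norm(E_cold - E_now), np.linalg.norm(E_warm - E_now))
```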
Challenges in reducing the computational time of QSTS simulations for distribution system analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.
The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
Ocean Models and Proper Orthogonal Decomposition
NASA Astrophysics Data System (ADS)
Salas-de-Leon, D. A.
2007-05-01
Increasing computational power and a better understanding of mathematical and physical systems have resulted in a growing number of ocean models. A long time ago, modelers were like a secret organization who recognized each other by using codes and languages that only a select group of people could understand. Access to computational systems was limited: on the one hand, equipment and computer time were expensive and restricted; on the other hand, they required advanced programming languages that not everybody wanted to learn. Nowadays most college freshmen own a personal computer (PC or laptop) and/or have access to more sophisticated computational systems than those available for research in the early 1980s. This availability of resources has resulted in much broader access to all kinds of models. Today computer speed, computing time and algorithms do not seem to be a problem, even though some models take days to run on small computational systems. Almost every oceanographic institution has its own model; what is more, in the same institution from one office to the next there are different models for the same phenomena, developed by different researchers, and the results do not differ substantially since the equations are the same and the solution algorithms are similar. The algorithms and the grids constructed with them can be found in textbooks and/or on the internet. Every year more sophisticated models are constructed. Proper Orthogonal Decomposition is a technique that reduces the number of variables to be solved while keeping the model properties, which makes it a very useful tool for reducing the computations that have to be performed on "small" computational systems, making sophisticated models available to a wider community.
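As a minimal illustration of POD-based reduction (on synthetic snapshot data, not an actual ocean model), the sketch below extracts a low-dimensional basis from a snapshot matrix via the SVD and reconstructs the field from a few modes.

```python
import numpy as np

# Snapshot matrix: each column is the model state at one time instant.
rng = np.random.default_rng(0)
n_state, n_snapshots = 500, 60
X = rng.standard_normal((n_state, 3)) @ rng.standard_normal((3, n_snapshots))  # low-rank field
X += 0.01 * rng.standard_normal((n_state, n_snapshots))                        # small noise

U, s, Vt = np.linalg.svd(X, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99) + 1)   # modes capturing 99% of the variance

Phi = U[:, :r]                # POD basis
a = Phi.T @ X                 # reduced coordinates (r x n_snapshots)
X_approx = Phi @ a            # reconstruction from only r modes
print(r, np.linalg.norm(X - X_approx) / np.linalg.norm(X))
```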
Noise-constrained switching times for heteroclinic computing
NASA Astrophysics Data System (ADS)
Neves, Fabio Schittler; Voit, Maximilian; Timme, Marc
2017-03-01
Heteroclinic computing offers a novel paradigm for universal computation by collective system dynamics. In such a paradigm, input signals are encoded as complex periodic orbits approaching specific sequences of saddle states. Without inputs, the relevant states together with the heteroclinic connections between them form a network of states—the heteroclinic network. Systems of pulse-coupled oscillators or spiking neurons naturally exhibit such heteroclinic networks of saddles, thereby providing a substrate for general analog computations. Several challenges need to be resolved before it becomes possible to effectively realize heteroclinic computing in hardware. The time scales on which computations are performed crucially depend on the switching times between saddles, which in turn are jointly controlled by the system's intrinsic dynamics and the level of external and measurement noise. The nonlinear dynamics of pulse-coupled systems often strongly deviate from that of time-continuously coupled (e.g., phase-coupled) systems. The factors impacting switching times in pulse-coupled systems are still not well understood. Here we systematically investigate switching times in dependence of the levels of noise and intrinsic dissipation in the system. We specifically reveal how local responses to pulses coact with external noise. Our findings confirm that, like in time-continuous phase-coupled systems, piecewise-continuous pulse-coupled systems exhibit switching times that transiently increase exponentially with the number of switches up to some order of magnitude set by the noise level. Complementarily, we show that switching times may constitute a good predictor for the computation reliability, indicating how often an input signal must be reiterated. By characterizing switching times between two saddles in conjunction with the reliability of a computation, our results provide a first step beyond the coding of input signal identities toward a complementary coding for the intensity of those signals. The results offer insights on how future heteroclinic computing systems may operate under natural, and thus noisy, conditions.
Aligner optimization increases accuracy and decreases compute times in multi-species sequence data.
Robinson, Kelly M; Hawkins, Aziah S; Santana-Cruz, Ivette; Adkins, Ricky S; Shetty, Amol C; Nagaraj, Sushma; Sadzewicz, Lisa; Tallon, Luke J; Rasko, David A; Fraser, Claire M; Mahurkar, Anup; Silva, Joana C; Dunning Hotopp, Julie C
2017-09-01
As sequencing technologies have evolved, the tools to analyze these sequences have made similar advances. However, for multi-species samples, we observed important and adverse differences in alignment specificity and computation time for bwa-mem (Burrows-Wheeler aligner-maximum exact matches) relative to bwa-aln. Therefore, we sought to optimize bwa-mem for alignment of data from multi-species samples in order to reduce alignment time and increase the specificity of alignments. In the multi-species cases examined, there was one majority member (i.e. Plasmodium falciparum or Brugia malayi) and one minority member (i.e. human or the Wolbachia endosymbiont wBm) of the sequence data. Increasing bwa-mem seed length from the default value reduced the number of read pairs from the majority sequence member that incorrectly aligned to the reference genome of the minority sequence member. Combining both source genomes into a single reference genome increased the specificity of mapping, while also reducing the central processing unit (CPU) time. In Plasmodium, at a seed length of 18 nt, 24.1% of reads mapped to the human genome using 1.7±0.1 CPU hours, while 83.6% of reads mapped to the Plasmodium genome using 0.2±0.0 CPU hours (total: 107.7% of reads mapping in 1.9±0.1 CPU hours). In contrast, 97.1% of the reads mapped to a combined Plasmodium-human reference in only 0.7±0.0 CPU hours. Overall, the results suggest that combining all references into a single reference database and using a 23 nt seed length reduces the computational time, while maximizing specificity. Similar results were found for simulated sequence reads from a mock metagenomic data set. We found similar improvements to computation time in a publicly available human-only data set.
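A sketch of the recommended workflow, combining the references and raising the minimum seed length, might look as follows; the file names are placeholders, and the bwa options shown reflect the commonly documented interface (bwa index, and bwa mem with -k for minimum seed length, default 19) and should be checked against the installed version.

```python
import subprocess

# Placeholder file names; bwa must be on PATH.
refs = ["plasmodium.fa", "human.fa"]
combined = "combined_ref.fa"

# 1. Concatenate the majority- and minority-member genomes into one reference.
with open(combined, "w") as out:
    for ref in refs:
        with open(ref) as fh:
            out.write(fh.read())

# 2. Index the combined reference once.
subprocess.run(["bwa", "index", combined], check=True)

# 3. Align with a longer minimum seed length (-k 23 instead of the default 19).
with open("aln.sam", "w") as sam:
    subprocess.run(["bwa", "mem", "-k", "23", combined, "reads_1.fq", "reads_2.fq"],
                   stdout=sam, check=True)
```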
A Primer on High-Throughput Computing for Genomic Selection
Wu, Xiao-Lin; Beissinger, Timothy M.; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J. M.; Weigel, Kent A.; Gatti, Natalia de Leon; Gianola, Daniel
2011-01-01
High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high-throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long, and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin–Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized genetic gain). Eventually, HTC may change our view of data analysis as well as decision-making in the post-genomic era of selection programs in animals and plants, or in the study of complex diseases in humans. PMID:22303303
Orientation/Time Management Skill Training Lesson: Development and Evaluation
1979-07-01
instructional environment. This Orientation/ Time Management lesson provides students with appropriate role models for increasing acceptance of their...time savings can be obtained by a combination of this type of orientation and time management skill training with a computer-based progress targeting
The energy expenditure of using a "walk-and-work" desk for office workers with obesity.
Levine, James A; Miller, Jennifer M
2007-09-01
For many people, most of the working day is spent sitting in front of a computer screen. Approaches for obesity treatment and prevention are being sought to increase workplace physical activity because low levels of physical activity are associated with obesity. Our hypothesis was that a vertical workstation that allows an obese individual to work while walking would be associated with significant and substantial increases in energy expenditure over seated work. The vertical workstation is a workstation that allows an office worker to use a standard personal computer while walking on a treadmill at a self-selected velocity. Fifteen sedentary individuals with obesity (14 women, one man; 43 (7.5) years, 86 (9.6) kg; body mass index 32 (2.6) kg/m²) underwent measurements of energy expenditure at rest, seated working in an office chair, standing and while walking at a self-selected speed using the vertical workstation. Body composition was measured using dual X-ray absorptiometry. The mean (SD) energy expenditure while seated at work in an office chair was 72 (10) kcal/h, whereas the energy expenditure while walking and working at a self-selected velocity of 1.1 (0.4) mph was 191 (29) kcal/h. The mean (SD) increase in energy expenditure for walking-and-working over sitting was 119 (25) kcal/h. If sitting computer-time were replaced by walking-and-working, energy expenditure could increase by 100 kcal/h. Thus, if obese individuals were to replace time spent sitting at the computer with walking computer time by 2-3 h/day, and if other components of energy balance were constant, a weight loss of 20-30 kg/year could occur.
Control Law Design in a Computational Aeroelasticity Environment
NASA Technical Reports Server (NTRS)
Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.
2003-01-01
A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.
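The aeroelastic simulation and LQG design are not reproducible from this record; the sketch below only illustrates the Eigensystem Realization Algorithm step for a single-input, single-output case, identifying a small discrete-time state-space model from Markov parameters of a made-up test system.

```python
import numpy as np

def era(markov, n_states, rows=20, cols=20):
    """Eigensystem Realization Algorithm (SISO sketch).
    markov[k] = C A^k B, the impulse-response Markov parameters after the feed-through term."""
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, Vr = U[:, :n_states], Vt[:n_states, :]
    S_half = np.diag(np.sqrt(s[:n_states]))
    S_half_inv = np.diag(1.0 / np.sqrt(s[:n_states]))
    A = S_half_inv @ Ur.T @ H1 @ Vr.T @ S_half_inv
    B = (S_half @ Vr)[:, :1]
    C = (Ur @ S_half)[:1, :]
    return A, B, C

# Sanity check on a known 2-state discrete-time system (made up for illustration).
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[1.0], [0.5]])
C_true = np.array([[1.0, 0.0]])
markov = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item() for k in range(60)]
A_id, B_id, C_id = era(markov, n_states=2)
print(np.sort(np.linalg.eigvals(A_id)), np.sort(np.linalg.eigvals(A_true)))
```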
Modeling of urban solid waste management system: The case of Dhaka city
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sufian, M.A.; Bala, B.K.
2007-07-01
This paper presents a system dynamics computer model to predict solid waste generation, collection capacity and electricity generation from solid waste and to assess the needs for waste management of the urban city of Dhaka, Bangladesh. Simulated results show that solid waste generation, collection capacity and electricity generation potential from solid waste increase with time. Population, uncleared waste, untreated waste, composite index and public concern are projected to increase with time for Dhaka city. Simulated results also show that increasing the budget for collection capacity alone does not improve environmental quality; rather an increased budget is required for both collection and treatment of solid wastes of Dhaka city. Finally, this model can be used as a computer laboratory for urban solid waste management (USWM) policy analysis.
How parents can affect excessive spending of time on screen-based activities.
Brindova, Daniela; Pavelka, Jan; Ševčikova, Anna; Žežula, Ivan; van Dijk, Jitse P; Reijneveld, Sijmen A; Geckova, Andrea Madarasova
2014-12-12
The aim of this study is to explore the association between family-related factors and excessive time spent on screen-based activities among school-aged children. A cross-sectional survey using the methodology of the Health Behaviour in School-aged Children study was performed in 2013, with data collected from Slovak (n = 258) and Czech (n = 406) 11- and 15-year-old children. The effects of age, gender, availability of a TV or computer in the bedroom, parental rules on time spent watching TV or working on a computer, parental rules on the content of TV programmes and computer work and watching TV together with parents on excessive time spent with screen-based activities were explored using logistic regression models. Two-thirds of respondents watch TV or play computer games at least two hours a day. Older children have a 1.80-times higher chance of excessive TV watching (CI: 1.30-2.51) and a 3.91-times higher chance of excessive computer use (CI: 2.82-5.43) in comparison with younger children. More than half of children have a TV (53%) and a computer (73%) available in their bedroom, which increases the chance of excessive TV watching by 1.59 times (CI: 1.17-2.16) and of computer use by 2.25 times (CI: 1.59-3.20). More than half of parents rarely or never apply rules on the length of TV watching (64%) or time spent on computer work (56%), and their children have a 1.76-times higher chance of excessive TV watching (CI: 1.26-2.46) and a 1.50-times greater chance of excessive computer use (CI: 1.07-2.08). A quarter of children reported that they are used to watching TV together with their parents every day, and these have a 1.84-times higher chance of excessive TV watching (1.25-2.70). Reducing time spent watching TV by applying parental rules or a parental role model might help prevent excessive time spent on screen-based activities.
NASA Technical Reports Server (NTRS)
Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas
2008-01-01
A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor- memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system is comprised of a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.
Lacher, D.; Nelson, E.; Bylsma, W.; Spena, R.
2000-01-01
The American College of Physicians-American Society of Internal Medicine conducted a membership survey in late 1998 to assess their activities, needs, and attitudes. A total of 9,466 members (20.9% response rate) reported on 198 items related to computer use and needs of internists. Eighty-two percent of the respondents reported that they use computers for personal or professional reasons. Physicians younger than 50 years old who had full- or part-time academic affiliation reported using computers more frequently for medical applications. About two thirds of respondents who had access to computers connected to the Internet at least weekly, with most using the Internet from home for e-mail and nonmedical uses. Physicians expressed concerns about Internet security, confidentiality, and accuracy, and the lack of time to browse the Internet. In practice settings, internists used computers for administrative and financial functions. Less than 19% of respondents had partial or complete electronic clinical functions in their offices. Less than 7% of respondents exchanged e-mail with their patients on a weekly or daily basis. Also, less than 15% of respondents used computers for continuing medical education (CME). Respondents reported they wanted to increase their general computer skills and enhance their knowledge of computer-based information sources for patient care, electronic medical record systems, computer-based CME, and telemedicine. While most respondents used computers and connected to the Internet, few physicians utilized computers for clinical management. Medical organizations face the challenge of increasing physician use of clinical systems and electronic CME. PMID:11079924
Alerhand, Stephen; Meltzer, James; Tay, Ee Tein
2017-08-01
Ultrasound scan has gained attention for diagnosing appendicitis due to its avoidance of ionizing radiation. However, studies show that ultrasound scan carries inferior sensitivity to computed tomography scan. A non-diagnostic ultrasound scan could increase the time to diagnosis and appendicectomy, particularly if follow-up computed tomography scan is needed. Some studies suggest that delaying appendicectomy increases the risk of perforation. To investigate the risk of appendiceal perforation when using ultrasound scan as the initial diagnostic imaging modality in children with suspected appendicitis. We retrospectively reviewed 1411 charts of children ≤17 years old diagnosed with appendicitis at two urban academic medical centers. Patients who underwent ultrasound scan first were compared to those who underwent computed tomography scan first. In the sub-group analysis, patients who only received ultrasound scan were compared to those who received initial ultrasound scan followed by computed tomography scan. Main outcome measures were appendiceal perforation rate and time from triage to appendicectomy. In 720 children eligible for analysis, there was no significant difference in perforation rate between those who had initial ultrasound scan and those who had initial computed tomography scan (7.3% vs. 8.9%, p = 0.44), nor in those who had ultrasound scan only and those who had initial ultrasound scan followed by computed tomography scan (8.0% vs. 5.6%, p = 0.42). Those patients who had ultrasound scan first had a shorter triage-to-incision time than those who had computed tomography scan first (9.2 (IQR: 5.9, 14.0) vs. 10.2 (IQR: 7.3, 14.3) hours, p = 0.03), whereas those who had ultrasound scan only had a shorter triage-to-incision time than those who had ultrasound scan followed by computed tomography scan (7.8 (IQR: 5.3, 11.6) vs. 15.1 (IQR: 10.6, 20.6) hours, p < 0.001). Children < 12 years old receiving ultrasound scan first had a lower perforation rate (p = 0.01) and shorter triage-to-incision time (p = 0.003). Children with suspected appendicitis receiving ultrasound scan as the initial diagnostic imaging modality do not have increased risk of perforation compared to those receiving computed tomography scan first. We recommend that children <12 years of age receive ultrasound scan first.
Apollo experience report: Real-time auxiliary computing facility development
NASA Technical Reports Server (NTRS)
Allday, C. E.
1972-01-01
The Apollo real time auxiliary computing function and facility were an extension of the facility used during the Gemini Program. The facility was expanded to include support of all areas of flight control, and computer programs were developed for mission and mission-simulation support. The scope of the function was expanded to include prime mission support functions in addition to engineering evaluations, and the facility became a mandatory mission support facility. The facility functioned as a full scale mission support activity until after the first manned lunar landing mission. After the Apollo 11 mission, the function and facility gradually reverted to a nonmandatory, offline, on-call operation because the real time program flexibility was increased and verified sufficiently to eliminate the need for redundant computations. The evaluation of the facility and function and recommendations for future programs are discussed in this report.
NASA Astrophysics Data System (ADS)
Acernese, Fausto; Barone, Fabrizio; De Rosa, Rosario; Eleuteri, Antonio; Milano, Leopoldo; Pardi, Silvio; Ricciardi, Iolanda; Russo, Guido
2004-09-01
One of the main requirements of a digital system for the control of interferometric detectors of gravitational waves is computing power, a direct consequence of the increasing complexity of the digital algorithms necessary for control signal generation. For this specific task many specialized non-standard real-time architectures have been developed, often very expensive and difficult to upgrade. On the other hand, such computing power is generally fully available for off-line applications on standard PC-based systems. Therefore, a possible and obvious solution is the integration of the real-time and off-line architectures into a hybrid control system architecture based on standard, readily available components, aiming to combine the perfect data synchronization provided by real-time systems with the large computing power available on PC-based systems. Such integration may be provided by implementing the link between the two architectures over a standard Ethernet network, whose data transfer speed has been increasing rapidly in recent years, using the TCP/IP, UDP and raw Ethernet protocols. In this paper we describe the architecture of a hybrid Ethernet-based real-time control system prototype we implemented in Napoli, discussing its characteristics and performance. Finally we discuss a possible application to the real-time control of a suspended mass of the mode cleaner of the 3 m prototype optical interferometer for gravitational wave detection (IDGW-3P) operational in Napoli.
Khalifa, Mohamed
2016-01-01
This study aims at evaluating hospital information systems (HIS) acceptance factors among nurses, in order to provide suggestions for successful HIS implementation. The study used mainly quantitative survey methods to collect data directly from nurses through a questionnaire. The availability of computers in the hospital was one of the most influential factors, with a special emphasis on the unavailability of laptop computers and computers on wheels to facilitate immediate data entry and retrieval when nurses are at the point of care. Nurses believed that HIS might frequently slow down the process of care delivery and increase the time spent by patients inside the hospital especially during slow performance and responsiveness phases. Recommendations were classified into three main areas; improving system performance and availability of computers in the hospital, increasing organizational support in the form of providing training and protected time for nurses' to learn and enhancing users' feedback by listening to their complaints and considering their suggestions.
2005-03-28
consequently users are torn between taking advantage of increasingly pervasive computing systems, and the price (in attention and skill) that they have to... advantage of the surrounding computing environments; and (c) that it is usable by non-experts. Second, from a software architect’s perspective, we...take full advantage of the computing systems accessible to them, much as they take advantage of the furniture in each physical space. In the example
The Next Frontier in Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarrao, John
2016-11-16
Exascale computing refers to computing systems capable of at least one exaflop, or a billion billion (10^18) calculations per second. That is 50 times faster than the most powerful supercomputers being used today and represents a thousand-fold increase over the first petascale computer, which came into operation in 2008. How we use these large-scale simulation resources is the key to solving some of today's most pressing problems, including clean energy production, nuclear reactor lifetime extension and nuclear stockpile aging.
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Youngblood, John N.; Saha, Aindam
1987-01-01
Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the algorithmic execution of an avionics guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on the critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
Lopes, Adair S; Silva, Kelly S; Barbosa Filho, Valter C; Bezerra, Jorge; de Oliveira, Elusa S A; Nahas, Markus V
2014-12-01
Economic and technological improvements can help increase screen time use among adolescents, but evidence in developing countries is scarce. The aim of this study was to examine changes in TV watching and computer/video game use patterns on week and weekend days after a decade (2001 and 2011), among students in Santa Catarina, southern Brazil. A comparative analysis of two cross-sectional surveys that included 5 028 and 6 529 students in 2001 and 2011, respectively, aged 15-19 years. The screen time use indicators were self-reported. 95% Confidence intervals were used to compare the prevalence rates. All analyses were separated by gender. After a decade, there was a significant increase in computer/video game use. Inversely, a significant reduction in TV watching was observed, with a similar magnitude to the change in computer/video game use. The worst trends were identified on weekend days. The decrease in TV watching after a decade appears to be compensated by the increase in computer/video game use, both in boys and girls. Interventions are needed to reduce the negative impact of technological improvements in the lifestyles of young people, especially on weekend days. © The Author 2014. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density
NASA Astrophysics Data System (ADS)
Hohl, A.; Delmelle, E. M.; Tang, W.
2015-07-01
Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
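The space-time kernel density being parallelized here can be written compactly. The sketch below is a minimal, serial Python illustration of the quantity being computed, not the authors' parallel implementation; the Epanechnikov kernels, the function name st_kernel_density, and the bandwidth arguments hs/ht are assumptions for illustration.

```python
import numpy as np

def st_kernel_density(eval_pts, events, hs, ht):
    """Space-time kernel density at evaluation points (x, y, t).

    Minimal serial sketch: product Epanechnikov kernels in space and time,
    with spatial bandwidth hs and temporal bandwidth ht (hypothetical names).
    events is an (n, 3) array of (x, y, t) observations.
    """
    dens = np.zeros(len(eval_pts))
    for i, (x, y, t) in enumerate(eval_pts):
        ds = np.hypot(events[:, 0] - x, events[:, 1] - y) / hs   # scaled spatial distance
        dt = np.abs(events[:, 2] - t) / ht                       # scaled temporal distance
        ks = np.where(ds < 1, 0.75 * (1.0 - ds**2), 0.0)         # Epanechnikov kernel (space)
        kt = np.where(dt < 1, 0.75 * (1.0 - dt**2), 0.0)         # Epanechnikov kernel (time)
        # normalisation simplified for illustration
        dens[i] = np.sum(ks * kt) / (len(events) * hs**2 * ht)
    return dens
```

In the decomposition described above, each processor would evaluate this density only for the grid points of its subdomain, using the events inside the subdomain plus a buffer one spatial and one temporal bandwidth wide.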
Word aligned bitmap compression method, data structure, and apparatus
Wu, Kesheng; Shoshani, Arie; Otoo, Ekow
2004-12-14
The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Taken together, these capabilities may make the technique particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
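As a rough illustration of the word-aligned idea, the toy sketch below cuts a bit vector into 31-bit chunks and represents runs of identical all-zero or all-one chunks as fill records. It is a simplification with hypothetical names, not the actual WAH format, which packs fills and literals into real 32-bit machine words.

```python
def wah_sketch(bits, word_bits=31):
    """Toy sketch of word-aligned hybrid encoding (hypothetical, simplified).

    The bitmap is cut into 31-bit chunks; runs of identical all-zero or
    all-one chunks become ("fill", bit, run_length) records and everything
    else becomes ("literal", chunk) records.
    """
    chunks = [bits[i:i + word_bits] for i in range(0, len(bits), word_bits)]
    words = []
    i = 0
    while i < len(chunks):
        chunk = chunks[i]
        if len(chunk) == word_bits and len(set(chunk)) == 1:   # uniform chunk -> fill candidate
            run = 1
            while i + run < len(chunks) and chunks[i + run] == chunk:
                run += 1
            words.append(("fill", chunk[0], run))
            i += run
        else:
            words.append(("literal", "".join(str(b) for b in chunk)))
            i += 1
    return words

# example: a sparse bitmap compresses to a handful of records
print(wah_sketch([0] * 310 + [1, 0, 1] + [0] * 310))
```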
Virtual aluminum castings: An industrial application of ICME
NASA Astrophysics Data System (ADS)
Allison, John; Li, Mei; Wolverton, C.; Su, Xuming
2006-11-01
The automotive product design and manufacturing community is continually besieged by Herculean engineering, timing, and cost challenges. Nowhere is this more evident than in the development of designs and manufacturing processes for cast aluminum engine blocks and cylinder heads. Increasing engine performance requirements coupled with stringent weight and packaging constraints are pushing aluminum alloys to the limits of their capabilities. To provide high-quality blocks and heads at the lowest possible cost, manufacturing process engineers are required to find increasingly innovative ways to cast and heat treat components. Additionally, to remain competitive, products and manufacturing methods must be developed and implemented in record time. To bridge the gaps between program needs and engineering reality, the use of robust computational models in up-front analysis will take on an increasingly important role. This article describes just such a computational approach, the Virtual Aluminum Castings methodology, which was developed and implemented at Ford Motor Company and demonstrates the feasibility and benefits of integrated computational materials engineering.
[Economic efficiency of computer monitoring of health].
Il'icheva, N P; Stazhadze, L L
2001-01-01
This paper presents a method of computer-based health monitoring built on modern information technologies in public health. The method helps an outpatient clinic organize its preventive activities to a high standard and substantially reduces losses of time and money. The efficiency of such preventive measures, together with the growing number of computer and Internet users, suggests that these methods are promising and that further studies in this field are needed.
Television viewing, computer use and total screen time in Canadian youth.
Mark, Amy E; Boyce, William F; Janssen, Ian
2006-11-01
Research has linked excessive television viewing and computer use in children and adolescents to a variety of health and social problems. Current recommendations are that screen time in children and adolescents should be limited to no more than 2 h per day. To determine the percentage of Canadian youth meeting the screen time guideline recommendations. The representative study sample consisted of 6942 Canadian youth in grades 6 to 10 who participated in the 2001/2002 World Health Organization Health Behaviour in School-Aged Children survey. Only 41% of girls and 34% of boys in grades 6 to 10 watched 2 h or less of television per day. Once the time of leisure computer use was included and total daily screen time was examined, only 18% of girls and 14% of boys met the guidelines. The prevalence of those meeting the screen time guidelines was higher in girls than boys. Fewer than 20% of Canadian youth in grades 6 to 10 met the total screen time guidelines, suggesting that increased public health interventions are needed to reduce the number of leisure time hours that Canadian youth spend watching television and using the computer.
Computational methods for aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Peeters, M. F.
1983-01-01
Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.
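The gradient calculation being accelerated is essentially a one-sided finite difference over perturbed geometries. The sketch below is a generic illustration under assumed interfaces (solve_flow, objective and the initial_guess warm start are hypothetical); the base-airfoil solution is reused as a warm start, standing in for the paper's reduced computational domain whose boundary conditions come from the base flow.

```python
import numpy as np

def fd_gradient(objective, solve_flow, x0, eps=1e-4):
    """One-sided finite-difference gradient of an aerodynamic objective.

    Hypothetical interfaces: solve_flow(x, initial_guess=None) returns a flow
    solution for design variables x, and objective(flow) returns a scalar such
    as drag.
    """
    base_flow = solve_flow(x0)
    f0 = objective(base_flow)
    grad = np.zeros_like(x0, dtype=float)
    for i in range(len(x0)):
        x = np.array(x0, dtype=float)
        x[i] += eps                                    # slightly perturbed geometry
        flow = solve_flow(x, initial_guess=base_flow)  # cheap, localized re-solve
        grad[i] = (objective(flow) - f0) / eps
    return grad
```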
Real-time computer treatment of THz passive device images with the high image quality
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2012-06-01
We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not tied to a single passive THz device: it can be applied to other such devices and to active THz imaging systems as well. We applied the code to images captured by four passive THz imaging devices manufactured by different companies; it should be stressed that processing images from different devices usually requires different spatial filters. The current version of the code processes more than one image per second for a THz image with more than 5000 pixels and 24-bit number representation, and processing a single THz image produces about 20 output images simultaneously, each corresponding to a different spatial filter. The code can increase the number of pixels in processed images without noticeable reduction of image quality, and its performance can be increased many times by using parallel algorithms for processing the image. We developed original spatial filters that allow one to see objects smaller than 2 cm in imagery produced by passive THz devices viewing objects hidden under opaque clothes. For images with high noise, we developed an approach that suppresses the noise during computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of a liquid explosive, an ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and are a very promising solution to this security problem.
QRS detection based ECG quality assessment.
Hayn, Dieter; Jammerbund, Bernhard; Schreier, Günter
2012-09-01
Although immediate feedback concerning ECG signal quality during recording is useful, up to now not much literature describing quality measures is available. We have implemented and evaluated four ECG quality measures. Empty lead criterion (A), spike detection criterion (B) and lead crossing point criterion (C) were calculated from basic signal properties. Measure D quantified the robustness of QRS detection when applied to the signal. An advanced Matlab-based algorithm combining all four measures and a simplified algorithm for Android platforms, excluding measure D, were developed. Both algorithms were evaluated by taking part in the Computing in Cardiology Challenge 2011. Each measure's accuracy and computing time was evaluated separately. During the challenge, the advanced algorithm correctly classified 93.3% of the ECGs in the training-set and 91.6 % in the test-set. Scores for the simplified algorithm were 0.834 in event 2 and 0.873 in event 3. Computing time for measure D was almost five times higher than for other measures. Required accuracy levels depend on the application and are related to computing time. While our simplified algorithm may be accurate for real-time feedback during ECG self-recordings, QRS detection based measures can further increase the performance if sufficient computing power is available.
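A minimal sketch of the flavour of the basic signal-property measures (A and B) is given below; the thresholds, function names and the combination rule are hypothetical and not taken from the published algorithm, which additionally uses the lead-crossing-point criterion (C) and the QRS-detection robustness measure (D).

```python
import numpy as np

def empty_lead_fraction(sig, flat_thresh=0.01):
    """Sketch of a measure-A-style criterion: fraction of nearly flat samples."""
    return float(np.mean(np.abs(sig - np.median(sig)) < flat_thresh))

def spike_fraction(sig, z_thresh=5.0):
    """Sketch of a measure-B-style criterion: fraction of extreme-amplitude samples."""
    z = (sig - np.mean(sig)) / (np.std(sig) + 1e-12)
    return float(np.mean(np.abs(z) > z_thresh))

def acceptable(sig, empty_cut=0.9, spike_cut=0.01):
    """Hypothetical combination rule for a single lead."""
    return empty_lead_fraction(sig) < empty_cut and spike_fraction(sig) < spike_cut
```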
Parallel computing method for simulating hydrological processesof large rivers under climate change
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.
2016-12-01
Climate change is one of the most widely recognized global environmental problems in the world. It has altered the distribution of watershed hydrological processes in time and space, especially in the world's large rivers. Watershed hydrological process simulation based on a physically based distributed hydrological model can produce better results than lumped models. However, such simulation involves a large amount of calculation, especially for large rivers, and thus needs huge computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. Most current parallel methods parallelize in the space and time dimensions, calculating the natural features of the distributed hydrological model in order, by grid (unit or sub-basin), from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speed-up ratio and parallel efficiency. It combines the runoff characteristics in time and space of a distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility, which means it can make full use of computing and storage resources under conditions of limited computing resources, and computing efficiency improves approximately linearly with the increase of computing resources. This method can satisfy the parallel-computing requirements of hydrological process simulation in small, medium and large rivers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, P.; Pham, K.
1995-12-31
Under emergency conditions, a bare overhead conductor can carry an increased amount of current that is well in excess of its normal rating. When current flow on a bare overhead conductor increases, the temperature does not rise instantaneously, but increases along a curve determined by the current, the conductor properties and the ambient conditions. The conductor temperature at the end of a short-time overload period must be restricted to its maximum design value. This paper presents a simplified approach to analyzing the dynamic performance of bare overhead conductors during a short-time overload condition. A computer program was developed to calculate the short-time ratings for bare overhead conductors. The following parameters were considered: current-induced heating, solar load, convective/conductive cooling, radiative cooling, altitude, wind velocity and ampacity of the bare conductor. Several sample graphical output plots are included with the paper.
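The temperature curve described above follows from a transient heat balance on the conductor. Below is a minimal Euler-integration sketch of such a balance; all parameter values and names are illustrative assumptions, not data from the paper.

```python
import math

def short_time_temperature(i_amps, t_end_s, dt=1.0, t_ambient=40.0, t_start=40.0,
                           mc=1300.0, r20_per_m=1e-4, alpha=0.004,
                           q_solar=10.0, h_conv=15.0, emissivity=0.5, diameter=0.02):
    """Euler integration of a simplified per-metre conductor heat balance:
        m*c*dT/dt = I^2*R(T) + q_solar - q_convective - q_radiative
    All parameter values are illustrative assumptions.
    """
    sigma = 5.67e-8                      # Stefan-Boltzmann constant, W/(m^2 K^4)
    surface = math.pi * diameter         # convecting/radiating surface per metre
    temp, t = t_start, 0.0
    while t < t_end_s:
        r = r20_per_m * (1.0 + alpha * (temp - 20.0))   # resistance rises with temperature
        q_joule = i_amps**2 * r
        q_conv = h_conv * surface * (temp - t_ambient)
        q_rad = emissivity * sigma * surface * ((temp + 273.15)**4 - (t_ambient + 273.15)**4)
        temp += dt * (q_joule + q_solar - q_conv - q_rad) / mc
        t += dt
    return temp
```

The short-time rating is then the largest current for which the temperature at the end of the overload period stays below the conductor's maximum design value.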
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael; Fuchs, Marcus; Nouidui, Thierry
This paper discusses design decisions for exporting Modelica thermofluid flow components as Functional Mockup Units. The purpose is to provide guidelines that will allow building energy simulation programs and HVAC equipment manufacturers to effectively use FMUs for modeling of HVAC components and systems. We provide an analysis for direct input-output dependencies of such components and discuss how these dependencies can lead to algebraic loops that are formed when connecting thermofluid flow components. Based on this analysis, we provide recommendations that increase the computing efficiency of such components and systems that are formed by connecting multiple components. We explain what code optimizations are lost when providing thermofluid flow components as FMUs rather than Modelica code. We present an implementation of a package for FMU export of such components, explain the rationale for selecting the connector variables of the FMUs and finally provide computing benchmarks for different design choices. It turns out that selecting temperature rather than specific enthalpy as input and output signals does not lead to a measurable increase in computing time, but selecting nine small FMUs rather than a large FMU increases computing time by 70%.
Computer use at work is associated with self-reported depressive and anxiety disorder.
Kim, Taeshik; Kang, Mo-Yeol; Yoo, Min-Sang; Lee, Dongwook; Hong, Yun-Chul
2016-01-01
With the development of technology, extensive use of computers in the workplace is prevalent and increases efficiency. However, computer users are facing new harmful working conditions with high workloads and longer hours. This study aimed to investigate the association between computer use at work and self-reported depressive and anxiety disorder (DAD) in a nationally representative sample of South Korean workers. This cross-sectional study was based on the third Korean Working Conditions Survey (2011), and 48,850 workers were analyzed. Information about computer use and DAD was obtained from a self-administered questionnaire. We investigated the relation between computer use at work and DAD using logistic regression. The 12-month prevalence of DAD in computer-using workers was 1.46 %. After adjustment for socio-demographic factors, the odds ratio for DAD was higher in workers using computers more than 75 % of their workday (OR 1.69, 95 % CI 1.30-2.20) than in workers using computers less than 50 % of their shift. After stratifying by working hours, computer use for over 75 % of the work time was significantly associated with increased odds of DAD in 20-39, 41-50, 51-60, and over 60 working hours per week. After stratifying by occupation, education, and job status, computer use for more than 75 % of the work time was related with higher odds of DAD in sales and service workers, those with high school and college education, and those who were self-employed and employers. A high proportion of computer use at work may be associated with depressive and anxiety disorder. This finding suggests the necessity of a work guideline to help the workers suffering from high computer use at work.
Goyal, N; Jain, N; Rachapalli, V
2009-02-01
The use of computers is increasing in every field of medicine, especially radiology. Filmless radiology departments, speech recognition software, electronic request forms and teleradiology are some of the recent developments that have substantially increased the amount of time a radiologist spends in front of a computer monitor. Computers are also needed for searching literature on the internet, communicating via e-mails, and preparing for lectures and presentations. It is well known that regular computer users can suffer musculoskeletal injuries due to repetitive stress. The role of ergonomics in radiology is to ensure that working conditions are optimized in order to avoid injury and fatigue. Adequate workplace ergonomics can go a long way in increasing productivity, efficiency, and job satisfaction. We review the current literature pertaining to the role of ergonomics in modern-day radiology especially with the development of picture archiving and communication systems (PACS) workstations.
Stewart, Robert; Lee, Ju-Yeon; Kim, Jae-Min; Kim, Sung-Wan; Shin, Il-Seon; Yoon, Jin-Sang
2014-01-01
Objective To measure the prevalence of and factors associated with online inappropriate sexual exposure, cyber-bullying victimisation, and computer-using time in early adolescence. Methods A two-year, prospective school survey was performed with 1,173 children aged 13 at baseline. Data collected included demographic factors, bullying experience, depression, anxiety, coping strategies, self-esteem, psychopathology, attention-deficit hyperactivity disorder symptoms, and school performance. These factors were investigated in relation to problematic Internet experiences and computer-using time at age 15. Results The prevalence of online inappropriate sexual exposure, cyber-bullying victimisation, academic-purpose computer overuse, and game-purpose computer overuse was 31.6%, 19.2%, 8.5%, and 21.8%, respectively, at age 15. Having older siblings, more weekly pocket money, depressive symptoms, anxiety symptoms, and passive coping strategy were associated with reported online sexual harassment. Male gender, depressive symptoms, and anxiety symptoms were associated with reported cyber-bullying victimisation. Female gender was associated with academic-purpose computer overuse, while male gender, lower academic level, increased height, and having older siblings were associated with game-purpose computer-overuse. Conclusion Different environmental and psychological factors predicted different aspects of problematic Internet experiences and computer-using time. This knowledge is important for framing public health interventions to educate adolescents about, and prevent, internet-derived problems. PMID:24605120
Neural simulations on multi-core architectures.
Eichner, Hubert; Klug, Tobias; Borst, Alexander
2009-01-01
Neuroscience is witnessing increasing knowledge about the anatomy and electrophysiological properties of neurons and their connectivity, leading to an ever increasing computational complexity of neural simulations. At the same time, a rather radical change in personal computer technology emerges with the establishment of multi-cores: high-density, explicitly parallel processor architectures for both high performance as well as standard desktop computers. This work introduces strategies for the parallelization of biophysically realistic neural simulations based on the compartmental modeling technique and results of such an implementation, with a strong focus on multi-core architectures and automation, i.e. user-transparent load balancing.
NASA Astrophysics Data System (ADS)
Gintautas, Vadas; Hubler, Alfred
2006-03-01
As worldwide computer resources increase in power and decrease in cost, real-time simulations of physical systems are becoming increasingly prevalent, from laboratory models to stock market projections and entire ``virtual worlds'' in computer games. Often, these systems are meticulously designed to match real-world systems as closely as possible. We study the limiting behavior of a virtual horizontally driven pendulum coupled to its real-world counterpart, where the interaction occurs on a time scale that is much shorter than the time scale of the dynamical system. We find that if the physical parameters of the virtual system match those of the real system within a certain tolerance, there is a qualitative change in the behavior of the two-pendulum system as the strength of the coupling is increased. Applications include a new method to measure the physical parameters of a real system and the use of resonance spectroscopy to refine a computer model. As virtual systems better approximate real ones, even very weak interactions may produce unexpected and dramatic behavior. The research is supported by the National Science Foundation Grant No. NSF PHY 01-40179, NSF DMS 03-25939 ITR, and NSF DGE 03-38215.
Wenzel, H G; Bakken, I J; Johansson, A; Götestam, K G; Øren, Anita
2009-12-01
Computer games are the most advanced form of gaming. For most people, playing is an uncomplicated leisure activity; however, for a minority the gaming becomes excessive and is associated with negative consequences. The aim of the present study was to investigate computer game-playing behaviour in the general adult Norwegian population, and to explore mental health problems and self-reported consequences of playing. The survey includes 3,405 adults 16 to 74 years old (Norway 2007, response rate 35.3%). Overall, 65.5% of the respondents reported having ever played computer games (16-29 years, 93.9%; 30-39 years, 85.0%; 40-59 years, 56.2%; 60-74 years, 25.7%). Among 2,170 players, 89.8% reported playing less than 1 hr. as a daily average over the last month, 5.0% played 1-2 hr. daily, 3.1% played 2-4 hr. daily, and 2.2% reported playing > 4 hr. daily. The strongest risk factor for playing > 4 hr. daily was being an online player, followed by male gender, and single marital status. Reported negative consequences of computer game playing increased strongly with average daily playing time. Furthermore, the prevalence of self-reported sleeping problems, depression, suicidal ideation, anxiety, obsessions/compulsions, and alcohol/substance abuse increased with increasing playing time. This study showed that adult populations should also be included in research on computer game-playing behaviour and its consequences.
Time Division Multiplexing of Semiconductor Qubits
NASA Astrophysics Data System (ADS)
Jarratt, Marie Claire; Hornibrook, John; Croot, Xanthe; Watson, John; Gardner, Geoff; Fallahi, Saeed; Manfra, Michael; Reilly, David
Readout chains, comprising resonators, amplifiers, and demodulators, are likely to be precious resources in quantum computing architectures. The potential to share readout resources is contingent on realising efficient time-division multiplexing (TDM) schemes that are compatible with quantum computing. Here, we demonstrate TDM using a GaAs quantum dot device with multiple charge sensors. Our device incorporates chip-level switches that do not load the impedance matching network. When used in conjunction with frequency multiplexing, each frequency tone addresses multiple time-multiplexed qubits, vastly increasing the capacity of a single readout line.
Busschaert, Cedric; Ridgers, Nicola D; De Bourdeaudhuij, Ilse; Cardon, Greet; Van Cauwenberg, Jelle; De Cocker, Katrien
2016-01-01
More knowledge is warranted about multilevel ecological variables associated with context-specific sitting time among adolescents. The present study explored cross-sectional and longitudinal associations of ecological domains of sedentary behaviour, including socio-demographic, social-cognitive, health-related and physical-environmental variables with sitting during TV viewing, computer use, electronic gaming and motorized transport among adolescents. For this longitudinal study, a sample of Belgian adolescents completed questionnaires at school on context-specific sitting time and associated ecological variables. At baseline, complete data were gathered from 513 adolescents (15.0±1.7 years). At one-year follow-up, complete data of 340 participants were available (retention rate: 66.3%). Multilevel linear regression analyses were conducted to explore cross-sectional correlates (baseline variables) and longitudinal predictors (change scores variables) of context-specific sitting time. Social-cognitive correlates/predictors were most frequently associated with context-specific sitting time. Longitudinal analyses revealed that increases over time in considering it pleasant to watch TV (p < .001), in perceiving TV watching as a way to relax (p < .05), in TV time of parents/care givers (p < .01) and in TV time of siblings (p < .001) were associated with more sitting during TV viewing at follow-up. Increases over time in considering it pleasant to use a computer in leisure time (p < .01) and in the computer time of siblings (p < .001) were associated with more sitting during computer use at follow-up. None of the changes in potential predictors were significantly related to changes in sitting during motorized transport or during electronic gaming. Future intervention studies aiming to decrease TV viewing and computer use should acknowledge the importance of the behaviour of siblings and the pleasure adolescents experience during these screen-related behaviours. In addition, more time parents or care givers spent sitting may lead to more sitting during TV viewing of the adolescents, so that a family-based approach may be preferable for interventions. Experimental study designs are warranted to confirm the present findings.
Home media and children's achievement and behavior.
Hofferth, Sandra L
2010-01-01
This study provides a national picture of the time American 6- to 12-year-olds spent playing video games, using the computer, and watching TV at home in 1997 and 2003, and the association of early use with their achievement and behavior as adolescents. Girls benefited from computer use more than boys, and Black children benefited more than White children. Greater computer use in middle childhood was associated with increased achievement for White and Black girls, and for Black but not White boys. Increased video game play was associated with an improved ability to solve applied problems for Black girls but lower verbal achievement for all girls. For boys, increased video game play was linked to increased aggressive behavior problems. © 2010 The Author. Child Development © 2010 Society for Research in Child Development, Inc.
Radiation Tolerant, FPGA-Based SmallSat Computer System
NASA Technical Reports Server (NTRS)
LaMeres, Brock J.; Crum, Gary A.; Martinez, Andres; Petro, Andrew
2015-01-01
The Radiation Tolerant, FPGA-based SmallSat Computer System (RadSat) computing platform exploits a commercial off-the-shelf (COTS) Field Programmable Gate Array (FPGA) with real-time partial reconfiguration to provide increased performance, power efficiency and radiation tolerance at a fraction of the cost of existing radiation hardened computing solutions. This technology is ideal for small spacecraft that require state-of-the-art on-board processing in harsh radiation environments but where using radiation hardened processors is cost prohibitive.
The Next Frontier in Computing
Sarrao, John
2018-06-13
Exascale computing refers to computing systems capable of at least one exaflop, or a billion billion (10^18) calculations per second. That is 50 times faster than the most powerful supercomputers being used today and represents a thousand-fold increase over the first petascale computer that came into operation in 2008. How we use these large-scale simulation resources is the key to solving some of today's most pressing problems, including clean energy production, nuclear reactor lifetime extension and nuclear stockpile aging.
NASA Technical Reports Server (NTRS)
Darmofal, David L.
2003-01-01
The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptive strategy for reducing simulation errors in integral outputs (functionals) such as lift or drag from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.
Fischer, E A J; De Vlas, S J; Richardus, J H; Habbema, J D F
2008-09-01
Microsimulation of infectious diseases requires simulation of many life histories of interacting individuals. In particular, relatively rare infections such as leprosy need to be studied in very large populations. Computation time increases disproportionally with the size of the simulated population. We present a novel method, MUSIDH, an acronym for multiple use of simulated demographic histories, to reduce computation time. Demographic history refers to the processes of birth, death and all other demographic events that should be unrelated to the natural course of an infection (i.e., non-fatal infections). MUSIDH attaches a fixed number of infection histories to each demographic history, and these infection histories interact as if they were the infection histories of separate individuals. With two examples, mumps and leprosy, we show that the method can give a factor 50 reduction in computation time at the cost of a small loss in precision. The largest reductions are obtained for rare infections with complex demographic histories.
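A minimal sketch of the reuse idea is shown below; the function names and the two-field demographic history are hypothetical stand-ins, and a real microsimulation would additionally let the attached infection histories interact through transmission.

```python
import random

def simulate_demography(n_people):
    """Placeholder demographic histories: (birth time, age at death) per person."""
    return [(random.uniform(0, 50), random.uniform(40, 90)) for _ in range(n_people)]

def musidh_population(n_people, histories_per_person, infection_model):
    """Sketch of the MUSIDH idea: the expensive demographic histories are
    simulated once and each one is reused for several infection histories,
    each of which is then treated as a separate individual."""
    demography = simulate_demography(n_people)       # expensive part, done once
    population = []
    for demo in demography:
        for _ in range(histories_per_person):        # cheap part, repeated reuse
            population.append(infection_model(demo))
    return population
```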
Gradient-free MCMC methods for dynamic causal modelling
Sengupta, Biswa; Friston, Karl J.; Penny, Will D.
2015-03-14
Here, we compare the performance of four gradient-free MCMC samplers (random walk Metropolis sampling, slice-sampling, adaptive MCMC sampling and population-based MCMC sampling with tempering) in terms of the number of independent samples they can produce per unit computational time. For the Bayesian inversion of a single-node neural mass model, both adaptive and population-based samplers are more efficient compared with the random walk Metropolis sampler or slice-sampling; yet adaptive MCMC sampling is more promising in terms of compute time. Slice-sampling yields the highest number of independent samples from the target density, albeit at an almost 1000% increase in computational time in comparison to the most efficient algorithm (i.e., the adaptive MCMC sampler).
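For reference, the simplest of the four samplers, random walk Metropolis, can be sketched in a few lines. This is a generic textbook version under assumed names, not the dynamic-causal-modelling-specific implementation compared in the paper.

```python
import numpy as np

def random_walk_metropolis(log_post, x0, n_samples, step=0.1, seed=0):
    """Generic random walk Metropolis sampler; log_post is the unnormalised
    log posterior of the model parameters."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = np.empty((n_samples, x.size))
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal(x.size)   # Gaussian proposal
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:             # Metropolis accept/reject
            x, lp = proposal, lp_prop
        samples[i] = x
    return samples

# usage: 5000 samples from a 2-D standard normal target
draws = random_walk_metropolis(lambda x: -0.5 * float(x @ x), np.zeros(2), 5000, step=0.5)
```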
Street, Richard L; Liu, Lin; Farber, Neil J; Chen, Yunan; Calvitti, Alan; Zuest, Danielle; Gabuzda, Mark T; Bell, Kristin; Gray, Barbara; Rick, Steven; Ashfaq, Shazia; Agha, Zia
2014-09-01
The computer with the electronic health record (EHR) is an additional 'interactant' in the medical consultation, as clinicians must simultaneously or in alternation engage patient and computer to provide medical care. Few studies have examined how clinicians' EHR workflow (e.g., gaze, keyboard activity, and silence) influences the quality of their communication, the patient's involvement in the encounter, and conversational control of the visit. Twenty-three primary care providers (PCPs) from USA Veterans Administration (VA) primary care clinics participated in the study. Up to 6 patients per PCP were recruited. The proportion of time PCPs spent gazing at the computer was captured in real time via video-recording. Mouse click/scrolling activity was captured through Morae, a usability software that logs mouse clicks and scrolling activity. Conversational silence was coded as the proportion of time in the visit when PCP and patient were not talking. After the visit, patients completed patient satisfaction measures. Trained coders independently viewed videos of the interactions and rated the degree to which PCPs were patient-centered (informative, supportive, partnering) and patients were involved in the consultation. Conversational control was measured as the proportion of time the PCP held the floor compared to the patient. The final sample included 125 consultations. PCPs who spent more time in the consultation gazing at the computer and whose visits had more conversational silence were rated lower in patient-centeredness. PCPs controlled more of the talk time in the visits that also had longer periods of mutual silence. PCPs were rated as having less effective communication when they spent more time looking at the computer and when there was more periods of silence in the consultation. Because PCPs increasingly are using the EHR in their consultations, more research is needed to determine effective ways that they can verbally engage patients while simultaneously managing data in the EHR. EHR activity consumes an increasing proportion of clinicians' time during consultations. To ensure effective communication with their patients, clinicians may benefit from using communication strategies that maintain the flow of conversation when working with the computer, as well as from learning EHR management skills that prevent extended periods of gaze at computer and long periods of silence. Next-generation EHR design must address better usability and clinical workflow integration, including facilitating patient-clinician communication. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Vector computer memory bank contention
NASA Technical Reports Server (NTRS)
Bailey, D. H.
1985-01-01
A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
Dynamic modeling of parallel robots for computed-torque control implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Codourey, A.
1998-12-01
In recent years, increased interest in parallel robots has been observed. Their control with modern theory, such as the computed-torque method, has, however, been restrained, essentially due to the difficulty in establishing a simple dynamic model that can be calculated in real time. In this paper, a simple method based on the virtual work principle is proposed for modeling parallel robots. The mass matrix of the robot, needed for decoupling control strategies, does not explicitly appear in the formulation; however, it can be computed separately, based on kinetic energy considerations. The method is applied to the DELTA parallel robot, leading to a very efficient model that has been implemented in a real-time computed-torque control algorithm.
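The control law that such a dynamic model feeds is the standard computed-torque law. The sketch below is a generic form under assumed interfaces (mass_matrix, bias, and the gain values are placeholders, not the paper's DELTA-specific model); in the paper the mass matrix is obtained separately from kinetic-energy considerations while the virtual-work model supplies the remaining dynamics.

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, mass_matrix, bias, kp=100.0, kd=20.0):
    """Generic computed-torque law:
        tau = M(q) * (qdd_des + Kd*(qd_des - qd) + Kp*(q_des - q)) + h(q, qd)
    mass_matrix(q) returns M(q); bias(q, qd) returns the Coriolis, centrifugal
    and gravity terms h(q, qd).  Gains and interfaces are placeholders."""
    e = np.asarray(q_des) - np.asarray(q)
    ed = np.asarray(qd_des) - np.asarray(qd)
    return mass_matrix(q) @ (np.asarray(qdd_des) + kd * ed + kp * e) + bias(q, qd)
```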
Vector computer memory bank contention
NASA Technical Reports Server (NTRS)
Bailey, David H.
1987-01-01
A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
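A Monte Carlo model of the contention effect described in these two reports can be sketched in a few lines. The random-bank access stream and all parameter names below are simplifying assumptions (real vector accesses are typically strided), intended only to show how long reservation times inflate the effective cycle count per access.

```python
import random

def bank_contention_slowdown(n_banks, reservation_ticks, n_accesses, seed=0):
    """Monte Carlo sketch: a vector stream tries to issue one access per CPU
    tick to a randomly chosen bank; an access to a still-reserved bank stalls
    the stream until the reservation expires.  Returns ticks per access."""
    random.seed(seed)
    busy_until = [0] * n_banks          # tick at which each bank becomes free
    tick = 0
    for _ in range(n_accesses):
        bank = random.randrange(n_banks)
        tick = max(tick, busy_until[bank])        # stall on a busy bank
        busy_until[bank] = tick + reservation_ticks
        tick += 1                                  # issue and move to the next access
    return tick / n_accesses

# example: long reservation times on few banks inflate the cycles per access
print(bank_contention_slowdown(n_banks=16, reservation_ticks=8, n_accesses=100_000))
```

The returned ratio grows as the reservation time approaches the number of banks, which is the qualitative behaviour the Markov chain analysis quantifies.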
Teach Efficient Production with Modular Fixturing Pallets
ERIC Educational Resources Information Center
Creger, Don W.; Payne, Brent A.
2010-01-01
Advances in technology have yielded computer numerical control (CNC) machines and computer-aided manufacturing (CAM) software that save time and increase productivity in today's industrial world. Training students to understand and use these technologies has become a key ingredient in preparing them for work in industry. Teachers of machining…
Real-Time Computer-Mediated Communication: Email and Instant Messaging Simulation
ERIC Educational Resources Information Center
Newman, Amy
2007-01-01
As computer-mediated communication becomes increasingly prevalent in the workplace, students need to apply effective writing principles to today's technologies. Email, in particular, requires interns and new hires to manage incoming messages, use an appropriate tone, and craft clear, concise messages. In addition, with instant messaging (IM)…
ERIC Educational Resources Information Center
Amara, Sofiane; Macedo, Joaquim; Bendella, Fatima; Santos, Alexandre
2016-01-01
Learners are becoming increasingly divers. They may have much personal, social, cultural, psychological, and cognitive diversity. Forming suitable learning groups represents, therefore, a hard and time-consuming task. In Mobile Computer Supported Collaborative Learning (MCSCL) environments, this task is more difficult. Instructors need to consider…
Viscoelastic Finite Difference Modeling Using Graphics Processing Units
NASA Astrophysics Data System (ADS)
Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.
2014-12-01
Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration, such as full-waveform inversion. This paper explores the use of Graphics Processing Units (GPUs) to compute a time-based finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether adopting GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely accessible 2D software of Bohlen (2002), provided under the GNU General Public License (GPL). This implementation uses a second-order centred difference scheme to approximate time derivatives and staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code of Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100^2 and 6000^2 elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite-difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by kernel overheads and delays introduced by memory transfers to and from the GPU through the PCI-E bus. Those tests indicate that GPU memory size and slow memory transfers are the limiting factors of our GPU implementation. These results show the benefits of using GPUs instead of CPUs for time-based finite-difference seismic simulations. The reductions in computation time and in hardware costs are significant and open the door for new approaches in seismic inversion.
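The core of such a solver is a staggered-grid velocity-stress update. The sketch below is a much reduced, purely elastic, one-dimensional NumPy version (second order in space and time) meant only to show the update pattern that the paper's 2D viscoelastic OpenCL kernels parallelize; the array layout and names are assumptions.

```python
import numpy as np

def elastic_1d_step(v, s, rho, mu, dt, dx):
    """One time step of a 1-D staggered-grid velocity-stress update, second
    order in space and time.  v[i] holds particle velocity at node i, s[i]
    holds stress at node i+1/2; rho and mu are per-node density and modulus."""
    v[1:] += dt / (rho[1:] * dx) * (s[1:] - s[:-1])    # velocity from stress gradient
    s[:-1] += dt * mu[:-1] / dx * (v[1:] - v[:-1])     # stress from velocity gradient
    return v, s
```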
Kelishadi, Roya; Qorbani, Mostafa; Motlagh, Mohammad Esmaeil; Heshmat, Ramin; Ardalan, Gelayol; Jari, Mohsen
2014-08-21
Background: This study aimed to assess the relationship between leisure time spent watching television (TV) and at a computer and aggressive and violent behaviour in children and adolescents. Methods: In this nationwide study, 14,880 school students, aged 6-18 years, were selected by cluster and stratified multi-stage sampling method from 30 provinces in Iran. The World Health Organization Global School-based Health Survey questionnaire (WHO-GSHS) was used. Results: Overall, 13,486 children and adolescents (50·8% boys, 75·6% urban residents) completed the study (participation rate 90·6%). The risk of physical fighting and quarrels increased by 29% (OR 1·29, 95% CI 1·19-1·40) with watching TV for >2 hr/day, by 38% (OR 1·38, 95% CI 1·21-1·57) with leisure time computer work of >2 hr/day, and by 42% (OR 1·42, 95% CI 1·28-1·58) with the total screen time of >2 hr/day. Watching TV or leisure time spent on a computer or total screen time of >2 hr/day increased the risk of bullying by 30% (OR 1·30, 95% CI 1·18-1·43), 57% (1·57, 95% CI 1·34-1·85) and 62% (OR 1·62, 95% CI 1·43-1·83). Spending >2 hr/day watching TV and total screen time increased the risk of being bullied by 12% (OR 1·12, 95% CI 1·02-1·22) and 15% (OR 1·15, 95% CI 1·02-1·28), respectively. This relationship was not statistically significant for leisure time spent on a computer (OR 1·10, 95% CI 0·9-1·27). Conclusions: Prolonged leisure time spent on screen activities is associated with violent and aggressive behaviour in children and adolescents. In addition to the duration of screen time, the association is likely to be explained also by the media content.
Ageostrophic winds in the severe storm environment
NASA Technical Reports Server (NTRS)
Moore, J. T.
1982-01-01
The period from 1200 GMT 10 April to 0000 GMT 11 April 1979, during which several major tornadoes and severe thunderstorms occurred, including the Wichita Falls tornado, was studied. A time-adjusted, isentropic data set was used to analyze key parameters. Fourth-order centered finite differences were used to compute the isallobaric, inertial advective, tendency, inertial advective geostrophic and ageostrophic winds. Explicit isentropic trajectories were computed through the isentropic, inviscid equations of motion using a 15 minute time step. Ageostrophic, geostrophic and total vertical motion fields were computed to judge the relative importance of ageostrophy in enhancing the vertical motion field. It is found that ageostrophy is symptomatic of those mass adjustments which take place during upper-level jet streak propagation and can, in a favorable environment, act to increase and release potential instability over meso-alpha time periods.
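For reference, the fourth-order centred difference stencil used for such diagnostics has the standard form below; the sketch is a generic 1-D NumPy implementation with assumed names, not the study's gridded analysis code.

```python
import numpy as np

def centered_diff4(f, dx):
    """Fourth-order centred finite difference on a uniform 1-D grid:
        f'(i) ~ (-f[i+2] + 8*f[i+1] - 8*f[i-1] + f[i-2]) / (12*dx)
    Interior points only; the two-point margins are left at zero."""
    d = np.zeros_like(f, dtype=float)
    d[2:-2] = (-f[4:] + 8.0 * f[3:-1] - 8.0 * f[1:-3] + f[:-4]) / (12.0 * dx)
    return d
```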
Davies, Daniel K; Stock, Steven E; Wehmeyer, Michael L
2002-10-01
Achieving greater independence for individuals with mental retardation depends upon the acquisition of several key skills, including time-management and scheduling skills. The ability to perform tasks according to a schedule is essential to domains like independent living and employment. The use of a portable schedule prompting system to increase independence and self-regulation in time-management for individuals with mental retardation was examined. Twelve people with mental retardation participated in a comparison of their use of the technology system to perform tasks on a schedule with use of a written schedule. Results demonstrated the utility of a Palmtop computer with schedule prompting software to increase independence in the performance of vocational and daily living tasks by individuals with mental retardation.
A real-time spike sorting method based on the embedded GPU.
Zelan Yang; Kedi Xu; Xiang Tian; Shaomin Zhang; Xiaoxiang Zheng
2017-07-01
Microelectrode arrays with hundreds of channels have been widely used to acquire neuron population signals in neuroscience studies. Online spike sorting is becoming one of the most important challenges for high-throughput neural signal acquisition systems. Graphics processing units (GPUs), with their high parallel computing capability, might provide an alternative solution to the increasing real-time computational demands of spike sorting. This study reports a method of real-time spike sorting using the compute unified device architecture (CUDA) implemented on an embedded GPU (NVIDIA JETSON Tegra K1, TK1). The sorting approach is based on principal component analysis (PCA) and K-means. By analyzing the parallelism of each process, the method was further optimized within the thread and memory model of the GPU. Our results showed that the GPU-based classifier on the TK1 is 37.92 times faster than the MATLAB-based classifier on a PC, while their accuracies were the same. The high-performance computing features of embedded GPUs demonstrated in our study suggest that embedded GPUs provide a promising platform for real-time neural signal processing.
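The sorting pipeline itself (PCA feature extraction followed by K-means clustering) is compact; a CPU-side sketch using scikit-learn is shown below to make the steps concrete. It is not the paper's CUDA implementation, and the dimensionality, cluster count, and function name are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def sort_spikes(waveforms, n_components=3, n_units=3, seed=0):
    """CPU-side sketch of the PCA + K-means pipeline: waveforms is an
    (n_spikes, n_samples) array of detected spike snippets; returns one
    cluster label per spike."""
    features = PCA(n_components=n_components).fit_transform(waveforms)
    return KMeans(n_clusters=n_units, n_init=10, random_state=seed).fit_predict(features)
```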
Terascale Computing in Accelerator Science and Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ko, Kwok
2002-08-21
We have entered the age of "terascale" scientific computing. Processors and system architecture both continue to evolve; hundred-teraFLOP computers are expected in the next few years, and petaFLOP computers toward the end of this decade are conceivable. This ever-increasing power to solve previously intractable numerical problems benefits almost every field of science and engineering and is revolutionizing some of them, notably including accelerator physics and technology. At existing accelerators, it will help us optimize performance, expand operational parameter envelopes, and increase reliability. Design decisions for next-generation machines will be informed by unprecedented comprehensive and accurate modeling, as well as computer-aided engineering; all this will increase the likelihood that even their most advanced subsystems can be commissioned on time, within budget, and up to specifications. Advanced computing is also vital to developing new means of acceleration and exploring the behavior of beams under extreme conditions. With continued progress it will someday become reasonable to speak of a complete numerical model of all phenomena important to a particular accelerator.
NASA Astrophysics Data System (ADS)
Cary, John R.; Abell, D.; Amundson, J.; Bruhwiler, D. L.; Busby, R.; Carlsson, J. A.; Dimitrov, D. A.; Kashdan, E.; Messmer, P.; Nieter, C.; Smithe, D. N.; Spentzouris, P.; Stoltz, P.; Trines, R. M.; Wang, H.; Werner, G. R.
2006-09-01
As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. Consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time-step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL7 Framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly.
Multicore Programming Challenges
NASA Astrophysics Data System (ADS)
Perrone, Michael
The computer industry is facing fundamental challenges that are driving a major change in the design of computer processors. Due to restrictions imposed by quantum physics, one historical path to higher computer processor performance - by increased clock frequency - has come to an end. Increasing clock frequency now leads to power consumption costs that are too high to justify. As a result, we have seen in recent years that the processor frequencies have peaked and are receding from their high point. At the same time, competitive market conditions are giving business advantage to those companies that can field new streaming applications, handle larger data sets, and update their models to market conditions faster. The desire for newer, faster and larger is driving continued demand for higher computer performance.
Solution of a large hydrodynamic problem using the STAR-100 computer
NASA Technical Reports Server (NTRS)
Weilmuenster, K. J.; Howser, L. M.
1976-01-01
A representative hydrodynamics problem, the shock-initiated flow over a flat plate, was used for exploring data organizations and program structures needed to exploit the STAR-100 vector processing computer. A brief description of the problem is followed by a discussion of how each portion of the computational process was vectorized. Finally, timings of different portions of the program are compared with equivalent operations on serial machines. The speedup of the STAR-100 program over the CDC 6600 program is shown to increase as the problem size increases. All computations were carried out on a CDC 6600 and a CDC STAR 100, with code written in FORTRAN for the 6600 and in STAR FORTRAN for the STAR 100.
ERIC Educational Resources Information Center
VanLehn, Kurt; Chung, Greg; Grover, Sachin; Madni, Ayesha; Wetzel, Jon
2016-01-01
A common hypothesis is that students will more deeply understand dynamic systems and other complex phenomena if they construct computational models of them. Attempts to demonstrate the advantages of model construction have been stymied by the long time required for students to acquire skill in model construction. In order to make model…
Wan, Shixiang; Zou, Quan
2017-01-01
Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme growth of next-generation sequencing data has created a shortage of efficient alignment approaches for ultra-large biological sequences of different types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g., files of more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient tool, HAlign-II, to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on large-scale DNA and protein data sets, with files larger than 1 GB, showed that HAlign-II saves both time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences, shows extremely high memory efficiency, and scales well with increases in computing resources. HAlign-II also provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II, with open-source code and datasets, is available at http://lab.malab.cn/soft/halign.
Increasing Accuracy in Computed Inviscid Boundary Conditions
NASA Technical Reports Server (NTRS)
Dyson, Roger
2004-01-01
A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: The technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number of time derivatives of surface-normal velocity (consistent with no flow through the boundary) up to arbitrarily high order. The corrections for the first-order spatial derivatives of pressure are calculated by use of the first-order time derivatives of velocity. The corrected first-order spatial derivatives are used to calculate the second-order time derivatives of velocity, which, in turn, are used to calculate the corrections for the second-order pressure derivatives. The process as described is repeated, progressing through increasing orders of derivatives, until the desired accuracy is attained.
Decoherence in adiabatic quantum computation
NASA Astrophysics Data System (ADS)
Albash, Tameem; Lidar, Daniel A.
2015-06-01
Recent experiments with increasingly larger numbers of qubits have sparked renewed interest in adiabatic quantum computation, and in particular quantum annealing. A central question that is repeatedly asked is whether quantum features of the evolution can survive over the long time scales used for quantum annealing relative to standard measures of the decoherence time. We reconsider the role of decoherence in adiabatic quantum computation and quantum annealing using the adiabatic quantum master-equation formalism. We restrict ourselves to the weak-coupling and singular-coupling limits, which correspond to decoherence in the energy eigenbasis and in the computational basis, respectively. We demonstrate that decoherence in the instantaneous energy eigenbasis does not necessarily detrimentally affect adiabatic quantum computation, and in particular that a short single-qubit T2 time need not imply adverse consequences for the success of the quantum adiabatic algorithm. We further demonstrate that boundary cancellation methods, designed to improve the fidelity of adiabatic quantum computing in the closed-system setting, remain beneficial in the open-system setting. To address the high computational cost of master-equation simulations, we also demonstrate that a quantum Monte Carlo algorithm that explicitly accounts for a thermal bosonic bath can be used to interpolate between classical and quantum annealing. Our study highlights and clarifies the significantly different role played by decoherence in the adiabatic and circuit models of quantum computing.
A Hybrid MPI/OpenMP Approach for Parallel Groundwater Model Calibration on Multicore Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan
2010-01-01
Groundwater model calibration is becoming increasingly computationally time intensive. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multicore computers with minimal parallelization effort. First, HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for a uranium transport model with over a hundred species involving nearly a hundred reactions, and for a field-scale coupled flow and transport model. In the first application, a single parallelizable loop is identified to consume over 97% of the total computational time. With a few lines of OpenMP compiler directives inserted into the code, the computational time is reduced about ten times on a compute node with 16 cores. The performance is further improved by selectively parallelizing a few more loops. For the field-scale application, parallelizable loops in 15 of the 174 subroutines in HGC5 are identified to take more than 99% of the execution time. By adding the preconditioned conjugate gradient solver and BiCGSTAB, and using a coloring scheme to separate the elements, nodes, and boundary sides, the subroutines for finite element assembly, soil property update, and boundary condition application are parallelized, resulting in a speedup of about 10 on a 16-core compute node. The Levenberg-Marquardt (LM) algorithm is added into HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, compute nodes equal in number to the adjustable parameters (when the forward difference is used for Jacobian approximation), or twice that number (when the central difference is used), are used to reduce the calibration time from days and weeks to a few hours for the two applications. This approach can be extended to global optimization schemes and Monte Carlo analysis, where thousands of compute nodes can be efficiently utilized.
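A hedged sketch of the MPI level of such a hybrid scheme follows: each rank computes the forward-difference Jacobian columns for its share of the adjustable parameters. Here `run_model` is a hypothetical stand-in for a forward simulation, and mpi4py is used purely for illustration rather than the Fortran/C MPI of the original code.

```python
# Hedged sketch: distribute forward-difference Jacobian columns across MPI ranks.
# "run_model" is a hypothetical placeholder for a forward simulation run.
import numpy as np
from mpi4py import MPI

def run_model(params):                        # placeholder forward model
    return np.array([np.sum(params**2), np.prod(np.cos(params))])

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

params = np.array([1.0, 2.0, 0.5, 3.0])       # adjustable parameters
eps = 1.0e-6
base = run_model(params)

my_cols = {}
for j in range(rank, params.size, size):      # round-robin distribution of columns
    perturbed = params.copy()
    perturbed[j] += eps
    my_cols[j] = (run_model(perturbed) - base) / eps

gathered = comm.allgather(my_cols)            # every rank receives every column
if rank == 0:
    cols = {j: col for part in gathered for j, col in part.items()}
    jacobian = np.column_stack([cols[j] for j in range(params.size)])
    print(jacobian)
```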
Optimization of tomographic reconstruction workflows on geographically distributed resources
Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.
2016-01-01
New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Moreover, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149
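An illustrative toy version of the three-stage performance model is sketched below; it is not the authors' calibrated model, and all numbers are made-up placeholders.

```python
# Toy version of the three-stage performance model: estimated workflow time =
# data transfer + queue wait + reconstruction compute.  All numbers are made up.
def estimate_workflow_time(data_gb, bandwidth_gbps, queue_wait_s,
                           slice_count, secs_per_slice, nodes):
    transfer = data_gb * 8.0 / bandwidth_gbps             # seconds to move the dataset
    compute = slice_count * secs_per_slice / nodes         # ideally parallel reconstruction
    return transfer + queue_wait_s + compute

# compare two hypothetical remote clusters for the same reconstruction job
for name, bw, wait, nodes in [("cluster_A", 2.0, 600, 64), ("cluster_B", 8.0, 1800, 256)]:
    t = estimate_workflow_time(data_gb=500, bandwidth_gbps=bw, queue_wait_s=wait,
                               slice_count=2048, secs_per_slice=30, nodes=nodes)
    print(f"{name}: {t / 3600.0:.2f} hours")
```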
NASA Astrophysics Data System (ADS)
Wei, Wang; Chongchao, Pan; Yikai, Liang; Gang, Li
2017-11-01
With the rapid development of information technology, data centers are growing quickly, and the energy consumption of computer rooms is rising with them; air-conditioning cooling accounts for a large share of that consumption. Applying new technology to reduce computer-room energy consumption has therefore become an important topic in energy-saving research. This paper applies Internet of Things technology to design a green computer-room environmental monitoring system. Using wireless sensor network technology, the system collects real-time environmental data and presents them as a three-dimensional visualization, including views of computer-room assets, temperature clouds, humidity clouds and the microenvironment. Based on the microenvironment readings, the air volume, temperature and humidity of the air conditioning can be adjusted for individual equipment cabinets to achieve precise cooling, which reduces air-conditioning energy consumption and, as a result, greatly reduces the overall energy consumption of the green computer room. The system was deployed in the computer center of Weihai; after a year of testing and operation it achieved a good energy-saving effect, which verified the effectiveness of the project for computer-room energy conservation.
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne E.
2013-01-01
We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.
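A schematic illustration of the CGP idea follows, under simplifying assumptions: a plain Jacobi iteration stands in for the black-box Poisson solver, linear interpolation for the prolongation step, and the right-hand side is a synthetic placeholder field rather than a real divergence source.

```python
# Schematic CGP sketch: solve the Poisson problem on a coarsened grid, then
# interpolate the solution back to the fine grid (assumptions noted above).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def jacobi_poisson(rhs, h, iters=2000):
    """Approximately solve laplacian(p) = rhs on the unit square with p = 0 on the boundary."""
    p = np.zeros_like(rhs)
    for _ in range(iters):
        p[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] +
                                p[1:-1, :-2] + p[1:-1, 2:] - h**2 * rhs[1:-1, 1:-1])
    return p

n_fine, ratio = 129, 4                               # fine grid size and coarsening ratio
n_coarse = (n_fine - 1) // ratio + 1
xf = np.linspace(0.0, 1.0, n_fine)
Xf, Yf = np.meshgrid(xf, xf, indexing="ij")
rhs_fine = np.sin(np.pi * Xf) * np.sin(np.pi * Yf)   # stand-in for the divergence source term

rhs_coarse = rhs_fine[::ratio, ::ratio]              # restrict the source to the coarse grid
p_coarse = jacobi_poisson(rhs_coarse, h=1.0 / (n_coarse - 1))

xc = np.linspace(0.0, 1.0, n_coarse)
interp = RegularGridInterpolator((xc, xc), p_coarse)  # prolong back to the fine grid
p_fine = interp(np.stack([Xf.ravel(), Yf.ravel()], axis=-1)).reshape(n_fine, n_fine)
print(p_coarse.shape, "->", p_fine.shape)
```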
Effects on Training Using Illumination in Virtual Environments
NASA Technical Reports Server (NTRS)
Maida, James C.; Novak, M. S. Jennifer; Mueller, Kristian
1999-01-01
Camera based tasks are commonly performed during orbital operations, and orbital lighting conditions, such as high contrast shadowing and glare, are a factor in performance. Computer based training using virtual environments is a common tool used to make and keep CTW members proficient. If computer based training included some of these harsh lighting conditions, would the crew increase their proficiency? The project goal was to determine whether computer based training increases proficiency if one trains for a camera based task using computer generated virtual environments with enhanced lighting conditions such as shadows and glare rather than the color shaded computer images normally used in simulators. Previous experiments were conducted using a two degree of freedom docking system. Test subjects had to align a boresight camera using a hand controller with one axis of translation and one axis of rotation. Two sets of subjects were trained on two computer simulations using computer generated virtual environments, one with lighting, and one without. Results revealed that when subjects were constrained by time and accuracy, those who trained with simulated lighting conditions performed significantly better than those who did not. To reinforce these results for speed and accuracy, the task complexity was increased.
Development of Parallel Code for the Alaska Tsunami Forecast Model
NASA Astrophysics Data System (ADS)
Bahng, B.; Knight, W. R.; Whitmore, P.
2014-12-01
The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communications between domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest resolution Digital Elevation Models (DEM) used by ATFM are 1/3 arc-seconds. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results with the long term aim of tsunami forecasts from source to high resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs; and, will make possible future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.
Cloud computing for comparative genomics.
Wall, Dennis P; Kudtarkar, Parul; Fusaro, Vincent A; Pivovarov, Rimma; Patil, Prasad; Tonellato, Peter J
2010-05-18
Large comparative genomics studies and tools are becoming increasingly more compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time took just under 70 hours and cost a total of $6,302 USD. The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provides a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems.
Elucidating Reaction Mechanisms on Quantum Computers
NASA Astrophysics Data System (ADS)
Wiebe, Nathan; Reiher, Markus; Svore, Krysta; Wecker, Dave; Troyer, Matthias
We show how a quantum computer can be employed to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical-computer simulations for such problems, to significantly increase their accuracy and enable hitherto intractable simulations. Detailed resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. This demonstrates that quantum computers will realistically be able to tackle important problems in chemistry that are both scientifically and economically significant.
A vectorized Lanczos eigensolver for high-performance computers
NASA Technical Reports Server (NTRS)
Bostic, Susan W.
1990-01-01
The computational strategies used to implement a Lanczos-based-method eigensolver on the latest generation of supercomputers are described. Several examples of structural vibration and buckling problems are presented that show the effects of using optimization techniques to increase the vectorization of the computational steps. The data storage and access schemes and the tools and strategies that best exploit the computer resources are presented. The method is implemented on the Convex C220, the Cray 2, and the Cray Y-MP computers. Results show that very good computation rates are achieved for the most computationally intensive steps of the Lanczos algorithm and that the Lanczos algorithm is many times faster than other methods extensively used in the past.
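A plain NumPy sketch of the basic Lanczos tridiagonalization underlying such an eigensolver is given below; it omits the vectorization, reorthogonalization, and storage strategies discussed in the report, and uses a small random symmetric test matrix.

```python
# Serial sketch of the basic Lanczos iteration (no reorthogonalization or
# out-of-core storage); extreme Ritz values of T approximate those of A.
import numpy as np

def lanczos(A, k, seed=0):
    """Build a k-step tridiagonal approximation T of the symmetric matrix A."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    alpha, beta = np.zeros(k), np.zeros(k)
    q = rng.normal(size=n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(k):
        w = A @ Q[:, j] - (beta[j - 1] * Q[:, j - 1] if j > 0 else 0.0)
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)

A = np.random.default_rng(1).normal(size=(200, 200))
A = A + A.T                                       # symmetric test matrix
T = lanczos(A, k=40)
print(np.sort(np.linalg.eigvalsh(T))[-3:])        # Ritz values approximate the largest eigenvalues
print(np.sort(np.linalg.eigvalsh(A))[-3:])
```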
Using genetic information while protecting the privacy of the soul.
Moor, J H
1999-01-01
Computing plays an important role in genetics (and vice versa). Theoretically, computing provides a conceptual model for the function and malfunction of our genetic machinery. Practically, contemporary computers and robots equipped with advanced algorithms make the revelation of the complete human genome imminent--computers are about to reveal our genetic souls for the first time. Ethically, computers help protect privacy by restricting access in sophisticated ways to genetic information. But the inexorable fact that computers will increasingly collect, analyze, and disseminate abundant amounts of genetic information made available through the genetic revolution, not to mention that inexpensive computing devices will make genetic information gathering easier, underscores the need for strong and immediate privacy legislation.
NASA Astrophysics Data System (ADS)
Ford, Eric B.; Dindar, Saleh; Peters, Jorg
2015-08-01
The realism of astrophysical simulations and statistical analyses of astronomical data are set by the available computational resources. Thus, astronomers and astrophysicists are constantly pushing the limits of computational capabilities. For decades, astronomers benefited from massive improvements in computational power that were driven primarily by increasing clock speeds and required relatively little attention to details of the computational hardware. For nearly a decade, increases in computational capabilities have come primarily from increasing the degree of parallelism, rather than increasing clock speeds. Further increases in computational capabilities will likely be led by many-core architectures such as Graphical Processing Units (GPUs) and Intel Xeon Phi. Successfully harnessing these new architectures requires significantly more understanding of the hardware architecture, cache hierarchy, compiler capabilities, and network characteristics. I will provide an astronomer's overview of the opportunities and challenges provided by modern many-core architectures and elastic cloud computing. The primary goal is to help an astronomical audience understand what types of problems are likely to yield speed-ups of more than an order of magnitude and which problems are unlikely to parallelize sufficiently efficiently to be worth the development time and/or costs. I will draw on my experience leading a team in developing the Swarm-NG library for parallel integration of large ensembles of small n-body systems on GPUs, as well as several smaller software projects. I will share lessons learned from collaborating with computer scientists, including both technical and soft skills. Finally, I will discuss the challenges of training the next generation of astronomers to be proficient in this new era of high-performance computing, drawing on experience teaching a graduate class on High-Performance Scientific Computing for Astrophysics and organizing a 2014 advanced summer school on Bayesian Computing for Astronomical Data Analysis with support of the Penn State Center for Astrostatistics and Institute for CyberScience.
Computation of shear-induced collective-diffusivity in emulsions
NASA Astrophysics Data System (ADS)
Malipeddi, Abhilash Reddy; Sarkar, Kausik
2017-11-01
The shear-induced collective diffusivity of drops in an emulsion is calculated through simulation. A front-tracking finite difference method is used to integrate the Navier-Stokes equations. When a cloud of drops is subjected to shear flow, after a certain time, the width of the cloud increases with the 1/3 power of time. This scaling of drop-cloud width with time is characteristic of (sub-)diffusion that arises from irreversible two-drop interactions. The collective diffusivity is calculated from this relationship. A feature of the procedure adopted here is its modest computational requirement: a few drops (about 70) sheared for a short time (about 70 strain) is found to be sufficient to obtain a good estimate. As far as we know, collective diffusivity has not been calculated for drops through simulation until now. The computed values match experimental measurements reported in the literature. The diffusivity in emulsions is calculated for a range of capillary (Ca) and Reynolds (Re) numbers. It is found to be a unimodal function of Ca, similar to self-diffusivity. A sub-linear increase of the diffusivity with Re is seen for Re < 5. This work has been limited to a viscosity-matched case.
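A hedged illustration of how a diffusivity could be extracted from the reported 1/3-power scaling follows; it assumes the cloud width obeys the cubic-growth form consistent with w ~ t^(1/3), and the width data are synthetic rather than simulation output.

```python
# Hedged illustration (synthetic data): assume w(t)^3 - w(0)^3 = 3 D t, the cubic
# growth consistent with the reported w ~ t^(1/3) scaling, and fit for D.
import numpy as np

strain = np.linspace(1.0, 70.0, 50)                      # dimensionless time (strain)
true_D = 0.02
width = (1.0 + 3.0 * true_D * strain) ** (1.0 / 3.0)     # synthetic widths obeying the scaling
width += np.random.default_rng(0).normal(scale=0.005, size=strain.size)

slope, intercept = np.polyfit(strain, width**3, deg=1)   # fit w^3 = 3 D t + w0^3
print("estimated D:", slope / 3.0)                       # close to 0.02
```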
Light-weight Parallel Python Tools for Earth System Modeling Workflows
NASA Astrophysics Data System (ADS)
Mickelson, S. A.; Paul, K.; Xu, H.; Dennis, J.; Brown, D. I.
2015-12-01
With the growth in computing power over the last 30 years, earth system modeling codes have become increasingly data-intensive. As an example, it is expected that the data required for the next Intergovernmental Panel on Climate Change (IPCC) Assessment Report (AR6) will increase by more than 10x to an expected 25PB per climate model. Faced with this daunting challenge, developers of the Community Earth System Model (CESM) have chosen to change the format of their data for long-term storage from time-slice to time-series, in order to reduce the required download bandwidth needed for later analysis and post-processing by climate scientists. Hence, efficient tools are required to (1) perform the transformation of the data from time-slice to time-series format and to (2) compute climatology statistics, needed for many diagnostic computations, on the resulting time-series data. To address the first of these two challenges, we have developed a parallel Python tool for converting time-slice model output to time-series format. To address the second of these challenges, we have developed a parallel Python tool to perform fast time-averaging of time-series data. These tools are designed to be light-weight, be easy to install, have very few dependencies, and can be easily inserted into the Earth system modeling workflow with negligible disruption. In this work, we present the motivation, approach, and testing results of these two light-weight parallel Python tools, as well as our plans for future research and development.
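A minimal sketch of the second task, parallel time-averaging of time-series variables, is shown below; it is not the CESM tools themselves, synthetic arrays replace file I/O, and the variable names and sizes are placeholders.

```python
# Minimal sketch of parallel time-averaging of time-series variables (synthetic
# arrays stand in for file I/O; not the actual CESM post-processing tools).
import numpy as np
from multiprocessing import Pool

def time_average(item):
    name, series = item                       # series has shape (time, lat, lon)
    return name, series.mean(axis=0)          # collapse the time dimension

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    variables = {f"var{i}": rng.normal(size=(120, 96, 144)) for i in range(8)}
    with Pool(processes=4) as pool:           # one variable per worker at a time
        climatology = dict(pool.map(time_average, variables.items()))
    print({name: field.shape for name, field in climatology.items()})
```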
PREFACE: 20th International Conference on Computing in High Energy and Nuclear Physics (CHEP2013)
NASA Astrophysics Data System (ADS)
Groep, D. L.; Bonacorsi, D.
2014-06-01
In this age and time, capturing 'state of the art' of computing in a conference proceedings gets to be increasingly hard. It is quite common too for the submitted abstracts to refer to studies yet to be done - and the time span between abstract submission and the actual conference is often less than six months. By the time the proceedings appear in journal form, a similar period after its closing session, some of the work is over a year old, by which time new ideas will have been formed and the deployment of current ones progressed - at times beyond recognition. The preface is continued in the pdf.
Gebremariam, Mekdes K; Totland, Torunn H; Andersen, Lene F; Bergh, Ingunn H; Bjelland, Mona; Grydeland, May; Ommundsen, Yngvar; Lien, Nanna
2012-02-06
In order to inform interventions to prevent sedentariness, more longitudinal studies are needed focusing on stability and change over time in multiple sedentary behaviours. This paper investigates patterns of stability and change in TV/DVD use, computer/electronic game use and total screen time (TST) and factors associated with these patterns among Norwegian children in the transition between childhood and adolescence. The baseline of this longitudinal study took place in September 2007 and included 975 students from 25 control schools of an intervention study, the HEalth In Adolescents (HEIA) study. The first follow-up took place in May 2008 and the second follow-up in May 2009, with 885 students participating at all time points (average age at baseline = 11.2, standard deviation ± 0.3). Time used for/spent on TV/DVD and computer/electronic games was self-reported, and a TST variable (hours/week) was computed. Tracking analyses based on absolute and rank measures, as well as regression analyses to assess factors associated with change in TST and with tracking high TST were conducted. Time spent on all sedentary behaviours investigated increased in both genders. Findings based on absolute and rank measures revealed a fair to moderate level of tracking over the 2 year period. High parental education was inversely related to an increase in TST among females. In males, self-efficacy related to barriers to physical activity and living with married or cohabitating parents were inversely related to an increase in TST. Factors associated with tracking high vs. low TST in the multinomial regression analyses were low self-efficacy and being of an ethnic minority background among females, and low self-efficacy, being overweight/obese and not living with married or cohabitating parents among males. Use of TV/DVD and computer/electronic games increased with age and tracked over time in this group of 11-13 year old Norwegian children. Interventions targeting these sedentary behaviours should thus be introduced early. The identified modifiable and non-modifiable factors associated with change in TST and tracking of high TST should be taken into consideration when planning such interventions.
Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin
2015-11-01
Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates to the level on par with the best solution obtained from the population-based methods while maintaining high computational speed. These suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the size of the parameter search space vastly large. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
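A sketch of the hybrid strategy discussed above, a cheap global search followed by local refinement of its best candidate, is given below; the two-parameter one-gene expression model, the synthetic "observed" data, and the bounds are all placeholders.

```python
# Hybrid strategy sketch: population-based global search, then local refinement.
# The toy model and data are placeholders, not a real gene circuit.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import differential_evolution, minimize

t = np.linspace(0.0, 10.0, 50)

def simulate(theta):
    k_syn, k_deg = theta
    dx = lambda x, _t: k_syn - k_deg * x                  # constitutive synthesis, first-order decay
    return odeint(dx, 0.0, t).ravel()

observed = simulate([2.0, 0.5]) + np.random.default_rng(0).normal(scale=0.05, size=t.size)
sse = lambda theta: float(np.sum((simulate(theta) - observed) ** 2))

rough = differential_evolution(sse, bounds=[(0.1, 10.0), (0.01, 5.0)],
                               maxiter=30, seed=1)        # population-based global stage
refined = minimize(sse, rough.x, method="Nelder-Mead")    # local refinement of the best candidate
print("global stage:", rough.x, " refined:", refined.x)
```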
Sachetto Oliveira, Rafael; Martins Rocha, Bernardo; Burgarelli, Denise; Meira, Wagner; Constantinides, Christakis; Weber Dos Santos, Rodrigo
2018-02-01
The use of computer models as a tool for the study and understanding of the complex phenomena of cardiac electrophysiology has attained increased importance nowadays. At the same time, the increased complexity of the biophysical processes translates into complex computational and mathematical models. To speed up cardiac simulations and to allow more precise and realistic uses, 2 different techniques have been traditionally exploited: parallel computing and sophisticated numerical methods. In this work, we combine a modern parallel computing technique based on multicore and graphics processing units (GPUs) and a sophisticated numerical method based on a new space-time adaptive algorithm. We evaluate each technique alone and in different combinations: multicore and GPU, multicore and GPU and space adaptivity, multicore and GPU and space adaptivity and time adaptivity. All the techniques and combinations were evaluated under different scenarios: 3D simulations on slabs, 3D simulations on a ventricular mouse mesh, ie, complex geometry, sinus-rhythm, and arrhythmic conditions. Our results suggest that multicore and GPU accelerate the simulations by an approximate factor of 33×, whereas the speedups attained by the space-time adaptive algorithms were approximately 48. Nevertheless, by combining all the techniques, we obtained speedups that ranged between 165 and 498. The tested methods were able to reduce the execution time of a simulation by more than 498× for a complex cellular model in a slab geometry and by 165× in a realistic heart geometry simulating spiral waves. The proposed methods will allow faster and more realistic simulations in a feasible time with no significant loss of accuracy. Copyright © 2017 John Wiley & Sons, Ltd.
Parallel-vector unsymmetric Eigen-Solver on high performance computers
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Jiangning, Qin
1993-01-01
The popular QR algorithm for computing all eigenvalues of an unsymmetric matrix is reviewed. Among the basic components in the QR algorithm, it was concluded from this study that the reduction of an unsymmetric matrix to a Hessenberg form (before applying the QR algorithm itself) can be done effectively by exploiting the vector speed and multiple processors offered by modern high-performance computers. Numerical examples of several test cases have indicated that the proposed parallel-vector algorithm for converting a given unsymmetric matrix to a Hessenberg form offers computational advantages over the existing algorithm. The time saving obtained by the proposed methods increases as the problem size increases.
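A small SciPy illustration of the pipeline the abstract reviews is shown below: reduce an unsymmetric matrix to Hessenberg form, then obtain its eigenvalues. It uses serial library routines, not the parallel-vector implementation of the report.

```python
# Hessenberg reduction followed by eigenvalue computation (serial SciPy routines).
import numpy as np
from scipy.linalg import hessenberg, eig

A = np.random.default_rng(0).normal(size=(6, 6))   # unsymmetric test matrix
H, Q = hessenberg(A, calc_q=True)                  # A = Q H Q^T with H upper Hessenberg

print(np.allclose(Q @ H @ Q.T, A))                 # True: the reduction is a similarity transform
print(np.sort_complex(eig(H)[0]))                  # eigenvalues of H ...
print(np.sort_complex(eig(A)[0]))                  # ... match those of A
```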
The effects of syntactic complexity on the human-computer interaction
NASA Technical Reports Server (NTRS)
Chechile, R. A.; Fleischman, R. N.; Sadoski, D. M.
1986-01-01
Three divided-attention experiments were performed to evaluate the effectiveness of a syntactic analysis of the primary task of editing flight route-way-point information. For all editing conditions, a formal syntactic expression was developed for the operator's interaction with the computer. In terms of the syntactic expression, four measures of syntactic complexity were examined. Increased syntactic complexity did increase the time to train operators, but once the operators were trained, syntactic complexity did not influence the divided-attention performance. However, the number of memory retrievals required of the operator significantly accounted for the variation in the accuracy, workload, and task completion time found on the different editing tasks under attention-sharing conditions.
Scaling predictive modeling in drug development with cloud computing.
Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola
2015-01-26
Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
Optimizing ion channel models using a parallel genetic algorithm on graphical processors.
Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon
2012-01-01
We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm for work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80 node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.
Scalable Multiprocessor for High-Speed Computing in Space
NASA Technical Reports Server (NTRS)
Lux, James; Lang, Minh; Nishimoto, Kouji; Clark, Douglas; Stosic, Dorothy; Bachmann, Alex; Wilkinson, William; Steffke, Richard
2004-01-01
A report discusses the continuing development of a scalable multiprocessor computing system for hard real-time applications aboard a spacecraft. "Hard realtime applications" signifies applications, like real-time radar signal processing, in which the data to be processed are generated at "hundreds" of pulses per second, each pulse "requiring" millions of arithmetic operations. In these applications, the digital processors must be tightly integrated with analog instrumentation (e.g., radar equipment), and data input/output must be synchronized with analog instrumentation, controlled to within fractions of a microsecond. The scalable multiprocessor is a cluster of identical commercial-off-the-shelf generic DSP (digital-signal-processing) computers plus generic interface circuits, including analog-to-digital converters, all controlled by software. The processors are computers interconnected by high-speed serial links. Performance can be increased by adding hardware modules and correspondingly modifying the software. Work is distributed among the processors in a parallel or pipeline fashion by means of a flexible master/slave control and timing scheme. Each processor operates under its own local clock; synchronization is achieved by broadcasting master time signals to all the processors, which compute offsets between the master clock and their local clocks.
Computers and the supply of radiology services: anatomy of a disruptive technology.
Levy, Frank
2008-10-01
Over the next decade, computers will augment the supply of radiology services at a time when reimbursement rules are likely to tighten. Increased supply and slower growing demand will result in a radiology market that is more competitive, with less income growth, than the market of the past 15 years.
Gradient-free MCMC methods for dynamic causal modelling.
Sengupta, Biswa; Friston, Karl J; Penny, Will D
2015-05-15
In this technical note we compare the performance of four gradient-free MCMC samplers (random walk Metropolis sampling, slice-sampling, adaptive MCMC sampling and population-based MCMC sampling with tempering) in terms of the number of independent samples they can produce per unit computational time. For the Bayesian inversion of a single-node neural mass model, both adaptive and population-based samplers are more efficient compared with random walk Metropolis sampler or slice-sampling; yet adaptive MCMC sampling is more promising in terms of compute time. Slice-sampling yields the highest number of independent samples from the target density - albeit at almost 1000% increase in computational time, in comparison to the most efficient algorithm (i.e., the adaptive MCMC sampler). Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
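A minimal random-walk Metropolis sampler, one of the four gradient-free schemes compared above, is sketched below; it is applied to a toy two-dimensional Gaussian target rather than a neural mass model.

```python
# Minimal random-walk Metropolis sampler on a toy 2-D Gaussian target.
import numpy as np

def log_target(x):
    return -0.5 * np.sum(x**2 / np.array([1.0, 0.25]))   # independent Gaussians, sd = 1.0 and 0.5

def rw_metropolis(n_samples, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    samples = np.empty((n_samples, 2))
    accepted = 0
    for i in range(n_samples):
        proposal = x + step * rng.normal(size=2)          # symmetric random-walk proposal
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x, accepted = proposal, accepted + 1
        samples[i] = x
    return samples, accepted / n_samples

samples, rate = rw_metropolis(20000)
print("acceptance rate:", rate)
print("posterior std (after burn-in):", samples[5000:].std(axis=0))   # roughly [1.0, 0.5]
```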
Hospital mainframe computer documentation of pharmacist interventions.
Schumock, G T; Guenette, A J; Clark, T; McBride, J M
1993-07-01
The hospital mainframe computer pharmacist intervention documentation system described has successfully facilitated the recording, communication, analysis, and reporting of interventions at our hospital. It has proven to be time efficient, accessible, and user-friendly from the standpoint of both the pharmacist and administrator. The advantages of this system greatly outweigh manual documentation and justify the initial time investment in its design and development. In the future, it is hoped that the system can have even broader impact. Intervention/recommendations documented can be made accessible to medical and nursing staff, and as such further increase interdepartmental communication. As pharmacists embrace the pharmaceutical care mandate, documenting interventions in patient care will continue to grow in importance. Complete documentation is essential if pharmacists are to assume responsibility for patient outcomes. With time being an ever-increasing premium, and with economic and human resources dwindling, an efficient and effective means of recording and tracking pharmacist interventions will become imperative for survival in the fiscally challenged health care arena. Documentation of pharmacist intervention using a hospital mainframe computer at UIH has proven both efficient and effective.
Majeed, Raphael W; Stöhr, Mark R; Röhrig, Rainer
2012-01-01
Notifications and alerts play an important role in clinical daily routine. Rising prevalence of clinical decision support systems and electronic health records also result in increasing demands on notification systems. Failure adequately to communicate a critical value is a potential cause of adverse events. Critical laboratory values and changing vital data depend on timely notifications of medical staff. Vital monitors and medical devices rely on acoustic signals for alerting which are prone to "alert fatigue" and require medical staff to be present within audible range. Personal computers are unsuitable to display time critical notification messages, since the targeted medical staff are not always operating or watching the computer. On the other hand, mobile phones and smart devices enjoy increasing popularity. Previous notification systems sending text messages to mobile phones depend on asynchronous confirmations. By utilizing an automated telephony server, we provide a method to deliver notifications quickly and independently of the recipients' whereabouts while allowing immediate feedback and confirmations. Evaluation results suggest the feasibility of the proposed notification system for real-time notifications.
Mobility in hospital work: towards a pervasive computing hospital environment.
Morán, Elisa B; Tentori, Monica; González, Víctor M; Favela, Jesus; Martínez-Garcia, Ana I
2007-01-01
Handheld computers are increasingly being used by hospital workers. With the integration of wireless networks into hospital information systems, handheld computers can provide the basis for a pervasive computing hospital environment; to develop this designers need empirical information to understand how hospital workers interact with information while moving around. To characterise the medical phenomena we report the results of a workplace study conducted in a hospital. We found that individuals spend about half of their time at their base location, where most of their interactions occur. On average, our informants spent 23% of their time performing information management tasks, followed by coordination (17.08%), clinical case assessment (15.35%) and direct patient care (12.6%). We discuss how our results offer insights for the design of pervasive computing technology, and directions for further research and development in this field such as transferring information between heterogeneous devices and integration of the physical and digital domains.
Spectral decontamination of a real-time helicopter simulation
NASA Technical Reports Server (NTRS)
Mcfarland, R. E.
1983-01-01
Nonlinear mathematical models of a rotor system, referred to as rotating blade-element models, produce steady-state, high-frequency harmonics of significant magnitude. In a discrete simulation model, certain of these harmonics may be incompatible with realistic real-time computational constraints because of their aliasing into the operational low-pass region. However, the energy in an aliased harmonic may be suppressed by increasing the computation rate of an isolated, causal nonlinearity and using an appropriate filter. This decontamination technique is applied to Sikorsky's real-time model of the Black Hawk helicopter, as supplied to NASA for handling-qualities investigations.
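A hedged signal-processing illustration of the decontamination idea follows: evaluating a harmonic-rich nonlinearity at a higher rate and filtering before returning to the base rate suppresses harmonics that would otherwise alias into the low-pass band. The rates and the square-wave nonlinearity are arbitrary stand-ins for the rotor model.

```python
# Illustration: oversample a harmonic-rich nonlinearity, low-pass filter, and
# decimate back to the base rate, so high harmonics no longer alias downward.
import numpy as np
from scipy.signal import decimate

fs, oversample, f0, T = 100.0, 8, 12.0, 4.0              # base rate (Hz), factor, input freq, duration

t_fast = np.arange(0.0, T, 1.0 / (fs * oversample))
y_fast = np.sign(np.sin(2 * np.pi * f0 * t_fast))         # harmonic-rich causal nonlinearity

y_filtered = decimate(y_fast, oversample)                  # anti-alias filter + downsample to fs
y_direct = np.sign(np.sin(2 * np.pi * f0 * np.arange(0.0, T, 1.0 / fs)))

bin_16hz = int(16 * T)   # the 84 Hz harmonic aliases to 16 Hz when sampled directly at 100 Hz
for name, y in [("oversampled + filtered", y_filtered), ("sampled at base rate", y_direct)]:
    spectrum = np.abs(np.fft.rfft(y)) / y.size
    print(name, "amplitude at 16 Hz:", round(float(spectrum[bin_16hz]), 4))
```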
Development of a small-scale computer cluster
NASA Astrophysics Data System (ADS)
Wilhelm, Jay; Smith, Justin T.; Smith, James E.
2008-04-01
An increase in demand for computing power in academia has created a need for high-performance machines. The computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits to its performance, a cluster of computers can multiply the performance of a single computer with the proper software. Cluster computing has therefore become a much sought after technology. Typical desktop computers could be used for cluster computing, but are not intended for constant full-speed operation and take up more space than rack-mount servers. Specialty computers that are designed to be used in clusters meet high availability and space requirements, but can be costly. A market segment exists where custom-built desktop computers can be arranged in a rack-mount situation, gaining the space savings of traditional rack-mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components, multiplying the performance of a single desktop machine while minimizing occupied space and remaining cost effective.
NASA Technical Reports Server (NTRS)
Burleigh, Scott C.
2011-01-01
Contact Graph Routing (CGR) is a dynamic routing system that computes routes through a time-varying topology of scheduled communication contacts in a network based on the DTN (Delay-Tolerant Networking) architecture. It is designed to enable dynamic selection of data transmission routes in a space network based on DTN. This dynamic responsiveness in route computation should be significantly more effective and less expensive than static routing, increasing total data return while at the same time reducing mission operations cost and risk. The basic strategy of CGR is to take advantage of the fact that, since flight mission communication operations are planned in detail, the communication routes between any pair of bundle agents in a population of nodes that have all been informed of one another's plans can be inferred from those plans rather than discovered via dialogue (which is impractical over long one-way-light-time space links). Messages that convey this planning information are used to construct contact graphs (time-varying models of network connectivity) from which CGR automatically computes efficient routes for bundles. Automatic route selection increases the flexibility and resilience of the space network, simplifying cross-support and reducing mission management costs. Note that there are no routing tables in Contact Graph Routing. The best route for a bundle destined for a given node may routinely be different from the best route for a different bundle destined for the same node, depending on bundle priority, bundle expiration time, and changes in the current lengths of transmission queues for neighboring nodes; routes must be computed individually for each bundle, from the Bundle Protocol agent's current network connectivity model for the bundle's destination node (the contact graph). Clearly this places a premium on optimizing the implementation of the route computation algorithm. The scalability of CGR to very large networks remains a research topic. The information carried by CGR contact plan messages is useful not only for dynamic route computation, but also for the implementation of rate control, congestion forecasting, transmission episode initiation and termination, timeout interval computation, and retransmission timer suspension and resumption.
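To make the route-computation idea concrete, the following is a minimal, illustrative sketch (not the flight CGR implementation) of an earliest-arrival search over a contact plan. The Contact record, the earliest_arrival_route function and its parameters are hypothetical names; one-way light time, queue backlogs, bundle priority and expiration are deliberately ignored.

```python
from dataclasses import dataclass
from itertools import count
import heapq

@dataclass(frozen=True)
class Contact:
    frm: str       # transmitting node
    to: str        # receiving node
    start: float   # contact start time (s)
    end: float     # contact end time (s)
    rate: float    # data rate (bytes/s)

def earliest_arrival_route(contacts, source, dest, t0, bundle_size):
    """Dijkstra-style search over a contact plan: return (arrival_time, contacts
    used) for the earliest-arriving route, or None if no route exists."""
    best = {source: t0}                    # best known arrival time per node
    tie = count()                          # tie-breaker so the heap never compares paths
    heap = [(t0, next(tie), source, [])]
    while heap:
        t, _, node, path = heapq.heappop(heap)
        if node == dest:
            return t, path
        if t > best.get(node, float("inf")):
            continue                       # stale heap entry
        for c in contacts:
            if c.frm != node or c.end <= t:
                continue
            tx_start = max(t, c.start)                 # wait for the contact to open
            arrival = tx_start + bundle_size / c.rate  # transmission finishes
            if arrival > c.end or arrival >= best.get(c.to, float("inf")):
                continue                   # bundle does not fit, or no improvement
            best[c.to] = arrival
            heapq.heappush(heap, (arrival, next(tie), c.to, path + [c]))
    return None
```

The search is Dijkstra-like, with contacts playing the role of edges whose usability depends on when the bundle reaches the transmitting node; this is why, as the abstract notes, routes must be computed per bundle rather than read from a static routing table.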
Observer training for computer-aided detection of pulmonary nodules in chest radiography.
De Boo, Diederick W; van Hoorn, François; van Schuppen, Joost; Schijf, Laura; Scheerder, Maeke J; Freling, Nicole J; Mets, Onno; Weber, Michael; Schaefer-Prokop, Cornelia M
2012-08-01
To assess whether short-term feedback helps readers to increase their performance using computer-aided detection (CAD) for nodule detection in chest radiography. The 140 CXRs (56 with a solitary CT-proven nodule and 84 negative controls) were divided into four subsets of 35; each was read in a different order by six readers. Lesion presence, location and diagnostic confidence were scored without and with CAD (IQQA-Chest, EDDA Technology) as second reader. Readers received individual feedback after each subset. Sensitivity, specificity and area under the receiver-operating characteristics curve (AUC) were calculated for readings with and without CAD with respect to change over time and impact of CAD. CAD stand-alone sensitivity was 59% with 1.9 false-positives per image. Mean AUC slightly increased over time with and without CAD (0.78 vs. 0.84 with and 0.76 vs. 0.82 without CAD) but differences did not reach significance. The sensitivity increased (65% vs. 70% and 66% vs. 70%) and specificity decreased over time (79% vs. 74% and 80% vs. 77%) but no significant impact of CAD was found. Short-term feedback does not increase the ability of readers to differentiate true- from false-positive candidate lesions or to use CAD more effectively. • Computer-aided detection (CAD) is increasingly used as an adjunct for many radiological techniques. • Short-term feedback does not improve reader performance with CAD in chest radiography. • Differentiation between true- and false-positive CAD marks for low-conspicuity possible lesions proves difficult. • CAD can potentially increase reader performance for nodule detection in chest radiography.
GEOS Atmospheric Model: Challenges at Exascale
NASA Technical Reports Server (NTRS)
Putman, William M.; Suarez, Max J.
2017-01-01
The Goddard Earth Observing System (GEOS) model at NASA's Global Modeling and Assimilation Office (GMAO) is used to simulate the multi-scale variability of the Earth's weather and climate, and is used primarily to assimilate conventional and satellite-based observations for weather forecasting and reanalysis. In addition, assimilations coupled to an ocean model are used for longer-term forecasting (e.g., El Nino) on seasonal to interannual timescales. The GMAO's research activities, including system development, focus on numerous time and space scales, as detailed on the GMAO website, where they are tabbed under five major themes: Weather Analysis and Prediction; Seasonal-Decadal Analysis and Prediction; Reanalysis; Global Mesoscale Modeling; and Observing System Science. A brief description of the GEOS systems can also be found at the GMAO website. GEOS executes as a collection of earth system components connected through the Earth System Modeling Framework (ESMF). The ESMF layer is supplemented with the MAPL (Modeling, Analysis, and Prediction Layer) software toolkit developed at the GMAO, which facilitates the organization of the computational components into a hierarchical architecture. GEOS systems run in parallel using a horizontal decomposition of the Earth's sphere into processing elements (PEs). Communication between PEs is primarily through a message passing framework, using the message passing interface (MPI), and through explicit use of node-level shared memory access via the SHMEM (Symmetric Hierarchical Memory access) protocol. Production GEOS weather prediction systems currently run at 12.5-kilometer horizontal resolution with 72 vertical levels decomposed into PEs associated with 5,400 MPI processes. Research GEOS systems run at resolutions as fine as 1.5 kilometers globally using as many as 30,000 MPI processes. Looking forward, these systems can be expected to see a two-fold increase in horizontal resolution every two to three years, as well as less frequent increases in vertical resolution. Coupling these resolution changes with increases in complexity, the computational demands on the GEOS production and research systems should easily increase 100-fold over the next five years. Currently, our 12.5-kilometer weather prediction system narrowly meets the time-to-solution demands of a near-real-time production system. Work is now in progress to take advantage of a hybrid MPI-OpenMP parallelism strategy, in an attempt to achieve a modest two-fold speed-up to accommodate an immediate demand due to increased scientific complexity and an increase in vertical resolution. Pursuing demands that require 10- to 100-fold increases or more, however, would require a detailed exploration of the computational profile of GEOS, as well as targeted solutions using more advanced high-performance computing technologies. Increased computing demands of 100-fold will be required within five years based on anticipated changes in the GEOS production systems, and increases of 1000-fold can be anticipated over the next ten years.
Sub-Second Parallel State Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Rice, Mark J.; Glaesemann, Kurt R.
This report describes the performance of the Pacific Northwest National Laboratory (PNNL) sub-second parallel state estimation (PSE) tool using utility data from the Bonneville Power Administration (BPA) and discusses the benefits of the fast computational speed for power system applications. The test data were provided by BPA. They are two days' worth of hourly snapshots that include power system data and measurement sets in a commercial tool format. These data are extracted from the commercial tool and fed into the PSE tool. With the help of advanced solvers, the PSE tool is able to solve each BPA hourly state estimation problem within one second, which is more than 10 times faster than today's commercial tool. This improved computational performance can help increase the reliability value of state estimation in many aspects: (1) the shorter the time required for execution of state estimation, the more time remains for operators to take appropriate actions, and/or to apply automatic or manual corrective control actions. This increases the chances of arresting or mitigating the impact of cascading failures; (2) the SE can be executed multiple times within the time allowance. Therefore, the robustness of SE can be enhanced by repeating the execution of the SE with adaptive adjustments, including removing bad data and/or adjusting different initial conditions to compute a better estimate within the same time as a traditional state estimator's single estimate. There are other benefits of the sub-second SE: the PSE results can potentially be used in local and/or wide-area automatic corrective control actions that currently depend on raw measurements, minimizing the impact of bad measurements and providing opportunities to enhance power grid reliability and efficiency. PSE can also enable other advanced tools that rely on SE outputs and could be used to further improve operators' actions and automated controls to mitigate the effects of severe events on the grid. The power grid continues to grow and the number of measurements is increasing at an accelerated rate due to the variety of smart grid devices being introduced. A parallel state estimation implementation will have better performance than traditional, sequential state estimation by utilizing the power of high performance computing (HPC). This increased performance positions parallel state estimators as valuable tools for operating the increasingly more complex power grid.
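For readers unfamiliar with what a state estimator actually solves, the sketch below shows the conventional weighted-least-squares (Gauss-Newton) iteration that tools such as the PSE parallelize and accelerate. It is a generic textbook formulation, not PNNL's parallel implementation; the function names and arguments are assumptions.

```python
import numpy as np

def wls_state_estimate(h, H_jac, z, weights, x0, tol=1e-6, max_iter=20):
    """Gauss-Newton iteration for weighted-least-squares state estimation:
    minimize (z - h(x))^T W (z - h(x)).
    h: measurement function, H_jac: its Jacobian, z: measurement vector,
    weights: diagonal of W (inverse measurement variances), x0: initial state."""
    x = np.asarray(x0, dtype=float)
    W = np.diag(weights)
    for _ in range(max_iter):
        r = z - h(x)                     # measurement residuals
        H = H_jac(x)
        G = H.T @ W @ H                  # gain matrix
        dx = np.linalg.solve(G, H.T @ W @ r)
        x = x + dx
        if np.max(np.abs(dx)) < tol:
            break
    return x
```

Parallel implementations accelerate exactly these steps (Jacobian assembly and the sparse linear solve), which is where the sub-second solution times come from.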
Tang, G.; Yuan, F.; Bisht, G.; ...
2015-12-17
We explore coupling to a configurable subsurface reactive transport code as a flexible and extensible approach to biogeochemistry in land surface models; our goal is to facilitate testing of alternative models and incorporation of new understanding. A reaction network with the CLM-CN decomposition, nitrification, denitrification, and plant uptake is used as an example. We implement the reactions in the open-source PFLOTRAN code, coupled with the Community Land Model (CLM), and test at Arctic, temperate, and tropical sites. To make the reaction network designed for use in explicit time stepping in CLM compatible with the implicit time stepping used in PFLOTRAN, the Monod substrate rate-limiting function with a residual concentration is used to represent the limitation of nitrogen availability on plant uptake and immobilization. To achieve accurate, efficient, and robust numerical solutions, care needs to be taken to use scaling, clipping, or log transformation to avoid negative concentrations during the Newton iterations. With a tight relative update tolerance to avoid false convergence, an accurate solution can be achieved with about 50% more computing time than CLM in point mode site simulations using either the scaling or clipping methods. The log transformation method takes 60–100% more computing time than CLM. The computing time increases slightly for clipping and scaling; it increases substantially for log transformation as the half-saturation constant decreases from 10^−3 to 10^−9 mol m^−3, which normally results in decreasing nitrogen concentrations. The frequent occurrence of very low concentrations (e.g. below nanomolar) can increase the computing time for clipping or scaling by about 20%; computing time can be doubled for log transformation. Caution needs to be taken in choosing the appropriate scaling factor because a small value caused by a negative update to a small concentration may diminish the update and result in false convergence even with a very tight relative update tolerance. As some biogeochemical processes (e.g., methane and nitrous oxide production and consumption) involve very low half-saturation and threshold concentrations, this work provides insights for addressing nonphysical negativity issues and facilitates the representation of a mechanistic biogeochemical description in earth system models to reduce climate prediction uncertainty.
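As an illustration of the rate-limiting function and the negativity safeguard discussed above, here is a minimal sketch. It is not PFLOTRAN or CLM code; the function names, the residual concentration value and the clipping rule are assumptions for illustration only.

```python
def monod_limiter(c, half_sat, residual=1e-15):
    """Monod substrate limitation with a residual concentration: the rate
    factor goes to zero as the concentration approaches the residual value."""
    c_eff = max(c - residual, 0.0)
    return c_eff / (half_sat + c_eff)

def clipped_newton_update(c, dc, residual=1e-15):
    """Clip a Newton update so the concentration never becomes negative.
    Clipping is one of the safeguards mentioned above; scaling the update or
    log-transforming the unknowns are the alternatives compared in the study."""
    return max(c + dc, residual)
```

The abstract's observation that very small half-saturation constants slow the solver can be read directly from the limiter: the smaller half_sat is, the steeper the function near zero, and the harder the Newton iteration has to work to stay non-negative.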
Advances in Visualization of 3D Time-Dependent CFD Solutions
NASA Technical Reports Server (NTRS)
Lane, David A.; Lasinski, T. A. (Technical Monitor)
1995-01-01
Numerical simulations of complex 3D time-dependent (unsteady) flows are becoming increasingly feasible because of the progress in computing systems. Unfortunately, many existing flow visualization systems were developed for time-independent (steady) solutions and do not adequately depict solutions from unsteady flow simulations. Furthermore, most systems only handle one time step of the solutions individually and do not consider the time-dependent nature of the solutions. For example, instantaneous streamlines are computed by tracking the particles using one time step of the solution. However, for streaklines and timelines, particles need to be tracked through all time steps. Streaklines can reveal quite different information about the flow than those revealed by instantaneous streamlines. Comparisons of instantaneous streamlines with dynamic streaklines are shown. For a complex 3D flow simulation, it is common to generate a grid system with several millions of grid points and to have tens of thousands of time steps. The disk requirement for storing the flow data can easily be tens of gigabytes. Visualizing solutions of this magnitude is a challenging problem with today's computer hardware technology. Even interactive visualization of one time step of the flow data can be a problem for some existing flow visualization systems because of the size of the grid. Current approaches for visualizing complex 3D time-dependent CFD solutions are described. The flow visualization system developed at NASA Ames Research Center to compute time-dependent particle traces from unsteady CFD solutions is described. The system computes particle traces (streaklines) by integrating through the time steps. This system has been used by several NASA scientists to visualize their CFD time-dependent solutions. The flow visualization capabilities of this system are described, and visualization results are shown.
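The difference between instantaneous streamlines and streaklines comes down to how particles are integrated through time. Below is a minimal sketch of streakline computation using forward-Euler advection and a user-supplied velocity(p, t) field; all names are hypothetical, and this is not the NASA Ames system described above.

```python
import numpy as np

def advect(p, velocity, t, dt):
    """One forward-Euler step of a particle through a time-dependent field.
    velocity(p, t) returns the local velocity vector at position p and time t."""
    return p + dt * np.asarray(velocity(p, t))

def streakline(seed, velocity, t_start, t_end, dt):
    """Release a particle from `seed` at every time step and advect all
    previously released particles through the unsteady field; the final
    particle positions trace the streakline.  An instantaneous streamline,
    by contrast, would integrate through a single frozen time step."""
    particles = []
    t = t_start
    while t < t_end:
        particles.append(np.array(seed, dtype=float))          # new release
        particles = [advect(p, velocity, t, dt) for p in particles]
        t += dt
    return particles
```

Because every released particle must be advected through every subsequent time step, the cost and storage grow with the number of time steps, which is exactly the difficulty the abstract highlights for large unsteady data sets.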
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei; Huang, Cheng-Fu
2015-04-01
From a quality of service viewpoint, the transmission packet unreliability and transmission time are both critical performance indicators in a computer system when assessing the Internet quality for supervisors and customers. A computer system is usually modelled as a network topology where each branch denotes a transmission medium and each vertex represents a station of servers. Almost every branch has multiple capacities/states due to failure, partial failure, maintenance, etc. This type of network is known as a multi-state computer network (MSCN). This paper proposes an efficient algorithm that computes the system reliability, i.e., the probability that a specified amount of data can be sent through k (k ≥ 2) disjoint minimal paths within both the tolerable packet unreliability and time threshold. Furthermore, two routing schemes are established in advance to indicate the main and spare minimal paths to increase the system reliability (referred to as spare reliability). Thus, the spare reliability can be readily computed according to the routing scheme.
Soto-Quiros, Pablo
2015-01-01
This paper presents a parallel implementation of a kind of discrete Fourier transform (DFT): the vector-valued DFT. The vector-valued DFT is a novel tool to analyze the spectra of vector-valued discrete-time signals. This parallel implementation is developed in terms of a mathematical framework with a set of block matrix operations. These block matrix operations contribute to the analysis, design, and implementation of parallel algorithms in multicore processors. In this work, an implementation and experimental investigation of the mathematical framework are performed using MATLAB with the Parallel Computing Toolbox. We found that there is an advantage to using multicore processors and a parallel computing environment to reduce the high execution time. Additionally, the speedup increases when the number of logical processors and the length of the signal increase.
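The component transforms of a vector-valued signal are mutually independent, which is what makes the computation straightforward to parallelize. A minimal sketch in Python follows; it is not the authors' MATLAB block-matrix framework, and the array layout and worker count are assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def component_dft(component):
    """DFT of one scalar component of the vector-valued signal."""
    return np.fft.fft(component)

def vector_valued_dft(signal, workers=4):
    """signal has shape (m, N): m vector components, N time samples.
    Each component transform is independent, so the transforms run in parallel."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        spectra = list(pool.map(component_dft, signal))
    return np.stack(spectra)

if __name__ == "__main__":
    x = np.random.rand(8, 4096)          # 8-component signal, 4096 samples
    X = vector_valued_dft(x)
    print(X.shape)                        # (8, 4096)
```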
NASA Technical Reports Server (NTRS)
Habiby, Sarry F.
1987-01-01
The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. The objective is to demonstrate the operation of an optical processor designed to minimize computation time in performing a practical computing application. This is done by using the large array of processing elements in a Hughes liquid crystal light valve, and relying on the residue arithmetic representation, a holographic optical memory, and position coded optical look-up tables. In the design, all operations are performed in effectively one light valve response time regardless of matrix size. The features of the design allowing fast computation include the residue arithmetic representation, the mapping approach to computation, and the holographic memory. In addition, other features of the work include a practical light valve configuration for efficient polarization control, a model for recording multiple exposures in silver halides with equal reconstruction efficiency, and using light from an optical fiber for a reference beam source in constructing the hologram. The design can be extended to implement larger matrix arrays without increasing computation time.
Upper extremities and spinal musculoskeletal disorders and risk factors in students using computers
Calik, Bilge Basakci; Yagci, Nesrin; Gursoy, Suleyman; Zencir, Mehmet
2014-01-01
Objective: To examine the effects of computer usage on musculoskeletal system discomforts (MSD) in Turkish university students, the possible risk factors and study interference (SI). Methods: The study comprised a total of 871 students. Demographic information was recorded and the Student Specific Cornell Musculoskeletal Discomfort Questionnaire (SsCMDQ) was used to evaluate musculoskeletal system discomforts. Results: The neck, lower back and upper back were determined to be the most affected areas, and the percentages for SI were 21.6%, 19.3% and 16.3% respectively. Daily computer usage time (for the lower back), total usage time (for the neck), female gender and age below 21 years were associated with increased risk (p<0.05). Conclusions: The neck, lower back and upper back were found to be the most affected areas due to computer usage in university students. Risk factors for MSD were daily and total computer usage time, female gender and age below 21 years, and these were deemed to cause study interference. PMID:25674139
Computation of Asteroid Proper Elements on the Grid
NASA Astrophysics Data System (ADS)
Novakovic, B.; Balaz, A.; Knezevic, Z.; Potocnik, M.
2009-12-01
A procedure for gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time-consuming computations and make them more efficient is justified by the large increase of observational data expected from the next generation all-sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids have been derived since the Grid infrastructure first came into use for this purpose. The average time for the catalog updates is significantly shortened with respect to the time needed with stand-alone workstations. We also present the basics of Grid computing, the concepts of Grid middleware and its Workload Management System. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of a comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for future work.
The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2
NASA Technical Reports Server (NTRS)
Kusmanoff, Antone; Martin, Nancy L.
1989-01-01
In recent years, advancements made in computer systems have prompted a move from centralized computing based on timesharing a large mainframe computer to distributed computing based on a connected set of engineering workstations. A major factor in this advancement is the increased performance and lower cost of engineering workstations. The shift from centralized to distributed computing has led to challenges associated with the residency of application programs within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the question arises as to how a system designer should assign applications between the larger mainframe host and the smaller, yet powerful, workstation. The concepts related to real-time data processing are analyzed, and systems are described which use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or non-real time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share the control. This research is concerned with generating general criteria for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) which may need the use of a shared resource (the mainframe) to perform their functions.
The spinal posture of computing adolescents in a real-life setting
2014-01-01
Background It is assumed that good postural alignment is associated with a lower likelihood of musculoskeletal pain symptoms. Interventions encouraging good sitting posture have not reported consequent musculoskeletal pain reduction in school-based populations, possibly due to a lack of clear understanding of good posture. Therefore this paper describes the variability of postural angles in a cohort of asymptomatic high-school students whilst working on desktop computers in a school computer classroom, and reports on the relationship between the postural angles and age, gender, height, weight and computer use. Methods The baseline data from a 12-month longitudinal study are reported. The study was conducted in South African school computer classrooms. 194 Grade 10 high-school students, from randomly selected high schools, aged 15–17 years, enrolled in Computer Application Technology for the first time, asymptomatic during the preceding month, and from whom written informed consent was obtained, participated in the study. The 3D Posture Analysis Tool captured five postural angles (head flexion, neck flexion, cranio-cervical angle, trunk flexion and head lateral bend) while the students were working on desktop computers. Height, weight and computer use were also measured. Individual and combinations of postural angles were analysed. Results 944 students were screened for eligibility, of which the data of 194 students are reported. Trunk flexion was the most variable angle. Increased neck flexion and the combination of increased head flexion, neck flexion and trunk flexion were significantly associated with increased weight and BMI (p = 0.0001). Conclusions High-school students sit with greater ranges of trunk flexion (leaning forward or reclining) when using the classroom computer. Increased weight is significantly associated with increased sagittal plane postural angles. PMID:24950887
Analysis of Biosignals During Immersion in Computer Games.
Yeo, Mina; Lim, Seokbeen; Yoon, Gilwon
2017-11-17
The number of computer game users is increasing as computers and various IT devices in connection with the Internet are commonplace in all ages. In this research, in order to find the relevance of behavioral activity and its associated biosignal, biosignal changes before and after as well as during computer games were measured and analyzed for 31 subjects. For this purpose, a device to measure electrocardiogram, photoplethysmogram and skin temperature was developed such that the effect of motion artifacts could be minimized. The device was made wearable for convenient measurement. The game selected for the experiments was League of Legends™. Analysis on the pulse transit time, heart rate variability and skin temperature showed increased sympathetic nerve activities during computer game, while the parasympathetic nerves became less active. Interestingly, the sympathetic predominance group showed less change in the heart rate variability as compared to the normal group. The results can be valuable for studying internet gaming disorder.
Writing and Computing across the USM Chemistry Curriculum
NASA Astrophysics Data System (ADS)
Gordon, Nancy R.; Newton, Thomas A.; Rhodes, Gale; Ricci, John S.; Stebbins, Richard G.; Tracy, Henry J.
2001-01-01
The faculty of the University of Southern Maine believes the ability to communicate effectively is one of the most important skills required of successful chemists. To help students achieve that goal, the faculty has developed a Writing and Computer Program consisting of writing and computer assignments of gradually increasing sophistication for all our laboratory courses. The assignments build in complexity until, at the junior level, students are writing full journal-quality laboratory reports. Computer assignments also increase in difficulty as students attack more complicated subjects. We have found the program easy to initiate and our part-time faculty concurs as well. The Writing and Computing across the Curriculum Program also serves to unite the entire chemistry curriculum. We believe the program is helping to reverse what the USM chemistry faculty and other educators have found to be a steady deterioration in the writing skills of many of today's students.
Thorpe-Jamison, Patrice T; Culley, Colleen M; Perera, Subashan; Handler, Steven M
2013-05-01
To determine the feasibility and impact of a computer-generated rounding report on physician rounding time and perceived barriers to providing clinical care in the nursing home (NH) setting. Three NHs located in Pittsburgh, PA. Ten attending NH physicians. Time-motion method to record the time taken to gather data (pre-rounding), to evaluate patients (rounding), and document their findings/develop an assessment and plan (post-rounding). Additionally, surveys were used to determine the physicians' perception of barriers to providing optimal clinical care, as well as physician satisfaction before and after the use of a computer-generated rounding report. Ten physicians were observed during half-day sessions both before and 4 weeks after they were introduced to a computer-generated rounding report. A total of 69 distinct patients were evaluated during the 20 physician observation sessions. Each physician evaluated, on average, four patients before implementation and three patients after implementation. The observations showed a significant increase (P = .03) in the pre-rounding time, and no significant difference in the rounding (P = .09) or post-rounding times (P = .29). Physicians reported that information was more accessible (P = .03) following the implementation of the computer-generated rounding report. Most (80%) physicians stated that they would prefer to use the computer-generated rounding report rather than the paper-based process. The present study provides preliminary data suggesting that the use of a computer-generated rounding report can decrease some perceived barriers to providing optimal care in the NH. Although the rounding report did not improve rounding time efficiency, most NH physicians would prefer to use the computer-generated report rather than the current paper-based process. Improving the accuracy and harmonization of medication information with the electronic medication administration record and rounding reports, as well as improving facility network speeds might improve the effectiveness of this technology. Copyright © 2013 American Medical Directors Association, Inc. Published by Elsevier Inc. All rights reserved.
Schmidhuber, Jürgen
2013-01-01
Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. Given a general problem-solving architecture, at any given time, the novel algorithmic framework PowerPlay (Schmidhuber, 2011) searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Newly invented tasks may require achieving a wow-effect by making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. The greedy search of typical PowerPlay variants uses time-optimal program search to order candidate pairs of tasks and solver modifications by their conditional computational (time and space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. This biases the search toward pairs that can be described compactly and validated quickly. The computational costs of validating new tasks need not grow with task repertoire size. Standard problem solver architectures of personal computers or neural networks tend to generalize by solving numerous tasks outside the self-invented training set; PowerPlay's ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Gödel's sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem-solving procedures can be exploited by a parallel search for solutions to additional externally posed tasks. PowerPlay may be viewed as a greedy but practical implementation of basic principles of creativity (Schmidhuber, 2006a, 2010). A first experimental analysis can be found in separate papers (Srivastava et al., 2012a,b, 2013). PMID:23761771
Yang, Tzuhsiung; Berry, John F
2018-06-04
The computation of nuclear second derivatives of energy, or the nuclear Hessian, is an essential routine in quantum chemical investigations of ground and transition states, thermodynamic calculations, and molecular vibrations. Analytic nuclear Hessian computations require the resolution of costly coupled-perturbed self-consistent field (CP-SCF) equations, while numerical differentiation of analytic first derivatives has an unfavorable 6 N ( N = number of atoms) prefactor. Herein, we present a new method in which grid computing is used to accelerate and/or enable the evaluation of the nuclear Hessian via numerical differentiation: NUMFREQ@Grid. Nuclear Hessians were successfully evaluated by NUMFREQ@Grid at the DFT level as well as using RIJCOSX-ZORA-MP2 or RIJCOSX-ZORA-B2PLYP for a set of linear polyacenes with systematically increasing size. For the larger members of this group, NUMFREQ@Grid was found to outperform the wall clock time of analytic Hessian evaluation; at the MP2 or B2LYP levels, these Hessians cannot even be evaluated analytically. We also evaluated a 156-atom catalytically relevant open-shell transition metal complex and found that NUMFREQ@Grid is faster (7.7 times shorter wall clock time) and less demanding (4.4 times less memory requirement) than an analytic Hessian. Capitalizing on the capabilities of parallel grid computing, NUMFREQ@Grid can outperform analytic methods in terms of wall time, memory requirements, and treatable system size. The NUMFREQ@Grid method presented herein demonstrates how grid computing can be used to facilitate embarrassingly parallel computational procedures and is a pioneer for future implementations.
A GPU-Based Architecture for Real-Time Data Assessment at Synchrotron Experiments
NASA Astrophysics Data System (ADS)
Chilingaryan, Suren; Mirone, Alessandro; Hammersley, Andrew; Ferrero, Claudio; Helfen, Lukas; Kopmann, Andreas; Rolo, Tomy dos Santos; Vagovic, Patrik
2011-08-01
Advances in digital detector technology are presently leading to rapidly increasing data rates in imaging experiments. Using fast two-dimensional detectors in computed tomography, data acquisition can be much faster than reconstruction if no adequate measures are taken, especially when a high photon flux at synchrotron sources is used. We have optimized the reconstruction software employed at the micro-tomography beamlines of our synchrotron facilities to use the computational power of modern graphics cards. The main paradigm of our approach is the full utilization of all system resources. We use a pipelined architecture, where the GPUs are used as compute coprocessors to reconstruct slices, while the CPUs are preparing the next ones. Special attention is devoted to minimizing data transfers between the host and GPU memory and to executing memory transfers in parallel with the computations. We were able to reduce the reconstruction time by a factor of 30 and process a typical data set of 20 GB in 40 seconds. The time needed for the first evaluation of the reconstructed sample is reduced significantly, and quasi real-time visualization is now possible.
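The pipelining idea, overlapping data staging with reconstruction, can be sketched generically as a producer/consumer pair. This is not the beamline software itself; load_slice and reconstruct are placeholders for the real staging code and GPU kernels.

```python
import queue
import threading

def pipeline(slices, load_slice, reconstruct, depth=2):
    """Two-stage pipeline: a loader thread stages raw projection data for the
    next slices while the main thread reconstructs the current one, so I/O and
    preprocessing overlap with (GPU) compute."""
    staged = queue.Queue(maxsize=depth)    # bounded buffer keeps the loader ahead

    def loader():
        for s in slices:
            staged.put(load_slice(s))      # runs concurrently with reconstruction
        staged.put(None)                   # sentinel: no more work

    threading.Thread(target=loader, daemon=True).start()

    results = []
    while True:
        item = staged.get()
        if item is None:
            break
        results.append(reconstruct(item))
    return results
```

The bounded queue depth is the design knob: large enough to hide staging latency, small enough to keep host memory use predictable.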
Application of ubiquitous computing in personal health monitoring systems.
Kunze, C; Grossmann, U; Stork, W; Müller-Glaser, K D
2002-01-01
One way to significantly reduce the costs of public health systems is to make greater use of information technology. The Laboratory for Information Processing Technology (ITIV) at the University of Karlsruhe is developing a personal health monitoring system, which should improve health care and at the same time reduce costs by combining micro-technological smart sensors with personalized, mobile computing systems. In this paper we present how ubiquitous computing theory can be applied in the health-care domain.
The potential benefits of photonics in the computing platform
NASA Astrophysics Data System (ADS)
Bautista, Jerry
2005-03-01
The increase in computational requirements for real-time image processing, complex computational fluid dynamics, very large scale data mining in the health industry/Internet, and predictive models for financial markets is driving computer architects to consider new paradigms that rely upon very high speed interconnects within and between computing elements. Further challenges result from reduced power requirements, reduced transmission latency, and greater interconnect density. Optical interconnects may solve many of these problems, with the added benefit of extended reach. In addition, photonic interconnects provide relative EMI immunity, which is becoming an increasing issue with a greater dependence on wireless connectivity. However, to be truly functional, the optical interconnect mesh should be able to support arbitration, addressing, etc. completely in the optical domain with a BER that is more stringent than "traditional" communication requirements. Outlined are challenges in the advanced computing environment, some possible optical architectures and relevant platform technologies, as well as a rough sizing of these opportunities, which are quite large relative to the more "traditional" optical markets.
NASA Technical Reports Server (NTRS)
Kumar, A.; Rudy, D. H.; Drummond, J. P.; Harris, J. E.
1982-01-01
Several two- and three-dimensional external and internal flow problems solved on the STAR-100 and CYBER-203 vector processing computers are described. The flow field was described by the full Navier-Stokes equations, which were then solved by explicit finite-difference algorithms. Problem results and computer system requirements are presented. Program organization and database structure for three-dimensional computer codes, which will eliminate or reduce page faulting, are discussed. Storage requirements for three-dimensional codes are reduced by calculating transformation metric data at each step. As a result, the number of in-core grid points was increased by 50% to 150,000, with a 10% increase in execution time. An assessment of current and future machine requirements shows that even on the CYBER-205 computer only a few problems can be solved realistically. Estimates reveal that the present situation is more storage limited than compute-rate limited, but advancements in both storage and speed are essential to realistically calculate three-dimensional flow.
Architecture Adaptive Computing Environment
NASA Technical Reports Server (NTRS)
Dorband, John E.
2006-01-01
Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on LINUX clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple- instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.
On-chip phase-change photonic memory and computing
NASA Astrophysics Data System (ADS)
Cheng, Zengguang; Ríos, Carlos; Youngblood, Nathan; Wright, C. David; Pernice, Wolfram H. P.; Bhaskaran, Harish
2017-08-01
The use of photonics in computing is a hot topic of interest, driven by the need for ever-increasing speed along with reduced power consumption. In existing computing architectures, photonic data storage would dramatically improve the performance by reducing latencies associated with electrical memories. At the same time, the rise of `big data' and `deep learning' is driving the quest for non-von Neumann and brain-inspired computing paradigms. To succeed in both aspects, we have demonstrated non-volatile multi-level photonic memory avoiding the von Neumann bottleneck in the existing computing paradigm and a photonic synapse resembling the biological synapses for brain-inspired computing using phase-change materials (Ge2Sb2Te5).
Bedez, Mathieu; Belhachmi, Zakaria; Haeberlé, Olivier; Greget, Renaud; Moussaoui, Saliha; Bouteiller, Jean-Marie; Bischoff, Serge
2016-01-15
The resolution of a model describing the electrical activity of neural tissue and its propagation within this tissue is highly demanding in terms of computing time and requires strong computing power to achieve good results. In this study, we present a method to solve a model describing electrical propagation in neuronal tissue using the parareal algorithm, coupled with spatial parallelization using CUDA on a graphics processing unit (GPU). We applied the method of resolution to different dimensions of the geometry of our model (1-D, 2-D and 3-D). The GPU results are compared with simulations from a multi-core processor cluster, using the message-passing interface (MPI), where the spatial scale was parallelized in order to reach a calculation time comparable to that of the presented method using GPU. A gain of a factor of 100 in computational time between sequential results and those obtained using the GPU was obtained in the case of 3-D geometry. Given the structure of the GPU, this factor increases with the fineness of the geometry used in the computation. To the best of our knowledge, it is the first time such a method has been used, even in the case of neuroscience. Parallelization in time coupled with GPU parallelization in space allows for drastically reducing computational time with a fine resolution of the model describing the propagation of the electrical signal in a neuronal tissue. Copyright © 2015 Elsevier B.V. All rights reserved.
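The parareal iteration itself is compact: a cheap coarse propagator sweeps serially over time windows while the expensive fine-propagator corrections can run in parallel across windows. Below is a minimal, generic sketch (not the authors' CUDA/MPI implementation); f_coarse and f_fine are user-supplied propagators advancing a state across one time window.

```python
import numpy as np

def parareal(f_coarse, f_fine, u0, n_windows, n_iters):
    """Parareal time-parallel iteration:
        u_new[n+1] = G(u_new[n]) + F(u_old[n]) - G(u_old[n]),
    where G = f_coarse (cheap, serial predictor) and F = f_fine (expensive
    solver whose evaluations over the windows are independent and could run
    in parallel, e.g. on GPUs)."""
    u = [np.array(u0, dtype=float)]
    for n in range(n_windows):                               # initial coarse sweep
        u.append(f_coarse(u[n]))
    for _ in range(n_iters):
        fine = [f_fine(u[n]) for n in range(n_windows)]      # parallelizable stage
        u_new = [u[0]]
        for n in range(n_windows):                           # serial correction sweep
            u_new.append(f_coarse(u_new[n]) + fine[n] - f_coarse(u[n]))
        u = u_new
    return u
```

The speed-up reported in the abstract comes from doing the fine-propagator stage concurrently in space and across time windows, while only the cheap coarse sweep remains serial.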
Beal, Jacob; Viroli, Mirko
2015-07-28
Computation increasingly takes place not on an individual device, but distributed throughout a material or environment, whether it be a silicon surface, a network of wireless devices, a collection of biological cells or a programmable material. Emerging programming models embrace this reality and provide abstractions inspired by physics, such as computational fields, that allow such systems to be programmed holistically, rather than in terms of individual devices. This paper aims to provide a unified approach for the investigation and engineering of computations programmed with the aid of space-time abstractions, by bringing together a number of recent results, as well as to identify critical open problems. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Beck, J R; Fung, K; Lopez, H; Mongero, L B; Argenziano, M
2015-01-01
Delayed perfusionist identification of and reaction to abnormal clinical situations has been reported to contribute to increased mortality and morbidity. The use of automated data acquisition and compliance safety alerts has been widely accepted in many industries and its use may improve operator performance. A study was conducted to evaluate the reaction time of perfusionists with and without the use of compliance alerts. A compliance alert is a computer-generated pop-up banner on a pump-mounted computer screen to notify the user of clinical parameters outside of a predetermined range. A proctor monitored and recorded the time from an alert until the perfusionist recognized the parameter was outside the desired range. Group 1 included 10 cases utilizing compliance alerts. Group 2 included 10 cases with the primary perfusionist blinded to the compliance alerts. In Group 1, 97 compliance alerts were identified and, in Group 2, 86 alerts were identified. The average reaction time in the group using compliance alerts was 3.6 seconds. The average reaction time in the group not using the alerts was nearly ten times longer than that of the group using computer-assisted, real-time data feedback. Some believe that real-time computer data acquisition and feedback improves perfusionist performance and may allow clinicians to identify and rectify potentially dangerous situations. © The Author(s) 2014.
Critical Thinking Outcomes of Computer-Assisted Instruction versus Written Nursing Process.
ERIC Educational Resources Information Center
Saucier, Bonnie L.; Stevens, Kathleen R.; Williams, Gail B.
2000-01-01
Nursing students (n=43) who used clinical case studies via computer-assisted instruction (CAI) were compared with 37 who used the written nursing process (WNP). California Critical Thinking Skills Test results did not show significant increases in critical thinking. The WNP method was more time consuming; the CAI group was more satisfied. Use of…
Comparison of the different approaches to generate holograms from data acquired with a Kinect sensor
NASA Astrophysics Data System (ADS)
Kang, Ji-Hoon; Leportier, Thibault; Ju, Byeong-Kwon; Song, Jin Dong; Lee, Kwang-Hoon; Park, Min-Chul
2017-05-01
Data of real scenes acquired in real time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point cloud approach is that the computation process is well established, since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation enables a reduction in the number of elements necessary to represent the object. Then, even though the computation time for the contribution of a single element increases compared to a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex, since propagation of elemental polygons between non-parallel planes must be implemented. Finally, since a depth map of the scene is acquired at the same time as the intensity image, a depth-layer approach can also be adopted. This technique is appropriate for fast computation, since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with the depth-layer approach is convenient for real-time applications, but the point cloud method is more appropriate when high resolution is needed. In this study, since the Kinect can be used to obtain both a point cloud and a depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
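A minimal sketch of the depth-layer approach mentioned above: each depth layer is propagated to the hologram plane with the FFT-based angular spectrum method and the resulting fields are summed. This is an illustrative reconstruction of the idea, not the authors' code; the layer masking scheme (a depth map already quantized into layer indices) and the parameter names are assumptions.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a complex field by distance z between parallel planes using
    the angular spectrum method (FFT-based)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)              # transfer function (evanescent waves dropped)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def depth_layer_hologram(intensity, depth_map, layer_depths, wavelength, pitch):
    """Split the intensity image into depth layers using the (quantized) depth
    map, propagate each layer to the hologram plane, and sum the fields."""
    holo = np.zeros_like(intensity, dtype=complex)
    for i, z in enumerate(layer_depths):
        mask = (depth_map == i)          # pixels belonging to layer i
        layer = np.sqrt(intensity) * mask
        holo += angular_spectrum_propagate(layer, wavelength, pitch, z)
    return holo
```

Because each layer costs only a pair of FFTs, the total cost scales with the number of layers rather than the number of scene points, which is why this route suits real-time use better than point-cloud summation.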
Randomized Trial of Desktop Humidifier for Dry Eye Relief in Computer Users.
Wang, Michael T M; Chan, Evon; Ea, Linda; Kam, Clifford; Lu, Yvonne; Misra, Stuti L; Craig, Jennifer P
2017-11-01
Dry eye is a frequently reported problem among computer users. Low relative humidity environments are recognized to exacerbate signs and symptoms of dry eye, yet are common in offices of computer operators. Desktop USB-powered humidifiers are available commercially, but their efficacy for dry eye relief has not been established. This study aims to evaluate the potential for a desktop USB-powered humidifier to improve tear-film parameters, ocular surface characteristics, and subjective comfort of computer users. Forty-four computer users were enrolled in a prospective, masked, randomized crossover study. On separate days, participants were randomized to 1 hour of continuous computer use, with and without exposure to a desktop humidifier. Lipid-layer grade, noninvasive tear-film breakup time, and tear meniscus height were measured before and after computer use. Following the 1-hour period, participants reported whether ocular comfort was greater, equal, or lesser than that at baseline. The desktop humidifier effected a relative difference in humidity between the two environments of +5.4 ± 5.0% (P < .001). Participants demonstrated no significant differences in lipid-layer grade and tear meniscus height between the two environments (all P > .05). However, a relative increase in the median noninvasive tear-film breakup time of +4.0 seconds was observed in the humidified environment (P < .001), which was associated with a higher proportion of subjects reporting greater comfort relative to baseline (36% vs. 5%, P < .001). Even with a modest increase in relative humidity locally, the desktop humidifier shows potential to improve tear-film stability and subjective comfort during computer use.Trial registration no: ACTRN12617000326392.
MICROPROCESSOR-BASED DATA-ACQUISITION SYSTEM FOR A BOREHOLE RADAR.
Bradley, Jerry A.; Wright, David L.
1987-01-01
An efficient microprocessor-based system is described that permits real-time acquisition, stacking, and digital recording of data generated by a borehole radar system. Although the system digitizes, stacks, and records independently of a computer, it is interfaced to a desktop computer for program control over system parameters such as sampling interval, number of samples, and number of times the data are stacked prior to recording on nine-track tape, and for graphics display of the digitized data. The data can be transferred to the desktop computer during recording, or can be played back from a tape at a later time. Using the desktop computer, the operator observes results while recording data and generates hard-copy graphics in the field. Thus, the radar operator can immediately evaluate the quality of data being obtained, modify system parameters, study the radar logs before leaving the field, and rerun borehole logs if necessary. The system has proven to be reliable in the field and has increased productivity both in the field and in the laboratory.
Kuntzelman, Karl; Jack Rhodes, L; Harrington, Lillian N; Miskovic, Vladimir
2018-06-01
There is a broad family of statistical methods for capturing time series regularity, with increasingly widespread adoption by the neuroscientific community. A common feature of these methods is that they permit investigators to quantify the entropy of brain signals - an index of unpredictability/complexity. Despite the proliferation of algorithms for computing entropy from neural time series data there is scant evidence concerning their relative stability and efficiency. Here we evaluated several different algorithmic implementations (sample, fuzzy, dispersion and permutation) of multiscale entropy in terms of their stability across sessions, internal consistency and computational speed, accuracy and precision using a combination of electroencephalogram (EEG) and synthetic 1/ƒ noise signals. Overall, we report fair to excellent internal consistency and longitudinal stability over a one-week period for the majority of entropy estimates, with several caveats. Computational timing estimates suggest distinct advantages for dispersion and permutation entropy over other entropy estimates. Considered alongside the psychometric evidence, we suggest several ways in which researchers can maximize computational resources (without sacrificing reliability), especially when working with high-density M/EEG data or multivoxel BOLD time series signals. Copyright © 2018 Elsevier Inc. All rights reserved.
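As a reference point for the estimators compared above, here is a minimal brute-force sketch of coarse-graining plus sample entropy. Parameter conventions vary across implementations (for example, whether the tolerance is fixed from the original series or recomputed at each scale, and how template counts are normalized), so treat this as illustrative rather than as any of the benchmarked algorithms.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D series: -ln(A/B), where B and A count template
    matches of length m and m+1 within tolerance r (fraction of the standard
    deviation).  Brute-force O(N^2) version written for clarity, not speed."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= tol)
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 6), m=2, r=0.2):
    """Coarse-grain the signal at each scale (non-overlapping means) and
    compute the sample entropy of each coarse-grained series."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse, m, r))
    return out
```

The O(N^2) template comparison is exactly why the timing differences reported above matter in practice: dispersion and permutation variants avoid the pairwise distance computation and therefore scale better to long M/EEG recordings.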
Computer model to simulate testing at the National Transonic Facility
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.
1995-01-01
A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.
Chaudhry, Fouad A; Ismail, Sanaa Z; Davis, Edward T
2018-05-01
Computer-assisted navigation techniques are used to optimise component placement and alignment in total hip replacement. The technique has developed over the last 10 years, but despite its advantages only 0.3% of all total hip replacements in England and Wales are done using computer navigation. One of the reasons for this is that computer-assisted technology increases operative time. A new method of pelvic registration has been developed without the need to register the anterior pelvic plane (BrainLab hip 6.0), which has been shown to improve the accuracy of THR. The purpose of this study was to find out if the new method reduces the operating time. This was a retrospective analysis comparing operating time in computer-navigated primary uncemented total hip replacement using two methods of registration. Group 1 included 128 cases that were performed using BrainLab versions 2.1-5.1. This version relied on the acquisition of the anterior pelvic plane for registration. Group 2 included 128 cases that were performed using the newest navigation software, BrainLab hip 6.0 (registration possible with the patient in the lateral decubitus position). The operating time was 65.79 (40-98) minutes using the old method of registration and 50.87 (33-74) minutes using the new method of registration. This difference was statistically significant. The body mass index (BMI) was comparable in both groups. The study supports the use of the new method of registration for improving the operating time in computer-navigated primary uncemented total hip replacements.
Real-time robot deliberation by compilation and monitoring of anytime algorithms
NASA Technical Reports Server (NTRS)
Zilberstein, Shlomo
1994-01-01
Anytime algorithms are algorithms whose quality of results improves gradually as computation time increases. Certainty, accuracy, and specificity are metrics useful in anytime algorithm construction. It is widely accepted that a successful robotic system must trade off between decision quality and the computational resources used to produce it. Anytime algorithms were designed to offer such a trade-off. A model of the compilation and monitoring mechanisms needed to build robots that can efficiently control their deliberation time is presented. This approach simplifies the design and implementation of complex intelligent robots, mechanizes the composition and monitoring processes, and provides independent real-time robotic systems that automatically adjust resource allocation to yield optimum performance.
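The anytime idea can be sketched as an iterative solver paired with a monitor that stops deliberation at a deadline or when marginal gains become negligible. This is a generic illustration, not the compilation/monitoring model of the paper; improve and quality are hypothetical user-supplied callables.

```python
import time

def anytime_solve(improve, quality, deadline, min_gain=1e-3):
    """Run an anytime algorithm under monitoring.
    improve(solution) returns a (possibly) better solution; improve(None) is
    assumed to produce a cheap initial answer.  quality(solution) scores it.
    deadline is a time.monotonic() timestamp.  The monitor stops when the
    deadline is reached or the per-step improvement falls below min_gain."""
    solution = improve(None)
    q = quality(solution)
    while time.monotonic() < deadline:
        candidate = improve(solution)
        q_new = quality(candidate)
        if q_new - q < min_gain:          # diminishing returns: stop deliberating
            break
        solution, q = candidate, q_new
    return solution, q
```

The interesting design question, which the abstract's compilation/monitoring model addresses, is how to choose the stopping rule so that expected decision quality minus the cost of further deliberation is maximized.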
Resource Constrained Planning of Multiple Projects with Separable Activities
NASA Astrophysics Data System (ADS)
Fujii, Susumu; Morita, Hiroshi; Kanawa, Takuya
In this study we consider a resource-constrained planning problem of multiple projects with separable activities. This problem provides a plan to process the activities considering resource availability with time windows. We propose a solution algorithm based on the branch and bound method to obtain the optimal solution minimizing the completion time of all projects. We develop three methods for improving computational efficiency: obtaining an initial solution with a minimum slack time rule, estimating a lower bound considering both time and resource constraints, and introducing an equivalence relation for the bounding operation. The effectiveness of the proposed methods is demonstrated by numerical examples. Especially as the number of planning projects increases, the average computational time and the number of searched nodes are reduced.
Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duque, Earl P.N.; Whitlock, Brad J.
High performance computers have for many years been on a trajectory that gives them extraordinary compute power with the addition of more and more compute cores. At the same time, other system parameters such as the amount of memory per core and bandwidth to storage have remained constant or have barely increased. This creates an imbalance in the computer, giving it the ability to compute a lot of data that it cannot reasonably save out due to time and storage constraints. While technologies have been invented to mitigate this problem (burst buffers, etc.), software has been adapting to employ in situ libraries which perform data analysis and visualization on simulation data while it is still resident in memory. This avoids the need to ever have to pay the costs of writing many terabytes of data files. Instead, in situ enables the creation of more concentrated data products such as statistics, plots, and data extracts, which are all far smaller than the full-sized volume data. With the increasing popularity of in situ, multiple in situ infrastructures have been created, each with its own mechanism for integrating with a simulation. To make it easier to instrument a simulation with multiple in situ infrastructures and include custom analysis algorithms, this project created the SENSEI framework.
Optimization of Angular-Momentum Biases of Reaction Wheels
NASA Technical Reports Server (NTRS)
Lee, Clifford; Lee, Allan
2008-01-01
RBOT [RWA Bias Optimization Tool, wherein RWA signifies Reaction Wheel Assembly] is a computer program for computing angular-momentum biases for the reaction wheels used to point a spacecraft in the various directions required for scientific observations. RBOT is currently deployed to support the Cassini mission, to prevent operation of reaction wheels at unsafely high speeds while minimizing time spent in the undesirable low-speed range, where elasto-hydrodynamic lubrication films in the bearings become ineffective, leading to premature bearing failure. The problem is formulated as a constrained optimization in which the maximum wheel speed is a hard constraint and the cost functional increases as speed drops below a low-speed threshold. The optimization problem is solved using a parametric search routine known as the Nelder-Mead simplex algorithm. To increase computational efficiency for extended operation involving large quantities of data, the algorithm is designed to (1) use large time increments during intervals when spacecraft attitudes or rates of rotation are nearly stationary, (2) use sinusoidal-approximation sampling to model repeated long periods of Earth-point rolling maneuvers, reducing the computational load, and (3) utilize an efficient equation that yields wheel-rate profiles as functions of initial wheel biases, based on conservation of angular momentum (in an inertial frame) and pre-computed terms.
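As a rough illustration of this kind of penalized search (not the actual RBOT formulation), the sketch below uses SciPy's Nelder-Mead simplex routine to choose initial wheel biases that keep simulated wheel speeds away from a low-speed band while heavily penalizing any excursion above a hard speed limit. The momentum profile, limits, and penalty weights are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical per-wheel spacecraft momentum share (arbitrary units) over time.
t = np.linspace(0.0, 10.0, 500)
h_share = np.vstack([np.sin(0.3 * t), np.cos(0.2 * t), 0.1 * t])

W_MAX, W_LOW = 9.0, 1.0   # hard speed limit and undesirable low-speed band

def wheel_speeds(bias):
    # Conservation of momentum: wheel speed = initial bias minus body momentum share.
    return bias[:, None] - h_share

def cost(bias):
    w = np.abs(wheel_speeds(np.asarray(bias)))
    over = np.maximum(w - W_MAX, 0.0)    # hard limit -> large penalty
    low = np.maximum(W_LOW - w, 0.0)     # time spent near zero speed -> soft penalty
    return 1e4 * over.sum() + low.sum()

res = minimize(cost, x0=np.array([3.0, 3.0, 3.0]), method="Nelder-Mead")
print("optimized biases:", res.x, "cost:", res.fun)
```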
NASA Astrophysics Data System (ADS)
Jenkins, David R.; Basden, Alastair; Myers, Richard M.
2018-05-01
We propose a solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control with the Intel Xeon Phi Knights Landing (KNL) Many Integrated Core (MIC) Architecture. The computational demands of an AO real-time controller (RTC) scale with the fourth power of telescope diameter, so the next-generation ELTs require orders of magnitude more processing power for the RTC pipeline than existing systems. The Xeon Phi contains a large number (≥64) of low power x86 CPU cores and high bandwidth memory integrated into a single socketed server CPU package. The increased parallelism and memory bandwidth are crucial to providing the performance for reconstructing wavefronts with the required precision for ELT scale AO. Here, we demonstrate that the Xeon Phi KNL is capable of performing ELT scale single conjugate AO real-time control computation at over 1.0 kHz with less than 20μs RMS jitter. We have also shown that with a wavefront sensor camera attached the KNL can process the real-time control loop at up to 966 Hz, the maximum frame-rate of the camera, with jitter remaining below 20μs RMS. Future studies will involve exploring the use of a cluster of Xeon Phis for the real-time control of the MCAO and MOAO regimes of AO. We find that the Xeon Phi is highly suitable for ELT AO real-time control.
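The dominant operation in a typical AO RTC pipeline is a large matrix-vector multiply (control matrix times the wavefront-sensor slope vector), and its size is what grows so rapidly with telescope diameter. The sketch below only times that core step with NumPy to make the scaling concrete; the dimensions are illustrative placeholders, not those of an actual ELT instrument or of the KNL implementation discussed above.

```python
import time
import numpy as np

n_slopes, n_actuators = 9232, 5316   # illustrative SCAO-scale dimensions (assumed)
R = np.random.rand(n_actuators, n_slopes).astype(np.float32)  # control matrix

def reconstruct(slopes):
    """Core RTC step: actuator commands from wavefront-sensor slopes."""
    return R @ slopes

slopes = np.random.rand(n_slopes).astype(np.float32)
reconstruct(slopes)                                  # warm-up
start = time.perf_counter()
for _ in range(100):
    reconstruct(slopes)
per_frame_us = (time.perf_counter() - start) / 100 * 1e6
print(f"~{per_frame_us:.0f} us per frame -> max loop rate ~{1e6 / per_frame_us:.0f} Hz")
```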
The influence of prophylactic factor VIII in severe hemophilia A
Gissel, Matthew; Whelihan, Matthew F; Ferris, Lauren A; Mann, Kenneth G; Rivard, Georges E; Brummel-Ziedins, Kathleen E
2013-01-01
Introduction Hemophilia A individuals displaying a similar genetic defect have heterogeneous clinical phenotypes. Aim To evaluate the underlying effect of exogenous factor (f)VIII on tissue factor (Tf)-initiated blood coagulation in severe hemophilia utilizing both empirical and computational models. Methods We investigated twenty-five clinically severe hemophilia A patients. All individuals were on fVIII prophylaxis and had not received fVIII from 0.25 to 4 days prior to phlebotomy. Coagulation was initiated by the addition of Tf to contact-pathway inhibited whole blood ± an anti-fVIII antibody. Aliquots were quenched over 20 min and analyzed for thrombin generation and fibrin formation. Coagulation factor levels were obtained and used to computationally predict thrombin generation with fVIII set to either zero or its value at the time of the draw. Results Due to prophylactic fVIII, at the time of the blood draw, the individuals had fVIII levels that ranged from <1% to 22%. Thrombin generation (maximum level and rate) in both empirical and computational systems increased as the level of fVIII increased. FXIII activation rates also increased as the fVIII level increased. Upon suppression of fVIII, thrombin generation became comparable in both systems. Plasma composition analysis showed a negative correlation between bleeding history and computational thrombin generation in the absence of fVIII. Conclusion Residual prophylactic fVIII directly causes an increase in thrombin generation and fibrin cross-linking in individuals with clinically severe hemophilia A. The combination of each individual's coagulation factors (outside of fVIII) determine each individual's baseline thrombin potential and may affect bleeding risk. PMID:21899664
I-deas TMG to NX Space Systems Thermal Model Conversion and Computational Performance Comparison
NASA Technical Reports Server (NTRS)
Somawardhana, Ruwan
2011-01-01
CAD/CAE packages change on a continuous basis as the power of the tools increases to meet demands. End-users must adapt to new products as they come to market and replace legacy packages. CAE modeling has continued to evolve and is constantly becoming more detailed and complex, though this comes at the cost of increased computing requirements. Parallel processing coupled with appropriate hardware can minimize computation time. Users of the Maya Thermal Model Generator (TMG) are faced with transitioning from NX I-deas to NX Space Systems Thermal (SST). It is important to understand what differences arise when changing software packages; we are looking for consistency in results.
Data assimilation using a GPU accelerated path integral Monte Carlo approach
NASA Astrophysics Data System (ADS)
Quinn, John C.; Abarbanel, Henry D. I.
2011-09-01
The answers to data assimilation questions can be expressed as path integrals over all possible state and parameter histories. We show how these path integrals can be evaluated numerically using a Markov Chain Monte Carlo method designed to run in parallel on a graphics processing unit (GPU). We demonstrate the application of the method to an example with a transmembrane voltage time series of a simulated neuron as an input, and using a Hodgkin-Huxley neuron model. By taking advantage of GPU computing, we gain a parallel speedup factor of up to about 300, compared to an equivalent serial computation on a CPU, with performance increasing as the length of the observation time used for data assimilation increases.
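To make the path-integral formulation concrete, the following is a minimal serial Metropolis sampler over a state path for a one-dimensional toy model. It is not the Hodgkin-Huxley system or the GPU kernel of the paper; the action simply combines a measurement-error term against noisy observations with a model-error term, and all parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x_{k+1} = a * x_k + process noise, observed with measurement noise.
a, T = 0.9, 100
x_true = np.zeros(T)
for k in range(1, T):
    x_true[k] = a * x_true[k - 1] + rng.normal(0, 0.3)
y = x_true + rng.normal(0, 0.5, T)            # noisy observations

Rm, Rf = 1.0 / 0.5**2, 1.0 / 0.3**2           # measurement / model precisions

def action(x):
    meas = Rm * np.sum((x - y) ** 2)
    model = Rf * np.sum((x[1:] - a * x[:-1]) ** 2)
    return 0.5 * (meas + model)

# Metropolis sampling of the path posterior ~ exp(-action).
x, A = y.copy(), action(y)
samples = []
for step in range(20000):
    prop = x + rng.normal(0, 0.05, T)         # perturb the whole path
    A_prop = action(prop)
    if np.log(rng.uniform()) < A - A_prop:
        x, A = prop, A_prop
    if step % 100 == 0:
        samples.append(x.copy())

est = np.mean(samples, axis=0)
print("posterior-mean RMS error:", np.sqrt(np.mean((est - x_true) ** 2)))
```

On a GPU, many such chains (or many path perturbations) would be evaluated in parallel, which is where the reported speedup comes from.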
QMC Goes BOINC: Using Public Resource Computing to Perform Quantum Monte Carlo Calculations
NASA Astrophysics Data System (ADS)
Rainey, Cameron; Engelhardt, Larry; Schröder, Christian; Hilbig, Thomas
2008-10-01
Theoretical modeling of magnetic molecules traditionally involves the diagonalization of quantum Hamiltonian matrices. However, as the complexity of these molecules increases, the matrices become so large that this process becomes unusable. An additional challenge to this modeling is that many repetitive calculations must be performed, further increasing the need for computing power. Both of these obstacles can be overcome by using a quantum Monte Carlo (QMC) method and a distributed computing project. We have recently implemented a QMC method within the Spinhenge@home project, which is a Public Resource Computing (PRC) project where private citizens allow part-time usage of their PCs for scientific computing. The use of PRC for scientific computing will be described in detail, as well as how you can contribute to the project. See, e.g., L. Engelhardt et al., Angew. Chem. Int. Ed. 47, 924 (2008); C. Schröder, in Distributed & Grid Computing - Science Made Transparent for Everyone: Principles, Applications and Supporting Communities (Weber, M.H.W., ed., 2008). Project URL: http://spin.fh-bielefeld.de
Busschaert, Cedric; Scherrens, Anne-Lore; De Bourdeaudhuij, Ilse; Cardon, Greet; Van Cauwenberg, Jelle; De Cocker, Katrien
2016-01-01
Introduction Knowledge about variables associated with context-specific sitting time in older adults is limited. Therefore, this study explored cross-sectional and longitudinal associations of socio-demographic, social-cognitive, physical-environmental and health-related variables with sitting during TV viewing, computer use and motorized transport in older adults. Methods A sample of Belgian older adults completed structured interviews on context-specific sitting time and associated variables using a longitudinal study design. Objective measurements of grip strength and physical performance were also completed. Complete baseline data were available for 258 participants (73.98±6.16 years), of whom 229 remained in the study at one year follow-up (retention rate: 91.60%). Cross-sectional correlates (baseline data) and longitudinal predictors (change-scores in relation with change in sitting time) were explored through multiple linear regression analyses. Results Per context-specific sitting time, most of the cross-sectional correlates differed from the longitudinal predictors. Increases over time in enjoyment of watching TV (+one unit), encouragement of partner to watch less TV (+one unit) and TV time of partner (+30.0 min/day) were associated with respectively 9.1 min/day (p<0.001), 16.0 min/day (p<0.001) and 12.0 min/day (p<0.001) more sitting during TV viewing at follow-up. Increases over time in enjoyment of using a computer (+one unit), the number of smartphones and tablets (+1) and computer use of the partner (+30.0 min/day) were associated with respectively 5.5 min/day (p < .01), 10.4 min/day (p < .05) and 3.0 min/day (p < .05) more sitting during computer use at follow-up. An increase over time in self-efficacy regarding taking a bicycle or walking was associated with 2.9 min/day (p < .05) less sitting during motorized transport at follow-up. Conclusions The results stressed the importance of looking at separate contexts of sitting. Further, the results highlighted the importance of longitudinal research in order to reveal which changes in particular variables predicted changes in context-specific sitting time. Variables at the social-cognitive level were most frequently related to context-specific sitting. PMID:27997603
Dogramaci, Mahmut; Williams, Katie; Lee, Ed; Williamson, Tom H
2013-01-01
There is sudden and dramatic visual function deterioration in 1-10% of eyes filled with silicone oil at the time of its removal. Transmission of high-energy blue light is increased in eyes filled with silicone oil. We sought to identify whether increased foveal light exposure is a potential factor in the pathophysiology of the visual loss at the time of removal of silicone oil. A graphic ray-tracing computer program and laboratory models were used to determine the effect of the intraocular silicone oil bubble size on foveal illuminance at the time of removal of silicone oil under direct microscope light. The graphic ray-tracing program revealed a range of optical vignetting effects created by different sizes of silicone oil bubble within the vitreous cavity, giving rise to uneven macular illumination. The laboratory model was used to quantify the variation of illuminance at the foveal region with different sizes of silicone oil bubble within the vitreous cavity at the time of removal of silicone oil under direct microscope light. To substantiate the hypothesis of light toxicity during removal of silicone oil, the outcome of oil removal procedures performed under direct microscope illumination was compared to that of procedures performed under blocked illumination. The computer program showed that the optical vignetting effect at the macula was dependent on the size of the intraocular silicone oil bubble. The laboratory eye model showed that foveal illuminance followed a bell-shaped curve, with 70% greater illuminance at 50-60% silicone oil fill. The clinical data identified five eyes with unexplained vision loss out of 114 eyes that had the procedure performed under direct microscope illumination, compared with none out of 78 eyes that had the procedure under blocked illumination. Foveal light exposure, and therefore the potential for phototoxicity, is transiently increased at the time of removal of silicone oil. This is due to uneven macular illumination resulting from the optical vignetting effect of different silicone oil bubble sizes. The increase in foveal light exposure may be significant when the procedure is performed under bright operating-microscope light on the already stressed photoreceptors of an eye filled with silicone oil. We advocate precautions, such as a central shadow filter on the operating-microscope light source, to reduce foveal light exposure and the risk of phototoxicity at the time of removal of silicone oil. The graphic ray-tracing computer program used in this study shows promise in eye modeling for future studies.
On the upscaling of process-based models in deltaic applications
NASA Astrophysics Data System (ADS)
Li, L.; Storms, J. E. A.; Walstra, D. J. R.
2018-03-01
Process-based numerical models are increasingly used to study the evolution of marine and terrestrial depositional environments. Whilst a detailed description of small-scale processes provides an accurate representation of reality, application on geological timescales is restrained by the associated increase in computational time. In order to reduce the computational time, a number of acceleration methods are combined and evaluated for a schematic supply-driven delta (static base level) and an accommodation-driven delta (variable base level). The performance of the combined acceleration methods is evaluated by comparing the morphological indicators such as distributary channel networking and delta volumes derived from the model predictions for various levels of acceleration. The results of the accelerated models are compared to the outcomes from a series of simulations to capture autogenic variability. Autogenic variability is quantified by re-running identical models on an initial bathymetry with 1 cm added noise. The overall results show that the variability of the accelerated models fall within the autogenic variability range, suggesting that the application of acceleration methods does not significantly affect the simulated delta evolution. The Time-scale compression method (the acceleration method introduced in this paper) results in an increased computational efficiency of 75% without adversely affecting the simulated delta evolution compared to a base case. The combination of the Time-scale compression method with the existing acceleration methods has the potential to extend the application range of process-based models towards geologic timescales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.
Cloud computing is a promising technology to manage and improve utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving there is no need to unnecessarily operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on in heavy load cases to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is significant server setup costs and activation times. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows us to estimate a number of performance measures.
Adaptive θ-methods for pricing American options
NASA Astrophysics Data System (ADS)
Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran
2008-12-01
We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of θ-methods would require a Newton-type iterative procedure at each time step thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
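For reference, a generic linearly implicit θ-method time step has the form u^{n+1} = (I - θΔt A)^{-1}(I + (1-θ)Δt A) u^n. The sketch below applies it to a one-dimensional diffusion surrogate with a crude projection onto the payoff; it is not the authors' regularized Black-Scholes formulation, and the operator, payoff, and parameters are generic placeholders.

```python
import numpy as np

# 1-D heat-equation surrogate u_t = u_xx on [0, 1] with fixed boundary values.
N, dt, theta, steps = 200, 1e-4, 0.5, 500    # theta = 0.5 -> Crank-Nicolson
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
payoff = np.maximum(x - 0.5, 0.0)            # payoff-like obstacle (illustrative)
u = payoff.copy()

# Second-difference operator (dense for brevity; sparse in practice).
A = (np.diag(np.full(N - 1, 1.0), -1) - 2.0 * np.eye(N)
     + np.diag(np.full(N - 1, 1.0), 1)) / dx**2
A[0, :] = A[-1, :] = 0.0                     # keep boundary values fixed

I = np.eye(N)
lhs_inv = np.linalg.inv(I - theta * dt * A)  # invert once and reuse (use LU at scale)
rhs_op = I + (1.0 - theta) * dt * A          # linearly implicit: no Newton iteration

for _ in range(steps):
    u = lhs_inv @ (rhs_op @ u)
    u = np.maximum(u, payoff)                # crude early-exercise projection

print("solution range:", u.min(), u.max())
```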
Graphical processors for HEP trigger systems
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.
2017-02-01
General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to employ GPUs as accelerators in offline computations. With the steady decrease of GPU latencies and the increase in link and memory throughputs, time is ripe for real-time applications using GPUs in high-energy physics data acquisition and trigger systems. We will discuss the use of online parallel computing on GPUs for synchronous low level trigger systems, focusing on tests performed on the trigger of the CERN NA62 experiment. Latencies of all components need analysing, networking being the most critical. To keep it under control, we envisioned NaNet, an FPGA-based PCIe Network Interface Card (NIC) enabling GPUDirect connection. Moreover, we discuss how specific trigger algorithms can be parallelised and thus benefit from a GPU implementation, in terms of increased execution speed. Such improvements are particularly relevant for the foreseen LHC luminosity upgrade where highly selective algorithms will be crucial to maintain sustainable trigger rates with very high pileup.
Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions
NASA Astrophysics Data System (ADS)
Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.
2016-09-01
Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
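The kernel that dominates EPI methods is the product of a matrix function (the exponential or related functions) with a vector. For reference, the standard Krylov-projection version of that kernel, which KSS methods are proposed to replace, is sketched below: plain Arnoldi with a fixed subspace size and no adaptivity, applied to a small test operator.

```python
import numpy as np
from scipy.linalg import expm

def krylov_expv(A, v, m=30, dt=1.0):
    """Approximate expm(dt*A) @ v from an m-dimensional Krylov subspace."""
    n = v.size
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    k = m
    for j in range(m):                        # Arnoldi iteration
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:               # happy breakdown
            k = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k)
    e1[0] = 1.0
    # Project, exponentiate the small Hessenberg matrix, and lift back.
    return beta * V[:, :k] @ (expm(dt * H[:k, :k]) @ e1)

# Quick check against the dense exponential on a small stiff test operator.
A = -np.diag(np.arange(1.0, 201.0))
v = np.random.rand(200)
print("error:", np.linalg.norm(krylov_expv(A, v, m=40) - expm(A) @ v))
```

The cost driver noted in the abstract is visible here: as the problem becomes stiffer or better resolved, the subspace size m needed for a given accuracy grows, and each extra step adds another matrix-vector product and orthogonalization sweep.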
Real-time interactive 3D computer stereography for recreational applications
NASA Astrophysics Data System (ADS)
Miyazawa, Atsushi; Ishii, Motonaga; Okuzawa, Kazunori; Sakamoto, Ryuuichi
2008-02-01
With the increasing calculation cost of 3D computer stereography, its low-cost, high-speed implementation requires effective distribution of computing resources. In this paper, we attempt to re-classify 3D display technologies on the basis of human 3D perception, in order to determine what level of presence or reality is required in recreational video game systems. We then discuss the design and implementation of stereography systems in two categories of the new classification.
The applications of computers in biological research
NASA Technical Reports Server (NTRS)
Wei, Jennifer
1988-01-01
Research in many fields could not be done without computers. There is often a great deal of technical data, even in the biological fields, that need to be analyzed. These data, unfortunately, previously absorbed much of every researcher's time. Now, due to the steady increase in computer technology, biological researchers are able to make incredible advances in their work without the added worries of tedious and difficult tasks such as the many mathematical calculations involved in today's research and health care.
The New Feedback Control System of RFX-mod Based on the MARTe Real-Time Framework
NASA Astrophysics Data System (ADS)
Manduchi, G.; Luchetta, A.; Soppelsa, A.; Taliercio, C.
2014-06-01
A real-time system has been successfully used since 2004 in the RFX-mod nuclear fusion experiment to control the position of the plasma and its Magneto Hydrodynamic (MHD) modes. However, its latency and the limited computational power of its processors prevented the use of more aggressive control algorithms. Therefore, a new hardware and software architecture has been designed to overcome these limitations and to provide shorter latency and much greater computational power. The new system is based on a Linux multi-core server and uses MARTe, a framework for real-time control which is gaining interest in the fusion community.
Bringing MapReduce Closer To Data With Active Drives
NASA Astrophysics Data System (ADS)
Golpayegani, N.; Prathapan, S.; Warmka, R.; Wyatt, B.; Halem, M.; Trantham, J. D.; Markey, C. A.
2017-12-01
Moving computation closer to the data location has been a much theorized improvement to computation for decades. The increase in processor performance, the decrease in processor size and power requirements, and the increase in data-intensive computing have created a push to move computation as close to data as possible. We will show the next logical step in this evolution in computing: moving computation directly to storage. Hypothetical systems, known as Active Drives, have been proposed as early as 1998. These Active Drives would have a general-purpose CPU on each disk allowing for computations to be performed on them without the need to transfer the data to the computer over the system bus or via a network. We will utilize Seagate's Active Drives to perform general-purpose parallel computing using the MapReduce programming model directly on each drive. We will detail how the MapReduce programming model can be adapted to the Active Drive compute model to perform general-purpose computing with results comparable to traditional MapReduce computations performed via Hadoop. We will show how an Active Drive-based approach significantly reduces the amount of data leaving the drive when performing several common algorithms: subsetting and gridding. We will show that an Active Drive-based design significantly improves data transfer speeds into and out of drives compared to Hadoop's HDFS while at the same time keeping compute speeds comparable to Hadoop.
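The gridding example mentioned above maps naturally onto MapReduce: each drive would emit small (grid-cell, partial-aggregate) pairs from its local records, and only those aggregates leave the drive. The sketch below shows that pattern in ordinary Python as a single-machine stand-in; it is not the Active Drive runtime or the Hadoop implementation, and the records are invented.

```python
from collections import defaultdict
from functools import reduce

# Toy records: (latitude, longitude, value), e.g. satellite observations.
records = [(12.3, 40.1, 1.0), (12.7, 40.4, 3.0), (55.2, -10.9, 2.0)]

def mapper(rec):
    """Emit (grid cell, (sum, count)) for a 1-degree grid."""
    lat, lon, val = rec
    return (int(lat), int(lon)), (val, 1)

def reducer(acc, kv):
    """Combine partial sums per grid cell."""
    key, (s, c) = kv
    s0, c0 = acc[key]
    acc[key] = (s0 + s, c0 + c)
    return acc

partials = map(mapper, records)                  # would run where the data lives
cells = reduce(reducer, partials, defaultdict(lambda: (0.0, 0)))
means = {cell: s / c for cell, (s, c) in cells.items()}
print(means)   # only these small aggregates would need to leave each drive
```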
Neural Computations in a Dynamical System with Multiple Time Scales.
Mi, Yuanyuan; Lin, Xiaohan; Wu, Si
2016-01-01
Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what is the computational benefit for the brain to have such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered, which are persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.
On localization attacks against cloud infrastructure
NASA Astrophysics Data System (ADS)
Ge, Linqiang; Yu, Wei; Sistani, Mohammad Ali
2013-05-01
One of the key characteristics of cloud computing is the device and location independence that enables the user to access systems regardless of their location. Because cloud computing is heavily based on sharing resources, it is vulnerable to cyber attacks. In this paper, we investigate a localization attack that enables the adversary to leverage central processing unit (CPU) resources to localize the physical location of the server used by victims. By increasing and reducing CPU usage through the malicious virtual machine (VM), the response time from the victim VM will increase and decrease correspondingly. In this way, by embedding the probing signal into the CPU usage and correlating the same pattern in the response time from the victim VM, the adversary can find the location of the victim VM. To determine attack accuracy, we investigate features in both the time and frequency domains. We conduct both theoretical and experimental studies to demonstrate the effectiveness of such an attack.
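The detection step described above amounts to correlating a known on/off probe pattern with the victim's observed response-time series. A minimal sketch of that correlation step on synthetic timing data is shown below; no actual CPU contention, VMs, or cloud APIs are involved, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

probe = np.tile([1, 0], 50).astype(float)     # adversary's on/off CPU-load pattern
n = probe.size

# Response times of a co-located victim VM: baseline + contention + noise.
co_located = 10.0 + 2.0 * probe + rng.normal(0, 0.8, n)
# Response times of a victim VM on a different physical server: noise only.
remote = 10.0 + rng.normal(0, 0.8, n)

def detection_score(response_times):
    """Normalized correlation between the probe signal and the timing series."""
    p = (probe - probe.mean()) / probe.std()
    r = (response_times - response_times.mean()) / response_times.std()
    return float(np.dot(p, r) / n)

print("co-located score:", detection_score(co_located))   # clearly positive
print("remote score:    ", detection_score(remote))        # near zero
```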
Method and system for benchmarking computers
Gustafson, John L.
1993-09-14
A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
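The idea of a fixed benchmarking interval with a scalable task set can be mimicked in a few lines: run ever-finer subdivisions of a problem until the time budget expires and report how far the machine got. The workload below (a refining quadrature) is only an illustration of the scheme, not the patented system.

```python
import time

def benchmark(interval_s=1.0):
    """Fixed time budget; rating = finest resolution level finished in time."""
    deadline = time.monotonic() + interval_s
    level, rating = 0, -1
    while True:
        n = 2 ** level * 1000                  # each level doubles the work
        h = 1.0 / n                            # trapezoidal estimate of integral of x^2 on [0, 1]
        sum(((i * h) ** 2 + ((i + 1) * h) ** 2) * h / 2 for i in range(n))
        if time.monotonic() > deadline:
            break                              # this level finished after the interval: not counted
        rating, level = level, level + 1
    return rating

print("benchmark rating (finest level completed within the interval):", benchmark())
```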
Multigrid Computations of 3-D Incompressible Internal and External Viscous Rotating Flows
NASA Technical Reports Server (NTRS)
Sheng, Chunhua; Taylor, Lafayette K.; Chen, Jen-Ping; Jiang, Min-Yee; Whitfield, David L.
1996-01-01
This report presents multigrid methods for solving the 3-D incompressible viscous rotating flows in a NASA low-speed centrifugal compressor and a marine propeller 4119. Numerical formulations are given in both the rotating reference frame and the absolute frame. Comparisons are made for the accuracy, efficiency, and robustness between the steady-state scheme and the time-accurate scheme for simulating viscous rotating flows for complex internal and external flow applications. Prospects for further increase in efficiency and accuracy of unsteady time-accurate computations are discussed.
The economics of time shared computing: Congestion, user costs and capacity
NASA Technical Reports Server (NTRS)
Agnew, C. E.
1982-01-01
Time shared systems permit the fixed costs of computing resources to be spread over large numbers of users. However, bottleneck results in the theory of closed queueing networks can be used to show that this economy of scale will be offset by the increased congestion that results as more users are added to the system. If one considers the total costs, including the congestion cost, there is an optimal number of users for a system which equals the saturation value usually used to define system capacity.
Development of Automatic Control of Bayer Plant Digestion
NASA Astrophysics Data System (ADS)
Riffaud, J. P.
Supervisory computer control has been achieved in Alcan's Bayer Plants at Arvida, Quebec, Canada. The purpose of the automatic control system is to stabilize, and consequently increase, the alumina/caustic ratio within the digester train and in the blow-off liquor. Measurements of the electrical conductivity of the liquor are obtained from electrodeless conductivity meters. These signals, along with several others, are scanned by the computer and converted to engineering units, using specific relationships which are updated periodically for calibration purposes. On regular time intervals, values of ratio are compared to target values and adjustments are made to the bauxite flow entering the digesters. Dead time compensation included in the control algorithm enables a faster rate for corrections. Modification of production rate is achieved through careful timing of various flow changes. Calibration of the conductivity meters is achieved by sampling at intervals the liquor flowing through them, and analysing it with a thermometric titrator. Calibration of the thermometric titrator is done at intervals with a standard solution. Calculations for both calibrations are performed by computer from data entered by the analyst. The computer was used for on-line data collection, modelling of the digester system, calculation of disturbances and simulation of control strategies before implementing the most successful strategy in the Plant. Control of ratio has been improved by the integrated system, resulting in increased Plant productivity.
Impact of IT on health care professionals: changes in work and the productivity paradox.
Hebert, M A
1998-05-01
Health care organizations are under increasing pressure to become more efficient while at the same time maintaining or improving the quality of care. Information technology (IT), with its potential to increase efficiency, accuracy and accessibility of information, has been expected to play an important role in supporting these changes. We report the impact of patient care information systems on health care professionals in five community hospitals. The study framework incorporated both quality of care in Donabedian's elements of structure-process-outcome and Grusec's three levels of IT impact: direct substitution, proceduralization and new capabilities. The study results suggest that, for specific tasks, IT increased efficiency and productivity--a single employee was able to complete more tasks. However, this produced other consequences not predicted. Participants noted this change did not 'free up time' to spend with patients, but meant there were potentially more opportunities to provide services and more tasks to complete. Other effects included: reduced job satisfaction as more time was spent on the computer; less frequent interactions with patients and for shorter duration; and an increasingly 'visible' accountability as performance was easily monitored. There were also changes in roles and responsibilities as the computer enabled tasks to be carried out from a number of locations and by a variety of personnel. When innovations are introduced into organizations there are both expected and unexpected consequences. Increased awareness of the interactive relationship between computer users and the technology helps organizations better understand why results do, or do not, occur. One must look beyond just simply increasing productivity by replacing manual tasks with automated ones, to examining how the changes influence the nature of work and relationships within the organization.
Multiple Motor Learning Strategies in Visuomotor Rotation
Saijo, Naoki; Gomi, Hiroaki
2010-01-01
Background When exposed to a continuous directional discrepancy between movements of a visible hand cursor and the actual hand (visuomotor rotation), subjects adapt their reaching movements so that the cursor is brought to the target. Abrupt removal of the discrepancy after training induces reaching error in the direction opposite to the original discrepancy, which is called an aftereffect. Previous studies have shown that training with gradually increasing visuomotor rotation results in a larger aftereffect than with a suddenly increasing one. Although the aftereffect difference implies a difference in the learning process, it is still unclear whether the learned visuomotor transformations are qualitatively different between the training conditions. Methodology/Principal Findings We examined the qualitative changes in the visuomotor transformation after the learning of the sudden and gradual visuomotor rotations. The learning of the sudden rotation led to a significant increase of the reaction time for arm movement initiation and then the reaching error decreased, indicating that the learning is associated with an increase of computational load in motor preparation (planning). In contrast, the learning of the gradual rotation did not change the reaction time but resulted in an increase of the gain of feedback control, suggesting that the online adjustment of the reaching contributes to the learning of the gradual rotation. When the online cursor feedback was eliminated during the learning of the gradual rotation, the reaction time increased, indicating that additional computations are involved in the learning of the gradual rotation. Conclusions/Significance The results suggest that the change in the motor planning and online feedback adjustment of the movement are involved in the learning of the visuomotor rotation. The contributions of those computations to the learning are flexibly modulated according to the visual environment. Such multiple learning strategies would be required for reaching adaptation within a short training period. PMID:20195373
Monitoring Statistics Which Have Increased Power over a Reduced Time Range.
ERIC Educational Resources Information Center
Tang, S. M.; MacNeill, I. B.
1992-01-01
The problem of monitoring trends for changes at unknown times is considered. Statistics that permit one to focus high power on a segment of the monitored period are studied. Numerical procedures are developed to compute the null distribution of these statistics. (Author)
The programming language HAL: A specification
NASA Technical Reports Server (NTRS)
1971-01-01
HAL accomplishes three significant objectives: (1) increased readability, through the use of a natural two-dimensional mathematical format; (2) increased reliability, by providing for selective recognition of common data and subroutines, and by incorporating specific data-protect features; (3) real-time control facility, by including a comprehensive set of real-time control commands and signal conditions. Although HAL is designed primarily for programming on-board computers, it is general enough to meet nearly all the needs in the production, verification and support of aerospace, and other real-time applications.
Dynamics of threading dislocations in porous heteroepitaxial GaN films
NASA Astrophysics Data System (ADS)
Gutkin, M. Yu.; Rzhavtsev, E. A.
2017-12-01
The behavior of threading dislocations in porous heteroepitaxial gallium nitride (GaN) films has been studied using computer simulation with the two-dimensional discrete dislocation dynamics approach. A computational scheme is used in which pores are modeled as cross sections of cylindrical cavities that interact elastically with unidirectional parallel edge dislocations imitating threading dislocations. Time dependences of the coordinates and velocities of each dislocation in the investigated dislocation ensembles are obtained. The current structure of the dislocation ensemble is visualized as a location map of dislocations at any time. It is shown that the density of the resulting dislocation structures depends significantly on the ratio of the area of a pore cross section to the area of the simulation region. In particular, increasing the fraction of pore surface on the layer surface to 2% should lead to roughly a 1.5-fold decrease in the final density of threading dislocations, and increasing it to 15% should lead to approximately a 4.5-fold decrease.
Platform-Independence and Scheduling In a Multi-Threaded Real-Time Simulation
NASA Technical Reports Server (NTRS)
Sugden, Paul P.; Rau, Melissa A.; Kenney, P. Sean
2001-01-01
Aviation research often relies on real-time, pilot-in-the-loop flight simulation as a means to develop new flight software, flight hardware, or pilot procedures. Often these simulations become so complex that a single processor is incapable of performing the necessary computations within a fixed time-step. Threads are an elegant means to distribute the computational work-load when running on a symmetric multi-processor machine. However, programming with threads often requires operating system specific calls that reduce code portability and maintainability. While a multi-threaded simulation allows a significant increase in the simulation complexity, it also increases the workload of a simulation operator by requiring that the operator determine which models run on which thread. To address these concerns an object-oriented design was implemented in the NASA Langley Standard Real-Time Simulation in C++ (LaSRS++) application framework. The design provides a portable and maintainable means to use threads and also provides a mechanism to automatically load balance the simulation models.
Computational Process Modeling for Additive Manufacturing
NASA Technical Reports Server (NTRS)
Bagg, Stacey; Zhang, Wei
2014-01-01
Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.
Fast distributed large-pixel-count hologram computation using a GPU cluster.
Pan, Yuechao; Xu, Xuewu; Liang, Xinan
2013-09-10
Large-pixel-count holograms are an essential part of large-size holographic three-dimensional (3D) displays, but the generation of such holograms is computationally demanding. In order to address this issue, we have built a graphics processing unit (GPU) cluster with 32.5 Tflop/s computing power and implemented distributed hologram computation on it with speed improvement techniques, such as shared memory on GPU, GPU-level adaptive load balancing, and node-level load distribution. Using these speed improvement techniques on the GPU cluster, we have achieved a 71.4-times computation speed increase for 186M-pixel holograms. Furthermore, we have used the approaches of diffraction limits and subdivision of holograms to overcome the GPU memory limit in computing large-pixel-count holograms. 745M-pixel and 1.80G-pixel holograms were computed in 343 and 3326 s, respectively, for more than 2 million object points with RGB colors. Color 3D objects with 1.02M points were successfully reconstructed from a 186M-pixel hologram computed in 8.82 s with all the above three speed improvement techniques. It is shown that distributed hologram computation using a GPU cluster is a promising approach to increase the computation speed of large-pixel-count holograms for large-size holographic display.
Using Educational Games for Sign Language Learning--A SignWriting Learning Game: Case Study
ERIC Educational Resources Information Center
Bouzid, Yosra; Khenissi, Mohamed Ali; Essalmi, Fathi; Jemni, Mohamed
2016-01-01
Apart from being used as a means of entertainment, computer games have been adopted for a long time as a valuable tool for learning. Computer games can offer many learning benefits to students since they can consume their attention and increase their motivation and engagement which can then lead to stimulate learning. However, most of the research…
ERIC Educational Resources Information Center
Qian, Yizhou; Hambrusch, Susanne; Yadav, Aman; Gretter, Sarah
2018-01-01
The new Advanced Placement (AP) Computer Science (CS) Principles course increases the need for quality CS teachers and thus the need for professional development (PD). This article presents the results of a 2-year study investigating how teachers teaching the AP CS Principles course for the first time used online PD material. Our results showed…
Static Memory Deduplication for Performance Optimization in Cloud Computing.
Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan
2017-04-27
In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.
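The offline page-detection idea can be illustrated by hashing fixed-size pages of a binary's code segment and counting how many are identical across images: every duplicate page needs only one physical copy. The sketch below does this for arbitrary files and is only a stand-in for the kernel-level page sharing SMD performs; the paths in the example are hypothetical.

```python
import hashlib
from collections import Counter

PAGE_SIZE = 4096

def page_hashes(path):
    """Hash each fixed-size page of a file (stand-in for a VM's code segment)."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            page = f.read(PAGE_SIZE)
            if not page:
                break
            hashes.append(hashlib.sha256(page).hexdigest())
    return hashes

def dedup_savings(paths):
    """Pages that appear more than once need only one physical copy."""
    counts = Counter(h for p in paths for h in page_hashes(p))
    total = sum(counts.values())
    unique = len(counts)
    return total, unique, (total - unique) * PAGE_SIZE

# Example: two images sharing the same code segment (hypothetical file paths).
total, unique, saved = dedup_savings(["/bin/ls", "/bin/ls"])
print(f"{total} pages scanned, {unique} unique, ~{saved} bytes shareable")
```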
NASA Astrophysics Data System (ADS)
Nguyen, L.; Chee, T.; Minnis, P.; Palikonda, R.; Smith, W. L., Jr.; Spangenberg, D.
2016-12-01
The NASA LaRC Satellite ClOud and Radiative Property retrieval System (SatCORPS) processes and derives near real-time (NRT) global cloud products from operational geostationary satellite imager datasets. These products are being used in NRT to improve forecast models, support aircraft icing warnings, and support aircraft field campaigns. Next generation satellites, such as the Japanese Himawari-8 and the upcoming NOAA GOES-R, present challenges for NRT data processing and product dissemination due to the increase in temporal and spatial resolution. The volume of data is expected to increase by approximately a factor of 10. This increase in data volume will require additional IT resources to keep up with the processing demands of NRT requirements, and such resources are not readily available due to cost and other technical limitations. To anticipate and meet these computing resource requirements, we have employed a hybrid cloud computing environment to augment the generation of SatCORPS products. This paper will describe the workflow to ingest, process, and distribute SatCORPS products and the technologies used. Lessons learned from working on both AWS Cloud and GovCloud will be discussed: benefits, similarities, and differences that could impact the decision to use cloud computing and storage. A detailed cost analysis will be presented. In addition, future cloud utilization, parallelization, and architecture layout will be discussed for GOES-R.
The effects of feedback on computer workstation posture habits.
Epstein, Rhonda; Colford, Sean; Epstein, Ethan; Loye, Brandon; Walsh, Michael
2012-01-01
Repetitive stress injuries (RSI) and musculoskeletal disorders in the United States and worldwide are increasing at an alarming rate due to the advent of ubiquitous computer usage. Factors that lead to computer-related musculoskeletal disorders (MSD) include inadequately designed workstations, poor posture, and lack of knowledge about proper ergonomics and use habits. Studies have documented the negative impact of improper posture and the MSD seen in students and office workers due to frequent computer usage. Determine if the frequency (single vs. continuous reminder) and/or use of feedback affects posture at a computer workstation. Observations of posture habits were made in three local schools and one local company. Feedback effects were tested on the students (ages 10-15). Real time feedback was given in two studies. In one study, instructions and a verbal reminder were given to students and in a second study, a prototype 'Posture Pad' was developed to provide continuous feedback to the user. Verbal reminders to sit correctly led to transient improvement of posture. Use of the 'Posture Pad' resulted in significant improvement in posture with subjects exhibiting correct posture 98 ± 5% of the time. Real time feedback about how one is sitting is an effective mechanism for non-transient improvement of posture at computer workstations.
Power System Decomposition for Practical Implementation of Bulk-Grid Voltage Control Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.
Power system algorithms such as AC optimal power flow and coordinated volt/var control of the bulk power system are computationally intensive and become difficult to solve in operational time frames. The computational time required to run these algorithms increases exponentially as the size of the power system increases. The solution time for multiple subsystems is less than that for solving the entire system simultaneously, and the local nature of the voltage problem lends itself to such decomposition. This paper describes an algorithm that can be used to perform power system decomposition from the point of view of the voltage control problem. Our approach takes advantage of the dominant localized effect of voltage control and is based on clustering buses according to the electrical distances between them. One of the contributions of the paper is to use multidimensional scaling to compute n-dimensional Euclidean coordinates for each bus based on electrical distance, in order to apply algorithms like K-means clustering. A simple coordinated reactive power control of photovoltaic inverters for voltage regulation is used to demonstrate the effectiveness of the proposed decomposition algorithm and its components. The proposed decomposition method is demonstrated on the IEEE 118-bus system.
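The clustering step described above (embed buses in Euclidean space from pairwise electrical distances, then run K-means) can be sketched with scikit-learn as below. The distance matrix here is random and symmetric purely for illustration; it is not an actual electrical-distance matrix, and the cluster count is arbitrary.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_buses = 30

# Illustrative symmetric "electrical distance" matrix with zero diagonal.
D = rng.uniform(0.1, 1.0, (n_buses, n_buses))
D = (D + D.T) / 2.0
np.fill_diagonal(D, 0.0)

# Multidimensional scaling: n-dimensional Euclidean coordinates per bus.
coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(D)

# K-means on the embedded coordinates -> candidate voltage-control subsystems.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coords)
for k in range(4):
    print("subsystem", k, ":", np.where(labels == k)[0].tolist())
```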
Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Nikbay, Melike; Heeg, Jennifer
2017-01-01
This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel from NATO Science and Technology Organization, with the Task Group number AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework and implemented reduced-order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.
An evaluation of superminicomputers for thermal analysis
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Vidal, J. B.; Jones, G. K.
1982-01-01
The use of superminicomputers for solving a series of increasingly complex thermal analysis problems is investigated. The approach involved (1) installation and verification of the SPAR thermal analyzer software on superminicomputers at Langley Research Center and Goddard Space Flight Center, (2) solution of six increasingly complex thermal problems on this equipment, and (3) comparison of solutions (accuracy, CPU time, turnaround time, and cost) with solutions on large mainframe computers.
Elucidating reaction mechanisms on quantum computers.
Reiher, Markus; Wiebe, Nathan; Svore, Krysta M; Wecker, Dave; Troyer, Matthias
2017-07-18
With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.
Human face recognition using eigenface in cloud computing environment
NASA Astrophysics Data System (ADS)
Siregar, S. T. M.; Syahputra, M. F.; Rahmat, R. F.
2018-02-01
Recognizing a single face does not take long to process, but an attendance or security system in a company with many faces to recognize can take a long time. Cloud computing is a computing service performed not on a local device but on data-center infrastructure reached over the internet. Cloud computing also provides a scalability solution, since the resources needed can be increased when larger volumes of data must be processed. In this research, the eigenface method is applied for recognition, and the collection of training data is handled through the REST concept, which provides the resources so that the server can process the data according to the defined stages. After the research and development of this application, it can be concluded that face recognition is achieved by implementing eigenface for model building and by applying the REST concept as the endpoint for sending and receiving the related information used as recognition resources.
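As a rough illustration of the recognition core described above (the eigenface projection and nearest-neighbour matching only; the REST endpoints and cloud deployment are omitted, and the function names are our own), a minimal sketch in Python/NumPy:

```python
import numpy as np

def train_eigenfaces(faces, n_components=10):
    """faces: (n_samples, n_pixels) array of flattened grayscale face images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # PCA via SVD; the rows of vt are the eigenfaces (principal directions)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]
    weights = centered @ eigenfaces.T          # training faces projected onto eigenfaces
    return mean, eigenfaces, weights

def recognize(face, mean, eigenfaces, weights, labels):
    """Return the label of the training face closest in eigenface space."""
    w = (face - mean) @ eigenfaces.T
    distances = np.linalg.norm(weights - w, axis=1)
    return labels[int(np.argmin(distances))]
```

In the architecture described in the abstract, the training step would run on the cloud side, with clients submitting images and receiving labels through REST calls.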
NASA Technical Reports Server (NTRS)
Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.
1989-01-01
The findings of a preliminary investigation by Southwest Research Institute (SwRI) in simulation host computer concepts is presented. It is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized mainframe based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.
Visualization of Pulsar Search Data
NASA Astrophysics Data System (ADS)
Foster, R. S.; Wolszczan, A.
1993-05-01
The search for periodic signals from rotating neutron stars, or pulsars, has been a computationally taxing problem for astronomers for more than twenty-five years. Over this time interval, increases in computational capability have allowed ever more sensitive searches covering a larger parameter space. The volume of input data and the general presence of radio frequency interference typically produce numerous spurious signals. Visualization of the search output and enhanced real-time processing of significant candidate events allow the pulsar searcher to optimally process the data and search for new radio pulsars. The pulsar search algorithm and visualization system presented in this paper currently run on serial RISC-based workstations, a traditional vector-based supercomputer, and a massively parallel computer. The serial software algorithm and its modifications for massively parallel computing are described. Four successive searches for millisecond-period radio pulsars using the Arecibo telescope at 430 MHz have resulted in the successful detection of new long-period and millisecond-period radio pulsars.
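The core of such a search is a periodicity-detection step; a toy version is sketched below (assuming a de-dispersed, evenly sampled time series; a real pipeline adds dedispersion trials, harmonic summing and interference rejection, which are omitted here):

```python
import numpy as np

def periodicity_candidates(series, dt, n_candidates=5):
    """Return (frequency, rough S/N) pairs for the strongest peaks in the power spectrum."""
    spectrum = np.abs(np.fft.rfft(series - series.mean())) ** 2
    freqs = np.fft.rfftfreq(len(series), d=dt)
    snr = spectrum / spectrum[1:].mean()                     # crude significance vs. mean power
    order = np.argsort(snr[1:])[::-1][:n_candidates] + 1     # skip the DC bin
    return [(freqs[i], snr[i]) for i in order]
```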
Reciprocal Associations between Electronic Media Use and Behavioral Difficulties in Preschoolers.
Poulain, Tanja; Vogel, Mandy; Neef, Madlen; Abicht, Franziska; Hilbert, Anja; Genuneit, Jon; Körner, Antje; Kiess, Wieland
2018-04-21
The use of electronic media has increased substantially and is already observable in young children. The present study explored associations of preschoolers’ use of electronic media with age, gender, and socio-economic status, investigated time trends, and examined reciprocal longitudinal relations between children’s use of electronic media and their behavioral difficulties. The study participants included 527 German two- to six-year-old children whose parents had provided information on their use of electronic media and their behavioral difficulties at two time points, with approximately 12 months between baseline and follow-up. The analyses revealed that older vs. younger children, as well as children from families with a lower vs. higher socio-economic status, were more often reported to use electronic media. Furthermore, the usage of mobile phones increased significantly between 2011 and 2016. Most interestingly, baseline usage of computer/Internet predicted more emotional and conduct problems at follow-up, and baseline usage of mobile phones was associated with more conduct problems and hyperactivity or inattention at follow-up. Peer relationship problems at baseline, on the other hand, increased the likelihood of using computer/Internet and mobile phones at follow-up. The findings indicate that preschoolers’ use of electronic media, especially newer media such as computer/Internet and mobile phones, and their behavioral difficulties are mutually related over time.
Reciprocal Associations between Electronic Media Use and Behavioral Difficulties in Preschoolers
Vogel, Mandy; Neef, Madlen; Abicht, Franziska; Hilbert, Anja; Körner, Antje; Kiess, Wieland
2018-01-01
The use of electronic media has increased substantially and is already observable in young children. The present study explored associations of preschoolers’ use of electronic media with age, gender, and socio-economic status, investigated time trends, and examined reciprocal longitudinal relations between children’s use of electronic media and their behavioral difficulties. The study participants included 527 German two- to six-year-old children whose parents had provided information on their use of electronic media and their behavioral difficulties at two time points, with approximately 12 months between baseline and follow-up. The analyses revealed that older vs. younger children, as well as children from families with a lower vs. higher socio-economic status, were more often reported to use electronic media. Furthermore, the usage of mobile phones increased significantly between 2011 and 2016. Most interestingly, baseline usage of computer/Internet predicted more emotional and conduct problems at follow-up, and baseline usage of mobile phones was associated with more conduct problems and hyperactivity or inattention at follow-up. Peer relationship problems at baseline, on the other hand, increased the likelihood of using computer/Internet and mobile phones at follow-up. The findings indicate that preschoolers’ use of electronic media, especially newer media such as computer/Internet and mobile phones, and their behavioral difficulties are mutually related over time. PMID:29690498
Dynamic Computation Offloading for Low-Power Wearable Health Monitoring Systems.
Kalantarian, Haik; Sideris, Costas; Mortazavi, Bobak; Alshurafa, Nabil; Sarrafzadeh, Majid
2017-03-01
The objective of this paper is to describe and evaluate an algorithm to reduce power usage and increase battery lifetime for wearable health-monitoring devices. We describe a novel dynamic computation offloading scheme for real-time wearable health monitoring devices that adjusts the partitioning of data processing between the wearable device and mobile application as a function of desired classification accuracy. By making the correct offloading decision based on current system parameters, we show that we are able to reduce system power by as much as 20%. We demonstrate that computation offloading can be applied to real-time monitoring systems, and yields significant power savings. Making correct offloading decisions for health monitoring devices can extend battery life and improve adherence.
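A minimal sketch of the kind of offloading decision described above, under assumptions of our own (the field names and the per-partition accuracy/power figures are hypothetical, not the paper's parameters):

```python
def choose_partition(partitions, accuracy_target):
    """Pick the lowest-power data-processing split that still meets the accuracy target.

    Each partition describes how much processing stays on the wearable versus the
    phone, e.g. {'name': 'features-on-device', 'accuracy': 0.93, 'device_mw': 4.1}.
    """
    feasible = [p for p in partitions if p['accuracy'] >= accuracy_target]
    if not feasible:                                               # nothing meets the target:
        return max(partitions, key=lambda p: p['accuracy'])        # fall back to best accuracy
    return min(feasible, key=lambda p: p['device_mw'])             # else minimise device power
```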
Berner, Eta S.; Detmer, Don E.; Simborg, Donald
2005-01-01
For over thirty years, there have been predictions that the widespread clinical use of computers was imminent. Yet the “wave” has never broken. In this article, two broad time periods are examined: the 1960's to the 1980's and the 1980's to the present. Technology immaturity, health administrator focus on financial systems, application “unfriendliness,” and physician resistance were all barriers to acceptance during the early time period. Although these factors persist, changes in clinicians' economics, more computer literacy in the general population, and, most importantly, changes in government policies and increased support for clinical computing suggest that the wave may break in the next decade. PMID:15492029
2012-01-01
Background In order to inform interventions to prevent sedentariness, more longitudinal studies are needed focusing on stability and change over time in multiple sedentary behaviours. This paper investigates patterns of stability and change in TV/DVD use, computer/electronic game use and total screen time (TST) and factors associated with these patterns among Norwegian children in the transition between childhood and adolescence. Methods The baseline of this longitudinal study took place in September 2007 and included 975 students from 25 control schools of an intervention study, the HEalth In Adolescents (HEIA) study. The first follow-up took place in May 2008 and the second follow-up in May 2009, with 885 students participating at all time points (average age at baseline = 11.2, standard deviation ± 0.3). Time used for/spent on TV/DVD and computer/electronic games was self-reported, and a TST variable (hours/week) was computed. Tracking analyses based on absolute and rank measures, as well as regression analyses to assess factors associated with change in TST and with tracking high TST were conducted. Results Time spent on all sedentary behaviours investigated increased in both genders. Findings based on absolute and rank measures revealed a fair to moderate level of tracking over the 2 year period. High parental education was inversely related to an increase in TST among females. In males, self-efficacy related to barriers to physical activity and living with married or cohabitating parents were inversely related to an increase in TST. Factors associated with tracking high vs. low TST in the multinomial regression analyses were low self-efficacy and being of an ethnic minority background among females, and low self-efficacy, being overweight/obese and not living with married or cohabitating parents among males. Conclusions Use of TV/DVD and computer/electronic games increased with age and tracked over time in this group of 11-13 year old Norwegian children. Interventions targeting these sedentary behaviours should thus be introduced early. The identified modifiable and non-modifiable factors associated with change in TST and tracking of high TST should be taken into consideration when planning such interventions. PMID:22309715
Computer Technology for Industry
NASA Technical Reports Server (NTRS)
1979-01-01
In this age of the computer, more and more business firms are automating their operations for increased efficiency in a great variety of jobs, from simple accounting to managing inventories, from precise machining to analyzing complex structures. In the interest of national productivity, NASA is providing assistance both to longtime computer users and newcomers to automated operations. Through a special technology utilization service, NASA saves industry time and money by making available already developed computer programs which have secondary utility. A computer program is essentially a set of instructions which tells the computer how to produce desired information or effect by drawing upon its stored input. Developing a new program from scratch can be costly and time-consuming. Very often, however, a program developed for one purpose can readily be adapted to a totally different application. To help industry take advantage of existing computer technology, NASA operates the Computer Software Management and Information Center (COSMIC)(registered TradeMark),located at the University of Georgia. COSMIC maintains a large library of computer programs developed for NASA, the Department of Defense, the Department of Energy and other technology-generating agencies of the government. The Center gets a continual flow of software packages, screens them for adaptability to private sector usage, stores them and informs potential customers of their availability.
Biological modelling of a computational spiking neural network with neuronal avalanches.
Li, Xiumin; Chen, Qing; Xue, Fangzheng
2017-06-28
In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance.This article is part of the themed issue 'Mathematical methods in medicine: neuroscience, cardiology and pathology'. © 2017 The Author(s).
Biological modelling of a computational spiking neural network with neuronal avalanches
NASA Astrophysics Data System (ADS)
Li, Xiumin; Chen, Qing; Xue, Fangzheng
2017-05-01
In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue `Mathematical methods in medicine: neuroscience, cardiology and pathology'.
Response Time as an Indicator of Test Taker Speed: Assumptions Meet Reality
ERIC Educational Resources Information Center
Wise, Steven L.
2015-01-01
The growing presence of computer-based testing has brought with it the capability to routinely capture the time that test takers spend on individual test items. This, in turn, has led to an increased interest in potential applications of response time in measuring intellectual ability and achievement. Goldhammer (this issue) provides a very useful…
78 FR 59775 - Blueberry Promotion, Research and Information Order; Assessment Rate Increase
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-30
... demand. \\6\\ The econometric model used statistical methods with time series data to measure how strongly... been over 15 times greater than the costs. At the opposite end of the spectrum in the supply response, the average BCR was computed to be 5.36, implying that the benefits of the USHBC were over five times...
Perspectives on the Future of CFD
NASA Technical Reports Server (NTRS)
Kwak, Dochan
2000-01-01
This viewgraph presentation gives an overview of the future of computational fluid dynamics (CFD), which in the past has pioneered the field of flow simulation. Over time, CFD has progressed along with computing power, and numerical methods have advanced as CPU speed and memory capacity have increased. Complex configurations are routinely computed now, and direct numerical simulations (DNS) and large eddy simulations (LES) are used to study turbulence. As computing resources have shifted to parallel and distributed platforms, computer science aspects such as scalability (algorithmic and implementation), portability, and transparent coding have advanced. Examples of potential future (or current) challenges include risk assessment, limitations of heuristic models, and the development of CFD and information technology (IT) tools.
Programmable computing with a single magnetoresistive element
NASA Astrophysics Data System (ADS)
Ney, A.; Pampuch, C.; Koch, R.; Ploog, K. H.
2003-10-01
The development of transistor-based integrated circuits for modern computing is a story of great success. However, the proved concept for enhancing computational power by continuous miniaturization is approaching its fundamental limits. Alternative approaches consider logic elements that are reconfigurable at run-time to overcome the rigid architecture of the present hardware systems. Implementation of parallel algorithms on such `chameleon' processors has the potential to yield a dramatic increase of computational speed, competitive with that of supercomputers. Owing to their functional flexibility, `chameleon' processors can be readily optimized with respect to any computer application. In conventional microprocessors, information must be transferred to a memory to prevent it from getting lost, because electrically processed information is volatile. Therefore the computational performance can be improved if the logic gate is additionally capable of storing the output. Here we describe a simple hardware concept for a programmable logic element that is based on a single magnetic random access memory (MRAM) cell. It combines the inherent advantage of a non-volatile output with flexible functionality which can be selected at run-time to operate as an AND, OR, NAND or NOR gate.
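A software stand-in for the behaviour described above, a two-input logic element whose function is selected at run time and whose last output is retained, mimicking non-volatility (purely illustrative; it does not model the magnetoresistive physics):

```python
class ProgrammableGate:
    """Run-time reconfigurable 2-input gate with a stored output."""

    _FUNCS = {
        'AND':  lambda a, b: a & b,
        'OR':   lambda a, b: a | b,
        'NAND': lambda a, b: 1 - (a & b),
        'NOR':  lambda a, b: 1 - (a | b),
    }

    def __init__(self, mode='AND'):
        self.mode = mode
        self.state = 0              # retained ("non-volatile") output

    def configure(self, mode):
        self.mode = mode            # reprogram the gate function at run time

    def evaluate(self, a, b):
        self.state = self._FUNCS[self.mode](a, b)
        return self.state
```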
Accelerating Time Integration for the Shallow Water Equations on the Sphere Using GPUs
Archibald, R.; Evans, K. J.; Salinger, A.
2015-06-01
The push towards larger and larger computational platforms has made it possible for climate simulations to resolve climate dynamics across multiple spatial and temporal scales. This direction in climate simulation has created a strong need to develop scalable timestepping methods capable of accelerating throughput on high performance computing. This study details the recent advances in the implementation of implicit time stepping of the spectral element dynamical core within the United States Department of Energy (DOE) Accelerated Climate Model for Energy (ACME) on graphical processing unit (GPU) based machines. We demonstrate how solvers in the Trilinos project are interfaced with ACME and GPU kernels to increase computational speed of the residual calculations in the implicit time stepping method for the atmosphere dynamics. We demonstrate the optimization gains and data structure reorganization that facilitates the performance improvements.
NASA Astrophysics Data System (ADS)
Saxena, Nishank; Hofmann, Ronny; Alpak, Faruk O.; Berg, Steffen; Dietderich, Jesse; Agarwal, Umang; Tandon, Kunj; Hunter, Sander; Freeman, Justin; Wilson, Ove Bjorn
2017-11-01
We generate a novel reference dataset to quantify the impact of numerical solvers, boundary conditions, and simulation platforms. We consider a variety of microstructures ranging from idealized pipes to digital rocks. Pore throats of the digital rocks considered are large enough to be well resolved with state-of-the-art micro-computed tomography technology. Permeability is computed using multiple numerical engines, 12 in total, including Lattice-Boltzmann, computational fluid dynamics, voxel based, fast semi-analytical, and known empirical models. Thus, we provide a measure of uncertainty associated with flow computations of digital media. Moreover, the reference and standards dataset generated is the first of its kind and can be used to test and improve new fluid flow algorithms. We find that there is an overall good agreement between solvers for idealized cross-section shape pipes. As expected, the disagreement increases with increasing complexity of the pore space. Numerical solutions for pipes with sinusoidal variation of cross section show larger variability compared to pipes of constant cross-section shape. We notice relatively larger variability in the computed permeability of digital rocks, with a coefficient of variation of up to 25% between the various solvers. Still, these differences are small given other subsurface uncertainties. The observed differences between solvers can be attributed to several causes including differences in boundary conditions, numerical convergence criteria, and parameterization of fundamental physics equations. Solvers that perform additional meshing of irregular pore shapes require an additional step in practical workflows which involves skill and can introduce further uncertainty. Computation times for digital rocks vary from minutes to several days depending on the algorithm and available computational resources. We find that more stringent convergence criteria can improve solver accuracy but at the expense of longer computation time.
Reconfigurable Computing As an Enabling Technology for Single-Photon-Counting Laser Altimetry
NASA Technical Reports Server (NTRS)
Powell, Wesley; Hicks, Edward; Pinchinat, Maxime; Dabney, Philip; McGarry, Jan; Murray, Paul
2003-01-01
Single-photon-counting laser altimetry is a new measurement technique offering significant advantages in vertical resolution, reducing instrument size, mass, and power, and reducing laser complexity as compared to analog or threshold detection laser altimetry techniques. However, these improvements come at the cost of a dramatically increased requirement for onboard real-time data processing. Reconfigurable computing has been shown to offer considerable performance advantages in performing this processing. These advantages have been demonstrated on the Multi-KiloHertz Micro-Laser Altimeter (MMLA), an aircraft based single-photon-counting laser altimeter developed by NASA Goddard Space Flight Center with several potential spaceflight applications. This paper describes how reconfigurable computing technology was employed to perform MMLA data processing in real-time under realistic operating constraints, along with the results observed. This paper also expands on these prior results to identify concepts for using reconfigurable computing to enable spaceflight single-photon-counting laser altimeter instruments.
An automatic step adjustment method for average power analysis technique used in fiber amplifiers
NASA Astrophysics Data System (ADS)
Liu, Xue-Ming
2006-04-01
An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two distinctive merits, higher-order accuracy and an ASA mechanism, so it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared with the APA technique, the proposed method increases the computing speed by more than a hundredfold for the same error level. By computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude for the same number of amplifying sections. The proposed method is capable of rapidly and effectively computing the model equations of fiber Raman amplifiers and semiconductor lasers.
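The abstract does not give the scheme itself, but the general idea of an automatic step adjustment can be illustrated with generic step-doubling error control (a sketch under our own assumptions, not the authors' algorithm):

```python
import numpy as np

def integrate_adaptive(f, y0, z0, z_end, tol=1e-6, h0=1e-3):
    """Integrate dy/dz = f(z, y) along the fiber with automatic step-size adjustment."""
    z, y, h = z0, np.asarray(y0, dtype=float), h0
    while z < z_end:
        h = min(h, z_end - z)
        y_full = y + h * f(z, y)                       # one full Euler step
        y_half = y + 0.5 * h * f(z, y)                 # two half steps for comparison
        y_two = y_half + 0.5 * h * f(z + 0.5 * h, y_half)
        if np.max(np.abs(y_two - y_full)) <= tol:
            z, y = z + h, y_two                        # accept and enlarge the step
            h *= 1.5
        else:
            h *= 0.5                                   # reject and retry with a smaller step
    return y
```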
The computational challenges of Earth-system science.
O'Neill, Alan; Steenman-Clark, Lois
2002-06-15
The Earth system--comprising atmosphere, ocean, land, cryosphere and biosphere--is an immensely complex system, involving processes and interactions on a wide range of space- and time-scales. To understand and predict the evolution of the Earth system is one of the greatest challenges of modern science, with success likely to bring enormous societal benefits. High-performance computing, along with the wealth of new observational data, is revolutionizing our ability to simulate the Earth system with computer models that link the different components of the system together. There are, however, considerable scientific and technical challenges to be overcome. This paper will consider four of them: complexity, spatial resolution, inherent uncertainty and time-scales. Meeting these challenges requires a significant increase in the power of high-performance computers. The benefits of being able to make reliable predictions about the evolution of the Earth system should, on their own, amply repay this investment.
Parallel photonic information processing at gigabyte per second data rates using transient states
NASA Astrophysics Data System (ADS)
Brunner, Daniel; Soriano, Miguel C.; Mirasso, Claudio R.; Fischer, Ingo
2013-01-01
The increasing demands on information processing require novel computational concepts and true parallelism. Nevertheless, hardware realizations of unconventional computing approaches never exceeded a marginal existence. While the application of optics in super-computing receives reawakened interest, new concepts, partly neuro-inspired, are being considered and developed. Here we experimentally demonstrate the potential of a simple photonic architecture to process information at unprecedented data rates, implementing a learning-based approach. A semiconductor laser subject to delayed self-feedback and optical data injection is employed to solve computationally hard tasks. We demonstrate simultaneous spoken digit and speaker recognition and chaotic time-series prediction at data rates beyond 1Gbyte/s. We identify all digits with very low classification errors and perform chaotic time-series prediction with 10% error. Our approach bridges the areas of photonic information processing, cognitive and information science.
Application of multi-grid method on the simulation of incremental forging processes
NASA Astrophysics Data System (ADS)
Ramadan, Mohamad; Khaled, Mahmoud; Fourment, Lionel
2016-10-01
Numerical simulation has become essential for manufacturing large parts by incremental forging processes. It is a powerful tool for revealing the physical phenomena involved, but behind the scenes an expensive bill must be paid: computational time. This is why many techniques have been developed to decrease the computational time of numerical simulation. The Multi-Grid method is a numerical procedure that reduces the computational time by solving the system of equations on several meshes of decreasing size, which allows the low-frequency components of the solution to be smoothed as quickly as the high-frequency ones. In this paper a Multi-Grid method is applied to a cogging process in the software Forge 3. The study is carried out with an increasing number of degrees of freedom. The results show that the calculation time is divided by two for a mesh of 39,000 nodes. The method is promising, especially if coupled with the Multi-Mesh method.
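To make the principle concrete, here is a textbook two-grid V-cycle for a 1D Poisson problem (our own illustration of the Multi-Grid idea; the paper applies the method to 3D forging inside Forge 3):

```python
import numpy as np

def jacobi(u, f, h, sweeps, omega=2/3):
    """Weighted-Jacobi smoothing for -u'' = f with Dirichlet boundaries."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid_cycle(u, f, h, nu1=3, nu2=3):
    """One V-cycle on a fine grid with an odd number of points (e.g. 2**k + 1)."""
    u = jacobi(u, f, h, nu1)                                      # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (-u[:-2] + 2 * u[1:-1] - u[2:]) / h**2    # fine-grid residual
    r_c = r[::2].copy()                                           # restriction by injection
    n_c = len(r_c) - 2
    A_c = (np.diag(2 * np.ones(n_c)) + np.diag(-np.ones(n_c - 1), 1)
           + np.diag(-np.ones(n_c - 1), -1)) / (2 * h) ** 2       # coarse operator (spacing 2h)
    e_c = np.zeros_like(r_c)
    e_c[1:-1] = np.linalg.solve(A_c, r_c[1:-1])                   # solve coarse error equation
    e = np.zeros_like(u)
    e[::2] = e_c                                                  # prolongation: copy ...
    e[1::2] = 0.5 * (e[:-1:2] + e[2::2])                          # ... and interpolate
    return jacobi(u + e, f, h, nu2)                               # correction + post-smoothing
```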
Scheduling algorithms for automatic control systems for technological processes
NASA Astrophysics Data System (ADS)
Chernigovskiy, A. S.; Tsarev, R. Yu; Kapulin, D. V.
2017-01-01
The wide use of automatic process control systems, and of high-performance systems containing a number of computers (processors), creates opportunities for high-quality, fast production that increases the competitiveness of an enterprise. Exact and fast calculations, control computations, and the processing of big data arrays all require a high level of productivity and, at the same time, minimal time for data handling and for obtaining results. To reach the best time, it is necessary not only to use computing resources optimally, but also to design and develop the software so that the time gain is maximal. For this purpose, task (job or operation) scheduling techniques for multi-machine/multiprocessor systems are applied. Some basic task scheduling methods for multi-machine process control systems are considered in this paper, their advantages and disadvantages are brought to light, and some considerations for their use in developing software for automatic process control systems are given.
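As one concrete example of the kind of method such a survey covers, a longest-processing-time (LPT) list scheduler is sketched below (illustrative only; the task names and durations are hypothetical):

```python
import heapq

def lpt_schedule(tasks, n_machines):
    """Assign tasks (dict name -> duration) to machines, longest first, always to the
    currently least-loaded machine; returns the assignment and the makespan."""
    machines = [(0.0, m, []) for m in range(n_machines)]          # (load, id, assigned tasks)
    heapq.heapify(machines)
    for name, duration in sorted(tasks.items(), key=lambda kv: -kv[1]):
        load, m, assigned = heapq.heappop(machines)
        assigned.append(name)
        heapq.heappush(machines, (load + duration, m, assigned))
    return machines, max(load for load, _, _ in machines)

# Example: three processors, five control-computation jobs (durations are made up).
plan, makespan = lpt_schedule({'filter': 7, 'pid': 2, 'logging': 4, 'fft': 9, 'io': 3}, 3)
```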
Floating-Point Modules Targeted for Use with RC Compilation Tools
NASA Technical Reports Server (NTRS)
Sahin, Ibrahin; Gloster, Clay S.
2000-01-01
Reconfigurable Computing (RC) has emerged as a viable computing solution for computationally intensive applications. Several applications have been mapped to RC systems and, in most cases, they provided the smallest published execution time. Although RC systems offer significant performance advantages over general-purpose processors, they require more application development time than general-purpose processors. This increased development time of RC systems provides the motivation to develop an optimized module library with an assembly-language instruction-format interface for use with future RC systems, which will reduce development time significantly. In this paper, we present area/performance metrics for several different types of floating point (FP) modules that can be utilized to develop complex FP applications. These modules are highly pipelined and optimized for both speed and area. Using these modules, an example application, FP matrix multiplication, is also presented. Our results and experience show that, with these modules, an 8-10X speedup over general-purpose processors can be achieved.
Optical signal processing using photonic reservoir computing
NASA Astrophysics Data System (ADS)
Salehi, Mohammad Reza; Dehyadegari, Louiza
2014-10-01
As a new approach to recognition and classification problems, photonic reservoir computing has such advantages as parallel information processing, power efficiency and high speed. In this paper, a photonic structure is proposed for reservoir computing and investigated using a simple, yet non-partial, noisy time-series prediction task. This study includes the application of a suitable topology with self-feedback in a network of SOAs - which lends the system a strong memory - and the adjustment of adequate parameters, resulting in perfect recognition accuracy (100%) for noise-free time series, a 3% improvement over previous results. For the classification of noisy time series, the accuracy increased by 4%, to 96%. Furthermore, an analytical approach is suggested for solving the rate equations, which leads to a substantial decrease in simulation time - an important consideration in the classification of large signals such as those in speech recognition - and yields better results than previous works.
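A generic software analogue of the reservoir-computing approach is sketched below as an echo-state network with a ridge-regression readout (a minimal model of our own; it does not reproduce the SOA rate equations or the photonic topology of the paper):

```python
import numpy as np

def reservoir_fit_predict(u_train, y_train, u_test, n_res=200, rho=0.9, ridge=1e-6, seed=0):
    """Fixed random recurrent reservoir; only the linear readout is trained."""
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, n_res)
    w = rng.uniform(-0.5, 0.5, (n_res, n_res))
    w *= rho / np.max(np.abs(np.linalg.eigvals(w)))      # set the spectral radius

    def run(u):
        x, states = np.zeros(n_res), []
        for u_t in u:                                    # drive the reservoir with the input
            x = np.tanh(w_in * u_t + w @ x)
            states.append(x.copy())
        return np.array(states)

    X = run(u_train)                                     # reservoir states for the training input
    w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y_train)
    return run(u_test) @ w_out                           # trained readout applied to test states
```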
Na, Okpin; Cai, Xiao-Chuan; Xi, Yunping
2017-01-01
The prediction of chloride-induced corrosion is very important for the durable service life of concrete structures. To simulate the durability performance of concrete structures more realistically, complex scientific methods and more accurate material models are needed. In order to obtain robust predictions of the corrosion initiation time and to resolve the thin layer from the concrete surface to the reinforcement, a large number of fine meshes are also used. The purpose of this study is to suggest a more realistic physical model of coupled hygro-chemo transport and to implement the model with a parallel finite element algorithm. Furthermore, a microclimate model with environmental humidity and seasonal temperature is adopted. As a result, a prediction model of chloride diffusion under unsaturated conditions was developed with parallel algorithms and applied to an existing bridge to validate the model with multiple boundary conditions. As the number of processors increased, the computational time decreased until the number of processors reached an optimum; beyond that point the computational time increased because the communication time between the processors increased. The framework of the present model can be extended to simulate multi-species de-icing salt ingress into non-saturated concrete structures in future work. PMID:28772714
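The reported trend, speedup up to an optimal processor count followed by slowdown from communication, can be captured by a simple cost model (the coefficients below are illustrative, not fitted to the paper's data):

```python
def wall_time(p, t_serial=3600.0, comm_per_proc=2.0):
    """Toy model: work divides across p processors, communication grows with p."""
    return t_serial / p + comm_per_proc * (p - 1)

# The continuous optimum is near sqrt(t_serial / comm_per_proc) ~ 42 processors;
# among the sampled counts below, 32 gives the shortest wall time.
times = {p: wall_time(p) for p in (1, 2, 4, 8, 16, 32, 64, 128)}
p_best = min(times, key=times.get)
```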
Hsu, John; Huang, Jie; Fung, Vicki; Robertson, Nan; Jimison, Holly; Frankel, Richard
2005-01-01
The aim of this study was to evaluate the impact of introducing health information technology (HIT) on physician-patient interactions during outpatient visits. This was a longitudinal pre-post study: two months before and one and seven months after introduction of examination room computers. Patient questionnaires (n = 313) after primary care visits with physicians (n = 8) within an integrated delivery system. There were three patient satisfaction domains: (1) satisfaction with visit components, (2) comprehension of the visit, and (3) perceptions of the physician's use of the computer. Patients reported that physicians used computers in 82.3% of visits. Compared with baseline, overall patient satisfaction with visits increased seven months after the introduction of computers (odds ratio [OR] = 1.50; 95% confidence interval [CI]: 1.01-2.22), as did satisfaction with physicians' familiarity with patients (OR = 1.60, 95% CI: 1.01-2.52), communication about medical issues (OR = 1.61; 95% CI: 1.05-2.47), and comprehension of decisions made during the visit (OR = 1.63; 95% CI: 1.06-2.50). In contrast, there were no significant changes in patient satisfaction with comprehension of self-care responsibilities, communication about psychosocial issues, or available visit time. Seven months post-introduction, patients were more likely to report that the computer helped the visit run in a more timely manner (OR = 1.76; 95% CI: 1.28-2.42) compared with the first month after introduction. There were no other significant changes in patient perceptions of the computer use over time. The examination room computers appeared to have positive effects on physician-patient interactions related to medical communication without significant negative effects on other areas such as time available for patient concerns. Further study is needed to better understand HIT use during outpatient visits.
A Series of Computational Neuroscience Labs Increases Comfort with MATLAB.
Nichols, David F
2015-01-01
Computational simulations allow for a low-cost, reliable means to demonstrate complex and oftentimes inaccessible concepts to undergraduates. However, students without prior computer programming training may find working with code-based simulations to be intimidating and distracting. A series of computational neuroscience labs involving the Hodgkin-Huxley equations, an Integrate-and-Fire model, and a Hopfield Memory network was used in an undergraduate neuroscience laboratory component of an introductory level course. Using short focused surveys before and after each lab, student comfort levels were shown to increase drastically from a majority of students being uncomfortable or having neutral feelings about working in the MATLAB environment to a vast majority of students being comfortable working in the environment. Though change was reported within each lab, a series of labs was necessary in order to establish a lasting high level of comfort. Comfort working with code is important as a first step in acquiring computational skills that are required to address many questions within neuroscience.
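The labs themselves are in MATLAB; as a flavour of the kind of model involved, a leaky integrate-and-fire neuron is sketched here in Python (parameter values are typical textbook choices, not the course's exact settings):

```python
import numpy as np

def lif_neuron(i_input, dt=0.1, tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Euler integration of a leaky integrate-and-fire neuron driven by current i_input."""
    v = np.full(len(i_input), v_rest)
    spike_times = []
    for t in range(1, len(i_input)):
        dv = (-(v[t - 1] - v_rest) + r_m * i_input[t - 1]) / tau   # membrane dynamics
        v[t] = v[t - 1] + dt * dv
        if v[t] >= v_thresh:                                       # threshold crossing
            v[t] = v_reset                                         # reset after a spike
            spike_times.append(t * dt)
    return v, spike_times
```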
A Series of Computational Neuroscience Labs Increases Comfort with MATLAB
Nichols, David F.
2015-01-01
Computational simulations allow for a low-cost, reliable means to demonstrate complex and oftentimes inaccessible concepts to undergraduates. However, students without prior computer programming training may find working with code-based simulations to be intimidating and distracting. A series of computational neuroscience labs involving the Hodgkin-Huxley equations, an Integrate-and-Fire model, and a Hopfield Memory network was used in an undergraduate neuroscience laboratory component of an introductory level course. Using short focused surveys before and after each lab, student comfort levels were shown to increase drastically from a majority of students being uncomfortable or having neutral feelings about working in the MATLAB environment to a vast majority of students being comfortable working in the environment. Though change was reported within each lab, a series of labs was necessary in order to establish a lasting high level of comfort. Comfort working with code is important as a first step in acquiring computational skills that are required to address many questions within neuroscience. PMID:26557798
Space Transportation and the Computer Industry: Learning from the Past
NASA Technical Reports Server (NTRS)
Merriam, M. L.; Rasky, D.
2002-01-01
Since the space shuttle began flying in 1981, NASA has made a number of attempts to advance the state of the art in space transportation. In spite of billions of dollars invested, and several concerted attempts, no replacement for the shuttle is expected before 2010. Furthermore, the cost of access to space has dropped very slowly over the last two decades. On the other hand, the same two decades have seen dramatic progress in the computer industry. Computational speeds have increased by about a factor of 1000, and available memory, disk space, and network bandwidth have seen similar increases. At the same time, the cost of computing has dropped by about a factor of 10000. Is the space transportation problem simply harder? Or is there something to be learned from the computer industry? In looking for the answers, this paper reviews the early history of NASA's experience with supercomputers and NASA's visionary course change in supercomputer procurement strategy.
NASA Astrophysics Data System (ADS)
Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain
2017-10-01
We present a method to improve the accuracy of velocity measurements for fluid flow or particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need of filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
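One way to see why combining several inter-frame time steps helps (a sketch assuming white, uncorrelated position noise b of variance \sigma_b^2; not necessarily the exact estimator used in the paper): the finite-difference velocity over a separation \tau is

```latex
u_\tau(t) = \frac{x(t+\tau)-x(t)}{\tau} + \frac{b(t+\tau)-b(t)}{\tau},
\qquad
\langle u_\tau^2 \rangle = \langle u^2 \rangle + \frac{2\sigma_b^2}{\tau^2},
```

up to a finite-difference bias that vanishes as \tau \to 0, so evaluating the moments at several values of \tau and extrapolating the 1/\tau^2 contribution away recovers the noise-free statistics without filtering or windowing.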
Neighborhood disorder and screen time among 10-16 year old Canadian youth: A cross-sectional study
2012-01-01
Background Screen time activities (e.g., television, computers, video games) have been linked to several negative health outcomes among young people. In order to develop evidence-based interventions to reduce screen time, the factors that influence the behavior need to be better understood. High neighborhood disorder, which may encourage young people to stay indoors where screen time activities are readily available, is one potential factor to consider. Methods Results are based on 15,917 youth in grades 6-10 (aged 10-16 years old) who participated in the Canadian 2009/10 Health Behaviour in School-aged Children Survey (HBSC). Total hours per week of television, video games, and computer use were reported by the participating students in the HBSC student questionnaire. Ten items of neighborhood disorder including safety, neighbors taking advantage, drugs/drinking in public, ethnic tensions, gangs, crime, conditions of buildings/grounds, abandoned buildings, litter, and graffiti were measured using the HBSC student questionnaire, the HBSC administrator questionnaire, and Geographic Information Systems. Based upon these 10 items, social and physical neighborhood disorder variables were derived using principal component analysis. Multivariate multilevel logistic regression analyses were used to examine the relationship between social and physical neighborhood disorder and individual screen time variables. Results High (top quartile) social neighborhood disorder was associated with approximately 35-45% increased risk of high (top quartile) television, computer, and video game use. Physical neighborhood disorder was not associated with screen time activities after adjusting for social neighborhood disorder. However, high social and physical neighborhood disorder combined was associated with approximately 40-60% increased likelihood of high television, computer, and video game use. Conclusion High neighborhood disorder is one environmental factor that may be important to consider for future public health interventions and strategies aiming to reduce screen time among youth. PMID:22651908
Harris, C; Straker, L; Pollock, C
2013-01-01
Young people are exposed to a range of information technologies (IT) in different environments, including home and school; however, the factors influencing IT use at home and school are poorly understood. The aim of this study was to investigate young people's computer exposure patterns at home and school, and related factors such as age, gender and the types of IT used. 1351 children in Years 1, 6, 9 and 11 from 10 schools in metropolitan Western Australia were surveyed. Most children had access to computers at home and school, with computer exposures comparable to TV, reading and writing. Total computer exposure was greater at home than school, and increased with age. Computer activities varied with age and gender and became more social with increased age, while at the same time parental involvement decreased. Bedroom computer use was found to result in higher exposure patterns. High use of home computers and high use of school computers were associated with each other. Associations varied depending on the type of IT exposure measure (frequency, mean weekly hours, usual and longest duration). The frequency and duration of children's computer exposure were associated with a complex interplay of the environment of use, the participant's age and gender, and other IT activities.
Integration of active pauses and pattern of muscular activity during computer work.
St-Onge, Nancy; Samani, Afshin; Madeleine, Pascal
2017-09-01
Submaximal isometric muscle contractions have been reported to increase variability of muscle activation during computer work; however, other types of active contractions may be more beneficial. Our objective was to determine which type of active pause vs. rest is more efficient in changing muscle activity pattern during a computer task. Asymptomatic regular computer users performed a standardised 20-min computer task four times, integrating a different type of pause: sub-maximal isometric contraction, dynamic contraction, postural exercise and rest. Surface electromyographic (SEMG) activity was recorded bilaterally from five neck/shoulder muscles. Root-mean-square decreased with isometric pauses in the cervical paraspinals, upper trapezius and middle trapezius, whereas it increased with rest. Variability in the pattern of muscular activity was not affected by any type of pause. Overall, no detrimental effects on the level of SEMG during active pauses were found suggesting that they could be implemented without a cost on activation level or variability. Practitioner Summary: We aimed to determine which type of active pause vs. rest is best in changing muscle activity pattern during a computer task. Asymptomatic computer users performed a standardised computer task integrating different types of pauses. Muscle activation decreased with isometric pauses in neck/shoulder muscles, suggesting their implementation during computer work.
Three-dimensional particle-particle simulations: Dependence of relaxation time on plasma parameter
NASA Astrophysics Data System (ADS)
Zhao, Yinjian
2018-05-01
A particle-particle simulation model is applied to investigate the dependence of the relaxation time on the plasma parameter in a three-dimensional unmagnetized plasma. It is found that the relaxation time increases linearly as the plasma parameter increases within the range of the plasma parameter from 2 to 10; when the plasma parameter equals 2, the relaxation time is independent of the total number of particles, but when the plasma parameter equals 10, the relaxation time slightly increases as the total number of particles increases, which indicates the transition of a plasma from collisional to collisionless. In addition, ions with initial Maxwell-Boltzmann (MB) distribution are found to stay in the MB distribution during the whole simulation time, and the mass of ions does not significantly affect the relaxation time of electrons. This work also shows the feasibility of the particle-particle model when using GPU parallel computing techniques.
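For reference, and with the caveat that conventions differ between authors (so the paper's exact definition may carry a different numerical prefactor), the Debye length and the number of electrons in a Debye sphere are

```latex
\lambda_D = \sqrt{\frac{\varepsilon_0 k_B T_e}{n_e e^2}}, \qquad
\Lambda = \frac{4}{3}\pi\, n_e \lambda_D^3 ,
```

so small values of the plasma parameter correspond to strongly coupled, collision-dominated conditions, consistent with the collisional-to-collisionless transition discussed above as the parameter grows.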
Dilsizian, Steven E; Siegel, Eliot L
2014-01-01
Although advances in information technology in the past decade have come in quantum leaps in nearly every aspect of our lives, they seem to be coming at a slower pace in the field of medicine. However, the implementation of electronic health records (EHR) in hospitals is increasing rapidly, accelerated by the meaningful use initiatives associated with the Center for Medicare & Medicaid Services EHR Incentive Programs. The transition to electronic medical records and availability of patient data has been associated with increases in the volume and complexity of patient information, as well as an increase in medical alerts, with resulting "alert fatigue" and increased expectations for rapid and accurate diagnosis and treatment. Unfortunately, these increased demands on health care providers create greater risk for diagnostic and therapeutic errors. In the near future, artificial intelligence (AI)/machine learning will likely assist physicians with differential diagnosis of disease, treatment options suggestions, and recommendations, and, in the case of medical imaging, with cues in image interpretation. Mining and advanced analysis of "big data" in health care provide the potential not only to perform "in silico" research but also to provide "real time" diagnostic and (potentially) therapeutic recommendations based on empirical data. "On demand" access to high-performance computing and large health care databases will support and sustain our ability to achieve personalized medicine. The IBM Jeopardy! Challenge, which pitted the best all-time human players against the Watson computer, captured the imagination of millions of people across the world and demonstrated the potential to apply AI approaches to a wide variety of subject matter, including medicine. The combination of AI, big data, and massively parallel computing offers the potential to create a revolutionary way of practicing evidence-based, personalized medicine.
NASA Technical Reports Server (NTRS)
Moravec, Hans
1993-01-01
Exploration and colonization of the universe awaits, but Earth-adapted biological humans are ill-equipped to respond to the challenge. Machines have gone farther and seen more, limited though they presently are by insect-like behavior inflexibility. As they become smarter over the coming decades, space will be theirs. Organizations of robots of ever increasing intelligence and sensory and motor ability will expand and transform what they occupy, working with matter, space and time. As they grow, a smaller and smaller fraction of their territory will be undeveloped frontier. Competitive success will depend more and more on using already available matter and space in ever more refined and useful forms. The process, analogous to the miniaturization that makes today's computers a trillion times more powerful than the mechanical calculators of the past, will gradually transform all activity from grossly physical homesteading of raw nature, to minimum-energy quantum transactions of computation. The final frontier will be urbanized, ultimately into an arena where every bit of activity is a meaningful computation: the inhabited portion of the universe will be transformed into a cyberspace. Because it will use resources more efficiently, a mature cyberspace of the distant future will be effectively much bigger than the present physical universe. While only an infinitesimal fraction of existing matter and space is doing interesting work, in a well developed cyberspace every bit will be part of a relevant computation or storing a useful datum. Over time, more compact and faster ways of using space and matter will be invented, and used to restructure the cyberspace, effectively increasing the amount of computational spacetime per unit of physical spacetime. Computational speed-ups will affect the subjective experience of entities in the cyberspace in a paradoxical way. At first glimpse, there is no subjective effect, because everything, inside and outside the individual, speeds up equally. But, more subtly, speed-up produces an expansion of the cyber universe, because, as thought accelerates, more subjective time passes during the fixed (probably lightspeed) physical transit time of a message between a given pair of locations - so those fixed locations seem to grow farther apart. Also, as information storage is made continually more efficient through both denser utilization of matter and more efficient encodings, there will be increasingly more cyber-stuff between any two points. The effect may somewhat resemble the continuous-creation process in the old steady-state theory of the physical universe of Hoyle, Bondi and Gold, where hydrogen atoms appear just fast enough throughout the expanding cosmos to maintain a constant density.
The science of visual analysis at extreme scale
NASA Astrophysics Data System (ADS)
Nowell, Lucy T.
2011-01-01
Driven by market forces and spanning the full spectrum of computational devices, computer architectures are changing in ways that present tremendous opportunities and challenges for data analysis and visual analytic technologies. Leadership-class high performance computing systems will have as many as a million cores by 2020 and support 10 billion-way concurrency, while laptop computers are expected to have as many as 1,000 cores by 2015. At the same time, data of all types are increasing exponentially and automated analytic methods are essential for all disciplines. Many existing analytic technologies do not scale to make full use of current platforms, and fewer still are likely to scale to the systems that will be operational by the end of this decade. Furthermore, on the new architectures and for data at extreme scales, validating the accuracy and effectiveness of analytic methods, including visual analysis, will be increasingly important.
NASA Astrophysics Data System (ADS)
Mundis, Nathan L.; Mavriplis, Dimitri J.
2017-09-01
The time-spectral method applied to the Euler and coupled aeroelastic equations theoretically offers significant computational savings for purely periodic problems when compared to standard time-implicit methods. However, attaining superior efficiency with time-spectral methods over traditional time-implicit methods hinges on the ability rapidly to solve the large non-linear system resulting from time-spectral discretizations which become larger and stiffer as more time instances are employed or the period of the flow becomes especially short (i.e. the maximum resolvable wave-number increases). In order to increase the efficiency of these solvers, and to improve robustness, particularly for large numbers of time instances, the Generalized Minimal Residual Method (GMRES) is used to solve the implicit linear system over all coupled time instances. The use of GMRES as the linear solver makes time-spectral methods more robust, allows them to be applied to a far greater subset of time-accurate problems, including those with a broad range of harmonic content, and vastly improves the efficiency of time-spectral methods. In previous work, a wave-number independent preconditioner that mitigates the increased stiffness of the time-spectral method when applied to problems with large resolvable wave numbers has been developed. This preconditioner, however, directly inverts a large matrix whose size increases in proportion to the number of time instances. As a result, the computational time of this method scales as the cube of the number of time instances. In the present work, this preconditioner has been reworked to take advantage of an approximate-factorization approach that effectively decouples the spatial and temporal systems. Once decoupled, the time-spectral matrix can be inverted in frequency space, where it has entries only on the main diagonal and therefore can be inverted quite efficiently. This new GMRES/preconditioner combination is shown to be over an order of magnitude more efficient than the previous wave-number independent preconditioner for problems with large numbers of time instances and/or large reduced frequencies.
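The algebra behind inverting the temporal coupling in frequency space can be sketched as follows (our own summary, written for an odd number of time instances N so that the eigenvalues take a clean form): the time-spectral derivative matrix D is circulant and hence diagonalized by the discrete Fourier transform F,

```latex
D = F^{-1} \Lambda F, \qquad
\Lambda = \mathrm{diag}\!\left( i k \omega \right), \quad k = -\tfrac{N-1}{2}, \dots, \tfrac{N-1}{2},
```

with \omega the fundamental frequency of the periodic problem. Once the spatial and temporal factors are approximately decoupled, applying the temporal part of the preconditioner costs two transforms and a diagonal solve per spatial unknown rather than a dense N-by-N inversion, which is the source of the efficiency gain described above.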
CloudMC: a cloud computing application for Monte Carlo simulation.
Miras, H; Jiménez, R; Miras, C; Gomà, C
2013-04-21
This work presents CloudMC, a cloud computing application developed in Windows Azure®, the platform of the Microsoft® cloud, for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based - the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay per usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
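The Amdahl's-law behaviour reported above can be checked in a few lines (the 37× speedup on 64 instances comes from the abstract; the inferred serial fraction is our own back-of-the-envelope estimate under Amdahl's model):

```python
def amdahl_speedup(serial_fraction, n):
    """Speedup on n instances when a fixed fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

def serial_fraction_from(speedup, n):
    """Invert Amdahl's law to estimate the non-parallelizable fraction."""
    return (1.0 / speedup - 1.0 / n) / (1.0 - 1.0 / n)

print(serial_fraction_from(37.0, 64))   # about 0.012, i.e. roughly 1% non-parallelizable work
print(amdahl_speedup(0.012, 128))       # diminishing returns: only ~50x on 128 instances
```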
ERIC Educational Resources Information Center
Guimarães, Bruno; Ribeiro, José; Cruz, Bernardo; Ferreira, André; Alves, Hélio; Cruz-Correia, Ricardo; Madeira, Maria Dulce; Ferreira, Maria Amélia
2018-01-01
The time-, material-, and staff-consuming nature of anatomy's traditional pen-and-paper assessment system, the increase in the number of students enrolling in medical schools, and the ever-escalating workload of academic staff have made the use of computer-based assessment (CBA) an attractive proposition. To understand the impact of such a shift in the…
ERIC Educational Resources Information Center
DeSchryver, Michael D.; Yadav, Aman
2015-01-01
For too long, creativity in schools has been almost solely associated with art, music, and writing classes. Now, creative thinking skills are increasingly emphasized across the disciplines. At the same time, technological progress has brought about calls for the integration of new literacies and computational thinking to prepare students as…
Characterizing and Mitigating Work Time Inflation in Task Parallel Programs
Olivier, Stephen L.; de Supinski, Bronis R.; Schulz, Martin; ...
2013-01-01
Task parallelism raises the level of abstraction in shared memory parallel programming to simplify the development of complex applications. However, task parallel applications can exhibit poor performance due to thread idleness, scheduling overheads, and work time inflation – additional time spent by threads in a multithreaded computation beyond the time required to perform the same work in a sequential computation. We identify the contributions of each factor to lost efficiency in various task parallel OpenMP applications and diagnose the causes of work time inflation in those applications. Increased data access latency can cause significant work time inflation in NUMA systems. Our locality framework for task parallel OpenMP programs mitigates this cause of work time inflation. Our extensions to the Qthreads library demonstrate that locality-aware scheduling can improve performance up to 3X compared to the Intel OpenMP task scheduler.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palazzo, S.; Vagliasindi, G.; Arena, P.
2010-08-15
In the past years cameras have become increasingly common tools in scientific applications. They are now quite systematically used in magnetic confinement fusion, to the point that infrared imaging is starting to be used systematically for real-time machine protection in major devices. However, in order to guarantee that the control system can always react rapidly in case of critical situations, the time required for the processing of the images must be as predictable as possible. The approach described in this paper combines the new computational paradigm of cellular nonlinear networks (CNNs) with field-programmable gate arrays and has been tested in an application for the detection of hot spots on the plasma facing components in JET. The developed system is able to perform real-time hot spot recognition, by processing the image stream captured by the JET wide angle infrared camera, with the guarantee that computational time is constant and deterministic. The statistical results obtained from a quite extensive set of examples show that this solution approximates very well an ad hoc serial software algorithm, with no false or missed alarms and an almost perfect overlapping of alarm intervals. The computational time can be reduced to a millisecond time scale for 8 bit 496x560-sized images. Moreover, in our implementation, the computational time, besides being deterministic, is practically independent of the number of iterations performed by the CNN - unlike software CNN implementations.
Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib
2016-02-07
Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response, thereby reducing the cross-dependencies of image voxels, which in turn results in an improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborated the practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients while exploiting the same in-plane and axial symmetries used in pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and therefore improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of the MLEM and OSEM algorithms is improved. It was also found that for the same computation time, the proposed subsetization gives rise to further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches make it possible to further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.
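The interleaving idea is the easiest part to illustrate. A minimal sketch (not the authors' implementation; bin and subset counts are arbitrary) of assigning TOF bins to interleaved subsets, so that spatially coupled neighbouring bins land in different subsets:

```python
def interleaved_tof_subsets(n_tof_bins, n_subsets):
    """Subset s receives TOF bins s, s + n_subsets, s + 2*n_subsets, ..."""
    return [list(range(s, n_tof_bins, n_subsets)) for s in range(n_subsets)]

# e.g. 13 TOF bins split into 4 interleaved subsets
for s, bins in enumerate(interleaved_tof_subsets(13, 4)):
    print(f"subset {s}: TOF bins {bins}")

# In an OSEM-style loop, each sub-iteration would forward/back-project only the
# bins of one subset, so neighbouring (spatially coupled) TOF bins fall into
# different subsets and the effective update frequency per iteration increases.
```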
The Role of Parents and Related Factors on Adolescent Computer Use
Epstein, Jennifer A.
2012-01-01
Background Research has suggested the importance of parents in their adolescents' computer activity. Spending too much time on the computer for recreational purposes in particular has been found to be related to areas of public health concern in children/adolescents, including obesity and substance use. Design and Methods The goal of the research was to determine the association between recreational computer use and potentially linked factors (parental monitoring, social influences to use computers including parents, age of first computer use, self-control, and particular internet activities). Participants (aged 13-17 years and residing in the United States) were recruited via the Internet to complete an anonymous survey online using a survey tool. The target sample of 200 participants who completed the survey was achieved. The sample's average age was 16 years, and 63% were girls. Results A set of regressions with recreational computer use as the dependent variable was run. Conclusions Less parental monitoring, younger age at first computer use, listening to or downloading music from the internet more frequently, using the internet for educational purposes less frequently, and parents' use of the computer for pleasure were related to spending a greater percentage of time on non-school computer use. These findings suggest the importance of parental monitoring and parental computer use on their children's own computer use, and the influence of some internet activities on adolescent computer use. Finally, programs aimed at parents to help them increase the age when their children start using computers and learn how to place limits on recreational computer use are needed. PMID:25170449
Robust Duplication with Comparison Methods in Microcontrollers
Quinn, Heather Marie; Baker, Zachary Kent; Fairbanks, Thomas D.; ...
2016-01-01
Commercial microprocessors could be useful computational platforms in space systems, as long as the risk is bounded. Many spacecraft are computationally constrained because all of the computation is done on a single radiation-hardened microprocessor. It is possible that a commercial microprocessor could be used for configuration, monitoring and background tasks that are not mission critical. Most commercial microprocessors are affected by radiation, including single-event effects (SEEs) that could be destructive to the component or corrupt the data. Part screening can help designers avoid components with destructive failure modes, and mitigation can suppress data corruption. We have been experimenting with a method for masking radiation-induced faults through the software executing on the microprocessor. While triple-modular redundancy (TMR) techniques are very effective at masking faults in software, the increased amount of execution time to complete the computation is not desirable. In this article we present a technique for combining duplication with compare (DWC) with TMR that decreases observable errors by as much as 145 times with only a 2.35-times decrease in performance.
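The control flow being described, duplication with compare as the cheap common case and a third execution plus majority vote only on a mismatch, can be sketched in a few lines. This is a generic illustration of the DWC-plus-TMR idea, not the authors' flight software:

```python
from collections import Counter

def dwc_with_tmr_fallback(compute, *args):
    """Run twice and compare; on a mismatch run a third copy and take a 2-of-3 vote."""
    a = compute(*args)
    b = compute(*args)
    if a == b:                        # duplication with compare: cheap common case
        return a
    c = compute(*args)                # mismatch detected: add a third copy and vote
    winner, votes = Counter([a, b, c]).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("uncorrectable: all three results disagree")
    return winner

# usage with a stand-in workload (a real deployment would wrap non-critical tasks)
result = dwc_with_tmr_fallback(sum, range(1000))
```

The point of the hybrid is visible in the structure: most executions pay only the duplication cost, and the full triple execution is incurred only when a fault has actually been observed.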
A novel processing platform for post tape out flows
NASA Astrophysics Data System (ADS)
Vu, Hien T.; Kim, Soohong; Word, James; Cai, Lynn Y.
2018-03-01
As the computational requirements for post tape out (PTO) flows increase at the 7nm and below technology nodes, there is a need to increase the scalability of the computational tools in order to reduce the turn-around time (TAT) of the flows. Utilization of design hierarchy has been one proven method to provide sufficient partitioning to enable PTO processing. However, as the data is processed through the PTO flow, its effective hierarchy is reduced. The reduction is necessary to achieve the desired accuracy. Also, the sequential nature of the PTO flow is inherently non-scalable. To address these limitations, we are proposing a quasi-hierarchical solution that combines multiple levels of parallelism to increase the scalability of the entire PTO flow. In this paper, we describe the system and present experimental results demonstrating the runtime reduction through scalable processing with thousands of computational cores.
Use of a Tracing Task to Assess Visuomotor Performance: Effects of Age, Sex, and Handedness
2013-01-01
Background. Visuomotor abnormalities are common in aging and age-related disease, yet difficult to quantify. This study investigated the effects of healthy aging, sex, and handedness on the performance of a tracing task. Participants (n = 150, aged 21–95 years, 75 females) used a stylus to follow a moving target around a circle on a tablet computer with their dominant and nondominant hands. Participants also performed the Trail Making Test (a measure of executive function). Methods. Deviations from the circular path were computed to derive an “error” time series. For each time series, absolute mean, variance, and complexity index (a proposed measure of system functionality and adaptability) were calculated. Using the moving target and stylus coordinates, the percentage of task time within the target region and the cumulative micropause duration (a measure of motion continuity) were computed. Results. All measures showed significant effects of aging (p < .0005). Post hoc age group comparisons showed that with increasing age, the absolute mean and variance of the error increased, complexity index decreased, percentage of time within the target region decreased, and cumulative micropause duration increased. Only complexity index showed a significant difference between dominant versus nondominant hands within each age group (p < .0005). All measures showed relationships to the Trail Making Test (p < .05). Conclusions. Measures derived from a tracing task identified performance differences in healthy individuals as a function of age, sex, and handedness. Studies in populations with specific neuromotor syndromes are warranted to test the utility of measures based on the dynamics of tracking a target as a clinical assessment tool. PMID:23388876
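A minimal sketch of the kind of error time series described above: the signed radial deviation of stylus samples from the circular path, with the simple summary statistics mentioned in the abstract (variable names and the synthetic samples are assumptions, not the study's data or code):

```python
import numpy as np

def radial_error(x, y, cx, cy, radius):
    """Signed deviation of each sample from the circular path."""
    return np.hypot(x - cx, y - cy) - radius

# synthetic stylus samples: a noisy unit circle
t = np.linspace(0, 2*np.pi, 500)
rng = np.random.default_rng(0)
x = np.cos(t) + rng.normal(0.0, 0.02, t.size)
y = np.sin(t) + rng.normal(0.0, 0.02, t.size)

err = radial_error(x, y, cx=0.0, cy=0.0, radius=1.0)
print("absolute mean error:", np.mean(np.abs(err)))
print("error variance:     ", np.var(err))
# share of samples within a tolerance band, a stand-in for "time within the target region"
print("percent on target:  ", 100.0 * np.mean(np.abs(err) < 0.05))
```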
An Overview of High Performance Computing and Challenges for the Future
Google Tech Talks
2017-12-09
In this talk we examine how high performance computing has changed over the last 10 years and look toward the future in terms of trends. These changes have had and will continue to have a major impact on our software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide area) dynamic, distributed and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. We will focus on the redesign of software to fit multicore architectures. Speaker: Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, University of Manchester. Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at the Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee, has the position of a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), Turing Fellow in the Computer Science and Mathematics Schools at the University of Manchester, and an Adjunct Professor in the Computer Science Department at Rice University. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. His research includes the development, testing and documentation of high quality mathematical software. He has contributed to the design and implementation of the following open source software packages and systems: EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK, Netlib, PVM, MPI, NetSolve, Top500, ATLAS, and PAPI. He has published approximately 200 articles, papers, reports and technical memoranda and he is coauthor of several books. He was awarded the IEEE Sid Fernbach Award in 2004 for his contributions in the application of high performance computers using innovative approaches. He is a Fellow of the AAAS, ACM, and the IEEE and a member of the National Academy of Engineering.
Multiprocessing on supercomputers for computational aerodynamics
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Mehta, Unmeel B.
1990-01-01
Very little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPs or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) through multi-tasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to a generic oblique-wing aircraft problem on a four-processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.
Frequency modulation television analysis: Distortion analysis
NASA Technical Reports Server (NTRS)
Hodge, W. H.; Wong, W. H.
1973-01-01
Computer simulation is used to calculate the time-domain waveform of standard T-pulse-and-bar test signal distorted in passing through an FM television system. The simulator includes flat or preemphasized systems and requires specification of the RF predetection filter characteristics. The predetection filters are modeled with frequency-symmetric Chebyshev (0.1-db ripple) and Butterworth filters. The computer was used to calculate distorted output signals for sixty-four different specified systems, and the output waveforms are plotted for all sixty-four. Comparison of the plotted graphs indicates that a Chebyshev predetection filter of four poles causes slightly more signal distortion than a corresponding Butterworth filter and the signal distortion increases as the number of poles increases. An increase in the peak deviation also increases signal distortion. Distortion also increases with the addition of preemphasis.
Performance Comparison of Mainframe, Workstations, Clusters, and Desktop Computers
NASA Technical Reports Server (NTRS)
Farley, Douglas L.
2005-01-01
A performance evaluation of a variety of computers frequently found in a scientific or engineering research environment was conducted using synthetic and application program benchmarks. From a performance perspective, emerging commodity processors have superior performance relative to legacy mainframe computers. In many cases, the PC clusters exhibited comparable performance with traditional mainframe hardware when 8-12 processors were used. The main advantage of the PC clusters was related to their cost. Regardless of whether the clusters were built from new computers or created from retired computers, their performance-to-cost ratio was superior to that of the legacy mainframe computers. Finally, the typical annual maintenance cost of legacy mainframe computers is several times the cost of new equipment such as multiprocessor PC workstations. The savings from eliminating the annual maintenance fee on legacy hardware can result in a yearly increase in total computational capability for an organization.
On the computational aspects of comminution in discrete element method
NASA Astrophysics Data System (ADS)
Chaudry, Mohsin Ali; Wriggers, Peter
2018-04-01
In this paper, computational aspects of the crushing/comminution of granular materials are addressed. For crushing, a maximum tensile stress-based criterion is used. The crushing model in the discrete element method (DEM) is prone to problems of mass conservation and reduction of the critical time step. The first problem is addressed by using an iterative scheme which, depending on geometric voids, recovers the mass of a particle. In addition, a global-local framework for the DEM problem is proposed which tends to alleviate the local unstable motion of particles and increases the computational efficiency.
One approach for evaluating the Distributed Computing Design System (DCDS)
NASA Technical Reports Server (NTRS)
Ellis, J. T.
1985-01-01
The Distributed Computer Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.
Orientation/Time Management Skill Training Lesson: Development and Evaluation. Final Report.
ERIC Educational Resources Information Center
Dobrovolny, Jacqueline L.; And Others
A lesson was developed containing materials designed to assist students in their adaptation to the novelties of a computer assisted or managed instructional environment, providing students with appropriate role models for increasing acceptance of their increased responsibility for learning and introducing a progress tracking approach to assist…
Real-time, autonomous precise satellite orbit determination using the global positioning system
NASA Astrophysics Data System (ADS)
Goldstein, David Ben
2000-10-01
The desire for autonomously generated, rapidly available, and highly accurate satellite ephemeris is growing with the proliferation of constellations of satellites and the cost and overhead of ground tracking resources. Autonomous Orbit Determination (OD) may be done on the ground in a post-processing mode or in real-time on board a satellite and may be accomplished days, hours or immediately after observations are processed. The Global Positioning System (GPS) is now widely used as an alternative to ground tracking resources to supply observation data for satellite positioning and navigation. GPS is accurate, inexpensive, provides continuous coverage, and is an excellent choice for autonomous systems. In an effort to estimate precise satellite ephemeris in real-time on board a satellite, the Goddard Space Flight Center (GSFC) created the GPS Enhanced OD Experiment (GEODE) flight navigation software. This dissertation offers alternative methods and improvements to GEODE to increase on board autonomy and real-time total position accuracy and precision without increasing computational burden. First, GEODE is modified to include a Gravity Acceleration Approximation Function (GAAF) to replace the traditional spherical harmonic representation of the gravity field. Next, an ionospheric correction method called Differenced Range Versus Integrated Doppler (DRVID) is applied to correct for ionospheric errors in the GPS measurements used in GEODE. Then, Dynamic Model Compensation (DMC) is added to estimate unmodeled and/or mismodeled forces in the dynamic model and to provide an alternative process noise variance-covariance formulation. Finally, a Genetic Algorithm (GA) is implemented in the form of Genetic Model Compensation (GMC) to optimize DMC forcing noise parameters. Application of GAAF, DRVID and DMC improved GEODE's position estimates by 28.3% when applied to GPS/MET data collected in the presence of Selective Availability (SA), 17.5% when SA is removed from the GPS/MET data and 10.8% on SA free TOPEX data. Position estimates with RSS errors below 1 meter are now achieved using SA free TOPEX data. DRVID causes an increase in computational burden while GAAF and DMC reduce computational burden. The net effect of applying GAAF, DRVID and DMC is an improvement in GEODE's accuracy/precision without an increase in computational burden.
International benchmarking of longitudinal train dynamics simulators: results
NASA Astrophysics Data System (ADS)
Wu, Qing; Spiryagin, Maksym; Cole, Colin; Chang, Chongyi; Guo, Gang; Sakalo, Alexey; Wei, Wei; Zhao, Xubao; Burgelman, Nico; Wiersma, Pier; Chollet, Hugues; Sebes, Michel; Shamdani, Amir; Melzi, Stefano; Cheli, Federico; di Gialleonardo, Egidio; Bosso, Nicola; Zampieri, Nicolò; Luo, Shihui; Wu, Honghua; Kaza, Guy-Léon
2018-03-01
This paper presents the results of the International Benchmarking of Longitudinal Train Dynamics Simulators which involved participation of nine simulators (TABLDSS, UM, CRE-LTS, TDEAS, PoliTo, TsDyn, CARS, BODYSIM and VOCO) from six countries. Longitudinal train dynamics results and computing time of four simulation cases are presented and compared. The results show that all simulators had basic agreement in simulations of locomotive forces, resistance forces and track gradients. The major differences among different simulators lie in the draft gear models. TABLDSS, UM, CRE-LTS, TDEAS, TsDyn and CARS had general agreement in terms of the in-train forces; minor differences exist as reflections of draft gear model variations. In-train force oscillations were observed in VOCO due to the introduction of wheel-rail contact. In-train force instabilities were sometimes observed in PoliTo and BODYSIM due to the velocity controlled transitional characteristics which could have generated unreasonable transitional stiffness. Regarding computing time per train operational second, the following list is in order of increasing computing speed: VOCO, TsDyn, PoliTO, CARS, BODYSIM, UM, TDEAS, CRE-LTS and TABLDSS (fastest); all simulators except VOCO, TsDyn and PoliTo achieved faster speeds than real-time simulations. Similarly, regarding computing time per integration step, the computing speeds in order are: CRE-LTS, VOCO, CARS, TsDyn, UM, TABLDSS and TDEAS (fastest).
Time Orientation and Human Performance
2004-06-01
Published in: Work with Computing Systems 2004, H.M. Khalid, M.G. Helander, A.W. Yeo (Editors). Kuala Lumpur: Damai Sciences. Time Orientation and Human Performance; topics include multi-tasking. Introduction: With increased globalization, understanding the various cultures and people's attitudes and behaviours is crucial…
Amols, Howard I
2008-11-01
New technologies such as intensity modulated and image guided radiation therapy, computer controlled linear accelerators, record and verify systems, electronic charts, and digital imaging have revolutionized radiation therapy over the past 10-15 y. Quality assurance (QA) as historically practiced and as recommended in reports such as the American Association of Physicists in Medicine Task Groups 40 and 53 needs to be updated to address the increasing complexity and computerization of radiotherapy equipment, and the increased quantity of data defining a treatment plan and treatment delivery. While new technology has reduced the probability of many types of medical events, new types of errors caused by improper use of new technology, communication failures between computers, corrupted or erroneous computer data files, and "software bugs" are now being seen. The increased use of computed tomography, magnetic resonance, and positron emission tomography imaging has become routine for many types of radiotherapy treatment planning, and QA for imaging modalities is beyond the expertise of most radiotherapy physicists. Errors in radiotherapy rarely result solely from hardware failures. More commonly they are a combination of computer and human errors. The increased use of radiosurgery, hypofractionation, more complex intensity modulated treatment plans, image guided radiation therapy, and increasing financial pressures to treat more patients in less time will continue to fuel this reliance on high technology and complex computer software. Clinical practitioners and regulatory agencies are beginning to realize that QA for new technologies is a major challenge and poses dangers different in nature from those that are historically familiar.
Hakala, Paula T; Rimpelä, Arja H; Saarni, Lea A; Salminen, Jouko J
2006-10-01
Neck-shoulder pain (NSP) and low back pain (LBP) increased among adolescents in the 1990s and the beginning of 2000. A potential risk factor for this increase is the use of information and communication technology. We studied how the use of computers, the Internet, and mobile phones, playing digital games and viewing television are related to NSP and LBP in adolescents. Mailed survey with nationally representative samples of 14-, 16-, and 18-year-old Finns in 2003 (n = 6003, response rate 68%). The outcome variables were weekly NSP and LBP. NSP was perceived by 26% and LBP by 12%. When compared with non-users, the risk of NSP was 1.3 (adjusted odds ratios) when using computers > 2-3 h/day, and 1.8 when using 4-5 h/day; 2.5 when using computers ≥ 42 h/week, and 1.7 when using the Internet ≥ 42 h/week. Compared with non-users, the risk of LBP was 2.0 when using computers > 5 h/day, 1.7 when using ≥ 42 h/week, 1.8 when using the Internet ≥ 42 h/week, and 2.0 when playing digital games > 5 h/day. Times spent on digital gaming, viewing television, and using mobile phones were not associated with NSP, nor were use of mobile phones and viewing television with LBP after adjusting for confounding factors. Frequent computer-related activities are an independent risk factor for NSP and LBP. Daily use of computers exceeding 2-3 h seems to be a threshold for NSP and exceeding 5 h for LBP. Computer-related activities may explain the increase of NSP and LBP in the 1990s and the beginning of 2000.
Goldklang, Monica P.; Tekabe, Yared; Zelonina, Tina; Trischler, Jordis; Xiao, Rui; Stearns, Kyle; Romanov, Alexander; Muzio, Valeria; Shiomi, Takayuki; Johnson, Lynne L.
2016-01-01
Evaluation of lung disease is limited by the inability to visualize ongoing pathological processes. Molecular imaging that targets cellular processes related to disease pathogenesis has the potential to assess disease activity over time to allow intervention before lung destruction. Because apoptosis is a critical component of lung damage in emphysema, a functional imaging approach was taken to determine if targeting apoptosis in a smoke exposure model would allow the quantification of early lung damage in vivo. Rabbits were exposed to cigarette smoke for 4 or 16 weeks and underwent single-photon emission computed tomography/computed tomography scanning using technetium-99m–rhAnnexin V-128. Imaging results were correlated with ex vivo tissue analysis to validate the presence of lung destruction and apoptosis. Lung computed tomography scans of long-term smoke–exposed rabbits exhibit anatomical similarities to human emphysema, with increased lung volumes compared with controls. Morphometry on lung tissue confirmed increased mean linear intercept and destructive index at 16 weeks of smoke exposure and compliance measurements documented physiological changes of emphysema. Tissue and lavage analysis displayed the hallmarks of smoke exposure, including increased tissue cellularity and protease activity. Technetium-99m–rhAnnexin V-128 single-photon emission computed tomography signal was increased after smoke exposure at 4 and 16 weeks, with confirmation of increased apoptosis through terminal deoxynucleotidyl transferase dUTP nick end labeling staining and increased tissue neutral sphingomyelinase activity in the tissue. These studies not only describe a novel emphysema model for use with future therapeutic applications, but, most importantly, also characterize a promising imaging modality that identifies ongoing destructive cellular processes within the lung. PMID:27483341
Hybrid techniques for the digital control of mechanical and optical systems
NASA Astrophysics Data System (ADS)
Acernese, Fausto; Barone, Fabrizio; De Rosa, Rosario; Eleuteri, Antonio; Milano, Leopoldo; Pardi, Silvio; Ricciardi, Iolanda; Russo, Guido
2004-07-01
One of the main requirements of a digital system for the control of interferometric detectors of gravitational waves is the computing power, which is a direct consequence of the increasing complexity of the digital algorithms necessary for the generation of the control signals. For this specific task many specialised non-standard real-time architectures have been developed, often very expensive and difficult to upgrade. On the other hand, such computing power is generally fully available for off-line applications on standard PC-based systems. Therefore, a possible and obvious solution may be provided by the integration of both the real-time and off-line architectures, resulting in a hybrid control system architecture based on standard available components, combining the advantages of the perfect data synchronization provided by real-time systems with the large computing power available on PC-based systems. Such integration may be provided by the implementation of the link between the two different architectures through the standard Ethernet network, whose data transfer speed has been increasing greatly in recent years, using the TCP/IP and UDP protocols. In this paper we describe the architecture of a hybrid Ethernet-based real-time control system prototype we implemented in Napoli, discussing its characteristics and performances. Finally we discuss a possible application to the real-time control of a suspended mass of the mode cleaner of the 3m prototype optical interferometer for gravitational wave detection (IDGW-3P) operational in Napoli.
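The Ethernet link between the real-time front end and the off-line PCs can be as simple as a UDP datagram exchange. The following is a bare-bones illustration only; the address, port and payload format are placeholders, not the Napoli prototype's actual protocol:

```python
import socket
import struct

ADDR = ("127.0.0.1", 5005)            # placeholder address/port

def send_samples(samples):
    """Real-time side: push one block of float samples to the off-line PC."""
    payload = struct.pack(f"<{len(samples)}f", *samples)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, ADDR)

def receive_samples(n_samples):
    """Off-line side: block until one datagram of n_samples floats arrives."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(ADDR)
        data, _ = sock.recvfrom(4 * n_samples)
        return struct.unpack(f"<{n_samples}f", data)
```

UDP keeps the real-time side from blocking on acknowledgements, while the off-line side can usually tolerate an occasional dropped datagram; that asymmetry is one common reason for splitting the architecture this way.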
Bui, Huu Phuoc; Tomar, Satyendra; Courtecuisse, Hadrien; Audette, Michel; Cotin, Stéphane; Bordas, Stéphane P A
2018-05-01
An error-controlled mesh refinement procedure for needle insertion simulations is presented. As an example, the procedure is applied to simulations of electrode implantation for deep brain stimulation. We take into account the brain shift phenomena occurring when a craniotomy is performed. We observe that the error in the computation of the displacement and stress fields is localised around the needle tip and the needle shaft during needle insertion simulation. By suitably and adaptively refining the mesh in this region, our approach enables us to control, and thus to reduce, the error whilst maintaining a coarser mesh in other parts of the domain. Through academic and practical examples we demonstrate that our adaptive approach, as compared with a uniform coarse mesh, increases the accuracy of the displacement and stress fields around the needle shaft and, for a given accuracy, saves computational time with respect to a uniform finer mesh. This facilitates real-time simulations. The proposed methodology has direct implications in increasing the accuracy, and controlling the computational expense of, the simulation of percutaneous procedures such as biopsy, brachytherapy, regional anaesthesia, or cryotherapy. Moreover, the proposed approach can be helpful in the development of robotic surgeries because the simulation taking place in the control loop of a robot needs to be accurate, and to occur in real time.
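The refinement decision itself is straightforward to sketch. Below is a generic bulk (Dörfler-style) marking step, refining only the elements that carry most of the estimated error, as one might apply near the needle tip; it is an illustration of the general technique, not the authors' code, and the per-element error values are invented:

```python
import numpy as np

def mark_for_refinement(error_indicators, bulk_fraction=0.7):
    """Bulk (Doerfler) marking: the smallest set of elements carrying
    `bulk_fraction` of the total estimated error."""
    order = np.argsort(error_indicators)[::-1]                 # largest error first
    cumulative = np.cumsum(error_indicators[order])
    n_marked = np.searchsorted(cumulative, bulk_fraction * cumulative[-1]) + 1
    return order[:n_marked]

# invented per-element error estimates, concentrated near the "needle tip"
eta = np.array([0.01, 0.02, 0.35, 0.40, 0.05, 0.03, 0.10, 0.04])
print("elements to refine:", mark_for_refinement(eta))         # -> the two tip elements
```

Only the marked elements are subdivided at each step, which is how the error is driven down locally while the rest of the domain keeps its coarse, cheap mesh.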
GAMUT: GPU accelerated microRNA analysis to uncover target genes through CUDA-miRanda
2014-01-01
Background Non-coding sequences such as microRNAs have important roles in disease processes. Computational microRNA target identification (CMTI) is becoming increasingly important since traditional experimental methods for target identification pose many difficulties. These methods are time-consuming, costly, and often need guidance from computational methods to narrow down candidate genes anyway. However, most CMTI methods are computationally demanding, since they need to handle not only several million query microRNA and reference RNA pairs, but also several million nucleotide comparisons within each given pair. Thus, the need to perform microRNA identification at such large scale has increased the demand for parallel computing. Methods Although most CMTI programs (e.g., the miRanda algorithm) are based on a modified Smith-Waterman (SW) algorithm, the existing parallel SW implementations (e.g., CUDASW++ 2.0/3.0, SWIPE) are unable to meet this demand in CMTI tasks. We present CUDA-miRanda, a fast microRNA target identification algorithm that takes advantage of massively parallel computing on Graphics Processing Units (GPU) using NVIDIA's Compute Unified Device Architecture (CUDA). CUDA-miRanda specifically focuses on the local alignment of short (i.e., ≤ 32 nucleotides) sequences against longer reference sequences (e.g., 20K nucleotides). Moreover, the proposed algorithm is able to report multiple alignments (up to 191 top scores) and the corresponding traceback sequences for any given (query sequence, reference sequence) pair. Results Speeds over 5.36 Giga Cell Updates Per Second (GCUPs) are achieved on a server with 4 NVIDIA Tesla M2090 GPUs. Compared to the original miRanda algorithm, which is evaluated on an Intel Xeon E5620@2.4 GHz CPU, the experimental results show up to 166 times performance gains in terms of execution time. In addition, we have verified that the exact same targets were predicted in both CUDA-miRanda and the original miRanda implementations through multiple test datasets. Conclusions We offer a GPU-based alternative to high performance compute (HPC) that can be developed locally at a relatively small cost. The community of GPU developers in the biomedical research community, particularly for genome analysis, is still growing. With increasing shared resources, this community will be able to advance CMTI in a very significant manner. Our source code is available at https://sourceforge.net/projects/cudamiranda/. PMID:25077821
Computational Simulation of the Formation and Material Behavior of Ice
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Singhal, Surendra N.; Chamis, Christos C.
1994-01-01
Computational methods are described for simulating the formation and the material behavior of ice in prevailing transient environments. The methodology developed at the NASA Lewis Research Center was adopted. A three dimensional finite-element heat transfer analyzer was used to predict the thickness of ice formed under prevailing environmental conditions. A multi-factor interaction model for simulating the material behavior of time-variant ice layers is presented. The model, used in conjunction with laminated composite mechanics, updates the material properties of an ice block as its thickness increases with time. A sample case of ice formation in a body of water was used to demonstrate the methodology. The results showed that the formation and the material behavior of ice can be computationally simulated using the available composites technology.
Linear optical quantum computing in a single spatial mode.
Humphreys, Peter C; Metcalf, Benjamin J; Spring, Justin B; Moore, Merritt; Jin, Xian-Min; Barbieri, Marco; Kolthammer, W Steven; Walmsley, Ian A
2013-10-11
We present a scheme for linear optical quantum computing using time-bin-encoded qubits in a single spatial mode. We show methods for single-qubit operations and heralded controlled-phase (cphase) gates, providing a sufficient set of operations for universal quantum computing with the Knill-Laflamme-Milburn [Nature (London) 409, 46 (2001)] scheme. Our protocol is suited to currently available photonic devices and ideally allows arbitrary numbers of qubits to be encoded in the same spatial mode, demonstrating the potential for time-frequency modes to dramatically increase the quantum information capacity of fixed spatial resources. As a test of our scheme, we demonstrate the first entirely single spatial mode implementation of a two-qubit quantum gate and show its operation with an average fidelity of 0.84±0.07.
A computer-based maintenance reminder and record-keeping system for clinical laboratories.
Roberts, B I; Mathews, C L; Walton, C J; Frazier, G
1982-09-01
"Maintenance" is all the activity an organization devotes to keeping instruments within performance specifications to assure accurate and precise operation. The increasing use of complex analytical instruments as "workhorses" in clinical laboratories requires more maintenance awareness by laboratory personnel. Record-keeping systems that document maintenance completion and that should prompt the continued performance of maintenance tasks have not kept up with instrumentation development. We report here a computer-based record-keeping and reminder system that lists weekly the maintenance items due for each work station in the laboratory, including the time required to complete each item. Written in BASIC, the system uses a DATABOSS data base management system running on a time-shared Digital Equipment Corporation PDP 11/60 computer with a RSTS V 7.0 operating system.
NASA Astrophysics Data System (ADS)
Michaelis, A.; Wang, W.; Melton, F. S.; Votava, P.; Milesi, C.; Hashimoto, H.; Nemani, R. R.; Hiatt, S. H.
2009-12-01
As the length and diversity of the global earth observation data records grow, modeling and analyses of biospheric conditions increasingly require multiple terabytes of data from a diversity of models and sensors. With network bandwidth beginning to flatten, transmission of these data from centralized data archives presents an increasing challenge, and costs associated with local storage and management of data and compute resources are often significant for individual research and application development efforts. Sharing community-valued intermediary data sets, results and codes from individual efforts with others that are not in direct funded collaboration can also be a challenge with respect to time, cost and expertise. We propose a modeling, data and knowledge center that houses NASA satellite data, climate data and ancillary data where a focused community may come together to share modeling and analysis codes, scientific results, knowledge and expertise on a centralized platform, named the Ecosystem Modeling Center (EMC). With the recent development of new technologies for secure hardware virtualization, an opportunity exists to create specific modeling, analysis and compute environments that are customizable, "archiveable" and transferable. Allowing users to instantiate such environments on large compute infrastructures that are directly connected to large data archives may significantly reduce the costs and time associated with scientific efforts by alleviating users from redundantly retrieving and integrating data sets and building modeling analysis codes. The EMC platform also offers the possibility for users to receive indirect assistance from experts through prefabricated compute environments, potentially reducing study "ramp up" times.
NASA Technical Reports Server (NTRS)
Razzaq, Zia; Prasad, Venkatesh
1988-01-01
The results of a detailed investigation of the distribution of stresses in aluminum and composite panels subjected to uniform end shortening are presented. The focus problem is a rectangular panel with two longitudinal stiffeners, and an inner stiffener discontinuous at a central hole in the panel. The influence of the stiffeners on the stresses is evaluated through a two-dimensional global finite element analysis in the absence or presence of the hole. Contrary to physical intuition, it is found that the maximum stresses from the global analysis for both stiffened aluminum and composite panels are greater than the corresponding stresses for the unstiffened panels. The inner discontinuous stiffener causes a greater increase in stresses than the reduction provided by the two outer stiffeners. A detailed layer-by-layer study of stresses around the hole is also presented for both unstiffened and stiffened composite panels. A parallel equation solver is used for the global system of equations since the computational time is far less than that using a sequential scheme. A parallel Choleski method with up to 16 processors is used on the Flex/32 Multicomputer at NASA Langley Research Center. The parallel computing results are summarized and include the computational times, speedups, bandwidths, and their inter-relationships for the panel problems. It is found that the computational time for the Choleski method decreases with a decrease in bandwidth, and better speedups result as the bandwidth increases.
Alfredsson, Jayne; Plichart, Patrick; Zary, Nabil
2012-01-01
Research on computer supported scoring of assessments in health care education has mainly focused on automated scoring. Little attention has been given to how informatics can support the currently predominant human-based grading approach. This paper reports steps taken to develop a model for a computer supported scoring process that focuses on optimizing a task that was previously undertaken without computer support. The model was also implemented in the open source assessment platform TAO in order to study its benefits. The ability to score test takers anonymously, analytics on the graders' reliability and a more time-efficient process are examples of the observed benefits. Computer supported scoring will increase the quality of the assessment results.
Komaromy, Andras M; Brooks, Dennis E; Kallberg, Maria E; Dawson, William W; Sapp, Harold L; Sherwood, Mark B; Lambrou, George N; Percicot, Christine L
2003-05-01
The purpose of our study was to determine changes in amplitudes and implicit times of retinal and cortical pattern evoked potentials with increasing body weight in young, growing rhesus macaques (Macaca mulatta). Retinal and cortical pattern evoked potentials were recorded from 29 male rhesus macaques between 3 and 7 years of age. Thirteen animals were reexamined after 11 months. Computed tomography (CT) was performed on two animals to measure the distance between the location of the skin electrode and the surface of the striate cortex. Spearman correlation coefficients were calculated to describe the relationship between body weights and either root mean square (rms) amplitudes or implicit times. For 13 animals rms amplitudes and implicit times were compared with the Wilcoxon matched pairs signed rank test for recordings taken 11 months apart. Highly significant correlations between increases in body weights and decreases in cortical rms amplitudes were noted in 29 monkeys (p < 0.0005). No significant changes were found in the cortical rms amplitudes in thirteen monkeys over 11 months. Computed tomography showed a large increase of soft tissue thickness over the skull and striate cortex with increased body weight. The decreased amplitude in cortical evoked potentials with weight gain associated with aging can be explained by the increased distance between skin electrode and striate cortex due to soft tissue thickening (passive attenuation).
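The statistical step is a plain Spearman rank correlation between body weight and the evoked-potential measures. An illustrative sketch with synthetic numbers (not the study's data), showing the negative weight-amplitude relationship the abstract reports:

```python
from scipy.stats import spearmanr

# synthetic, illustrative values only (not the study's measurements)
body_weight_kg  = [4.1, 4.8, 5.5, 6.0, 6.9, 7.4, 8.2, 9.0]
cortical_rms_uv = [6.2, 5.9, 5.5, 5.6, 4.8, 4.4, 4.1, 3.7]

rho, p_value = spearmanr(body_weight_kg, cortical_rms_uv)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")   # negative rho mirrors the reported amplitude drop
```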
Minimal Increase Network Coding for Dynamic Networks.
Zhang, Guoyin; Fan, Xu; Wu, Yanxia
2016-01-01
Because of the mobility, computing power and changeable topology of dynamic networks, it is difficult for random linear network coding (RLNC) in static networks to satisfy the requirements of dynamic networks. To alleviate this problem, a minimal increase network coding (MINC) algorithm is proposed. By identifying the nonzero elements of an encoding vector, it selects blocks to be encoded on the basis of a relationship between the nonzero elements that controls changes in the degrees of the blocks; then, the encoding time is shortened in a dynamic network. The results of simulations show that, compared with existing encoding algorithms, the MINC algorithm provides reduced computational complexity of encoding and an increased probability of delivery.
NASA Astrophysics Data System (ADS)
Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock
2017-01-01
The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e. carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Service (AWS) EC2, the cloud computing environment by Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on the AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
[The effects of computer-use on adolescents].
Stefănescu, C; Chele, Gabriela; Chiriţă, V; Chiriţă, Roxana; Mavros, M; Macarie, G; Ilinca, M
2005-01-01
Computers continue to play a vital role in today's generation, and the need for information about the effects of computers on their users is also increasing. The purpose of this study is to investigate how children and adolescents use a computer and to explore the beneficial and harmful effects of computer use on children's mental and physical health. The study group comprised 69 subjects, aged between 13 and 18 years, who answered a questionnaire. The parents of the children also answered another questionnaire on the same subject. Data were statistically processed using SPSS. Results were obtained on patterns of computer use, and pathological use was identified. Some children spend a great deal of time on computers; 4% spend more than five hours/day. 41% of the parents believe that the usage of the computer is favorable to the children's mental and physical health and development, while 49% believe that the computer may be harmful. 1.4% of the children had pathological use of the computer.
Petersson, K J F; Friberg, L E; Karlsson, M O
2010-10-01
Computer models of biological systems grow more complex as computing power increases. Often these models are defined as differential equations for which no analytical solutions exist. Numerical integration is used to approximate the solution; this can be computationally intensive and time consuming, and can account for a large proportion of the total computer runtime. The performance of different integration methods depends on the mathematical properties of the differential equation system at hand. In this paper we investigate the possibility of runtime gains by calculating parts of, or the whole of, the differential equation system at given time intervals, outside of the differential equation solver. This approach was tested on nine models defined as differential equations, with the goal of reducing runtime while maintaining model fit, based on the objective function value. The software used was NONMEM. In four models the computational runtime was successfully reduced (by 59-96%). The differences in parameter estimates, compared with using only the differential equation solver, were less than 12% for all fixed-effect parameters. For the variance parameters, estimates were within 10% for the majority of the parameters. Population and individual predictions were similar, and the differences in OFV were between 1 and -14 units. When computational runtime seriously affects the usefulness of a model, we suggest evaluating this approach for repetitive elements of model building and evaluation such as covariate inclusions or bootstraps.
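A minimal sketch of the general idea in SciPy terms (NONMEM is not used here): a slowly varying part of the right-hand side is pre-computed on a coarse time grid and interpolated inside the ODE solver, instead of being recomputed at every internal solver step. The model and function names are hypothetical stand-ins, not the models evaluated in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

def expensive_input(t):
    """Stand-in for a costly sub-model (e.g. a dosing/forcing term)."""
    return np.exp(-0.1 * t) * (1.0 + 0.2 * np.sin(t))

# Pre-compute the expensive part on a coarse grid, outside the solver.
t_grid = np.linspace(0.0, 24.0, 49)   # every 0.5 h
u_interp = interp1d(t_grid, expensive_input(t_grid), kind="cubic",
                    bounds_error=False, fill_value="extrapolate")

def rhs(t, y, ke=0.3):
    # One-compartment-style toy model; the forcing term is looked up from
    # the interpolant rather than evaluated exactly at every solver step.
    return u_interp(t) - ke * y

sol = solve_ivp(rhs, (0.0, 24.0), [0.0], rtol=1e-8, atol=1e-10)
print(sol.y[0, -1])
```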
Multiple operating system rotation environment moving target defense
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Nathaniel; Thompson, Michael
Systems and methods for providing a multiple operating system rotation environment ("MORE") moving target defense ("MTD") computing system are described. The MORE-MTD system provides enhanced computer system security through a rotation of multiple operating systems. The MORE-MTD system increases attacker uncertainty, increases the cost of attacking the system, reduces the likelihood of an attacker locating a vulnerability, and reduces the exposure time of any located vulnerability. The MORE-MTD environment is effectuated by rotation of the operating systems at a given interval. The rotating operating systems create a consistently changing attack surface for remote attackers.
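A toy sketch of the rotation idea, offered only as an illustration and not the MORE-MTD implementation: a controller cycles which operating-system instance is exposed as the active front end at a fixed interval, so any vulnerability in one OS is exposed only for a bounded window. All names and the activation hook are hypothetical.

```python
import itertools
import time

def rotate_operating_systems(os_instances, interval_s, activate, rounds=None):
    """Cycle the active OS instance every `interval_s` seconds.

    `activate(name)` would repoint traffic (for example a load balancer or
    virtual IP) at the chosen instance; here it is a placeholder callback."""
    count = 0
    for name in itertools.cycle(os_instances):
        activate(name)
        time.sleep(interval_s)
        count += 1
        if rounds is not None and count >= rounds:
            break

# Example: rotate among three dissimilar OS images. A real deployment would
# use a much longer interval than 1 second.
rotate_operating_systems(
    ["linux-a", "freebsd-b", "openbsd-c"],
    interval_s=1,
    activate=lambda name: print(f"now serving from {name}"),
    rounds=3,
)
```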
Real-time structured light intraoral 3D measurement pipeline
NASA Astrophysics Data System (ADS)
Gheorghe, Radu; Tchouprakov, Andrei; Sokolov, Roman
2013-02-01
Computer aided design and manufacturing (CAD/CAM) is increasingly becoming a standard feature and service provided to patients in dentist offices and denture manufacturing laboratories. Although the quality of the tools and data has slowly improved in recent years, due to various surface measurement challenges, practical, accurate, in-vivo, real-time 3D high-quality data acquisition and processing still needs improving. Advances in GPU computational power have allowed for achieving near real-time 3D intraoral in-vivo scanning of a patient's teeth. We explore in this paper, from a real-time perspective, a hardware-software-GPU solution that addresses all the requirements mentioned before. Moreover, we exemplify and quantify the hard and soft deadlines required by such a system and illustrate how they are supported in our implementation.
Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks
Naveros, Francisco; Garrido, Jesus A.; Carrillo, Richard R.; Ros, Eduardo; Luque, Niceto R.
2017-01-01
Modeling and simulating the neural structures which make up our central neural system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranged from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity together with a better handling of the synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications which constitute the main contribution of this study systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity. PMID:28223930
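As a toy illustration of the time-driven family and of step-size switching, the sketch below integrates a leaky integrate-and-fire (LIF) neuron with a scheme that alternates between a coarse and a fine fixed time step depending on how close the membrane potential is to threshold. This is a loose sketch of the bi-fixed-step idea under hypothetical parameter values, not the authors' CPU or GPU implementation.

```python
def lif_bi_fixed_step(i_input, t_end=0.2, dt_coarse=1e-3, dt_fine=1e-4,
                      tau=0.02, v_rest=-0.065, v_th=-0.050, v_reset=-0.065):
    """Time-driven LIF integration with two fixed step sizes.

    A coarse step is used far from threshold and a fine step near it,
    loosely mimicking a bi-fixed-step scheme."""
    t, v = 0.0, v_rest
    spikes = []
    while t < t_end:
        # Switch to the fine step when the neuron approaches threshold.
        dt = dt_fine if v > v_th - 0.005 else dt_coarse
        dv = (-(v - v_rest) + i_input(t)) / tau
        v += dt * dv
        t += dt
        if v >= v_th:
            spikes.append(t)
            v = v_reset
    return spikes

# Constant suprathreshold drive (expressed in volts of equivalent input).
print(lif_bi_fixed_step(lambda t: 0.020))
```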
Synchronization and fault-masking in redundant real-time systems
NASA Technical Reports Server (NTRS)
Krishna, C. M.; Shin, K. G.; Butler, R. W.
1983-01-01
A real-time computer may fail because of massive component failures or because it does not respond quickly enough to satisfy real-time requirements. An increase in redundancy - a conventional means of improving reliability - can improve the former but can, in some cases, degrade the latter considerably owing to the overhead associated with redundancy management, namely the time delay introduced by synchronization and voting/interactive consistency techniques. The implications for reliability of synchronization and voting/interactive consistency algorithms in N-modular clusters are considered. All these studies were carried out in the context of real-time applications. As a demonstrative example, we analyzed results from experiments conducted at the NASA Airlab on the Software Implemented Fault Tolerance (SIFT) computer. This analysis indicates that in most real-time applications it is better to employ hardware synchronization rather than software synchronization, and not to allow reconfiguration.
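As a minimal illustration of the voting such N-modular clusters perform (not the SIFT interactive-consistency protocol itself), the sketch below masks a minority of faulty replicas by majority vote over N redundant results; the function name and values are hypothetical.

```python
from collections import Counter

def majority_vote(values):
    """Return the majority value among N redundant computations, masking a
    minority of faulty replicas; raises if no strict majority exists."""
    value, count = Counter(values).most_common(1)[0]
    if count <= len(values) // 2:
        raise RuntimeError("no majority: too many disagreeing replicas")
    return value

# Triple modular redundancy: one faulty channel is masked.
print(majority_vote([42, 42, 41]))   # -> 42
```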
Multicore Architecture-aware Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srinivasa, Avinash
Modern high performance systems are becoming increasingly complex and powerful due to advancements in processor and memory architecture. In order to keep up with this increasing complexity, applications have to be augmented with certain capabilities to fully exploit such systems. These may be at the application level, such as static or dynamic adaptations, or at the system level, like having strategies in place to override some of the default operating system policies, the main objective being to improve the computational performance of the application. The current work proposes two such capabilities with respect to multi-threaded scientific applications, in particular a large-scale physics application computing ab-initio nuclear structure. The first involves using a middleware tool to invoke dynamic adaptations in the application, so as to be able to adjust to the changing computational resource availability at run-time. The second involves a strategy for effective placement of data in main memory, to optimize memory access latencies and bandwidth. These capabilities, when included, were found to have a significant impact on application performance, resulting in average speedups of as much as two to four times.
Computer aided manual validation of mass spectrometry-based proteomic data.
Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M
2013-06-15
Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold standard approach to confirm accuracy of database identifications, but is extremely time-intensive. To palliate the increasing time required to manually validate large proteomic datasets, we provide computer aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.
Rethinking Approaches to Exploration and Analysis of Big Data in Earth Science
NASA Astrophysics Data System (ADS)
Graves, S. J.; Maskey, M.
2015-12-01
With increasing amounts of data available for exploration and analysis, there are increasing numbers of users that need information extracted from the data for very specific purposes. Many of these specific purposes may not have even been considered yet, so how do computational and data scientists plan for this diverse and not-well-defined set of possible users? There are challenges to be considered in the computational architectures, as well as in the organizational structures for the data, to allow for the best possible exploration and analytical capabilities. Data analytics need to be a key component in thinking about the data structures and types of storage for these large amounts of data, coming from a variety of sensing platforms that may be space-based, airborne, in situ, or social media. How do we provide better capabilities for exploration and analysis at the point of collection for real-time or near real-time requirements? This presentation will address some of the approaches being considered and the challenges the computational and data science communities are facing in collaboration with the Earth Science research and application communities.
Scase, Mark; Marandure, Blessing; Hancox, Jennie; Kreiner, Karl; Hanke, Sten; Kropf, Johannes
2017-01-01
The older population of Europe is increasing and there has been a corresponding increase in long-term care costs. This project sought to promote active ageing by delivering tasks via a tablet computer to participants aged 65-80 with mild cognitive impairment. An age-appropriate gamified environment was developed and adherence to this solution was assessed through an intervention. The gamified environment was developed through focus groups. Mixed methods were used in the intervention: the time spent engaging with applications was recorded, supplemented by participant interviews to gauge adherence. There were two groups of participants: one living in a retirement village and the other living separately across a city. The retirement village participants engaged in more than three times the number of game sessions compared with the other group, possibly because of different social arrangements between the groups. A gamified environment can help older people engage in computer-based applications. However, social community factors influence adherence in a longer-term intervention.
Advanced computations in plasma physics
NASA Astrophysics Data System (ADS)
Tang, W. M.
2002-05-01
Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.
Compact VLSI neural computer integrated with active pixel sensor for real-time ATR applications
NASA Astrophysics Data System (ADS)
Fang, Wai-Chi; Udomkesmalee, Gabriel; Alkalai, Leon
1997-04-01
A compact VLSI neural computer integrated with an active pixel sensor has been under development to mimic what is inherent in biological vision systems. This electronic eye-brain computer is targeted for real-time machine vision applications which require both high-bandwidth communication and high-performance computing for data sensing, synergy of multiple types of sensory information, feature extraction, target detection, target recognition, and control functions. The neural computer is based on a composite structure which combines an Annealing Cellular Neural Network (ACNN) and a Hierarchical Self-Organization Neural Network (HSONN). The ACNN architecture is a programmable and scalable multi-dimensional array of annealing neurons which are locally connected to their neighboring neurons. Meanwhile, the HSONN adopts a hierarchical structure with nonlinear basis functions. The ACNN+HSONN neural computer is effectively designed to perform programmable functions for machine vision processing at all levels with its embedded host processor. It provides a two order-of-magnitude increase in computation power over state-of-the-art microcomputer and DSP microelectronics. A compact current-mode VLSI design feasibility of the ACNN+HSONN neural computer is demonstrated by a 3D 16X8X9-cube neural processor chip design in a 2-micrometer CMOS technology. Integration of this neural computer as one slice of a 4'X4' multichip module into the 3D MCM-based avionics architecture for NASA's New Millennium Program is also described.
Anytime Prediction: Efficient Ensemble Methods for Any Computational Budget
2014-01-21
difficult problem and is the focus of this work. 1.1 Motivation The number of machine learning applications which involve real time and latency sensitive pre...significantly increasing latency , and the computational costs associated with hosting a service are often critical to its viability. For such...balancing training costs, concerns such as scalability and tractability are often more important, as opposed to factors such as latency which are more
O'Donnell, Michael
2015-01-01
State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need for modeling larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher spatial resolution vegetation products, and more complex models. Using a case study and five different computing environments, the top result (high-throughput computing versus serial computations) indicated a decrease of approximately 96.6% in computing time. With a single multicore compute node (bottom result), computing time decreased by 81.8% relative to serial computations. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
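A minimal sketch of the embarrassingly parallel pattern described here, using Python's standard library rather than SyncroSim: independent Monte Carlo replicates are farmed out across the cores of a single multicore node. The replicate function is a hypothetical stand-in for a state-and-transition run.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def run_replicate(seed, n_steps=10_000):
    """Hypothetical stand-in for one state-and-transition replicate:
    a random walk over three vegetation states."""
    rng = random.Random(seed)
    state, counts = 0, [0, 0, 0]
    for _ in range(n_steps):
        if rng.random() < 0.05:        # disturbance-driven transition
            state = rng.randrange(3)
        counts[state] += 1
    return [c / n_steps for c in counts]

if __name__ == "__main__":
    seeds = range(100)                 # 100 independent Monte Carlo replicates
    with ProcessPoolExecutor() as pool:   # uses all local cores
        results = list(pool.map(run_replicate, seeds))
    mean_occupancy = [sum(col) / len(results) for col in zip(*results)]
    print(mean_occupancy)
```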
Visual Fatigue Induced by Viewing a Tablet Computer with a High-resolution Display.
Kim, Dong Ju; Lim, Chi Yeon; Gu, Namyi; Park, Choul Yong
2017-10-01
In the present study, the visual discomfort induced by smart mobile devices was assessed in normal and healthy adults. Fifty-nine volunteers (age, 38.16 ± 10.23 years; male : female = 19 : 40) were exposed to tablet computer screen stimuli (iPad Air, Apple Inc.) for 1 hour. Participants watched a movie or played a computer game on the tablet computer. Visual fatigue and discomfort were assessed using an asthenopia questionnaire, tear film break-up time, and total ocular wavefront aberration before and after viewing smart mobile devices. Based on the questionnaire, viewing smart mobile devices for 1 hour significantly increased mean total asthenopia score from 19.59 ± 8.58 to 22.68 ± 9.39 (p < 0.001). Specifically, the scores for five items (tired eyes, sore/aching eyes, irritated eyes, watery eyes, and hot/burning eye) were significantly increased by viewing smart mobile devices. Tear film break-up time significantly decreased from 5.09 ± 1.52 seconds to 4.63 ± 1.34 seconds (p = 0.003). However, total ocular wavefront aberration was unchanged. Visual fatigue and discomfort were significantly induced by viewing smart mobile devices, even though the devices were equipped with state-of-the-art display technology. © 2017 The Korean Ophthalmological Society
Visual Fatigue Induced by Viewing a Tablet Computer with a High-resolution Display
Kim, Dong Ju; Lim, Chi-Yeon; Gu, Namyi
2017-01-01
Purpose In the present study, the visual discomfort induced by smart mobile devices was assessed in normal and healthy adults. Methods Fifty-nine volunteers (age, 38.16 ± 10.23 years; male : female = 19 : 40) were exposed to tablet computer screen stimuli (iPad Air, Apple Inc.) for 1 hour. Participants watched a movie or played a computer game on the tablet computer. Visual fatigue and discomfort were assessed using an asthenopia questionnaire, tear film break-up time, and total ocular wavefront aberration before and after viewing smart mobile devices. Results Based on the questionnaire, viewing smart mobile devices for 1 hour significantly increased mean total asthenopia score from 19.59 ± 8.58 to 22.68 ± 9.39 (p < 0.001). Specifically, the scores for five items (tired eyes, sore/aching eyes, irritated eyes, watery eyes, and hot/burning eye) were significantly increased by viewing smart mobile devices. Tear film break-up time significantly decreased from 5.09 ± 1.52 seconds to 4.63 ± 1.34 seconds (p = 0.003). However, total ocular wavefront aberration was unchanged. Conclusions Visual fatigue and discomfort were significantly induced by viewing smart mobile devices, even though the devices were equipped with state-of-the-art display technology. PMID:28914003
Finding Strong Bridges and Strong Articulation Points in Linear Time
NASA Astrophysics Data System (ADS)
Italiano, Giuseppe F.; Laura, Luigi; Santaroni, Federico
Given a directed graph G, an edge is a strong bridge if its removal increases the number of strongly connected components of G. Similarly, we say that a vertex is a strong articulation point if its removal increases the number of strongly connected components of G. In this paper, we present linear-time algorithms for computing all the strong bridges and all the strong articulation points of directed graphs, solving an open problem posed in [2].
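The sketch below is a naive baseline for the same problem, offered only to illustrate the definition: it finds strong bridges by deleting each edge and recounting strongly connected components, which costs roughly O(m(n+m)) rather than the linear time achieved in the paper.

```python
def reachable(adj, src):
    """Vertices reachable from src by an iterative DFS."""
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def scc_count(n, edges):
    """Count strongly connected components via forward/backward reachability
    (simple but O(n*(n+m)); fine for a small illustration)."""
    fwd = [[] for _ in range(n)]
    bwd = [[] for _ in range(n)]
    for u, v in edges:
        fwd[u].append(v)
        bwd[v].append(u)
    assigned, count = [False] * n, 0
    for s in range(n):
        if assigned[s]:
            continue
        for v in reachable(fwd, s) & reachable(bwd, s):
            assigned[v] = True
        count += 1
    return count

def strong_bridges_naive(n, edges):
    base = scc_count(n, edges)
    return [e for i, e in enumerate(edges)
            if scc_count(n, edges[:i] + edges[i + 1:]) > base]

# A directed 4-cycle plus a chord: all four cycle edges are strong bridges,
# the chord (0, 2) is not.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(strong_bridges_naive(4, edges))
```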
Re-engineering a pharmacy work system and layout to facilitate patient counseling.
Lin, A C; Jang, R; Sedani, D; Thomas, S; Barker, K N; Flynn, E A
1996-07-01
The development and evaluation of a new work system and facility design for a chain of community pharmacies are described. A new work system was developed to optimize utilization of pharmacist and technician time and allow the pharmacy to increase patient counseling without adding personnel. In the new system, pharmacists would review prescriptions, check technicians' work, and dispense prescriptions, counseling patients as needed; technicians would enter prescriptions into the pharmacy computer and fill them. The existing work system and design were evaluated in June and July of 1992 by observing, classifying, and recording activities of pharmacy personnel three days per week at six pharmacies in the chain. Pharmacy designs that would work with the new work system were created by a university design class after consultation with representatives of the pharmacy chain and the university's college of pharmacy. The pharmacy chain selected one design, and a detailed floor plan and specifications were created. To test how the new design and system would work at each of the six test pharmacies, a computer simulation program was developed and verified by using the data collected on the existing pharmacy operations. Computer simulation showed that, with the new design and system, increasing patient counseling would increase patient waiting time slightly but would not require additional personnel. The layout and work system in a chain of community pharmacies were redesigned to facilitate patient counseling and make the best use of employee time.
Cerebro-cerebellar interactions underlying temporal information processing.
Aso, Kenji; Hanakawa, Takashi; Aso, Toshihiko; Fukuyama, Hidenao
2010-12-01
The neural basis of temporal information processing remains unclear, but it is proposed that the cerebellum plays an important role through its internal clock or feed-forward computation functions. In this study, fMRI was used to investigate the brain networks engaged in perceptual and motor aspects of subsecond temporal processing without accompanying coprocessing of spatial information. Direct comparison between perceptual and motor aspects of time processing was made with a categorical-design analysis. The right lateral cerebellum (lobule VI) was active during a time discrimination task, whereas the left cerebellar lobule VI was activated during a timed movement generation task. These findings were consistent with the idea that the cerebellum contributed to subsecond time processing in both perceptual and motor aspects. The feed-forward computational theory of the cerebellum predicted increased cerebro-cerebellar interactions during time information processing. In fact, a psychophysiological interaction analysis identified the supplementary motor and dorsal premotor areas, which had a significant functional connectivity with the right cerebellar region during a time discrimination task and with the left lateral cerebellum during a timed movement generation task. The involvement of cerebro-cerebellar interactions may provide supportive evidence that temporal information processing relies on the simulation of timing information through feed-forward computation in the cerebellum.
TOPICAL REVIEW: Advances and challenges in computational plasma science
NASA Astrophysics Data System (ADS)
Tang, W. M.; Chan, V. S.
2005-02-01
Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. This should produce the scientific excitement which will help to (a) stimulate enhanced cross-cutting collaborations with other fields and (b) attract the bright young talent needed for the future health of the field of plasma science.
Video and Computer Games in the '90s: Children's Time Commitment and Game Preference.
ERIC Educational Resources Information Center
Buchman, Debra D.; Funk, Jeanne B.
1996-01-01
Examined electronic game-playing habits of 900 children. Found that time commitment to game-playing decreased from fourth to eighth grade. Boys played more than girls. Preference for general entertainment games increased across grades while educational games preference decreased. Violent game popularity remained consistent; fantasy violence was…
Computer mouse use predicts acute pain but not prolonged or chronic pain in the neck and shoulder.
Andersen, J H; Harhoff, M; Grimstrup, S; Vilstrup, I; Lassen, C F; Brandt, L P A; Kryger, A I; Overgaard, E; Hansen, K D; Mikkelsen, S
2008-02-01
Computer use may have an adverse effect on musculoskeletal outcomes. This study assessed the risk of neck and shoulder pain associated with objectively recorded professional computer use. A computer programme was used to collect data on mouse and keyboard usage and weekly reports of neck and shoulder pain among 2146 technical assistants. Questionnaires were also completed at baseline and at 12 months. The three outcome measures were: (1) acute pain (measured as weekly pain); (2) prolonged pain (no or minor pain in the neck and shoulder region over four consecutive weeks followed by three consecutive weeks with a high pain score); and (3) chronic pain (reported pain or discomfort lasting more than 30 days and "quite a lot of trouble" during the past 12 months). Risk for acute neck pain and shoulder pain increased linearly by 4% and 10%, respectively, for each quartile increase in weekly mouse usage time. Mouse and keyboard usage time did not predict the onset of prolonged or chronic pain in the neck or shoulder. Women had higher risks for neck and shoulder pain. Number of keystrokes and mouse clicks, length of the average activity period, and micro-pauses did not influence reports of acute or prolonged pain. A few psychosocial factors predicted the risk of prolonged pain. Most computer workers have no or minor neck and shoulder pain, few experience prolonged pain, and even fewer, chronic neck and shoulder pain. Moreover, there seems to be no relationship between computer use and prolonged and chronic neck and shoulder pain.
Petscher, Yaacov; Mitchell, Alison M; Foorman, Barbara R
2015-01-01
A growing body of literature suggests that response latency, the amount of time it takes an individual to respond to an item, may be an important factor to consider when using assessment data to estimate the ability of an individual. Considering that tests of passage and list fluency are being adapted to a computer administration format, it is possible that accounting for individual differences in response times may be an increasingly feasible option to strengthen the precision of individual scores. The present research evaluated the differential reliability of scores when using classical test theory and item response theory as compared to a conditional item response model which includes response time as an item parameter. Results indicated that the precision of student ability scores increased by an average of 5 % when using the conditional item response model, with greater improvements for those who were average or high ability. Implications for measurement models of speeded assessments are discussed.
Use of the computer and Internet among Italian families: first national study.
Bricolo, Francesco; Gentile, Douglas A; Smelser, Rachel L; Serpelloni, Giovanni
2007-12-01
Although home Internet access has continued to increase, little is known about actual usage patterns in homes. This nationally representative study of over 4,700 Italian households with children measured computer and Internet use of each family member across 3 months. Data on actual computer and Internet usage were collected by Nielsen//NetRatings service and provide national baseline information on several variables for several age groups separately, including children, adolescents, and adult men and women. National averages are shown for the average amount of time spent using computers and on the Web, the percentage of each age group online, and the types of Web sites viewed. Overall, about one-third of children ages 2 to 11, three-fourths of adolescents and adult women, and over four-fifths of adult men access the Internet each month. Children spend an average of 22 hours/month on the computer, with a jump to 87 hours/month for adolescents. Adult women spend less time (about 60 hours/month), and adult men spend more (over 100). The types of Web sites visited are reported, including the top five for each age group. In general, search engines and Web portals are the top sites visited, regardless of age group. These data provide a baseline for comparisons across time and cultures.
Steele, K. S.
1994-01-01
Langston University, a Historically Black University located at Langston, Oklahoma, has a computing and information science program within the Langston University Division of Business. Since 1984, Langston University has participated in the Historically Black College and University program of the U.S. Department of Interior, which provided education, training, and funding through a combined earth-science and computer-technology cooperative program with the U.S. Geological Survey (USGS). USGS personnel have presented guest lectures at Langston University since 1984. Students have been enthusiastic about the lectures, and as a result of this program, 13 Langston University students have been hired by the USGS on a part-time basis while they continued their education at the University. The USGS expanded the offering of guest lectures in 1992 by increasing the number of visits to Langston University, and by inviting participation of speakers from throughout the country. The objectives of the guest-lecture series are to assist Langston University in offering state-of-the-art education in the computer sciences, to provide students with an opportunity to learn from and interact with skilled computer-science professionals, and to develop a pool of potential future employees for part-time and full-time employment. This report includes abstracts for guest-lecture presentations during 1992-93 school year.
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.
Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.
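The toy sketch below is not the MRPack implementation; it only illustrates the packing idea under assumed names, using composite (algorithm, key) intermediate keys so that two related algorithms share a single map/shuffle/reduce pass over the same input.

```python
from collections import defaultdict

# Two "related algorithms" packed into one pass: word count and word length sum.
def mapper(line):
    for word in line.split():
        yield ("count", word), 1           # algorithm A
        yield ("length", word), len(word)  # algorithm B

def reducer(key, values):
    return key, sum(values)

def run_job(lines):
    """Toy single-pass map/shuffle/reduce over in-memory data."""
    shuffle = defaultdict(list)
    for line in lines:
        for key, value in mapper(line):
            shuffle[key].append(value)
    return dict(reducer(k, v) for k, v in shuffle.items())

print(run_job(["big data big compute", "data pack"]))
```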
Resource Provisioning in SLA-Based Cluster Computing
NASA Astrophysics Data System (ADS)
Xiong, Kaiqi; Suh, Sang
Cluster computing is excellent for parallel computation. It has become increasingly popular. In cluster computing, a service level agreement (SLA) is a set of quality of services (QoS) and a fee agreed between a customer and an application service provider. It plays an important role in an e-business application. An application service provider uses a set of cluster computing resources to support e-business applications subject to an SLA. In this paper, the QoS includes percentile response time and cluster utilization. We present an approach for resource provisioning in such an environment that minimizes the total cost of cluster computing resources used by an application service provider for an e-business application that often requires parallel computation for high service performance, availability, and reliability while satisfying a QoS and a fee negotiated between a customer and the application service provider. Simulation experiments demonstrate the applicability of the approach.
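As a hedged illustration of the kind of provisioning decision described (not the authors' method), the sketch below sizes a cluster with an M/M/c queueing approximation: it finds the smallest number of nodes for which an approximate 95th-percentile response time stays under a target, then reports the corresponding cost. All parameter values are hypothetical, and the response-time percentile is approximated as the exact queueing-delay percentile plus the mean service time.

```python
import math

def erlang_c(c, a):
    """Probability an arriving job must wait in an M/M/c queue
    with offered load a = lambda/mu (requires a < c)."""
    rho = a / c
    inv = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / math.factorial(c) / (1.0 - rho)
    return top / (inv + top)

def wait_percentile(c, lam, mu, p=0.95):
    """p-th percentile of queueing delay in M/M/c (0 if P(wait) < 1-p)."""
    pw = erlang_c(c, lam / mu)
    if pw <= 1.0 - p:
        return 0.0
    return math.log(pw / (1.0 - p)) / (c * mu - lam)

def provision(lam, mu, target_resp, cost_per_node, p=0.95, c_max=256):
    """Smallest cluster meeting the approximate percentile response-time
    target, and its total cost."""
    for c in range(1, c_max + 1):
        if lam / mu >= c:          # queue would be unstable; need more nodes
            continue
        approx_resp = wait_percentile(c, lam, mu, p) + 1.0 / mu
        if approx_resp <= target_resp:
            return c, c * cost_per_node
    raise ValueError("no feasible cluster size up to c_max")

# Hypothetical workload: 120 req/s, 10 req/s per node, 0.3 s target, $5/node-hour.
print(provision(lam=120.0, mu=10.0, target_resp=0.3, cost_per_node=5.0))
```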
A parallel-processing approach to computing for the geographic sciences
Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Haga, Jim; Maddox, Brian; Feller, Mark
2001-01-01
The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed Centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting research into various areas, such as advanced computer architecture, algorithms to meet the processing needs for real-time image and data processing, the creation of custom datasets from seamless source data, rapid turn-around of products for emergency response, and support for computationally intense spatial and temporal modeling.
GenomicTools: a computational platform for developing high-throughput analytics in genomics.
Tsirigos, Aristotelis; Haiminen, Niina; Bilal, Erhan; Utro, Filippo
2012-01-15
Recent advances in sequencing technology have resulted in the dramatic increase of sequencing data, which, in turn, requires efficient management of computational resources, such as computing time, memory requirements as well as prototyping of computational pipelines. We present GenomicTools, a flexible computational platform, comprising both a command-line set of tools and a C++ API, for the analysis and manipulation of high-throughput sequencing data such as DNA-seq, RNA-seq, ChIP-seq and MethylC-seq. GenomicTools implements a variety of mathematical operations between sets of genomic regions thereby enabling the prototyping of computational pipelines that can address a wide spectrum of tasks ranging from pre-processing and quality control to meta-analyses. Additionally, the GenomicTools platform is designed to analyze large datasets of any size by minimizing memory requirements. In practical applications, where comparable, GenomicTools outperforms existing tools in terms of both time and memory usage. The GenomicTools platform (version 2.0.0) was implemented in C++. The source code, documentation, user manual, example datasets and scripts are available online at http://code.google.com/p/ibm-cbc-genomic-tools.
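As a hedged sketch of the kind of interval arithmetic such platforms perform on genomic regions (plain Python, not the GenomicTools command-line interface or C++ API), the example below intersects two sets of half-open regions per chromosome with a simple sweep.

```python
from collections import defaultdict

def intersect_regions(a, b):
    """Intersect two lists of half-open regions (chrom, start, end),
    returning the overlapping pieces; a simple sweep per chromosome."""
    by_chrom = defaultdict(lambda: ([], []))
    for r in a:
        by_chrom[r[0]][0].append(r)
    for r in b:
        by_chrom[r[0]][1].append(r)
    out = []
    for chrom, (xs, ys) in by_chrom.items():
        xs.sort(key=lambda r: r[1])
        ys.sort(key=lambda r: r[1])
        i = j = 0
        while i < len(xs) and j < len(ys):
            start = max(xs[i][1], ys[j][1])
            end = min(xs[i][2], ys[j][2])
            if start < end:
                out.append((chrom, start, end))
            # Advance whichever region ends first.
            if xs[i][2] <= ys[j][2]:
                i += 1
            else:
                j += 1
    return out

peaks = [("chr1", 100, 200), ("chr1", 500, 650)]
genes = [("chr1", 150, 600), ("chr2", 0, 100)]
print(intersect_regions(peaks, genes))   # [('chr1', 150, 200), ('chr1', 500, 600)]
```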
SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX-80
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.; Watson, Brian C.
1992-11-01
The finite element method has proven to be an invaluable tool for analysis and design of complex, high-performance systems, such as bladed-disk assemblies in aircraft turbofan engines. However, as the problem size increases, the computation time required by conventional computers can be prohibitively high. Parallel processing computers provide the means to overcome these computation time limits. This report summarizes the results of a research activity aimed at providing a finite element capability for analyzing turbomachinery bladed-disk assemblies in a vector/parallel processing environment. A special purpose code, named with the acronym SAPNEW, has been developed to perform static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements. SAPNEW provides a stand-alone capability for static and eigen analysis on the Alliant FX/80, a parallel processing computer. A preprocessor, named with the acronym NTOS, has been developed to accept NASTRAN input decks and convert them to the SAPNEW format to make SAPNEW more readily used by researchers at NASA Lewis Research Center.
Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClean, Jarrod R.; Kimchi-Schwartz, Mollie E.; Carter, Jonathan
Using quantum devices supported by classical computational resources is a promising approach to quantum-enabled computation. One powerful example of such a hybrid quantum-classical approach optimized for classically intractable eigenvalue problems is the variational quantum eigensolver, built to utilize quantum resources for the solution of eigenvalue problems and optimizations with minimal coherence time requirements by leveraging classical computational resources. These algorithms are among the leading candidates to be the first to achieve supremacy over classical computation. Here, we provide evidence for the conjecture that variational approaches can automatically suppress even nonsystematic decoherence errors by introducing an exactly solvable channel model of variational state preparation. Moreover, we develop a more general hierarchy of measurement and classical computation that allows one to obtain increasingly accurate solutions by leveraging additional measurements and classical resources. In conclusion, we demonstrate numerically on a sample electronic system that this method both allows for the accurate determination of excited electronic states and reduces the impact of decoherence, without using any additional quantum coherence time or formal error-correction codes.
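A toy sketch of the variational quantum eigensolver loop the passage refers to, run entirely on a classical simulator with NumPy and SciPy: a single-qubit ansatz state is prepared as a function of two angles, the Hamiltonian expectation value is evaluated exactly in place of a device measurement, and a classical optimizer drives the parameters toward the ground-state energy. The Hamiltonian and ansatz are hypothetical; no decoherence channel or excited-state hierarchy is modeled.

```python
import numpy as np
from scipy.optimize import minimize

# Pauli matrices and a toy single-qubit Hamiltonian H = 0.5*Z + 0.3*X.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * Z + 0.3 * X

def ansatz(params):
    """Single-qubit ansatz |psi(theta, phi)> = Rz(phi) Ry(theta) |0>."""
    theta, phi = params
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
    psi *= np.array([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])
    return psi

def energy(params):
    """Expectation value <psi|H|psi>, the quantity a quantum device would estimate."""
    psi = ansatz(params)
    return float(np.real(psi.conj() @ H @ psi))

result = minimize(energy, x0=[0.1, 0.1], method="Nelder-Mead")
exact = np.linalg.eigvalsh(H)[0]
print(f"VQE energy {result.fun:.6f} vs exact ground energy {exact:.6f}")
```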
NASA Astrophysics Data System (ADS)
Balsara, Dinshaw S.
2017-12-01
As computational astrophysics comes under pressure to become a precision science, there is an increasing need to move to high accuracy schemes for computational astrophysics. The algorithmic needs of computational astrophysics are indeed very special. The methods need to be robust and preserve the positivity of density and pressure. Relativistic flows should remain sub-luminal. These requirements place additional pressures on a computational astrophysics code, which are usually not felt by a traditional fluid dynamics code. Hence the need for a specialized review. The focus here is on weighted essentially non-oscillatory (WENO) schemes, discontinuous Galerkin (DG) schemes and PNPM schemes. WENO schemes are higher order extensions of traditional second order finite volume schemes. At third order, they are most similar to piecewise parabolic method schemes, which are also included. DG schemes evolve all the moments of the solution, with the result that they are more accurate than WENO schemes. PNPM schemes occupy a compromise position between WENO and DG schemes. They evolve an Nth order spatial polynomial, while reconstructing higher order terms up to Mth order. As a result, the timestep can be larger. Time-dependent astrophysical codes need to be accurate in space and time with the result that the spatial and temporal accuracies must be matched. This is realized with the help of strong stability preserving Runge-Kutta schemes and ADER (Arbitrary DERivative in space and time) schemes, both of which are also described. The emphasis of this review is on computer-implementable ideas, not necessarily on the underlying theory.
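To make the reconstruction idea concrete, the sketch below implements the standard third-order WENO (WENO3) reconstruction of the left-biased interface value from three cell averages, with the usual smoothness indicators and ideal weights 1/3 and 2/3. It is a textbook formula offered as illustration, not code drawn from any of the reviewed schemes.

```python
def weno3_reconstruct(um1, u0, up1, eps=1e-6):
    """Third-order WENO reconstruction of the interface value u_{i+1/2}
    from the cell averages u_{i-1}, u_i, u_{i+1} (left-biased)."""
    # Candidate second-order reconstructions on the two sub-stencils.
    p0 = -0.5 * um1 + 1.5 * u0        # stencil {i-1, i}
    p1 = 0.5 * u0 + 0.5 * up1         # stencil {i, i+1}
    # Smoothness indicators and ideal (linear) weights 1/3 and 2/3.
    beta0 = (u0 - um1) ** 2
    beta1 = (up1 - u0) ** 2
    alpha0 = (1.0 / 3.0) / (eps + beta0) ** 2
    alpha1 = (2.0 / 3.0) / (eps + beta1) ** 2
    w0 = alpha0 / (alpha0 + alpha1)
    w1 = alpha1 / (alpha0 + alpha1)
    return w0 * p0 + w1 * p1

# Smooth data: close to the third-order linear combination.
print(weno3_reconstruct(0.9, 1.0, 1.1))    # ~1.05
# Data with a jump: the weights lean on the smoother stencil, limiting oscillation.
print(weno3_reconstruct(1.0, 1.0, 10.0))   # ~1.0
```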
Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units
NASA Astrophysics Data System (ADS)
Kemal, Jonathan Yashar
For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.
Wilson, J Adam; Williams, Justin C
2009-01-01
The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
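A hedged CPU-side sketch of the first two stages of the processing chain described (spatial filtering as a matrix-matrix multiply, then an autoregressive power-spectral-density estimate per channel), written with NumPy rather than CUDA. The Yule-Walker AR fit stands in for whatever AR method the BCI system actually uses, and all sizes are hypothetical.

```python
import numpy as np

def ar_psd_yule_walker(x, order=16, n_freqs=128):
    """All-pole PSD estimate of one channel via a Yule-Walker AR fit."""
    x = x - x.mean()
    # Biased autocorrelation up to the AR order.
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])        # AR coefficients
    sigma2 = r[0] - np.dot(a, r[1:order + 1])     # driving-noise variance
    freqs = np.linspace(0, 0.5, n_freqs)          # cycles/sample
    k = np.arange(1, order + 1)
    denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(freqs, k)) @ a) ** 2
    return freqs, sigma2 / denom

# Hypothetical 250 ms block: 64 recorded channels at 1200 Hz.
rng = np.random.default_rng(0)
raw = rng.standard_normal((64, 300))
spatial_filter = rng.standard_normal((16, 64))        # e.g. a CAR/Laplacian-style mixing
virtual = spatial_filter @ raw                        # stage 1: matrix-matrix multiply
psds = [ar_psd_yule_walker(ch)[1] for ch in virtual]  # stage 2: AR PSD per channel
print(np.array(psds).shape)                           # (16, 128)
```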
Ross, Sharon; Silman, Zmira; Maoz, Hagai; Bloch, Yuval
2015-01-01
Background Several studies have suggested that high levels of computer use are linked to psychopathology. However, there is ambiguity about what should be considered normal or over-use of computers. Furthermore, the nature of the link between computer usage and psychopathology is controversial. The current study utilized the context of age to address these questions. Our hypothesis was that the context of age will be paramount for differentiating normal from excessive use, and that this context will allow a better understanding of the link to psychopathology. Methods In a cross-sectional study, 185 parents and children aged 3–18 years were recruited in clinical and community settings. They were asked to fill out questionnaires regarding demographics, functional and academic variables, computer use as well as psychiatric screening questionnaires. Using a regression model, we identified 3 groups of normal-use, over-use and under-use and examined known factors as putative differentiators between the over-users and the other groups. Results After modeling computer screen time according to age, factors linked to over-use were: decreased socialization (OR 3.24, Confidence interval [CI] 1.23–8.55, p = 0.018), difficulty to disengage from the computer (OR 1.56, CI 1.07–2.28, p = 0.022) and age, though borderline-significant (OR 1.1 each year, CI 0.99–1.22, p = 0.058). While psychopathology was not linked to over-use, post-hoc analysis revealed that the link between increased computer screen time and psychopathology was age-dependent and solidified as age progressed (p = 0.007). Unlike computer usage, the use of small-screens and smartphones was not associated with psychopathology. Conclusions The results suggest that computer screen time follows an age-based course. We conclude that differentiating normal from over-use as well as defining over-use as a possible marker for psychiatric difficulties must be performed within the context of age. If verified by additional studies, future research should integrate those views in order to better understand the intricacies of computer over-use. PMID:26536037
Segev, Aviv; Mimouni-Bloch, Aviva; Ross, Sharon; Silman, Zmira; Maoz, Hagai; Bloch, Yuval
2015-01-01
Several studies have suggested that high levels of computer use are linked to psychopathology. However, there is ambiguity about what should be considered normal or over-use of computers. Furthermore, the nature of the link between computer usage and psychopathology is controversial. The current study utilized the context of age to address these questions. Our hypothesis was that the context of age will be paramount for differentiating normal from excessive use, and that this context will allow a better understanding of the link to psychopathology. In a cross-sectional study, 185 parents and children aged 3-18 years were recruited in clinical and community settings. They were asked to fill out questionnaires regarding demographics, functional and academic variables, computer use as well as psychiatric screening questionnaires. Using a regression model, we identified 3 groups of normal-use, over-use and under-use and examined known factors as putative differentiators between the over-users and the other groups. After modeling computer screen time according to age, factors linked to over-use were: decreased socialization (OR 3.24, Confidence interval [CI] 1.23-8.55, p = 0.018), difficulty to disengage from the computer (OR 1.56, CI 1.07-2.28, p = 0.022) and age, though borderline-significant (OR 1.1 each year, CI 0.99-1.22, p = 0.058). While psychopathology was not linked to over-use, post-hoc analysis revealed that the link between increased computer screen time and psychopathology was age-dependent and solidified as age progressed (p = 0.007). Unlike computer usage, the use of small-screens and smartphones was not associated with psychopathology. The results suggest that computer screen time follows an age-based course. We conclude that differentiating normal from over-use as well as defining over-use as a possible marker for psychiatric difficulties must be performed within the context of age. If verified by additional studies, future research should integrate those views in order to better understand the intricacies of computer over-use.
Parry, Sharon; Straker, Leon; Gilson, Nicholas D.; Smith, Anne J.
2013-01-01
Background Occupational sedentary behaviour is an important contributor to overall sedentary risk. There is limited evidence for effective workplace interventions to reduce occupational sedentary time and increase light activity during work hours. The purpose of the study was to determine if participatory workplace interventions could reduce total sedentary time, sustained sedentary time (bouts >30 minutes), increase the frequency of breaks in sedentary time and promote light intensity activity and moderate/vigorous activity (MVPA) during work hours. Methods A randomised controlled trial (ANZCTR number: ACTN12612000743864) was conducted using clerical, call centre and data processing workers (n = 62, aged 25–59 years) in 3 large government organisations in Perth, Australia. Three groups developed interventions with a participatory approach: ‘Active office’ (n = 19), ‘Active Workstation’ and promotion of incidental office activity; ‘Traditional physical activity’ (n = 14), pedometer challenge to increase activity between productive work time and ‘Office ergonomics’ (n = 29), computer workstation design and breaking up computer tasks. Accelerometer (ActiGraph GT3X, 7 days) determined sedentary time, sustained sedentary time, breaks in sedentary time, light intensity activity and MVPA on work days and during work hours were measured before and following a 12 week intervention period. Results For all participants there was a significant reduction in sedentary time on work days (−1.6%, p = 0.006) and during work hours (−1.7%, p = 0.014) and a significant increase in number of breaks/sedentary hour on work days (0.64, p = 0.005) and during work hours (0.72, p = 0.015); there was a concurrent significant increase in light activity during work hours (1.5%, p = 0.012) and MVPA on work days (0.6%, p = 0.012). Conclusions This study explored novel ways to modify work practices to reduce occupational sedentary behaviour. Participatory workplace interventions can reduce sedentary time, increase the frequency of breaks and improve light activity and MVPA of office workers by using a variety of interventions. Trial Registration Australian New Zealand Clinical Trials Registry ACTN12612000743864. PMID:24265734
To burn or not to burn: use of computer-enhanced stimuli to encourage application of sunscreens.
Novick, M
1997-08-01
Skin cancer affects 515,000 Americans every year, causing more than 7,000 deaths. Prior studies attempted, with scant success, to increase general knowledge about protection of the skin and to encourage use of sunscreens. The failure was attributed to the allure of the suntan as a symbol of health and affluence and to the "optimistic bias" (belief in one's own invulnerability) displayed by sunbathers. The study detailed here sought to increase the use by subjects of sunscreen by showing computer-altered images of their own faces, aged and disfigured by lesions. That stimulus was designed to counter false impressions and illusions of sunbathers about the benefits of the sun by demonstrating, immediately and personally, negative effects of sun exposure. Data were collected from thirty adolescents in the form of six weekly logs of sunscreen use and time spent outdoors between 10 AM and 3 PM. Results showed that the computer-altered images motivated increased use of sunscreen in the short term: subjects in the experimental groups used sunscreen almost three times as frequently as those in the control group during the experimental period (P = 0.000). Images of aging and disfiguring by lesions produced a more intense and prolonged modification in behavior than images of aging only.
Energy measurement using flow computers and chromatography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beeson, J.
1995-12-01
Arkla Pipeline Group (APG), along with most transmission companies, moved to electronic flow measurement (EFM) to: (1) increase resolution and accuracy; (2) correct flow variables in real time; (3) increase the speed of data retrieval; (4) reduce capital expenditures; and (5) reduce operation and maintenance expenditures. Prior to EFM, mechanical seven-day charts were used, which yielded 800 pressure and differential-pressure readings. EFM yields 1.2 million readings, a 1500-fold improvement in resolution and additional flow representation. The total accuracy of the EFM system is 0.25%, compared with 2% for the chart system, which gives APG improved accuracy. A typical APG electronic measurement system includes a microprocessor-based flow computer, a telemetry communications package, and a gas chromatograph. Live relative density (specific gravity), BTU, CO2, and N2 are updated from the chromatograph to the flow computer every six minutes, which provides accurate MMBTU computations. Because gas contract lengths have changed from years to monthly, and from a majority of direct sales to transports, both Arkla and its customers wanted access to actual volumes on a much more timely basis than charts allow. The new electronic system allows volumes and other system data to be retrieved continuously if the EFM unit is on Supervisory Control and Data Acquisition (SCADA), or daily if on dial-up telephone. Previously, because of chart integration, information was not available for four to six weeks. EFM costs much less than the combined costs of the telemetry transmitters, pressure and differential-pressure chart recorders, and temperature chart recorder that it replaces. APG will install this equipment on smaller-volume stations at a customer's expense. APG requires backup measurement on metering facilities of this size; it could be another APG flow computer or chart recorder, or the other company's flow computer or chart recorder.
An algorithm for fast elastic wave simulation using a vectorized finite difference operator
NASA Astrophysics Data System (ADS)
Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna
2018-07-01
Modern geophysical imaging techniques exploit the full wavefield information which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, big size of the derivative stencil and huge model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we have introduced a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid, in a manner similar to the collocated grid scheme and thereby reducing the computational run-time considerably. Here we demonstrate an application of this operator to simulate the seismic wave propagation in elastic media (Marmousi model), by discretizing the equations on a staggered grid. We have compared the performance of this operator on three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 times for FORTRAN and MATLAB; and nearly 100 times for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run-time. We find that there is an additional, though small, computational overhead for each step and it depends on total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
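To make the idea of a vectorized staggered-grid derivative concrete, the sketch below updates an entire 2D field with array slicing instead of nested loops, which is the kind of whole-array operation the abstract describes. It is a minimal second-order illustration in NumPy; the function names and the commented velocity update are assumptions for illustration, not the FDwave package's API.

```python
import numpy as np

def dx_staggered(f, dx):
    """Forward staggered-grid x-derivative of a 2D field (2nd order).

    Operates on the whole array at once via slicing, so every node is
    updated in one vectorized statement rather than a double loop."""
    d = np.zeros_like(f)
    d[:-1, :] = (f[1:, :] - f[:-1, :]) / dx
    return d

def dz_staggered(f, dz):
    """Forward staggered-grid z-derivative of a 2D field (2nd order)."""
    d = np.zeros_like(f)
    d[:, :-1] = (f[:, 1:] - f[:, :-1]) / dz
    return d

# Illustrative velocity update of a 2D elastic scheme (names hypothetical):
# vx += dt / rho * (dx_staggered(sxx, dx) + dz_staggered(sxz, dz))
```

Because the update is expressed as whole-array arithmetic, interpreted languages such as MATLAB and Python benefit the most, which is consistent with the large Python speed-up reported in the abstract.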
Hong, Keum-Shik; Khan, Muhammad Jawad
2017-01-01
In this article, non-invasive hybrid brain-computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain-computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided.
Hong, Keum-Shik; Khan, Muhammad Jawad
2017-01-01
In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided. PMID:28790910
The Potential Impact of Computer-Aided Assessment Technology in Higher Education
ERIC Educational Resources Information Center
Tshibalo, A. E.
2007-01-01
Distance learning generally separates students from educators, and demands that interventions be put in place to counter the constraints that this distance poses to learners and educators. Furthermore "Increased number of students in Higher Education and the corresponding increase in time spent by staff on assessment has encouraged interest…
Give Your Technology Program a Little "Class"!
ERIC Educational Resources Information Center
Vengersammy, Ormilla
2009-01-01
The Orange County Library System (OCLS) began to offer basic technology classes in July 2000. The computers were funded through a grant awarded by the Bill & Melinda Gates Foundation. Over time, the library staff noticed that the demand for the classes increased, so the offering of classes also increased. When the author arrived at OCLS, her…
NASA Astrophysics Data System (ADS)
Yang, Xuguang; Wang, Lei
In this paper, the magnetic field effects on natural convection of power-law non-Newtonian fluids in rectangular enclosures are numerically studied by the multiple-relaxation-time (MRT) lattice Boltzmann method (LBM). To maintain the locality of the LBM, a local computing scheme for shear rate is used. Thus, all simulations can be easily performed on the Graphics Processing Unit (GPU) using NVIDIA's CUDA, and high computational efficiency can be achieved. The numerical simulations presented here span a wide range of thermal Rayleigh number (10^4≤Ra≤10^6), Hartmann number (0≤Ha≤20), power-law index (0.5≤n≤1.5) and aspect ratio (0.25≤AR≤4.0) to identify the different flow patterns and temperature distributions. The results show that the heat transfer rate increases with increasing thermal Rayleigh number, decreases with increasing Hartmann number, and the average Nusselt number is found to decrease with an increase in the power-law index. Moreover, the effects of aspect ratio have also been investigated in detail.
NASA Astrophysics Data System (ADS)
Wang, Ziwei; Jiang, Xiong; Chen, Ti; Hao, Yan; Qiu, Min
2018-05-01
Simulating the unsteady flow of a compressor under circumferential inlet distortion and rotor/stator interference would require a full-annulus grid with a dual-time method. This process is time consuming and needs a large amount of computational resources. The harmonic balance method simulates the unsteady flow in a compressor on a single-passage grid with a series of steady simulations, which greatly increases computational efficiency in comparison with the dual-time method. However, most simulations with the harmonic balance method are conducted on flow under either circumferential inlet distortion or rotor/stator interference alone. Based on an in-house CFD code, the harmonic balance method is applied to the simulation of flow in NASA Stage 35 under both circumferential inlet distortion and rotor/stator interference. Because the unsteady flow is influenced by two different unsteady disturbances, the computation becomes unstable; the instability can be avoided by coupling the harmonic balance method with an optimizing algorithm. The computational result of the harmonic balance method is compared with the result of a full-annulus simulation, showing that the harmonic balance method simulates the flow under circumferential inlet distortion and rotor/stator interference as precisely as the full-annulus simulation, with a speed-up of about 8 times.
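The core mechanism of harmonic balance is to replace the physical time derivative with a spectral derivative that couples N = 2K+1 steady sub-solutions sampled over one period. The sketch below shows that operator for a periodic scalar history; it is an illustrative NumPy construction, not the in-house code, and the example frequency and sample count are arbitrary.

```python
import numpy as np

def harmonic_balance_dudt(u_samples, omega):
    """Spectral time derivative used in harmonic balance.

    u_samples: array of shape (N, ...) holding the solution at N equally
    spaced instants over one period T = 2*pi/omega (N = 2K+1 harmonics).
    Returns du/dt evaluated at the same instants."""
    N = u_samples.shape[0]
    U = np.fft.fft(u_samples, axis=0)          # discrete Fourier coefficients
    k = np.fft.fftfreq(N, d=1.0 / N)           # integer mode numbers -K..K
    dU = (1j * k * omega).reshape((N,) + (1,) * (u_samples.ndim - 1)) * U
    return np.real(np.fft.ifft(dU, axis=0))    # derivative at the sample instants

# check: d/dt of sin(omega*t) sampled at 7 instants equals omega*cos(omega*t)
omega = 2.0
t = np.arange(7) * (2 * np.pi / omega) / 7
print(np.allclose(harmonic_balance_dudt(np.sin(omega * t), omega),
                  omega * np.cos(omega * t)))
```

In a flow solver this derivative term couples the steady sub-problems, which is why a single-passage grid can stand in for the full annulus when the disturbances are periodic.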
NASA Astrophysics Data System (ADS)
Srinath, Srikar; Poyneer, Lisa A.; Rudy, Alexander R.; Ammons, S. M.
2014-08-01
The advent of expensive, large-aperture telescopes and complex adaptive optics (AO) systems has strengthened the need for detailed simulation of such systems from the top of the atmosphere to control algorithms. The credibility of any simulation is underpinned by the quality of the atmosphere model used for introducing phase variations into the incident photons. Hitherto, simulations which incorporate wind layers have relied upon phase screen generation methods that tax the computation and memory capacities of the platforms on which they run. This places limits on parameters of a simulation, such as exposure time or resolution, thus compromising its utility. As aperture sizes and fields of view increase the problem will only get worse. We present an autoregressive method for evolving atmospheric phase that is efficient in its use of computation resources and allows for variability in the power contained in frozen flow or stochastic components of the atmosphere. Users have the flexibility of generating atmosphere datacubes in advance of runs where memory constraints allow to save on computation time or of computing the phase at each time step for long exposure times. Preliminary tests of model atmospheres generated using this method show power spectral density and rms phase in accordance with established metrics for Kolmogorov models.
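The autoregressive evolution described above can be sketched one Fourier mode at a time: each mode is advanced by a deterministic frozen-flow phase shift plus a stochastic innovation, so no large pre-computed phase-screen cube is needed. The toy NumPy sketch below is illustrative only; the Kolmogorov scaling constant, the parameter names, and the default values are assumptions, not the authors' implementation.

```python
import numpy as np

def make_kolmogorov_amplitude(n, dx, r0):
    """Amplitude (square root of the PSD) of Kolmogorov phase on an n x n grid (toy scaling)."""
    fx = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(fx, fx, indexing="ij")
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0 / (n * dx)                      # avoid the singular piston mode
    return 0.023 ** 0.5 * r0 ** (-5.0 / 6.0) * k ** (-11.0 / 6.0)

def evolve_phase(phase_ft, amp, wind=(10.0, 0.0), dx=0.1, dt=1e-3, alpha=0.999):
    """One AR(1) step per Fourier mode: frozen-flow shift plus partial decorrelation.

    alpha near 1 gives almost pure frozen flow; smaller alpha adds 'boiling'."""
    n = phase_ft.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(fx, fx, indexing="ij")
    shift = np.exp(-2j * np.pi * (kx * wind[0] + ky * wind[1]) * dt)   # frozen flow
    noise = (np.random.randn(n, n) + 1j * np.random.randn(n, n)) * amp
    return alpha * shift * phase_ft + np.sqrt(1.0 - alpha ** 2) * noise

# keep only the Fourier-domain state; materialize a screen when needed
n = 128
amp = make_kolmogorov_amplitude(n, dx=0.1, r0=0.15)
state = (np.random.randn(n, n) + 1j * np.random.randn(n, n)) * amp
for _ in range(100):
    state = evolve_phase(state, amp)
screen = np.real(np.fft.ifft2(state))
```

Because only the current Fourier-domain state is carried between time steps, memory use stays flat regardless of exposure time, which is the practical advantage highlighted in the abstract.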
Multi-GPU Jacobian accelerated computing for soft-field tomography.
Borsic, A; Attardo, E A; Halter, R J
2012-10-01
Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15-20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical application of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20 times on a single NVIDIA S1070 GPU, and of 50 times on four GPUs, bringing the Jacobian computing time for a fine 3D mesh from 12 min to 14 s. We regard this as an important step toward gaining interactive reconstruction times in 3D imaging, particularly when coupled in the future with acceleration of the forward problem. While we demonstrate results for EIT, these results apply to any soft-field imaging modality where the Jacobian matrix is computed with the adjoint method.
Multi-GPU Jacobian Accelerated Computing for Soft Field Tomography
Borsic, A.; Attardo, E. A.; Halter, R. J.
2012-01-01
Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use Finite Element Models to represent the volume of interest and to solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are three-dimensional. Though the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in Electrical Impedance Tomography applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15 to 20 minutes with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Further, providing high-speed reconstructions are essential for some promising clinical application of EIT. For 3D problems 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In the present work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have a much faster memory bandwidths compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20 times on a single NVIDIA S1070 GPU, and of 50 times on 4 GPUs, bringing the Jacobian computing time for a fine 3D mesh from 12 minutes to 14 seconds. We regard this as an important step towards gaining interactive reconstruction times in 3D imaging, particularly when coupled in the future with acceleration of the forward problem. While we demonstrate results for Electrical Impedance Tomography, these results apply to any soft-field imaging modality where the Jacobian matrix is computed with the Adjoint Method. PMID:23010857
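In the adjoint method, each Jacobian entry is (up to sign and scaling) an inner product between the gradient of a forward field, one per drive pattern, and the gradient of an adjoint measurement field, accumulated element by element. The sketch below shows how that assembly maps onto batched dense array arithmetic on a GPU using CuPy; the array shapes, names, and the use of CuPy are assumptions for illustration, not the authors' CUDA implementation.

```python
import numpy as np
import cupy as cp

def adjoint_jacobian_gpu(grad_fwd, grad_adj, elem_weight):
    """Assemble the EIT sensitivity (Jacobian) matrix on the GPU.

    grad_fwd   : (n_drive, n_elem, dim) gradients of the forward potentials
    grad_adj   : (n_meas,  n_elem, dim) gradients of the adjoint measurement fields
    elem_weight: (n_elem,) element volumes (times any constant factors)
    Returns J of shape (n_meas * n_drive, n_elem), one row per measurement."""
    gf = cp.asarray(grad_fwd)
    ga = cp.asarray(grad_adj)
    w = cp.asarray(elem_weight)
    # J[m, d, e] = -w[e] * sum_k ga[m, e, k] * gf[d, e, k]
    J = -cp.einsum("e,mek,dek->mde", w, ga, gf)
    return cp.asnumpy(J.reshape(-1, w.size))

# toy sizes; a dense 3D mesh would have ~5e5 elements and hundreds of patterns
J = adjoint_jacobian_gpu(np.random.rand(16, 1000, 3),
                         np.random.rand(13, 1000, 3),
                         np.random.rand(1000))
```

The operation is memory-bandwidth bound rather than compute bound, which is consistent with the abstract's observation that GPUs outperform multi-core CPUs here mainly because of their faster memory systems.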
Sedentary behavior, physical activity, and concentrations of insulin among US adults.
Ford, Earl S; Li, Chaoyang; Zhao, Guixiang; Pearson, William S; Tsai, James; Churilla, James R
2010-09-01
Time spent watching television has been linked to obesity, metabolic syndrome, and diabetes, all conditions characterized to some degree by hyperinsulinemia and insulin resistance. However, limited evidence relates screen time (watching television or using a computer) directly to concentrations of insulin. We examined the cross-sectional associations between time spent watching television or using a computer, physical activity, and serum concentrations of insulin using data from 2800 participants aged at least 20 years of the 2003-2006 National Health and Nutrition Examination Survey. The amount of time spent watching television and using a computer as well as physical activity was self-reported. The unadjusted geometric mean concentration of insulin increased from 6.2 microU/mL among participants who did not watch television to 10.0 microU/mL among those who watched television for 5 or more hours per day (P = .001). After adjustment for age, sex, race or ethnicity, educational status, concentration of cotinine, alcohol intake, physical activity, waist circumference, and body mass index using multiple linear regression analysis, the log-transformed concentrations of insulin were significantly and positively associated with time spent watching television (P = < .001). Reported time spent using a computer was significantly associated with log-transformed concentrations of insulin before but not after accounting for waist circumference and body mass index. Leisure-time physical activity but not transportation or household physical activity was significantly and inversely associated with log-transformed concentrations of insulin. Sedentary behavior, particularly the amount of time spent watching television, may be an important modifiable determinant of concentrations of insulin. Published by Elsevier Inc.
Pen-based computers: Computers without keys
NASA Technical Reports Server (NTRS)
Conklin, Cheryl L.
1994-01-01
The National Space Transportation System (NSTS) comprises many diverse and highly complex systems incorporating the latest technologies. Data collection associated with ground processing of the various Space Shuttle system elements is extremely challenging due to the many separate processing locations where data is generated. This presents a significant problem when the timely collection, transfer, collation, and storage of data is required. This paper describes how new technology, referred to as Pen-Based computers, is being used to transform the data collection process at Kennedy Space Center (KSC). Pen-Based computers have streamlined procedures, increased data accuracy, and now provide more complete information than previous methods. The end result is the elimination of Shuttle processing delays associated with data deficiencies.
GPU-accelerated computation of electron transfer.
Höfinger, Siegfried; Acocella, Angela; Pop, Sergiu C; Narumi, Tetsu; Yasuoka, Kenji; Beu, Titus; Zerbetto, Francesco
2012-11-05
Electron transfer is a fundamental process that can be studied with the help of computer simulation. The underlying quantum mechanical description renders the problem a computationally intensive application. In this study, we probe the graphics processing unit (GPU) for suitability to this type of problem. Time-critical components are identified via profiling of an existing implementation and several different variants are tested involving the GPU at increasing levels of abstraction. A publicly available library supporting basic linear algebra operations on the GPU turns out to accelerate the computation approximately 50-fold with minor dependence on actual problem size. The performance gain does not compromise numerical accuracy and is of significant value for practical purposes. Copyright © 2012 Wiley Periodicals, Inc.
Harrison, James H
2004-01-01
Effective pathology practice increasingly requires familiarity with concepts in medical informatics that may cover a broad range of topics, for example, traditional clinical information systems, desktop and Internet computer applications, and effective protocols for computer security. To address this need, the University of Pittsburgh (Pittsburgh, Pa) includes a full-time, 3-week rotation in pathology informatics as a required component of pathology residency training. To teach pathology residents general informatics concepts important in pathology practice. We assess the efficacy of the rotation in communicating these concepts using a short-answer examination administered at the end of the rotation. Because the increasing use of computers and the Internet in education and general communications prior to residency training has the potential to communicate key concepts that might not need additional coverage in the rotation, we have also evaluated incoming residents' informatics knowledge using a similar pretest. This article lists 128 questions that cover a range of topics in pathology informatics at a level appropriate for residency training. These questions were used for pretests and posttests in the pathology informatics rotation in the Pathology Residency Program at the University of Pittsburgh for the years 2000 through 2002. With slight modification, the questions are organized here into 15 topic categories within pathology informatics. The answers provided are brief and are meant to orient the reader to the question and suggest the level of detail appropriate in an answer from a pathology resident. A previously published evaluation of the test results revealed that pretest scores did not increase during the 3-year evaluation period, and self-assessed computer skill level correlated with pretest scores, but all pretest scores were low. Posttest scores increased substantially, and posttest scores did not correlate with the self-assessed computer skill level recorded at pretest time. Even residents who rated themselves high in computer skills lacked many concepts important in pathology informatics, and posttest scores showed that residents with both high and low self-assessed skill levels learned pathology informatics concepts effectively.
Efficient Processing of Data for Locating Lightning Strikes
NASA Technical Reports Server (NTRS)
Medelius, Pedro J.; Starr, Stan
2003-01-01
Two algorithms have been devised to increase the efficiency of processing of data in lightning detection and ranging (LDAR) systems so as to enable the accurate location of lightning strikes in real time. In LDAR, the location of a lightning strike is calculated by solving equations for the differences among the times of arrival (DTOAs) of the lightning signals at multiple antennas as functions of the locations of the antennas and the speed of light. The most difficult part of the problem is computing the DTOAs from digitized versions of the signals received by the various antennas. One way (a time-domain approach) to determine the DTOAs is to compute cross-correlations among variously differentially delayed replicas of the digitized signals and to select, as the DTOAs, those differential delays that yield the maximum correlations. Another way (a frequency-domain approach) to determine the DTOAs involves the computation of cross-correlations among Fourier transforms of variously differentially phased replicas of the digitized signals, along with utilization of the relationship among phase difference, time delay, and frequency.
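A minimal sketch of the time-domain approach described above is to cross-correlate two digitized antenna signals and read the DTOA off the lag of the correlation peak. The NumPy example below is illustrative only; the sampling rate, pulse shape, and function name are assumptions, not the LDAR system's code.

```python
import numpy as np

def dtoa_time_domain(sig_a, sig_b, sample_rate):
    """Difference in time of arrival of sig_b relative to sig_a, from the cross-correlation peak."""
    xcorr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(xcorr) - (len(sig_a) - 1)      # lag in samples at the maximum
    return lag / sample_rate

# synthetic check: the same pulse delayed by 25 samples
fs = 10e6
pulse = np.exp(-0.5 * ((np.arange(512) - 100) / 5.0) ** 2)
delayed = np.roll(pulse, 25)
print(dtoa_time_domain(pulse, delayed, fs) * fs)    # approximately 25 samples
```

The frequency-domain alternative mentioned in the abstract estimates the same delay from the slope of the cross-spectrum phase, trading the correlation search for FFTs.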
Challenges and solutions for realistic room simulation
NASA Astrophysics Data System (ADS)
Begault, Durand R.
2002-05-01
Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished using a minimal simulation, data indicate that realistic auralizations need to be responsive to head motion cues for accurate localization. Computational demands increase when providing for the simulation of coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, minimizing the intensive computational requirements of real-time auralization systems. Results are presented for early reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and reverberation thresholds as a function of reverberation time and level within 250-Hz-2-kHz octave bands. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, allowing a strategy for minimizing computational requirements of real-time auralization systems.
A parallel implementation of an off-lattice individual-based model of multicellular populations
NASA Astrophysics Data System (ADS)
Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe
2015-07-01
As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.
Computational methods in drug discovery
Leelananda, Sumudu P
2016-01-01
The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed. PMID:28144341
Computational methods in drug discovery.
Leelananda, Sumudu P; Lindert, Steffen
2016-01-01
The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein-ligand docking, pharmacophore modeling and QSAR techniques are reviewed.
Simple Kinematic Pathway Approach (KPA) to Catchment-scale Travel Time and Water Age Distributions
NASA Astrophysics Data System (ADS)
Soltani, S. S.; Cvetkovic, V.; Destouni, G.
2017-12-01
The distribution of catchment-scale water travel times is strongly influenced by morphological dispersion and is partitioned between hillslope and larger, regional scales. We explore whether hillslope travel times are predictable using a simple semi-analytical "kinematic pathway approach" (KPA) that accounts for dispersion on two levels of morphological and macro-dispersion. The study gives new insights to shallow (hillslope) and deep (regional) groundwater travel times by comparing numerical simulations of travel time distributions, referred to as "dynamic model", with corresponding KPA computations for three different real catchment case studies in Sweden. KPA uses basic structural and hydrological data to compute transient water travel time (forward mode) and age (backward mode) distributions at the catchment outlet. Longitudinal and morphological dispersion components are reflected in KPA computations by assuming an effective Peclet number and topographically driven pathway length distributions, respectively. Numerical simulations of advective travel times are obtained by means of particle tracking using the fully-integrated flow model MIKE SHE. The comparison of computed cumulative distribution functions of travel times shows significant influence of morphological dispersion and groundwater recharge rate on the compatibility of the "kinematic pathway" and "dynamic" models. Zones of high recharge rate in "dynamic" models are associated with topographically driven groundwater flow paths to adjacent discharge zones, e.g. rivers and lakes, through relatively shallow pathway compartments. These zones exhibit more compatible behavior between "dynamic" and "kinematic pathway" models than the zones of low recharge rate. Interestingly, the travel time distributions of hillslope compartments remain almost unchanged with increasing recharge rates in the "dynamic" models. This robust "dynamic" model behavior suggests that flow path lengths and travel times in shallow hillslope compartments are controlled by topography, and therefore application and further development of the simple "kinematic pathway" approach is promising for their modeling.
High performance hybrid functional Petri net simulations of biological pathway models on CUDA.
Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Hybrid functional Petri nets are a wide-spread tool for representing and simulating biological models. Due to their potential of providing virtual drug testing environments, biological simulations have a growing impact on pharmaceutical research. Continuous research advancements in biology and medicine lead to exponentially increasing simulation times, thus raising the demand for performance accelerations by efficient and inexpensive parallel computation solutions. Recent developments in the field of general-purpose computation on graphics processing units (GPGPU) enabled the scientific community to port a variety of compute intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA) enabled GPUs. GPU accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating the cell boundary formation by Delta-Notch signaling on a CUDA enabled GPU results in a speedup of approximately 7x for a model containing 1,600 cells.
Automated Generation of Message-Passing Programs: An Evaluation Using CAPTools
NASA Technical Reports Server (NTRS)
Hribar, Michelle R.; Jin, Haoqiang; Yan, Jerry C.; Saini, Subhash (Technical Monitor)
1998-01-01
Scientists at NASA Ames Research Center have been developing computational aeroscience applications on highly parallel architectures over the past ten years. During that same time period, a steady transition of hardware and system software also occurred, forcing us to expend great efforts into migrating and re-coding our applications. As applications and machine architectures become increasingly complex, the cost and time required for this process will become prohibitive. In this paper, we present the first set of results in our evaluation of interactive parallelization tools. In particular, we evaluate CAPTool's ability to parallelize computational aeroscience applications. CAPTools was tested on serial versions of the NAS Parallel Benchmarks and ARC3D, a computational fluid dynamics application, on two platforms: the SGI Origin 2000 and the Cray T3E. This evaluation includes performance, amount of user interaction required, limitations and portability. Based on these results, a discussion on the feasibility of computer aided parallelization of aerospace applications is presented along with suggestions for future work.
Brink, Yolandi; Louw, Quinette; Grimmer, Karen; Jordaan, Esmè
2015-12-01
There is evidence that consistent sitting for prolonged periods is associated with upper quadrant musculoskeletal pain (UQMP). It is unclear whether postural alignment is a significant risk factor. The aim of the prospective study (2010-2011) was to ascertain if three-dimensional sitting postural angles, measured in a real-life school computer classroom setting, predict seated-related UQMP. Asymptomatic Grade 10 high-school students, aged 15-17 years, undertaking Computer Application Technology, were eligible to participate. Using the 3D Posture Analysis Tool, sitting posture was measured while students used desk-top computers. Posture was reported as five upper quadrant angles (Head flexion, Neck flexion; Craniocervical angle, Trunk flexion and Head lateral bending). The Computer Usage Questionnaire measured seated-related UQMP and hours of computer use. The Beck Depression Inventory and the Multidimensional Anxiety Scale for Children assessed psychosocial factors. Sitting posture, computer use and psychosocial factors were measured at baseline. UQMP was measured at six months and one-year follow-up. 211, 190 and 153 students participated at baseline, six months and one-year follow-up respectively. 34.2% students complained of seated-related UQMP during the follow-up period. Increased head flexion (HF) predicted seated-related UQMP developing over time for a small group of students with pain scores greater than the 90th pain percentile, adjusted for age, gender, BMI, computer use and psychosocial factors (p = 0.003). The pain score increased 0.22 points per 1° increase in HF. Classroom ergonomics and postural hygiene should therefore focus on reducing large HF angles among computing adolescents. Copyright © 2015 Elsevier Ltd. All rights reserved.
Performance analysis of a large-grain dataflow scheduling paradigm
NASA Technical Reports Server (NTRS)
Young, Steven D.; Wills, Robert W.
1993-01-01
A paradigm for scheduling computations on a network of multiprocessors using large-grain data flow scheduling at run time is described and analyzed. The computations to be scheduled must follow a static flow graph, while the schedule itself will be dynamic (i.e., determined at run time). Many applications characterized by static flow exist, and they include real-time control and digital signal processing. With the advent of computer-aided software engineering (CASE) tools for capturing software designs in dataflow-like structures, macro-dataflow scheduling becomes increasingly attractive, if not necessary. For parallel implementations, using the macro-dataflow method allows the scheduling to be insulated from the application designer and enables the maximum utilization of available resources. Further, by allowing multitasking, processor utilizations can approach 100 percent while they maintain maximum speedup. Extensive simulation studies are performed on 4-, 8-, and 16-processor architectures that reflect the effects of communication delays, scheduling delays, algorithm class, and multitasking on performance and speedup gains.
Vibration extraction based on fast NCC algorithm and high-speed camera.
Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an
2015-09-20
In this study, a high-speed camera system is developed to perform vibration measurement in real time and to avoid the added mass introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera that can capture as many as 1000 frames per second. To process the captured images on the computer, a normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can accomplish a single displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrate the high accuracy and efficiency of the camera system in extracting vibration signals.
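The essence of the local-search speedup is to evaluate the NCC only in a small neighbourhood of the previous match instead of over the whole frame. The sketch below is a pixel-level illustration in plain NumPy, with the subpixel refinement omitted; all names and the search radius are assumptions, not the authors' implementation.

```python
import numpy as np

def ncc(patch, template):
    """Zero-normalized cross-correlation between two equally sized grayscale patches."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_local(frame, template, prev_xy, radius=10):
    """Search only a (2*radius+1)^2 neighbourhood around the previous position."""
    th, tw = template.shape
    best, best_xy = -1.0, prev_xy
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = prev_xy[0] + dy, prev_xy[1] + dx
            if 0 <= y <= frame.shape[0] - th and 0 <= x <= frame.shape[1] - tw:
                score = ncc(frame[y:y + th, x:x + tw], template)
                if score > best:
                    best, best_xy = score, (y, x)
    return best_xy, best

# per-frame usage (illustrative): pos, score = track_local(gray_frame, template, pos)
```

Because inter-frame motion at 1000 frames per second is only a few pixels, restricting the search window this way removes most of the cost of full-frame template matching.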
Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods
Smith, David S.; Gore, John C.; Yankeelov, Thomas E.; Welch, E. Brian
2012-01-01
Compressive sensing (CS) has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU) computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096^2 or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024^2 and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images. PMID:22481908
Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods.
Smith, David S; Gore, John C; Yankeelov, Thomas E; Welch, E Brian
2012-01-01
Compressive sensing (CS) has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU) computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096(2) or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024(2) and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images.
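One reason split Bregman maps so well onto GPUs is that its inner update is an element-wise shrinkage (soft-thresholding) step, which parallelizes trivially. The snippet below is a minimal NumPy illustration of that operator, not the authors' reconstruction code; the same expression ports directly to a GPU array library.

```python
import numpy as np

def shrink(x, threshold):
    """Soft-thresholding (shrinkage) operator used inside each split Bregman iteration.

    Applied element-wise, so every entry can be handled by an independent GPU
    thread; the identical line works unchanged on CuPy or PyTorch arrays."""
    return np.sign(x) * np.maximum(np.abs(x) - threshold, 0.0)

# illustrative d-update for an L1-regularized term:  d = shrink(grad_u + b, 1.0 / lam)
print(shrink(np.array([-2.0, -0.3, 0.1, 1.5]), 0.5))   # -> [-1.5, 0, 0, 1.0]
```

The remaining work per iteration is dominated by FFTs and vector updates, which also have efficient GPU implementations, consistent with the speed-ups reported above.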
Cantürk, İsmail; Özyılmaz, Lale
2018-07-01
This paper presents an approach to postmortem interval (PMI) estimation, which is a very debated and complicated area of forensic science. Most of the reported methods to determine PMI in the literature are not practical because of the need for skilled persons and significant amounts of time, and give unsatisfactory results. Additionally, the error margin of PMI estimation increases proportionally with elapsed time after death. It is crucial to develop practical PMI estimation methods for forensic science. In this study, a computational system is developed to determine the PMI of human subjects by investigating postmortem opacity development of the eye. Relevant features from the eye images were extracted using image processing techniques to reflect gradual opacity development. The features were then investigated to predict the time after death using machine learning methods. The experimental results prove that the development of opacity can be utilized as a practical computational tool to determine PMI for human subjects. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lieberman, Robert; Kwong, Heston; Liu, Brent; Huang, H. K.
2009-02-01
The chest x-ray radiological features of tuberculosis patients are well documented, and the radiological features that change in response to successful pharmaceutical therapy can be followed with longitudinal studies over time. The patients can also be classified as either responsive or resistant to pharmaceutical therapy based on clinical improvement. We have retrospectively collected time series chest x-ray images of 200 patients diagnosed with tuberculosis receiving the standard pharmaceutical treatment. Computer algorithms can be created to utilize image texture features to assess the temporal changes in the chest x-rays of the tuberculosis patients. This methodology provides a framework for a computer-assisted detection (CAD) system that may provide physicians with the ability to detect poor treatment response earlier in pharmaceutical therapy. Early detection allows physicians to respond with more timely treatment alternatives and improved outcomes. Such a system has the potential to increase treatment efficacy for millions of patients each year.
French Meteor Network for High Precision Orbits of Meteoroids
NASA Technical Reports Server (NTRS)
Atreya, P.; Vaubaillon, J.; Colas, F.; Bouley, S.; Gaillard, B.; Sauli, I.; Kwon, M. K.
2011-01-01
There is a lack of precise meteoroid orbits from video observations, as most meteor stations use off-the-shelf CCD cameras. Few meteoroid orbits with precise semi-major axes are available, and those come from the photographic film method. Precise orbits are necessary to compute the dust flux in the Earth's vicinity, and to estimate the ejection time of the meteoroids accurately by comparing them with theoretical evolution models. We investigate the use of large CCD sensors to observe multi-station meteors and to compute precise orbits for these meteoroids. The spatial and temporal resolution needed to reach an accuracy similar to that of photographic plates is discussed. Various problems arising from the use of large CCDs, such as increasing the spatial and temporal resolution at the same time and computational problems in finding the meteor position, are illustrated.
Lattice Boltzmann for Airframe Noise Predictions
NASA Technical Reports Server (NTRS)
Barad, Michael; Kocheemoolayil, Joseph; Kiris, Cetin
2017-01-01
The goal is to increase predictive use of high-fidelity computational aeroacoustics (CAA) capabilities for NASA's next-generation aviation concepts. CFD has been used substantially in analysis and design for steady-state (RANS) problems, but computational resources are severely challenged by high-fidelity unsteady problems (e.g., unsteady loads, buffet boundary, jet and installation noise, fan noise, active flow control, airframe noise). Novel techniques are therefore needed to reduce the computational resources consumed by current high-fidelity CAA, to enable routine acoustic analysis of aircraft components at full-scale Reynolds number from first principles, and to deliver an order-of-magnitude reduction in wall time to solution.
Cometary ephemerides - needs and concerns
NASA Technical Reports Server (NTRS)
Yeomans, D. K.
1981-01-01
With the use of narrow field-of-view instrumentation on faint comets, the accuracy requirements upon computed ephemerides are increasing. It is not uncommon for instruments with a one-arc-minute field of view to be tracking a faint comet that is not visible without a substantial integration time. As with all ephemerides of solar system objects, the computed motion of a comet depends on the observations and their reduction; the computed motion of a comet is further dependent upon effects related to the comet's activity. Thus, the ephemeris of an active comet is corrupted by both observational errors and errors due to the comet's activity.
Conditionally Active Min-Max Limit Regulators
NASA Technical Reports Server (NTRS)
Garg, Sanjay (Inventor); May, Ryan D. (Inventor)
2017-01-01
A conditionally active limit regulator may be used to regulate the performance of engines or other limit regulated systems. A computing system may determine whether a variable to be limited is within a predetermined range of a limit value as a first condition. The computing system may also determine whether a current rate of increase or decrease of the variable to be limited is great enough that the variable will reach the limit within a predetermined period of time with no other changes as a second condition. When both conditions are true, the computing system may activate a simulated or physical limit regulator.
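The two activation conditions described above can be expressed as a simple predicate: the variable must already be near the limit, and its current rate of change must be large enough to reach the limit within a set horizon. The sketch below is a hedged illustration for an upper (max) limit; the threshold names, numbers, and function signature are placeholders, not the patented controller.

```python
def limit_regulator_active(value, rate, limit, proximity_band, horizon_s):
    """Return True when the conditionally active limit regulator should switch in.

    Condition 1: the limited variable is within `proximity_band` of the limit.
    Condition 2: at the current rate of change it would reach the limit within
                 `horizon_s` seconds if nothing else changed.
    The regulator becomes active only when both conditions hold."""
    near_limit = abs(limit - value) <= proximity_band
    approaching = rate > 0 and (limit - value) / rate <= horizon_s
    return near_limit and approaching

# example with made-up numbers for a max temperature limit
print(limit_regulator_active(value=940.0, rate=12.0, limit=950.0,
                             proximity_band=25.0, horizon_s=2.0))   # True
```

Keeping the regulator inactive outside these conditions avoids the performance penalty of running min-max limit logic when the engine is operating far from its limits.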
Dedicated heterogeneous node scheduling including backfill scheduling
Wood, Robert R [Livermore, CA; Eckert, Philip D [Livermore, CA; Hommes, Gregg [Pleasanton, CA
2006-07-25
A method and system for job backfill scheduling of dedicated heterogeneous nodes in a multi-node computing environment. Heterogeneous nodes are grouped into homogeneous node sub-pools. For each sub-pool, a free node schedule (FNS) is created to chart the number of free nodes over time. For each prioritized job, the FNS of the sub-pools having nodes usable by that job is used to determine the earliest time range (ETR) capable of running the job. Once the ETR is determined for a particular job, the job is scheduled to run in that ETR. If the ETR determined for a lower priority job (LPJ) has a start time earlier than that of a higher priority job (HPJ), then the LPJ is scheduled in that ETR only if it would not disturb the anticipated start times of any HPJ previously scheduled for a future time. Thus, efficient utilization and throughput of such computing environments may be increased by utilizing resources that would otherwise remain idle.
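The earliest-time-range search over a free node schedule can be sketched compactly if the timeline is discretized into slots. The example below is an illustrative simplification, not the patented implementation; the data structures, slot granularity, and function names are assumptions.

```python
def earliest_time_range(free_nodes, nodes_needed, duration):
    """Find the earliest start slot in a free-node schedule (FNS) for one sub-pool.

    free_nodes  : list of free-node counts per time slot
    nodes_needed: nodes the job requires
    duration    : job length in time slots
    Returns the first start index where enough nodes stay free for the whole
    duration, or None if the job does not fit within the schedule horizon."""
    for start in range(len(free_nodes) - duration + 1):
        if all(free_nodes[t] >= nodes_needed for t in range(start, start + duration)):
            return start
    return None

def reserve(free_nodes, start, nodes_needed, duration):
    """Book the job into the FNS so later (lower priority) jobs see fewer free nodes."""
    for t in range(start, start + duration):
        free_nodes[t] -= nodes_needed

# toy example: an 8-node sub-pool with an existing reservation in slots 2-3
fns = [8, 8, 2, 2, 8, 8]
start = earliest_time_range(fns, nodes_needed=4, duration=2)
if start is not None:
    reserve(fns, start, 4, 2)
print(start, fns)   # 0 [4, 4, 2, 2, 8, 8]
```

Backfilling a lower priority job then amounts to accepting its ETR only when the reservations already booked for higher priority jobs would keep their anticipated start times.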
Intelligent Mobile Autonomous System
1987-01-01
... used intermittently, and each of them characterizes the level of generalization. One cannot discern any other point within the tile; this is a ... traversability space is not fast enough to be considered for actual control application. Alternatives to limit the computation time include (1) increasing the ... error. (b) It must be concise and easy to "compute"; in other words, there must exist simple, fast procedures for instantiating the "words" or "sentences" ...
NASA Technical Reports Server (NTRS)
Omalley, T. A.
1984-01-01
The use of the coupled cavity traveling wave tube for space communications has led to an increased interest in improving the efficiency of the basic interaction process in these devices through velocity resynchronization and other methods. A flexible, three dimensional, axially symmetric, large signal computer program was developed for use on the IBM 370 time sharing system. A users' manual for this program is included.
Quantum-Theoretical Methods and Studies Relating to Properties of Materials
1989-12-19
... particularly sensitive to the behavior of the electron distribution close to the nuclei, which contributes only to E(l). Although the above results were ... other condensed phases. So it was a useful test case to test the behavior of the theoretical computations for the gas phase relative to that in the ... increasingly complicated and time-consuming electron-correlation approximations should assure a small error in the theoretically computed enthalpy for a ...
Microfocus computed tomography in medicine
NASA Astrophysics Data System (ADS)
Obodovskiy, A. V.
2018-02-01
Recent advances in the field of high-frequency power schemes for X-ray devices allow the creation of high-resolution instruments. At the Department of Electronic Devices and Equipment of the St. Petersburg State Electrotechnical University, a prototype microfocus computed tomography scanner was developed. The equipment used allows projection data to be acquired at magnifications of up to 100 times. A distinctive feature of the device is the possibility of implementing various schemes for obtaining projection data.
Study of Computational Structures for Multiobject Tracking Algorithms
1986-12-01
Allen, Thomas G.; Kurien, Thomas; Washburn, Robert B. Jr. ... possible restructurings of the tracking algorithm that increase the amount of available parallelism are investigated. This step is extremely ... sufficient for our needs here. In the following section we will examine the structure and computational requirements of the track-oriented approach ...
2002-01-01
... the fully coupled electrical and optical systems in VCSELs (Oyafuso et al. 2000) ... of carrier is assumed and the minority carriers are not separated ... the cosine function weakly depends on the phase space variables; with the increase of the time, the cosine term ... can also be applied in phase-coherent devices. Our approach is useful to study noise in a wide ... To obtain S(0) we just have to integrate ΔQ2 over the ...
A distributed, dynamic, parallel computational model: the role of noise in velocity storage
Merfeld, Daniel M.
2012-01-01
Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic “real-time” calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique–“particle filtering”–that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
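A minimal sketch of the parallel-with-noise idea described above may help make it concrete. It is purely illustrative: the one-dimensional leaky-integrator observer, gains, and noise levels below are assumptions, not the published velocity-storage model. Many noisy copies of a simple observer run in parallel, and the spread of the particles sets the filter gain.

```python
# Sketch of a particle-filter-like velocity-storage estimator (assumed toy model).
import numpy as np

rng = np.random.default_rng(0)
n_particles, dt, tau = 1000, 0.01, 16.0   # tau: assumed storage time constant (s)
steps = 2000
true_velocity = np.zeros(steps)
true_velocity[:500] = 60.0                # deg/s yaw step, then the rotation stops

particles = np.zeros(n_particles)
estimate = np.zeros(steps)
afferent_noise, process_noise = 5.0, 0.5  # assumed noise levels

for t in range(steps):
    # Each particle runs its own noisy copy of a leaky-integrator observer.
    measurement = true_velocity[t] + afferent_noise * rng.standard_normal(n_particles)
    particles += dt * (-particles / tau) + process_noise * np.sqrt(dt) * rng.standard_normal(n_particles)
    # Kalman-like gain computed from the particles' spread and the assumed sensor noise.
    gain = particles.var() / (particles.var() + afferent_noise**2)
    particles += gain * (measurement - particles)
    estimate[t] = particles.mean()

print(estimate[499], estimate[-1])        # stored velocity decays after rotation stops
```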
NASA Technical Reports Server (NTRS)
Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.
2002-01-01
The rapid increase in available computational power over the last decade has enabled higher resolution flow simulations and more widespread use of unstructured grid methods for complex geometries. While much of this effort has been focused on steady-state calculations in the aerodynamics community, the need to accurately predict off-design conditions, which may involve substantial amounts of flow separation, points to the need to efficiently simulate unsteady flow fields. Accurate unsteady flow simulations can easily require several orders of magnitude more computational effort than a corresponding steady-state simulation. For this reason, techniques for improving the efficiency of unsteady flow simulations are required in order to make such calculations feasible in the foreseeable future. The purpose of this work is to investigate possible reductions in computer time due to the choice of an efficient time-integration scheme from a series of schemes differing in the order of time-accuracy, and by the use of more efficient techniques to solve the nonlinear equations which arise while using implicit time-integration schemes. This investigation is carried out in the context of a two-dimensional unstructured mesh laminar Navier-Stokes solver.
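As a rough illustration of why implicit time integration pays off for stiff problems of the kind discussed above, the sketch below advances a toy scalar ODE (not the paper's unstructured Navier-Stokes solver) with backward Euler, solving the nonlinear update by Newton iteration; an explicit step would be stability-limited to far smaller time steps.

```python
# Toy illustration of implicit time integration on a stiff ODE du/dt = -k*u
# (k large). Assumptions: this is NOT the paper's solver, just a sketch of the
# backward-Euler + Newton idea that implicit schemes build on.
import numpy as np

k = 1000.0                      # stiffness
f = lambda u: -k * u            # right-hand side
dfdu = lambda u: -k             # Jacobian of f

def backward_euler_step(u, dt, newton_iters=5):
    """Solve u_new - u - dt*f(u_new) = 0 with Newton's method."""
    u_new = u
    for _ in range(newton_iters):
        residual = u_new - u - dt * f(u_new)
        jac = 1.0 - dt * dfdu(u_new)
        u_new -= residual / jac
    return u_new

u, dt = 1.0, 0.1                # dt far beyond the explicit stability limit (~2/k)
for _ in range(10):
    u = backward_euler_step(u, dt)
print(u)                        # decays toward zero without blowing up
```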
Acceleration for 2D time-domain elastic full waveform inversion using a single GPU card
NASA Astrophysics Data System (ADS)
Jiang, Jinpeng; Zhu, Peimin
2018-05-01
Full waveform inversion (FWI) is a challenging procedure due to the high computational cost related to the modeling, especially for the elastic case. The graphics processing unit (GPU) has become a popular device for high-performance computing (HPC). To reduce the long computation time, we design and implement a GPU-based 2D elastic FWI (EFWI) in the time domain using a single GPU card. We parallelize the forward modeling and gradient calculations using the CUDA programming language. To overcome the limitation of relatively small global memory on the GPU, the boundary saving strategy is exploited to reconstruct the forward wavefield. Moreover, the L-BFGS optimization method used in the inversion accelerates the convergence of the misfit function. A multiscale inversion strategy is performed in the workflow to obtain accurate inversion results. In our tests, the GPU-based implementations using a single GPU device achieve >15 times speedup in forward modeling, and about 12 times speedup in gradient calculation, compared with the eight-core CPU implementations optimized by OpenMP. The test results from the GPU implementations are verified to have sufficient accuracy by comparing them with the results obtained from the CPU implementations.
NASA Astrophysics Data System (ADS)
Wang, Bohan; Wang, Hsing-Wen; Guo, Hengchang; Anderson, Erik; Tang, Qinggong; Wu, Tongtong; Falola, Reuben; Smith, Tikina; Andrews, Peter M.; Chen, Yu
2017-12-01
Chronic kidney disease (CKD) is characterized by a progressive loss of renal function over time. Histopathological analysis of the condition of glomeruli and the proximal convolutional tubules over time can provide valuable insights into the progression of CKD. Optical coherence tomography (OCT) is a technology that can analyze the microscopic structures of a kidney in a nondestructive manner. Recently, we have shown that OCT can provide real-time imaging of kidney microstructures in vivo without administering exogenous contrast agents. A murine model of CKD induced by intravenous Adriamycin (ADR) injection is evaluated by OCT. OCT images of the rat kidneys have been captured every week up to eight weeks. Tubular diameter and hypertrophic tubule population of the kidneys at multiple time points after ADR injection have been evaluated through a fully automated computer-vision system. Results revealed that mean tubular diameter and hypertrophic tubule population increase with time in post-ADR injection period. The results suggest that OCT images of the kidney contain abundant information about kidney histopathology. Fully automated computer-aided diagnosis based on OCT has the potential for clinical evaluation of CKD conditions.
Performance of MCNP4A on seven computing platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendricks, J.S.; Brockhoff, R.C.
1994-12-31
The performance of seven computer platforms has been evaluated with the MCNP4A Monte Carlo radiation transport code. For the first time we report timing results using MCNP4A and its new test set and libraries. Comparisons are made on platforms not available to us in previous MCNP timing studies. By using MCNP4A and its 325-problem test set, a widely-used and readily-available physics production code is used; the timing comparison is not limited to a single "typical" problem, demonstrating the problem dependence of timing results; the results are reproducible at the more than 100 installations around the world using MCNP; comparison of performance of other computer platforms to the ones tested in this study is possible because we present raw data rather than normalized results; and a measure of the increase in performance of computer hardware and software over the past two years is possible. The computer platforms reported are the Cray-YMP 8/64, IBM RS/6000-560, Sun Sparc10, Sun Sparc2, HP/9000-735, 4 processor 100 MHz Silicon Graphics ONYX, and Gateway 2000 model 4DX2-66V PC. In 1991 a timing study of MCNP4, the predecessor to MCNP4A, was conducted using ENDF/B-V cross-section libraries, which are export protected. The new study is based upon the new MCNP 25-problem test set which utilizes internationally available data. MCNP4A, its test problems and the test data library are available from the Radiation Shielding and Information Center in Oak Ridge, Tennessee, or from the NEA Data Bank in Saclay, France. Anyone with the same workstation and compiler can get the same test problem sets, the same library files, and the same MCNP4A code from RSIC or NEA and replicate our results. And, because we report raw data, comparison of the performance of other compute platforms and compilers can be made.
Efficient coarse simulation of a growing avascular tumor
Kavousanakis, Michail E.; Liu, Ping; Boudouvis, Andreas G.; Lowengrub, John; Kevrekidis, Ioannis G.
2013-01-01
The subject of this work is the development and implementation of algorithms which accelerate the simulation of early stage tumor growth models. Among the different computational approaches used for the simulation of tumor progression, discrete stochastic models (e.g., cellular automata) have been widely used to describe processes occurring at the cell and subcell scales (e.g., cell-cell interactions and signaling processes). To describe macroscopic characteristics (e.g., morphology) of growing tumors, large numbers of interacting cells must be simulated. However, the high computational demands of stochastic models make the simulation of large-scale systems impractical. Alternatively, continuum models, which can describe behavior at the tumor scale, often rely on phenomenological assumptions in place of rigorous upscaling of microscopic models. This limits their predictive power. In this work, we circumvent the derivation of closed macroscopic equations for the growing cancer cell populations; instead, we construct, based on the so-called “equation-free” framework, a computational superstructure, which wraps around the individual-based cell-level simulator and accelerates the computations required for the study of the long-time behavior of systems involving many interacting cells. The microscopic model, e.g., a cellular automaton, which simulates the evolution of cancer cell populations, is executed for relatively short time intervals, at the end of which coarse-scale information is obtained. These coarse variables evolve on slower time scales than each individual cell in the population, enabling the application of forward projection schemes, which extrapolate their values at later times. This technique is referred to as coarse projective integration. Increasing the ratio of projection times to microscopic simulator execution times enhances the computational savings. Crucial accuracy issues arising for growing tumors with radial symmetry are addressed by applying the coarse projective integration scheme in a cotraveling (cogrowing) frame. As a proof of principle, we demonstrate that the application of this scheme yields highly accurate solutions, while preserving the computational savings of coarse projective integration. PMID:22587128
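Coarse projective integration as described above can be sketched in a few lines. The setting below is an assumed toy stand-in (a noisy logistic-growth update replaces the cellular-automaton tumor simulator, and all step sizes are invented): run the fine simulator for a short burst, estimate the coarse time derivative from that burst, project the coarse variable forward over a longer interval, and repeat.

```python
# Toy sketch of coarse projective integration (equation-free framework).
# The real work wraps a cellular-automaton tumor simulator; here a noisy
# logistic-growth update stands in for the microscopic model (an assumption).
import numpy as np

rng = np.random.default_rng(1)

def micro_step(n, dt=0.01):
    """One 'microscopic' step: noisy logistic growth of a coarse cell count."""
    return n + dt * n * (1.0 - n / 1000.0) + rng.normal(0.0, 0.1)

n = 10.0                                      # initial coarse variable (cell count)
burst_steps, dt, projection = 20, 0.01, 0.5   # projection interval >> burst duration

for _ in range(30):
    # 1) Run the microscopic simulator for a short burst.
    history = [n]
    for _ in range(burst_steps):
        history.append(micro_step(history[-1], dt))
    # 2) Estimate the coarse time derivative from the burst (least-squares slope).
    times = dt * np.arange(burst_steps + 1)
    slope = np.polyfit(times, history, 1)[0]
    # 3) Project the coarse variable forward over a much longer interval.
    n = history[-1] + projection * slope

print(n)   # reaches the carrying capacity using far fewer microscopic steps
```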
Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing
NASA Technical Reports Server (NTRS)
Kory, Carol L.
2001-01-01
The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be prohibitively expensive, as it would require manufacturing numerous amplifiers, in addition to acquiring the required digital hardware. As an alternative, the time-domain TWT interaction model developed here provides the capability to establish a computational test bench where ISI or bit error rate can be simulated as a function of TWT operating parameters and component geometries. Intermodulation products, harmonic generation, and backward waves can also be monitored with the model for similar correlations. The advancements in computational capabilities and corresponding potential improvements in TWT performance may prove to be the enabling technologies for realizing unprecedented data rates for near real time transmission of the increasingly larger volumes of data demanded by planned commercial and Government satellite communications applications. This work is in support of the Cross Enterprise Technology Development Program in Headquarters' Advanced Technology & Mission Studies Division and the Air Force Office of Scientific Research Small Business Technology Transfer programs.
Effect of Counterflow Jet on a Supersonic Reentry Capsule
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary C.
2006-01-01
Recent NASA initiatives for space exploration have reinvigorated research on Apollo-like capsule vehicles. Aerothermodynamic characteristics of these capsule configurations during reentry play a crucial role in the performance and safety of the planetary entry probes and the crew exploration vehicles. At issue are the forebody thermal shield protection and afterbody aeroheating predictions. Due to the lack of flight or wind tunnel measurements at hypersonic speed, design decisions on such vehicles would rely heavily on computational results. Validation of current computational tools against experimental measurement thus becomes one of the most important tasks for general hypersonic research. This paper is focused on time-accurate numerical computations of hypersonic flows over a set of capsule configurations, which employ a counterflow jet to offset the detached bow shock. The accompanying increased shock stand-off distance and modified heat transfer characteristics associated with the counterflow jet may provide guidance for future design of hypersonic reentry capsules. The newly emerged space-time conservation element solution element (CESE) method is used to perform time-accurate, unstructured mesh Navier-Stokes computations for all cases investigated. The results show good agreement between experimental and numerical Schlieren pictures. Surface heat flux and aerodynamic force predictions of the capsule configurations are discussed in detail.
A Brief Description of the Kokkos implementation of the SNAP potential in ExaMiniMD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Aidan P.; Trott, Christian Robert
2017-11-01
Within the EXAALT project, the SNAP [1] approach is being used to develop high accuracy potentials for use in large-scale long-time molecular dynamics simulations of materials behavior. In particular, we have developed a new SNAP potential that is suitable for describing the interplay between helium atoms and vacancies in high-temperature tungsten[2]. This model is now being used to study plasma-surface interactions in nuclear fusion reactors for energy production. The high-accuracy of SNAP potentials comes at the price of increased computational cost per atom and increased computational complexity. The increased cost is mitigated by improvements in strong scaling that can be achieved using advanced algorithms [3].
Hsu, John; Huang, Jie; Fung, Vicki; Robertson, Nan; Jimison, Holly; Frankel, Richard
2005-01-01
Objective: The aim of this study was to evaluate the impact of introducing health information technology (HIT) on physician-patient interactions during outpatient visits. Design: This was a longitudinal pre-post study: two months before and one and seven months after introduction of examination room computers. Patient questionnaires (n = 313) after primary care visits with physicians (n = 8) within an integrated delivery system. There were three patient satisfaction domains: (1) satisfaction with visit components, (2) comprehension of the visit, and (3) perceptions of the physician's use of the computer. Results: Patients reported that physicians used computers in 82.3% of visits. Compared with baseline, overall patient satisfaction with visits increased seven months after the introduction of computers (odds ratio [OR] = 1.50; 95% confidence interval [CI]: 1.01–2.22), as did satisfaction with physicians' familiarity with patients (OR = 1.60, 95% CI: 1.01–2.52), communication about medical issues (OR = 1.61; 95% CI: 1.05–2.47), and comprehension of decisions made during the visit (OR = 1.63; 95% CI: 1.06–2.50). In contrast, there were no significant changes in patient satisfaction with comprehension of self-care responsibilities, communication about psychosocial issues, or available visit time. Seven months post-introduction, patients were more likely to report that the computer helped the visit run in a more timely manner (OR = 1.76; 95% CI: 1.28–2.42) compared with the first month after introduction. There were no other significant changes in patient perceptions of the computer use over time. Conclusion: The examination room computers appeared to have positive effects on physician-patient interactions related to medical communication without significant negative effects on other areas such as time available for patient concerns. Further study is needed to better understand HIT use during outpatient visits. PMID:15802484
Rural Creativity: A Study of District Mandated Online Professional Development
ERIC Educational Resources Information Center
Johnson, Cynthia; Summerville, Jennifer
2004-01-01
According to the annual industry report in "Training" magazine, money spent on employee training dropped approximately six percent--the first time that training expenditures have dropped since the mid 1990's. At the same time, web-based training increased from 48% of all computer-based training to 61% in just one year (2002-2003). The…
Future in biomolecular computation
NASA Astrophysics Data System (ADS)
Wimmer, E.
1988-01-01
Large-scale computations for biomolecules are dominated by three levels of theory: rigorous quantum mechanical calculations for molecules with up to about 30 atoms, semi-empirical quantum mechanical calculations for systems with up to several hundred atoms, and force-field molecular dynamics studies of biomacromolecules with 10,000 atoms and more including surrounding solvent molecules. It can be anticipated that increased computational power will allow the treatment of larger systems of ever growing complexity. Due to the scaling of the computational requirements with increasing number of atoms, the force-field approaches will benefit the most from increased computational power. On the other hand, progress in methodologies such as density functional theory will enable us to treat larger systems on a fully quantum mechanical level, and a combination of molecular dynamics and quantum mechanics can be envisioned. One of the greatest challenges in biomolecular computation is the protein folding problem. It is unclear at this point whether an approach with current methodologies will lead to a satisfactory answer or if unconventional, new approaches will be necessary. In any event, due to the complexity of biomolecular systems, a hierarchy of approaches will have to be established and used in order to capture the wide ranges of length-scales and time-scales involved in biological processes. In terms of hardware development, speed and power of computers will increase while the price/performance ratio will become more and more favorable. Parallelism can be anticipated to become an integral architectural feature in a range of computers. It is unclear at this point how fast massively parallel systems will become easy enough to use so that new methodological developments can be pursued on such computers. Current trends show that distributed processing, such as the combination of convenient graphics workstations and powerful general-purpose supercomputers, will lead to a new style of computing in which the calculations are monitored and manipulated as they proceed. The combination of a numeric approach with artificial-intelligence approaches can be expected to open up entirely new possibilities. Ultimately, the most exciting aspect of the future in biomolecular computing will be the unexpected discoveries.
CAMAC throughput of a new RISC-based data acquisition computer at the DIII-D tokamak
NASA Astrophysics Data System (ADS)
Vanderlaan, J. F.; Cummings, J. W.
1993-10-01
The amount of experimental data acquired per plasma discharge at DIII-D has continued to grow. The largest shot size in May 1991 was 49 Mbyte; in May 1992, 66 Mbyte; and in April 1993, 80 Mbyte. The increasing load has prompted the installation of a new Motorola 88100-based MODCOMP computer to supplement the existing core of three older MODCOMP data acquisition CPU's. New Kinetic Systems CAMAC serial highway driver hardware runs on the 88100 VME bus. The new operating system is MODCOMP REAL/IX version of AT&T System V UNIX with real-time extensions and networking capabilities; future plans call for installation of additional computers of this type for tokamak and neutral beam control functions. Experiences with the CAMAC hardware and software will be chronicled, including observation of data throughput. The Enhanced Serial Highway crate controller is advertised as twice as fast as the previous crate controller, and computer I/O speeds are expected to also increase data rates.
2010-01-01
Background Both minimally invasive surgery (MIS) and computer-assisted surgery (CAS) for total hip arthroplasty (THA) have gained popularity in recent years. We conducted a qualitative and systematic review to assess the effectiveness of MIS, CAS and computer-assisted MIS for THA. Methods An extensive computerised literature search of PubMed, Medline, Embase and OVIDSP was conducted. Both randomised clinical trials and controlled clinical trials on the effectiveness of MIS, CAS and computer-assisted MIS for THA were included. Methodological quality was independently assessed by two reviewers. Effect estimates were calculated and a best-evidence synthesis was performed. Results Four high-quality and 14 medium-quality studies with MIS THA as study contrast, and three high-quality and four medium-quality studies with CAS THA as study contrast were included. No studies with computer-assisted MIS for THA as study contrast were identified. Strong evidence was found for a decrease in operative time and intraoperative blood loss for MIS THA, with no difference in complication rates and risk for acetabular outliers. Strong evidence exists that there is no difference in physical functioning, measured either by questionnaires or by gait analysis. Moderate evidence was found for a shorter length of hospital stay after MIS THA. Conflicting evidence was found for a positive effect of MIS THA on pain in the early postoperative period, but that effect diminished after three months postoperatively. Strong evidence was found for an increase in operative time for CAS THA, and limited evidence was found for a decrease in intraoperative blood loss. Furthermore, strong evidence was found for no difference in complication rates, as well as for a significantly lower risk for acetabular outliers. Conclusions The results indicate that MIS THA is a safe surgical procedure, without increases in operative time, blood loss, operative complication rates and component malposition rates. However, the beneficial effect of MIS THA on functional recovery has to be proven. The results also indicate that CAS THA, though resulting in an increase in operative time, may have a positive effect on operative blood loss and operative complication rates. More importantly, the use of CAS results in better positioning of acetabular component of the prosthesis. PMID:20470443
The effect of monitor raster latency on VEPs, ERPs and Brain-Computer Interface performance.
Nagel, Sebastian; Dreher, Werner; Rosenstiel, Wolfgang; Spüler, Martin
2018-02-01
Visual neuroscience experiments and Brain-Computer Interface (BCI) control often require strict timings on a millisecond scale. As most experiments are performed using a personal computer (PC), the latencies that are introduced by the setup should be taken into account and corrected. Because a standard computer monitor uses rastering to update each line of the image sequentially, it introduces a monitor raster latency that depends on the position on the monitor and the refresh rate. We technically measured the raster latencies of different monitors and present the effects on visual evoked potentials (VEPs) and event-related potentials (ERPs). Additionally, we present a method for correcting the monitor raster latency and analyzed the performance difference of a code-modulated VEP BCI speller after correcting the latency. There are currently no other methods validating the effects of monitor raster latency on VEPs and ERPs. The timings of VEPs and ERPs are directly affected by the raster latency. Furthermore, correcting the raster latency resulted in a significant reduction of the target prediction error from 7.98% to 4.61% and also in a more reliable classification of targets by significantly increasing the distance between the most probable and the second most probable target by 18.23%. The monitor raster latency affects the timings of VEPs and ERPs, and correcting it resulted in a significant error reduction of 42.23%. It is recommended to correct the raster latency for increased BCI performance and methodical correctness. Copyright © 2017 Elsevier B.V. All rights reserved.
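The position-dependent raster latency discussed above can be approximated with simple arithmetic. The sketch below uses assumed parameters and the assumption that the monitor rasters top-to-bottom at a constant rate within each frame; the paper's actual correction procedure may differ. With a 60 Hz refresh, a stimulus drawn near the bottom of the screen appears almost a full frame later than one at the top.

```python
# Sketch: estimating monitor raster latency from stimulus position (assumption:
# the monitor rasters top-to-bottom at a constant rate within each frame).
def raster_latency_ms(row, total_rows=1080, refresh_hz=60.0):
    """Delay between frame start and the moment a given row is actually drawn."""
    frame_ms = 1000.0 / refresh_hz
    return (row / total_rows) * frame_ms

for row in (0, 540, 1079):
    print(f"row {row:4d}: ~{raster_latency_ms(row):.1f} ms latency")
# Correcting ERP/VEP timestamps would subtract this per-target latency from
# the nominal stimulus-onset time used for epoching.
```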
NASA Astrophysics Data System (ADS)
Schaefer, Bastian; Goedecker, Stefan; Goedecker Group Team
Based on Lennard-Jones, silicon, sodium-chloride and gold clusters, it was found that uphill barrier energies of transition states between directly connected minima tend to increase with increasing structural differences between the two minima. Based on this insight, it also turned out that post-processing minima hopping data at negligible computational cost allows one to obtain qualitative topological information on potential energy surfaces, which can be stored in so-called qualitative connectivity databases. These qualitative connectivity databases are used for generating fingerprint disconnectivity graphs that give a first qualitative idea of the thermodynamic and kinetic properties of a system of interest. This research was supported by the NCCR MARVEL, funded by the Swiss National Science Foundation. Computer time was provided by the Swiss National Supercomputing Centre (CSCS) under Project ID No. s499.
Silva, Luiz Antonio F.; Barriviera, Mauricio; Januário, Alessandro L.; Bezerra, Ana Cristina B.; Fioravanti, Maria Clorinda S.
2011-01-01
The development of veterinary dentistry has substantially improved the ability to diagnose canine and feline dental abnormalities. Consequently, examinations previously performed only on humans are now available for small animals, thus improving diagnostic quality. This has increased the need for technical qualification of veterinary professionals and increased technological investments. This study evaluated the use of cone beam computed tomography and intraoral radiography as complementary exams for diagnosing dental abnormalities in dogs and cats. Cone beam computed tomography provided faster image acquisition with high image quality, was associated with low ionizing radiation levels, enabled image editing, and reduced the exam duration. Our results showed that radiography was an effective method for dental radiographic examination, with low cost and fast execution times, and can be performed during surgical procedures. PMID:22122905
A daily huddle facilitates patient transports from a neonatal intensive care unit
Hughes Driscoll, Colleen; El Metwally, Dina
2014-01-01
To improve hospital access for expectant women and newborns in the state of Maryland, a quality improvement team reviewed the patient flow characteristics of our neonatal intensive care unit. We identified inefficiencies in patient discharges, including delays in patient transports. Several patient transport delays were caused by late preparation and delivery of the patient transfer summary. Baseline data collection revealed that transfer summaries were prepared on-time by the resident or nurse practitioner only 41% of the time on average, while the same transfer summaries were signed on-time by the neonatologist 5% of the time on average. Our aim was to improve the rate of on-time transfer summaries to 50% over a four month time period. We performed two PDSA cycles based on feedback from our quality improvement team. In the first cycle, we instituted a daily huddle to increase opportunities for communication about patient transports. In the second cycle, we increased computer access for residents and nurse practitioners preparing the transfer summaries. The on-time summary preparation by residents/nurse practitioners improved to an average of 72% over a nine month period. The same summaries were signed on-time by a neonatologist 26% of the time on average over a nine month period. In conclusion, institution of a daily huddle combined with augmented computer resources significantly increased the percentage of on-time transfer summaries. Current data show a trend toward improved ability to accept patient referrals. Further data collection and analysis is needed to determine the impact of these interventions on access to hospital care for expectant women and newborns in our state. PMID:26734275
Computational model for behavior shaping as an adaptive health intervention strategy.
Berardi, Vincent; Carretero-González, Ricardo; Klepeis, Neil E; Ghanipoor Machiani, Sahar; Jahangiri, Arash; Bellettiere, John; Hovell, Melbourne
2018-03-01
Adaptive behavioral interventions that automatically adjust in real-time to participants' changing behavior, environmental contexts, and individual history are becoming more feasible as the use of real-time sensing technology expands. This development is expected to improve shortcomings associated with traditional behavioral interventions, such as the reliance on imprecise intervention procedures and limited/short-lived effects. However, the adaptation strategies of just-in-time adaptive interventions (JITAIs) often lack a theoretical foundation, and increasing the theoretical fidelity of a trial has been shown to increase effectiveness. This research explores the use of shaping, a well-known process from behavioral theory for engendering or maintaining a target behavior, as a JITAI adaptation strategy. A computational model of behavior dynamics and operant conditioning was modified to incorporate the construct of behavior shaping by adding the ability to vary, over time, the range of behaviors that were reinforced when emitted. Digital experiments were performed with this updated model for a range of parameters in order to identify the behavior shaping features that optimally generated target behavior. Narrowing the range of reinforced behaviors continuously in time led to better outcomes compared with a discrete narrowing of the reinforcement window. Rapid narrowing followed by more moderate decreases in window size was more effective in generating target behavior than the inverse scenario. The computational shaping model represents an effective tool for investigating JITAI adaptation strategies. Model parameters must now be translated from the digital domain to real-world experiments so that model findings can be validated.
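The continuous-narrowing idea described above can be sketched as follows. This is a highly simplified stand-in for the published computational model; the behavior distribution, learning rule, and window schedules are all assumptions. Behavior drifts toward the target faster when the reinforcement window shrinks smoothly at every step than when it narrows in one abrupt jump.

```python
# Toy sketch of behavior shaping with a narrowing reinforcement window.
# Assumptions: emitted behavior is Gaussian around a mean that moves toward
# reinforced values; this is not the published operant-conditioning model.
import numpy as np

rng = np.random.default_rng(2)
target, steps = 100.0, 500

def run(window_at):
    mean, spread = 0.0, 10.0
    for t in range(steps):
        behavior = rng.normal(mean, spread)
        window = window_at(t)                      # how far from target still counts
        if abs(behavior - target) <= window:       # reinforce behavior in the window
            mean += 0.2 * (behavior - mean)        # reinforced values pull the mean
    return mean

continuous = run(lambda t: 100.0 - 0.18 * t)            # shrink a little every step
discrete   = run(lambda t: 100.0 if t < 250 else 10.0)  # one abrupt narrowing
print(f"continuous narrowing: {continuous:.1f}, discrete narrowing: {discrete:.1f}")
```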
Childhood CT scans linked to leukemia and brain cancer later in life
Children and young adults scanned multiple times by computed tomography (CT), a commonly used diagnostic tool, have a small increased risk of leukemia and brain tumors in the decade following their first scan.
CAD Services: an Industry Standard Interface for Mechanical CAD Interoperability
NASA Technical Reports Server (NTRS)
Claus, Russell; Weitzer, Ilan
2002-01-01
Most organizations seek to design and develop new products in increasingly shorter time periods. At the same time, increased performance demands require a team-based multidisciplinary design process that may span several organizations. One approach to meet these demands is to use 'Geometry Centric' design. In this approach, design engineers team their efforts through one united representation of the design that is usually captured in a CAD system. Standards-based interfaces are critical to provide uniform, simple, distributed services that enable the 'Geometry Centric' design approach. This paper describes an industry-wide effort, under the Object Management Group's (OMG) Manufacturing Domain Task Force, to define interfaces that enable the interoperability of CAD, Computer Aided Manufacturing (CAM), and Computer Aided Engineering (CAE) tools. This critical link to enable 'Geometry Centric' design is called: Cad Services V1.0. This paper discusses the features of this standard and proposed application.
Cloud-based large-scale air traffic flow optimization
NASA Astrophysics Data System (ADS)
Cao, Yi
The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is taken to be the mode of the distribution of historical flight records, and the mode is estimated using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem so that the subproblems become solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear increase in runtime as the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreaded programming. The multithreaded version is compared with its monolithic version to show the decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
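The traversal-time estimate used by the Link Transmission Model can be sketched briefly. The example below is an assumed illustration using SciPy's Gaussian KDE with synthetic data; the dissertation's exact kernel and bandwidth choices are not specified here. The mode of the historical traversal-time distribution is taken as the route's nominal traversal time.

```python
# Sketch: estimating a flight route's traversal time as the mode of its
# historical distribution via kernel density estimation (illustrative only).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# Hypothetical historical traversal times (minutes) for one origin-destination route.
samples = np.concatenate([rng.normal(128, 6, 400),    # typical flights
                          rng.normal(155, 10, 60)])   # delayed flights

kde = gaussian_kde(samples)
grid = np.linspace(samples.min(), samples.max(), 2000)
mode_minutes = grid[np.argmax(kde(grid))]
print(f"estimated traversal time (mode): {mode_minutes:.1f} min")
```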
Nonportable computed radiography of the chest--radiologists' acceptance
NASA Astrophysics Data System (ADS)
Gennari, Rose C.; Gur, David; Miketic, Linda M.; Campbell, William L.; Oliver, James H., III; Plunkett, Michael B.
1994-04-01
Following a large ROC study to assess the diagnostic accuracy of PA chest computed radiography (CR) images displayed in a variety of formats, we asked nine experienced radiologists to subjectively assess their acceptance of and preferences for display modes in the primary diagnosis of erect PA chest images. Our results indicate that radiologists felt somewhat less comfortable interpreting CR images displayed on either laser-printed films or workstations as compared with conventional films. The use of four minified images was thought to somewhat decrease diagnostic confidence, as well as to increase the time of interpretation. The reverse mode (black bone) images increased radiologists' confidence level in the detection of soft tissue abnormalities.
A Very High Order, Adaptable MESA Implementation for Aeroacoustic Computations
NASA Technical Reports Server (NTRS)
Dydson, Roger W.; Goodrich, John W.
2000-01-01
Since computational efficiency and wave resolution scale with accuracy, the ideal would be infinitely high accuracy for problems with widely varying wavelength scales. Currently, many computational aeroacoustics methods are limited to 4th order accurate Runge-Kutta methods in time, which limits their resolution and efficiency. However, a new procedure for implementing the Modified Expansion Solution Approximation (MESA) schemes, based upon Hermitian divided differences, is presented which extends the effective accuracy of the MESA schemes to 57th order in space and time when using 128-bit floating point precision. This new approach has the advantages of reducing round-off error, being easy to program, and being more computationally efficient when compared to previous approaches. Its accuracy is limited only by the floating point hardware. The advantages of this new approach are demonstrated by solving the linearized Euler equations in an open bi-periodic domain. A 500th order MESA scheme can now be created in seconds, making these schemes ideally suited for the next generation of high performance 256-bit (double quadruple) or higher precision computers. This ease of creation makes it possible to adapt the algorithm to the mesh in time instead of its converse: this is ideal for resolving varying wavelength scales which occur in noise generation simulations. And finally, the sources of round-off error which affect the very high order methods are examined and remedies provided that effectively increase the accuracy of the MESA schemes while using current computer technology.
Multimodal airway evaluation in growing patients after rapid maxillary expansion.
Fastuca, R; Meneghel, M; Zecca, P A; Mangano, F; Antonello, M; Nucera, R; Caprioglio, A
2015-06-01
The objective of this study was to evaluate the airway volume of growing patients by combining a morphological approach, using cone beam computed tomography, with functional data obtained by polysomnography examination after rapid maxillary expansion (RME) treatment. 22 Caucasian patients (mean age 8.3±0.9 years) undergoing rapid maxillary expansion with a Haas-type expander banded on the second deciduous upper molars were enrolled for this prospective study. Cone beam computed tomography scans and polysomnography exams were collected before placing the appliance (T0) and after 12 months (T1). Image processing with airway volume computation and analyses of oxygen saturation and the apnoea/hypopnoea index were performed. Airway volume, oxygen saturation and the apnoea/hypopnoea index increased significantly over time. However, no significant correlation was seen between their increases. The rapid maxillary expansion treatment induced significant increases in the total airway volume and respiratory performance. Functional respiratory parameters should be included in studies evaluating the effects of RME treatment on respiratory performance.
NASA Astrophysics Data System (ADS)
Tsuda, Kunikazu; Tano, Shunichi; Ichino, Junko
Lowering power consumption has become a worldwide concern, and it is an increasingly important issue in computer systems, as reflected by the growing use of software-as-a-service and cloud computing, whose market has expanded since 2000. At the same time, the number of data centers that house and manage these computers has increased rapidly. Power consumption at data centers accounts for a large share of overall IT power usage and is still rising rapidly. This research focuses on air-conditioning, which accounts for the largest portion of a data center's electric power consumption, and proposes a technique to lower that consumption by using natural cool air and snow to control temperature and humidity. We verify the effectiveness of this approach by experiment. Furthermore, we examine the extent to which energy reduction is possible when a data center is located in Hokkaido.
Inai, Takuma; Takabayashi, Tomoya; Edama, Mutsuaki; Kubo, Masayoshi
2018-04-27
The association between repetitive hip moment impulse and the progression of hip osteoarthritis is a recently recognized area of study. A sit-to-stand movement is essential for daily life and requires hip extension moment. Although a change in the sit-to-stand movement time may influence the hip moment impulse in the sagittal plane, this effect has not been examined. The purpose of this study was to clarify the relationship between sit-to-stand movement time and hip moment impulse in the sagittal plane. Twenty subjects performed the sit-to-stand movement at a self-selected natural speed. The hip, knee, and ankle joint angles obtained from experimental trials were used to perform two computer simulations. In the first simulation, the actual sit-to-stand movement time obtained from the experiment was entered. In the second simulation, sit-to-stand movement times ranging from 0.5 to 4.0 s at intervals of 0.25 s were entered. Hip joint moments and hip moment impulses in the sagittal plane during sit-to-stand movements were calculated for both computer simulations. The reliability of the simulation model was confirmed, as indicated by the similarities in the hip joint moment waveforms (r = 0.99) and the hip moment impulses in the sagittal plane between the first computer simulation and the experiment. In the second computer simulation, the hip moment impulse in the sagittal plane decreased with a decrease in the sit-to-stand movement time, although the peak hip extension moment increased with a decrease in the movement time. These findings clarify the association between the sit-to-stand movement time and hip moment impulse in the sagittal plane and may contribute to the prevention of the progression of hip osteoarthritis.
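The relationship examined above, in which a shorter sit-to-stand time lowers the sagittal-plane hip moment impulse even as the peak extension moment rises, can be illustrated with a small worked example. The triangular moment profile and peak values below are assumptions for illustration, not the study's simulation output: the impulse is the time integral of the hip moment, so stretching a same-shaped profile over a longer movement time increases the integral.

```python
# Worked sketch: hip moment impulse as the time integral of the hip extension
# moment during sit-to-stand. The triangular moment profile and peak values
# are assumptions for illustration, not the study's simulation output.
import numpy as np

def hip_moment_impulse(movement_time, peak_moment):
    """Integrate an assumed triangular hip-extension-moment profile (N*m*s)."""
    t = np.linspace(0.0, movement_time, 500)
    moment = peak_moment * (1.0 - np.abs(2.0 * t / movement_time - 1.0))  # ramp up, ramp down
    return np.trapz(moment, t)

# Faster movements are assumed to need a higher peak moment, yet the shorter
# duration dominates, so the impulse still decreases.
for movement_time, peak in [(1.0, 90.0), (2.0, 70.0), (4.0, 60.0)]:
    print(f"{movement_time:.2f} s movement: impulse = {hip_moment_impulse(movement_time, peak):.1f} N*m*s")
```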
El Niño linked to increase in childhood diarrheal disease, a leading cause of premature death
NASA Astrophysics Data System (ADS)
Showstack, Randy
William Checkley recalled plotting into his computer the data for the number of hospital admissions and the time of year. "When we did the analysis and started looking at the relative increase, that's when it hit us," said Checkley, a medical student at Johns Hopkins University in Maryland.
A Comparison of Computer-Assisted Instruction and Field-Based Learning for Youth Rangeland Education
ERIC Educational Resources Information Center
Peterson, Jennifer; Launchbaugh, Karen; Pickering, Michael; Hollenhorst, Steven
2006-01-01
Field-based learning experiences are often used to increase the effectiveness of science curricula. However, time and financial limitations in public schools often hinder a teacher's ability to bring their students into the field for learning, despite increased demands to incorporate more science content into their curricula. In addition, federal…
NASA Astrophysics Data System (ADS)
Tomaro, Robert F.
1998-07-01
The present research is aimed at developing a higher-order, spatially accurate scheme for both steady and unsteady flow simulations using unstructured meshes. The resulting scheme must work on a variety of general problems to ensure the creation of a flexible, reliable and accurate aerodynamic analysis tool. To calculate the flow around complex configurations, unstructured grids and the associated flow solvers have been developed. Efficient simulations require the minimum use of computer memory and computational time. Unstructured flow solvers typically require more computer memory than a structured flow solver due to the indirect addressing of the cells. The approach taken in the present research was to modify an existing three-dimensional unstructured flow solver, first to decrease the computational time required for a solution and then to increase the spatial accuracy. The terms required to simulate flow involving non-stationary grids were also implemented. First, an implicit solution algorithm was implemented to replace the existing explicit procedure. Several test cases, including internal and external, inviscid and viscous, two-dimensional, three-dimensional and axisymmetric problems, were simulated for comparison between the explicit and implicit solution procedures. The increased efficiency and robustness of the modified code due to the implicit algorithm were demonstrated. Two unsteady test cases, a plunging airfoil and a wing undergoing bending and torsion, were simulated using the implicit algorithm modified to include the terms required for a moving and/or deforming grid. Secondly, a higher than second-order spatially accurate scheme was developed and implemented into the baseline code. Third- and fourth-order spatially accurate schemes were implemented and tested. The original dissipation was modified to include higher-order terms and was adjusted near shock waves to limit pre- and post-shock oscillations. The unsteady cases were repeated using the higher-order spatially accurate code. The new solutions were compared with those obtained using the second-order spatially accurate scheme. Finally, the increased efficiency of using an implicit solution algorithm in a production Computational Fluid Dynamics flow solver was demonstrated for steady and unsteady flows. A third- and fourth-order spatially accurate scheme has been implemented, creating a basis for a state-of-the-art aerodynamic analysis tool.
Parallel, distributed and GPU computing technologies in single-particle electron microscopy
Schmeisser, Martin; Heisen, Burkhard C.; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger
2009-01-01
Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today’s technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined. PMID:19564686
Data Mining and Knowledge Discover - IBM Cognitive Alternatives for NASA KSC
NASA Technical Reports Server (NTRS)
Velez, Victor Hugo
2016-01-01
Cognitive computing tools capable of transforming industries have been found favorable and profitable for different Directorates at NASA KSC. This study shows how cognitive computing systems can be useful for NASA when computers are trained, much as humans are, to gain knowledge over time. Building knowledge through senses, learning and the accumulation of events is how the applications created by IBM power the artificial intelligence in a cognitive computing system. NASA has explored and applied artificial intelligence, and specifically cognitive computing, in a few projects over recent decades, adopting models similar to those proposed by IBM Watson. However, the use of semantic technologies by IBM's dedicated business unit allows these cognitive computing applications to outperform in-house tools and to deliver analyses that facilitate decision making for managers and leads in a management information system.
A computer vision for animal ecology.
Weinstein, Ben G
2018-05-01
A central goal of animal ecology is to observe species in the natural world. The cost and challenge of data collection often limit the breadth and scope of ecological study. Ecologists often use image capture to bolster data collection in time and space. However, the ability to process these images remains a bottleneck. Computer vision can greatly increase the efficiency, repeatability and accuracy of image review. Computer vision uses image features, such as colour, shape and texture to infer image content. I provide a brief primer on ecological computer vision to outline its goals, tools and applications to animal ecology. I reviewed 187 existing applications of computer vision and divided articles into ecological description, counting and identity tasks. I discuss recommendations for enhancing the collaboration between ecologists and computer scientists and highlight areas for future growth of automated image analysis. © 2017 The Author. Journal of Animal Ecology © 2017 British Ecological Society.
[Computer game addiction: a psychopathological symptom complex in adolescence].
Wölfling, Klaus; Thalemann, Ralf; Grüsser-Sinopoli, Sabine M
2008-07-01
Cases of excessive computer gaming are increasingly reported by practitioners in the psychiatric field. Since there is no standardized definition of this symptom complex, the aim of this study is to assess excessive computer gaming in German adolescents as an addictive disorder and its potential negative consequences. Psychopathological computer gaming behavior was diagnosed by applying the adapted diagnostic criteria of substance-related addictions as defined by the ICD-10. At the same time, demographic variables, state of clinical anxiety and underlying cognitive mechanisms were analyzed. 6.3% of the 221 participating pupils - mostly boys with a low educational background - fulfilled the diagnostic criteria of a behavioral addiction. Clinically diagnosed adolescents exhibited limited cognitive flexibility and were identified as utilizing computer gaming as a mood management strategy. These results can be interpreted as a first hint towards a prevalence estimate of psychopathological computer gaming in German adolescents.
GPU-accelerated Modeling and Element-free Reverse-time Migration with Gauss Points Partition
NASA Astrophysics Data System (ADS)
Zhen, Z.; Jia, X.
2014-12-01
Element-free method (EFM) has been applied to seismic modeling and migration. Compared with the finite element method (FEM) and the finite difference method (FDM), it is much cheaper and more flexible because only the information of the nodes and the boundary of the study area are required in the computation. In the EFM, the number of Gauss points should be consistent with the number of model nodes; otherwise the accuracy of the intermediate coefficient matrices would be harmed. Thus when we increase the nodes of the velocity model in order to obtain higher resolution, the size of the computer's memory becomes a bottleneck. The original EFM can deal with at most 81×81 nodes in the case of 2G memory, as tested by Jia and Hu (2006). In order to solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition (GPP), and utilize GPUs to improve the computation efficiency. Considering the characteristics of the Gauss points, the GPP method does not influence the propagation of the seismic wave in the velocity model. To overcome the time-consuming computation of the stiffness matrix (K) and the mass matrix (M), we also use GPUs in our computation program. We employ the compressed sparse row (CSR) format to compress the intermediate sparse matrices and simplify the operations by solving the linear equations with the CULA Sparse Conjugate Gradient (CG) solver instead of the linear sparse solver 'PARDISO'. It is observed that our strategy can significantly reduce the computational time of K and M compared with the algorithm based on the CPU. The model tested is the Marmousi model. The length of the model is 7425 m and the depth is 2990 m. We discretize the model with 595×298 nodes, 300×300 Gauss cells and 3×3 Gauss points in each cell. In contrast to the computational time of the conventional EFM, the GPUs-GPP approach can substantially improve the efficiency. The speedup ratio for computing K and M is 120, and the speedup ratio for RTM is 11.5. At the same time, the accuracy of imaging is not harmed. Another advantage of the GPUs-GPP method is its easy application in other numerical methods such as the FEM. Finally, in the GPUs-GPP method, the arrays require quite limited memory storage, which makes the method promising for dealing with large-scale 3D problems.
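The sparse-storage and iterative-solver choices mentioned above (CSR compression plus a conjugate-gradient solve instead of a direct sparse factorization) can be illustrated with a small CPU-side sketch using SciPy. This is only an analogue of the GPU/CULA workflow described, with an invented toy matrix standing in for the stiffness matrix.

```python
# Sketch: storing a sparse system matrix in CSR format and solving it with
# conjugate gradients, as a CPU analogue of the CULA Sparse CG workflow above.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 1000
# Toy symmetric positive-definite matrix (1D Laplacian-like stencil) in CSR format.
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)                    # info == 0 means the solver converged
print(info, np.linalg.norm(A @ x - b))
```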
Direct Measurements of Smartphone Screen-Time: Relationships with Demographics and Sleep.
Christensen, Matthew A; Bettencourt, Laura; Kaye, Leanne; Moturu, Sai T; Nguyen, Kaylin T; Olgin, Jeffrey E; Pletcher, Mark J; Marcus, Gregory M
2016-01-01
Smartphones are increasingly integrated into everyday life, but frequency of use has not yet been objectively measured and compared to demographics, health information, and in particular, sleep quality. The aim of this study was to characterize smartphone use by measuring screen-time directly, determine factors that are associated with increased screen-time, and to test the hypothesis that increased screen-time is associated with poor sleep. We performed a cross-sectional analysis in a subset of 653 participants enrolled in the Health eHeart Study, an internet-based longitudinal cohort study open to any interested adult (≥ 18 years). Smartphone screen-time (the number of minutes in each hour the screen was on) was measured continuously via smartphone application. For each participant, total and average screen-time were computed over 30-day windows. Average screen-time specifically during self-reported bedtime hours and sleeping period was also computed. Demographics, medical information, and sleep habits (Pittsburgh Sleep Quality Index-PSQI) were obtained by survey. Linear regression was used to obtain effect estimates. Total screen-time over 30 days was a median 38.4 hours (IQR 21.4 to 61.3) and average screen-time over 30 days was a median 3.7 minutes per hour (IQR 2.2 to 5.5). Younger age, self-reported race/ethnicity of Black and "Other" were associated with longer average screen-time after adjustment for potential confounders. Longer average screen-time was associated with shorter sleep duration and worse sleep-efficiency. Longer average screen-times during bedtime and the sleeping period were associated with poor sleep quality, decreased sleep efficiency, and longer sleep onset latency. These findings on actual smartphone screen-time build upon prior work based on self-report and confirm that adults spend a substantial amount of time using their smartphones. Screen-time differs across age and race, but is similar across socio-economic strata suggesting that cultural factors may drive smartphone use. Screen-time is associated with poor sleep. These findings cannot support conclusions on causation. Effect-cause remains a possibility: poor sleep may lead to increased screen-time. However, exposure to smartphone screens, particularly around bedtime, may negatively impact sleep.
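The screen-time summaries described above reduce to simple aggregation over per-hour records. The sketch below shows one assumed way to compute the 30-day total and the average minutes-per-hour figure from such data; the data layout is invented for illustration and is not the Health eHeart application's actual format.

```python
# Sketch: computing total and average smartphone screen-time over a 30-day
# window from per-hour 'minutes the screen was on' records (assumed data).
import numpy as np

rng = np.random.default_rng(4)
minutes_on = rng.integers(0, 12, size=30 * 24)   # hypothetical per-hour usage, 30 days

total_hours = minutes_on.sum() / 60.0            # total screen-time over the window
avg_min_per_hour = minutes_on.mean()             # average screen-time (min per hour)
print(f"30-day total: {total_hours:.1f} h, average: {avg_min_per_hour:.1f} min/h")
```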
Advanced Computation in Plasma Physics
NASA Astrophysics Data System (ADS)
Tang, William
2001-10-01
Scientific simulation, in tandem with theory and experiment, is an essential tool for understanding complex plasma behavior. This talk will review recent progress and future directions for advanced simulations in magnetically confined plasmas, with illustrative examples chosen from areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales, together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPPs). A good example is the effective use of the full power of multi-teraflop MPPs to produce three-dimensional, general-geometry, nonlinear particle simulations that have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically used billions of particles for tens of thousands of time-steps, would not have been possible without access to powerful present-generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement that increasingly realistic dynamics can be included to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.
The interplay of attention economics and computer-aided detection marks in screening mammography
NASA Astrophysics Data System (ADS)
Schwartz, Tayler M.; Sridharan, Radhika; Wei, Wei; Lukyanchenko, Olga; Geiser, William; Whitman, Gary J.; Haygood, Tamara Miner
2016-03-01
Introduction: According to attention economists, overabundant information leads to decreased attention for individual pieces of information. Computer-aided detection (CAD) alerts radiologists to findings potentially associated with breast cancer but is notorious for creating an abundance of false-positive marks. We suspected that increased CAD marks do not lengthen mammogram interpretation time, as radiologists will selectively disregard these marks when present in larger numbers. We explore the relevance of attention economics in mammography by examining how the number of CAD marks affects interpretation time. Methods: We performed a retrospective review of bilateral digital screening mammograms obtained between January 1, 2011 and February 28, 2014, using only weekend interpretations to decrease distractions and the likelihood of trainee participation. We stratified data according to reader and used ANOVA to assess the relationship between the number of CAD marks and interpretation time. Results: Ten radiologists, with a median of 12.5 years of experience after residency (range, 6 to 24), interpreted 1849 mammograms. When accounting for the number of images, Breast Imaging Reporting and Data System category, and breast density, an increasing number of CAD marks was correlated with longer interpretation time only for the three radiologists with the fewest years of experience (median, 7 years). Conclusion: For the 7 most experienced readers, increasing CAD marks did not lengthen interpretation time. We surmise that as CAD marks increase, the attention given to individual marks decreases. Experienced radiologists may rapidly dismiss larger numbers of CAD marks as false-positive, having learned that devoting extra attention to such marks does not improve clinical detection.
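A hedged sketch of the kind of adjusted analysis described, with statsmodels assumed and all variable names and data hypothetical: interpretation time modeled on the number of CAD marks while accounting for the number of images, BI-RADS category, and breast density.

    # Illustrative adjusted model (variable names and data are hypothetical).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "interp_seconds": [55, 80, 62, 95, 70, 110, 66, 88, 74, 101],
        "cad_marks":      [0, 4, 1, 6, 2, 8, 1, 5, 3, 7],
        "n_images":       [4, 6, 4, 8, 4, 8, 4, 6, 6, 8],
        "birads":         ["1", "2", "1", "0", "2", "0", "1", "2", "1", "0"],
        "density":        ["b", "c", "b", "c", "a", "c", "a", "b", "b", "c"],
    })

    # OLS with categorical covariates; an ANOVA table can be derived from the fitted model.
    model = smf.ols("interp_seconds ~ cad_marks + n_images + C(birads) + C(density)",
                    data=df).fit()
    print(model.params["cad_marks"])   # estimated effect of one extra CAD mark on time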
Cortical Specializations Underlying Fast Computations
Volgushev, Maxim
2016-01-01
The time course of behaviorally relevant environmental events sets temporal constraints on neuronal processing. How does the mammalian brain make use of the increasingly complex networks of the neocortex while making decisions and executing behavioral reactions within a reasonable time? The key parameter determining the speed of computations in neuronal networks is the time interval that neuronal ensembles need to process changes at their input and communicate the results of this processing to downstream neurons. Theoretical analysis identified basic requirements for fast processing: use of neuronal populations for encoding, background activity, and fast onset dynamics of action potentials in neurons. Experimental evidence shows that populations of neocortical neurons fulfil these requirements. Indeed, they can change firing rate in response to input perturbations very quickly, within 1 to 3 ms, and encode high-frequency components of the input by phase-locking their spiking to frequencies up to 300 to 1000 Hz. This implies that the time unit of computation by cortical ensembles is only a few milliseconds, 1 to 3 ms, which is considerably shorter than the membrane time constant of individual neurons. The ability of cortical neuronal ensembles to communicate on a millisecond time scale allows for complex, multiple-step processing and precise coordination of neuronal activity in parallel processing streams, while keeping the speed of behavioral reactions within environmentally set temporal constraints. PMID:25689988
NASA Astrophysics Data System (ADS)
Bouchet, L.; Amestoy, P.; Buttari, A.; Rouet, F.-H.; Chauvin, M.
2013-02-01
Nowadays, analyzing and reducing the ever larger astronomical datasets is becoming a crucial challenge, especially for long cumulated observation times. The INTEGRAL/SPI X/γ-ray spectrometer is an instrument for which it is essential to process many exposures at the same time in order to increase the low signal-to-noise ratio of the weakest sources. In this context, the conventional methods for data reduction are inefficient and sometimes not feasible at all. Processing several years of data simultaneously requires computing not only the solution of a large system of equations, but also the associated uncertainties. We aim at reducing the computation time and the memory usage. Since the SPI transfer function is sparse, we have used some popular methods for the solution of large sparse linear systems; we briefly review these methods. We use the Multifrontal Massively Parallel Solver (MUMPS) to compute the solution of the system of equations. We also need to compute the variance of the solution, which amounts to computing selected entries of the inverse of the sparse matrix corresponding to our linear system. This can be achieved through one of the latest features of the MUMPS software that has been partly motivated by this work. In this paper we provide a brief presentation of this feature and evaluate its effectiveness on astrophysical problems requiring the processing of large datasets simultaneously, such as the study of the entire emission of the Galaxy. We used these algorithms to solve the large sparse systems arising from SPI data processing and to obtain both their solutions and the associated variances. In conclusion, thanks to these newly developed tools, processing large datasets arising from SPI is now feasible with both a reasonable execution time and a low memory usage.
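A small CPU-side sketch of the two steps described above, with SciPy's sparse LU factorization standing in for MUMPS: solve the sparse system, then recover the diagonal entries of the inverse (the variances). MUMPS returns such selected inverse entries directly; the column-by-column loop below only mimics that feature for illustration.

    # Sketch: solve a sparse system and extract diagonal entries of its inverse
    # (the solution variances). SciPy's sparse LU stands in for MUMPS here.
    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import splu

    A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                             [1.0, 3.0, 1.0],
                             [0.0, 1.0, 2.0]]))
    b = np.array([1.0, 2.0, 0.5])

    lu = splu(A)        # factorize once
    x = lu.solve(b)     # solution of the linear system

    n = A.shape[0]
    variances = np.empty(n)
    for j in range(n):  # j-th column of A^{-1}; keep only its diagonal entry
        e = np.zeros(n); e[j] = 1.0
        variances[j] = lu.solve(e)[j]
    print(x, variances)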
Real-time orthorectification by FPGA-based hardware acceleration
NASA Astrophysics Data System (ADS)
Kuo, David; Gordon, Don
2010-10-01
Orthorectification, which corrects the perspective distortion of remote sensing imagery and provides accurate geolocation and ease of correlation to other images, is a valuable first step in image processing for information extraction. However, the large amount of metadata and the floating-point matrix transformations required to operate on each pixel make this a computation- and I/O (input/output)-intensive process. As a result, much imagery is either left unprocessed or loses time-sensitive value during the long processing cycle. The computation for each pixel can be reduced substantially by reusing computational results from neighboring pixels and accelerated by one to two orders of magnitude with a special pipelined hardware architecture. A specialized coprocessor, implemented inside an FPGA (Field Programmable Gate Array) chip and surrounded by vendor-supported hardware IP (Intellectual Property), shares the computation workload with the CPU through a PCI-Express interface. The ultimate speed of one pixel per clock (125 MHz) is achieved by a pipelined systolic-array architecture. The optimal partition between software and hardware, the timing profile among image I/O and computation, and the highly automated GUI (Graphical User Interface) that fully exploits this speed increase to maximize overall image production throughput will also be discussed. The software that runs on a workstation with the acceleration hardware orthorectifies 16 megapixels per second, 16 times faster than without the hardware, reducing production time from months to days. A real-life success story of an imaging satellite company that adopted such workstations for their orthorectified imagery production will be presented. The potential candidates among image processing computations that could be accelerated more efficiently by the same approach will also be analyzed.
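One way to picture the reuse of neighboring-pixel results is to evaluate the exact (expensive) mapping only at coarse anchor points and interpolate in between; the rough sketch below is an assumption about the general idea, not the FPGA systolic-array design itself.

    # Rough sketch of reusing neighboring-pixel results: evaluate an expensive per-pixel
    # mapping only at coarse anchor points and bilinearly interpolate in between.
    # Assumption-level illustration only, not the FPGA design described above.
    import numpy as np

    def exact_mapping(r, c):
        # Stand-in for the full per-pixel sensor-model transform (expensive in practice).
        return np.array([1.001 * c + 0.0003 * r, 0.999 * r - 0.0002 * c])

    H, W, step = 64, 64, 16
    anchors = np.zeros((H // step + 1, W // step + 1, 2))
    for i in range(anchors.shape[0]):
        for j in range(anchors.shape[1]):
            anchors[i, j] = exact_mapping(i * step, j * step)

    def interpolated_mapping(r, c):
        i, j = r // step, c // step
        fr, fc = (r % step) / step, (c % step) / step
        top = (1 - fc) * anchors[i, j] + fc * anchors[i, j + 1]
        bot = (1 - fc) * anchors[i + 1, j] + fc * anchors[i + 1, j + 1]
        return (1 - fr) * top + fr * bot

    print(exact_mapping(10, 10), interpolated_mapping(10, 10))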
Enhanced Contact Graph Routing (ECGR) MACHETE Simulation Model
NASA Technical Reports Server (NTRS)
Segui, John S.; Jennings, Esther H.; Clare, Loren P.
2013-01-01
Contact Graph Routing (CGR) for Delay/Disruption Tolerant Networking (DTN) space-based networks makes use of the predictable nature of node contacts to make real-time routing decisions given unpredictable traffic patterns. The contact graph will have been disseminated to all nodes before the start of route computation. CGR was designed for space-based networking environments where future contact plans are known or are independently computable (e.g., using known orbital dynamics). For each data item (known as a bundle in DTN), a node independently performs route selection by examining possible paths to the destination. Route computation could conceivably run thousands of times a second, so computational load is important. This work describes the simulation software model of Enhanced Contact Graph Routing (ECGR) for the DTN Bundle Protocol in JPL's MACHETE simulation tool. The simulation model was used for performance analysis of CGR, led to several performance enhancements, and demonstrated the improvements of ECGR over CGR as well as over other routing methods in space network scenarios. ECGR moved to using earliest arrival time because it is a global, monotonically increasing metric that guarantees the safety properties needed for the solution's correctness, since route re-computation occurs at each node to accommodate unpredicted changes (e.g., in traffic pattern or link quality). Furthermore, using earliest arrival time enabled the use of the standard Dijkstra algorithm, whose computational cost is well known to be low, for path selection. These enhancements have been integrated into the open-source CGR implementation. The ECGR model is also useful for route-metric experimentation and comparisons with other DTN routing protocols, particularly when combined with MACHETE's space networking models and the Delay Tolerant Link State Routing (DTLSR) model.
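A compact sketch of route selection by earliest arrival time over a contact plan, using a Dijkstra-style search; the contact data and node names are illustrative, and this is not the flight or open-source implementation.

    # Sketch: earliest-arrival-time routing over a contact plan with a Dijkstra-style search.
    # Each contact is (neighbor, start_time, end_time, one_way_light_time).
    import heapq

    contacts = {
        "A": [("B", 10.0, 20.0, 1.0), ("C", 5.0, 8.0, 2.0)],
        "B": [("D", 25.0, 40.0, 1.5)],
        "C": [("D", 50.0, 60.0, 2.0)],
        "D": [],
    }

    def earliest_arrival(source, dest, t0):
        best = {source: t0}
        heap = [(t0, source)]
        while heap:
            t, node = heapq.heappop(heap)
            if node == dest:
                return t
            if t > best.get(node, float("inf")):
                continue
            for nbr, start, end, owlt in contacts[node]:
                depart = max(t, start)          # wait for the contact to open
                if depart > end:                # contact already closed
                    continue
                arrive = depart + owlt          # arrival time is monotonically non-decreasing
                if arrive < best.get(nbr, float("inf")):
                    best[nbr] = arrive
                    heapq.heappush(heap, (arrive, nbr))
        return None

    print(earliest_arrival("A", "D", 0.0))      # 26.5, routed via B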
Computationally efficient control allocation
NASA Technical Reports Server (NTRS)
Durham, Wayne (Inventor)
2001-01-01
A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
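For reference, the baseline allocators mentioned above can be sketched as follows: a minimum-norm pseudoinverse solution with naive clipping to effector limits (a cascaded generalized inverse would then re-allocate the saturated portion). The numbers are illustrative, and this is not the patented facet-based method.

    # Sketch of a baseline pseudoinverse control allocation: find effector commands u with
    # B u = d (desired moments) at minimum norm, then clip to the effector limits.
    # This illustrates the baselines the text compares against, not the patented method.
    import numpy as np

    B = np.array([[1.0, 0.5, 0.0, -0.5],   # control effectiveness (moments x effectors)
                  [0.0, 1.0, 1.0,  0.0],
                  [0.2, 0.0, 0.5,  1.0]])
    d = np.array([0.8, 0.4, 0.3])           # desired moment vector
    u_min, u_max = -0.5, 0.5                # effector position limits

    u = np.linalg.pinv(B) @ d               # minimum-norm (pseudoinverse) solution
    u_clipped = np.clip(u, u_min, u_max)    # naive limiting; a cascaded generalized inverse
                                            # would re-allocate the moment lost to saturation
    print(u, u_clipped, B @ u_clipped)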
Algorithmic Complexity. Volume II.
1982-06-01
digital computers, this improvement will go unnoticed if only a few complex products are to be taken; however, it can become increasingly important as... computed in the reverse order. If the products are formed moving from the top of the tree downward, and then the divisions are performed going from the... the reverse order, going up the tree. (r = a mod m means that r is the remainder when a is divided by m.) The overall running time of the algorithm is...
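The fragment above appears to describe a product-and-remainder tree for computing many modular remainders at once; assuming that is the algorithm in question, a minimal recursive sketch of the standard technique follows (the traversal order differs from the tree walk described in the excerpt).

    # Sketch of a remainder tree: compute a mod m_i for many moduli by forming products
    # of the moduli and reducing a down the tree. Standard technique; the excerpt above
    # is fragmentary, so the correspondence is an assumption.
    def remainder_tree(a, moduli):
        if len(moduli) == 1:
            return [a % moduli[0]]
        mid = len(moduli) // 2
        left, right = moduli[:mid], moduli[mid:]
        prod_left = 1
        for m in left:
            prod_left *= m
        prod_right = 1
        for m in right:
            prod_right *= m
        # Reduce once by each half-product, then recurse; each recursion works with
        # a number no larger than the product of its own moduli.
        return remainder_tree(a % prod_left, left) + remainder_tree(a % prod_right, right)

    print(remainder_tree(123456789, [7, 11, 13, 17]))   # same as [123456789 % m for m in ...]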
NASA Technical Reports Server (NTRS)
Omalley, T. A.; Connolly, D. J.
1977-01-01
The use of the coupled-cavity traveling-wave tube for space communications has led to increased interest in improving the efficiency of the basic interaction process in these devices through velocity resynchronization and other methods. To analyze these methods, a flexible, large-signal computer program for use on the IBM 360/67 time-sharing system has been developed. The present report is a users' manual for this program.
NASA Astrophysics Data System (ADS)
Chen, Xinjia; Lacy, Fred; Carriere, Patrick
2015-05-01
Sequential test algorithms are playing an increasingly important role in the quick detection of network intrusions such as port scans. In view of the fact that such algorithms are usually analyzed based on intuitive approximations or asymptotic analysis, we develop an exact computational method for the performance analysis of such algorithms. Our method can be used to calculate the probability of false alarm and the average detection time to arbitrary pre-specified accuracy.
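The abstract does not name the specific test, but sequential portscan detectors of this kind are commonly built on a sequential probability ratio test over connection outcomes; the sketch below assumes that formulation and uses illustrative parameters.

    # Minimal sequential probability ratio test (SPRT) sketch of the kind commonly used
    # for portscan detection: each failed/successful connection updates a log-likelihood
    # ratio until a "scanner" or "benign" threshold is crossed. Parameters are illustrative.
    import math

    p_fail_benign, p_fail_scanner = 0.2, 0.8   # assumed per-connection failure rates
    alpha, beta = 0.01, 0.01                   # target false-alarm / miss probabilities
    upper = math.log((1 - beta) / alpha)       # declare "scanner" above this
    lower = math.log(beta / (1 - alpha))       # declare "benign" below this

    def classify(outcomes):
        llr = 0.0
        for n, failed in enumerate(outcomes, start=1):
            p1 = p_fail_scanner if failed else 1 - p_fail_scanner
            p0 = p_fail_benign if failed else 1 - p_fail_benign
            llr += math.log(p1 / p0)
            if llr >= upper:
                return "scanner", n            # decision and number of observations used
            if llr <= lower:
                return "benign", n
        return "undecided", len(outcomes)

    print(classify([True, True, True, True]))  # mostly failed connections -> scanner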
Roper, Ian P E; Besley, Nicholas A
2016-03-21
The simulation of X-ray emission spectra of transition metal complexes with time-dependent density functional theory (TDDFT) is investigated. X-ray emission spectra can be computed within TDDFT in conjunction with the Tamm-Dancoff approximation by using a reference determinant with a vacancy in the relevant core orbital, and these calculations can be performed using the frozen orbital approximation or with the relaxation of the orbitals of the intermediate core-ionised state included. Both standard exchange-correlation functionals and functionals specifically designed for X-ray emission spectroscopy are studied, and it is shown that the computed spectral band profiles are sensitive to the exchange-correlation functional used. The computed intensities of the spectral bands can be rationalised by considering the metal p orbital character of the valence molecular orbitals. To compute X-ray emission spectra with the correct energy scale allowing a direct comparison with experiment requires the relaxation of the core-ionised state to be included and the use of specifically designed functionals with increased amounts of Hartree-Fock exchange in conjunction with high quality basis sets. A range-corrected functional with increased Hartree-Fock exchange in the short range provides transition energies close to experiment and spectral band profiles that have a similar accuracy to those from standard functionals.
Research on rolling element bearing fault diagnosis based on genetic algorithm matching pursuit
NASA Astrophysics Data System (ADS)
Rong, R. W.; Ming, T. F.
2017-12-01
In order to solve the problem of slow computation speed, the matching pursuit algorithm is applied to rolling bearing fault diagnosis, with improvements in two aspects: the construction of the dictionary and the way atoms are searched for. Specifically, the Gabor function, which captures time-frequency localization characteristics well, is used to construct the dictionary, and a genetic algorithm is used to improve the search speed. A time-frequency analysis method based on genetic algorithm matching pursuit (GAMP) is proposed, and the way to set the parameters to improve the decomposition results is studied. Simulation and experimental results illustrate that the weak fault features of rolling bearings can be extracted effectively by the proposed method while the computation speed increases markedly.
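A plain matching pursuit sketch over a small Gabor dictionary is shown below; the inner atom search is exhaustive here, whereas the GAMP method replaces it with a genetic algorithm, so this only illustrates the decomposition the paper starts from.

    # Matching pursuit over a small Gabor dictionary. The inner argmax below is an
    # exhaustive search; GAMP replaces it with a genetic-algorithm search over the
    # atom parameters to speed things up.
    import numpy as np

    def gabor_atom(n, center, width, freq, phase):
        t = np.arange(n)
        g = np.exp(-((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t + phase)
        return g / np.linalg.norm(g)

    n = 256
    dictionary = [gabor_atom(n, c, w, f, 0.0)
                  for c in range(0, n, 16)
                  for w in (4.0, 8.0, 16.0)
                  for f in (0.05, 0.1, 0.2)]

    signal = 2.0 * gabor_atom(n, 96, 8.0, 0.1, 0.0) + 0.05 * np.random.randn(n)
    residual, approx = signal.copy(), np.zeros(n)
    for _ in range(5):                             # a few pursuit iterations
        coeffs = [residual @ atom for atom in dictionary]
        k = int(np.argmax(np.abs(coeffs)))         # best-matching atom
        approx += coeffs[k] * dictionary[k]
        residual -= coeffs[k] * dictionary[k]
    print(np.linalg.norm(residual) / np.linalg.norm(signal))   # remaining relative energy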
NASA Technical Reports Server (NTRS)
Ray, R. J.; Hicks, J. W.; Alexander, R. I.
1988-01-01
The X-29A advanced technology demonstrator has shown the practicality and advantages of the capability to compute and display, in real time, aeroperformance flight results. This capability includes the calculation of the in-flight measured drag polar, lift curve, and aircraft specific excess power. From these elements many other types of aeroperformance measurements can be computed and analyzed. The technique can be used to give an immediate postmaneuver assessment of data quality and maneuver technique, thus increasing the productivity of a flight program. A key element of this new method was the concurrent development of a real-time in-flight net thrust algorithm, based on the simplified gross thrust method. This net thrust algorithm allows for the direct calculation of total aircraft drag.
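The quantities named above follow from standard performance relations; the sketch below uses textbook formulas and made-up numbers, not the X-29A flight algorithm: drag follows from excess thrust, lift from the normal load factor, and specific excess power from Ps = (T - D)V/W.

    # Sketch using standard flight-performance relations (not the X-29A flight code):
    # with net thrust T from a thrust algorithm and measured longitudinal acceleration,
    # drag follows from excess thrust, and specific excess power is Ps = (T - D) * V / W.
    g = 9.81                 # m/s^2
    W = 80000.0              # aircraft weight, N (illustrative)
    m = W / g                # mass, kg
    V = 180.0                # true airspeed, m/s
    T = 30000.0              # in-flight net thrust, N
    ax = 0.8                 # measured flight-path acceleration, m/s^2
    nz = 1.0                 # normal load factor, g

    D = T - m * ax           # drag from excess thrust (simplified, level-flight terms only)
    L = nz * W               # lift from normal load factor
    Ps = (T - D) * V / W     # specific excess power, m/s
    print(D, L, Ps, L / D)   # one point of the drag polar / lift curve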
Factors influencing the latency of simple reaction time
Woods, David L.; Wyma, John M.; Yund, E. William; Herron, Timothy J.; Reed, Bruce
2015-01-01
Simple reaction time (SRT), the minimal time needed to respond to a stimulus, is a basic measure of processing speed. SRTs were first measured by Francis Galton in the 19th century, who reported visual SRT latencies below 190 ms in young subjects. However, recent large-scale studies have reported substantially increased SRT latencies that differ markedly across laboratories, in part due to timing delays introduced by the computer hardware and software used for SRT measurement. We developed a calibrated and temporally precise SRT test to analyze the factors that influence SRT latencies in a paradigm where visual stimuli were presented to the left or right hemifield at varying stimulus onset asynchronies (SOAs). Experiment 1 examined a community sample of 1469 subjects ranging in age from 18 to 65. Mean SRT latencies were short (231 ms, or 213 ms when corrected for hardware delays) and increased significantly with age (0.55 ms/year), but were unaffected by sex or education. As in previous studies, SRTs were prolonged at shorter SOAs and were slightly faster for stimuli presented in the visual field contralateral to the responding hand. Stimulus detection time (SDT) was estimated by subtracting movement initiation time, measured in a speeded finger-tapping test, from SRTs. SDT latencies averaged 131 ms and were unaffected by age. Experiment 2 tested 189 subjects ranging in age from 18 to 82 years in a different laboratory, using a larger range of SOAs. Both SRTs and SDTs were slightly prolonged (by 7 ms). SRT latencies increased with age while SDT latencies remained stable. Precise computer-based measurements of SRT latencies show that processing speed is as fast in contemporary populations as in the Victorian era, and that age-related increases in SRT latencies are due primarily to slowed motor output. PMID:25859198
A computer architecture for intelligent machines
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Saridis, G. N.
1992-01-01
The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.
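A toy sketch of the ordinary Petri net firing rule that underlies such coordination-level models is given below; the place and transition names are hypothetical, and this is not the Petri net transducer software itself.

    # Toy Petri net: places hold tokens, and a transition fires when every input place
    # has a token, consuming from inputs and producing into outputs. This sketches the
    # basic firing rule behind coordination-level models, not the Petri net transducer
    # software described above. All names are hypothetical.
    marking = {"task_queued": 1, "arm_idle": 1, "arm_busy": 0, "task_done": 0}

    transitions = {
        "start_motion": {"in": ["task_queued", "arm_idle"], "out": ["arm_busy"]},
        "finish_motion": {"in": ["arm_busy"], "out": ["arm_idle", "task_done"]},
    }

    def enabled(t):
        return all(marking[p] > 0 for p in transitions[t]["in"])

    def fire(t):
        assert enabled(t), f"transition {t} is not enabled"
        for p in transitions[t]["in"]:
            marking[p] -= 1
        for p in transitions[t]["out"]:
            marking[p] += 1

    fire("start_motion")
    fire("finish_motion")
    print(marking)   # {'task_queued': 0, 'arm_idle': 1, 'arm_busy': 0, 'task_done': 1}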