Sample records for significant computational time

  1. Cluster Computing for Embedded/Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  2. Influence of computer work under time pressure on cardiac activity.

    PubMed

    Shi, Ping; Hu, Sijung; Yu, Hongliu

    2015-03-01

    Computer users are often under stress when required to complete computer work within a required time. Work stress has repeatedly been associated with an increased risk for cardiovascular disease. The present study examined the effects of time pressure workload during computer tasks on cardiac activity in 20 healthy subjects. Heart rate, time domain and frequency domain indices of heart rate variability (HRV) and Poincaré plot parameters were compared among five computer tasks and two rest periods. Faster heart rate and decreased standard deviation of R-R interval were noted in response to computer tasks under time pressure. The Poincaré plot parameters showed significant differences between different levels of time pressure workload during computer tasks, and between computer tasks and the rest periods. In contrast, no significant differences were identified for the frequency domain indices of HRV. The results suggest that the quantitative Poincaré plot analysis used in this study was able to reveal the intrinsic nonlinear nature of the autonomically regulated cardiac rhythm. Specifically, heightened vagal tone occurred during the relaxation computer tasks without time pressure. In contrast, the stressful computer tasks with added time pressure stimulated cardiac sympathetic activity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Television viewing, computer game playing, and Internet use and self-reported time to bed and time out of bed in secondary-school children.

    PubMed

    Van den Bulck, Jan

    2004-02-01

    To investigate the relationship between the presence of a television set, a gaming computer, and/or an Internet connection in the room of adolescents and television viewing, computer game playing, and Internet use on the one hand, and time to bed, time up, time spent in bed, and overall tiredness in first- and fourth-year secondary-school children on the other hand. A random sample of students from 15 schools in Flanders, Belgium, yielded 2546 children who completed a questionnaire with questions about media presence in bedrooms; volume of television viewing, computer game playing, and Internet use; time to bed and time up on average weekdays and average weekend days; and questions regarding the level of tiredness in the morning, at school, after a day at school, and after the weekend. Children with a television set in their rooms went to bed significantly later on weekdays and weekend days and got up significantly later on weekend days. Overall, they spent less time in bed on weekdays. Children with a gaming computer in their rooms went to bed significantly later on weekdays. On weekdays, they spent significantly less time in bed. Children who watched more television went to bed later on weekdays and weekend days and got up later on weekend days. They spent less time in bed on weekdays. They reported higher overall levels of being tired. Children who spent more time playing computer games went to bed later on weekdays and weekend days and got up later on weekend days. On weekdays, they actually got up significantly earlier. They spent less time in bed on weekdays and reported higher levels of tiredness. Children who spent more time using the Internet went to bed significantly later during the week and during the weekend. They got up later on weekend days. They spent less time in bed during the week and reported higher levels of tiredness. Going out was also significantly related to sleeping later and less. Concerns about media use should not be limited to television. Computer game playing and Internet use are related to sleep behavior as well. Leisure activities that are unstructured seem to be negatively related to good sleep patterns. Imposing more structure (eg, end times) might reduce impact.

  4. TV Time but Not Computer Time Is Associated with Cardiometabolic Risk in Dutch Young Adults

    PubMed Central

    Altenburg, Teatske M.; de Kroon, Marlou L. A.; Renders, Carry M.; HiraSing, Remy; Chinapaw, Mai J. M.

    2013-01-01

    Background TV time and total sedentary time have been positively related to biomarkers of cardiometabolic risk in adults. We aim to examine the association of TV time and computer time separately with cardiometabolic biomarkers in young adults. Additionally, the mediating role of waist circumference (WC) is studied. Methods and Findings Data of 634 Dutch young adults (18–28 years; 39% male) were used. Cardiometabolic biomarkers included indicators of overweight, blood pressure, blood levels of fasting plasma insulin, cholesterol, glucose, triglycerides and a clustered cardiometabolic risk score. Linear regression analyses were used to assess the cross-sectional association of self-reported TV and computer time with cardiometabolic biomarkers, adjusting for demographic and lifestyle factors. Mediation by WC was checked using the product-of-coefficient method. TV time was significantly associated with triglycerides (B = 0.004; CI = [0.001;0.05]) and insulin (B = 0.10; CI = [0.01;0.20]). Computer time was not significantly associated with any of the cardiometabolic biomarkers. We found no evidence for WC to mediate the association of TV time or computer time with cardiometabolic biomarkers. Conclusions We found a significantly positive association of TV time with cardiometabolic biomarkers. In addition, we found no evidence for WC as a mediator of this association. Our findings suggest a need to distinguish between TV time and computer time within future guidelines for screen time. PMID:23460900

  5. Electromagnetic Navigational Bronchoscopy Reduces the Time Required for Localization and Resection of Lung Nodules.

    PubMed

    Bolton, William David; Cochran, Thomas; Ben-Or, Sharon; Stephenson, James E; Ellis, William; Hale, Allyson L; Binks, Andrew P

    The aims of the study were to evaluate electromagnetic navigational bronchoscopy (ENB) and computed tomography-guided placement as localization techniques for minimally invasive resection of small pulmonary nodules and determine whether electromagnetic navigational bronchoscopy is a safer and more effective method than computed tomography-guided localization. We performed a retrospective review of our thoracic surgery database to identify patients who underwent minimally invasive resection for a pulmonary mass and used either electromagnetic navigational bronchoscopy or computed tomography-guided localization techniques between July 2011 and May 2015. Three hundred eighty-three patients had a minimally invasive resection during our study period, 117 of whom underwent electromagnetic navigational bronchoscopy or computed tomography localization (electromagnetic navigational bronchoscopy = 81; computed tomography = 36). There was no significant difference between computed tomography and electromagnetic navigational bronchoscopy patient groups with regard to age, sex, race, pathology, nodule size, or location. Both computed tomography and electromagnetic navigational bronchoscopy were 100% successful at localizing the mass, and there was no difference in the type of definitive surgical resection (wedge, segmentectomy, or lobectomy) (P = 0.320). Postoperative complications occurred in 36% of all patients, but there were no complications related to the localization procedures. In terms of localization time and surgical time, there was no difference between groups. However, the down/wait time between localization and resection was significant (computed tomography = 189 minutes; electromagnetic navigational bronchoscopy = 27 minutes); this explains why the difference in total time (sum of localization, down, and surgery) was significant (P < 0.001). We found electromagnetic navigational bronchoscopy to be as safe and effective as computed tomography-guided wire placement and to provide a significantly decreased down time between localization and surgical resection.

  6. Computer-assisted versus conventional free fibula flap technique for craniofacial reconstruction: an outcomes comparison.

    PubMed

    Seruya, Mitchel; Fisher, Mark; Rodriguez, Eduardo D

    2013-11-01

    There has been rising interest in computer-aided design/computer-aided manufacturing for preoperative planning and execution of osseous free flap reconstruction. The purpose of this study was to compare outcomes between computer-assisted and conventional fibula free flap techniques for craniofacial reconstruction. A two-center, retrospective review was carried out on patients who underwent fibula free flap surgery for craniofacial reconstruction from 2003 to 2012. Patients were categorized by the type of reconstructive technique: conventional (between 2003 and 2009) or computer-aided design/computer-aided manufacturing (from 2010 to 2012). Demographics, surgical factors, and perioperative and long-term outcomes were compared. A total of 68 patients underwent microsurgical craniofacial reconstruction: 58 conventional and 10 computer-aided design and manufacturing fibula free flaps. By demographics, patients undergoing the computer-aided design/computer-aided manufacturing method were significantly older and had a higher rate of radiotherapy exposure compared with conventional patients. Intraoperatively, the median number of osteotomies was significantly higher (2.0 versus 1.0, p=0.002) and the median ischemia time was significantly shorter (120 minutes versus 170 minutes, p=0.004) for the computer-aided design/computer-aided manufacturing technique compared with conventional techniques; operative times were shorter for patients undergoing the computer-aided design/computer-aided manufacturing technique, although this did not reach statistical significance. Perioperative and long-term outcomes were equivalent for the two groups, notably, hospital length of stay, recipient-site infection, partial and total flap loss, and rate of soft-tissue and bony tissue revisions. Microsurgical craniofacial reconstruction using a computer-assisted fibula flap technique yielded significantly shorter ischemia times amidst a higher number of osteotomies compared with conventional techniques. Therapeutic, III.

  7. The Use of Computer Technology by Older Adults.

    ERIC Educational Resources Information Center

    Galusha, Jill M.

    The older adult (55+) population is becoming a significant presence in the personal computer market. Seniors have the discretionary income, experience, interest, and free time to make use of computers in interesting ways. A literature review found that older adults make use of computers in significant numbers: 30 percent of computer owners are…

  8. CATS--Computer Assisted Teaching in Science.

    ERIC Educational Resources Information Center

    Barron, Marcelline A.

    This document contains the listings for 46 computer programs which are designed to teach various concepts in chemistry and physics. Significant time was spent in writing programs in which students would input chemical and physical data from their laboratory experiments. No significant time was spent writing drill and practice programs other than…

  9. Computer games and prosocial behaviour.

    PubMed

    Mengel, Friederike

    2014-01-01

    We relate different self-reported measures of computer use to individuals' propensity to cooperate in the Prisoner's dilemma. The average cooperation rate is positively related to the self-reported amount of time participants spend playing computer games. None of the other computer time-use variables (including time spent on social media, browsing the internet, working, etc.) are significantly related to cooperation rates.

  10. The Interrupted Time Series as Quasi-Experiment: Three Tests of Significance. A Fortran Program for the CDC 3400 Computer.

    ERIC Educational Resources Information Center

    Sween, Joyce; Campbell, Donald T.

    Computational formulae for the following three tests of significance, useful in the interrupted time series design, are given: (1) a "t" test (Mood, 1950) for the significance of the first post-change observation from a value predicted by a linear fit of the pre-change observations; (2) an "F" test (Walker and Lev, 1953) of the…
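
    The first of these tests has a standard textbook form, sketched below under the assumption that it matches the procedure described (the original Fortran program is not reproduced): fit a line to the pre-change observations and compare the first post-change observation with the value the line predicts, using the prediction standard error.

    ```python
    # Sketch of a "first post-change point" t test in its textbook form (assumed
    # here for illustration; not the original CDC 3400 Fortran code): fit a line
    # to the pre-change series and test whether the first post-change observation
    # departs from the value that line predicts.
    import numpy as np
    from scipy import stats

    def first_point_t_test(pre_y, post_first_y):
        n = len(pre_y)
        x = np.arange(n, dtype=float)
        slope, intercept = np.polyfit(x, pre_y, 1)        # linear fit of pre-change data
        resid = pre_y - (intercept + slope * x)
        s2 = resid @ resid / (n - 2)                      # residual variance
        x0, xbar = float(n), x.mean()                     # time index of the new point
        sxx = ((x - xbar) ** 2).sum()
        se_pred = np.sqrt(s2 * (1 + 1 / n + (x0 - xbar) ** 2 / sxx))
        t = (post_first_y - (intercept + slope * x0)) / se_pred
        p = 2 * stats.t.sf(abs(t), df=n - 2)
        return t, p

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        pre = 10 + 0.5 * np.arange(12) + rng.normal(0, 1.0, 12)   # pre-change trend
        print(first_point_t_test(pre, post_first_y=25.0))          # a clear jump
    ```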

  11. Computer versus paper--does it make any difference in test performance?

    PubMed

    Karay, Yassin; Schauber, Stefan K; Stosch, Christoph; Schüttpelz-Brauns, Katrin

    2015-01-01

    CONSTRUCT: In this study, we examine the differences in test performance between the paper-based and the computer-based version of the Berlin formative Progress Test. In this context it is the first study that allows controlling for students' prior performance. Computer-based tests make possible a more efficient examination procedure for test administration and review. Although university staff will benefit largely from computer-based tests, the question arises if computer-based tests influence students' test performance. A total of 266 German students from the 9th and 10th semester of medicine (comparable with the 4th-year North American medical school schedule) participated in the study (paper = 132, computer = 134). The allocation of the test format was conducted as a randomized matched-pair design in which students were first sorted according to their prior test results. The organizational procedure, the examination conditions, the room, and seating arrangements, as well as the order of questions and answers, were identical in both groups. The sociodemographic variables and pretest scores of both groups were comparable. The test results from the paper and computer versions did not differ. The groups remained within the allotted time, but students using the computer version (particularly the high performers) needed significantly less time to complete the test. In addition, we found significant differences in guessing behavior. Low performers using the computer version guess significantly more than low-performing students in the paper-pencil version. Participants in computer-based tests are not at a disadvantage in terms of their test results. The computer-based test required less processing time. The reason for the longer processing time when using the paper-pencil version might be due to the time needed to write the answer down, controlling for transferring the answer correctly. It is still not known why students using the computer version (particularly low-performing students) guess at a higher rate. Further studies are necessary to understand this finding.

  12. Individual and family environmental correlates of television and computer time in 10- to 12-year-old European children: the ENERGY-project.

    PubMed

    Verloigne, Maïté; Van Lippevelde, Wendy; Bere, Elling; Manios, Yannis; Kovács, Éva; Grillenberger, Monika; Maes, Lea; Brug, Johannes; De Bourdeaudhuij, Ilse

    2015-09-18

    The aim was to investigate which individual and family environmental factors are related to television and computer time separately in 10- to 12-year-old children within and across five European countries (Belgium, Germany, Greece, Hungary, Norway). Data were used from the ENERGY-project. Children and one of their parents completed a questionnaire, including questions on screen time behaviours and related individual and family environmental factors. Family environmental factors included social, political, economic and physical environmental factors. Complete data were obtained from 2022 child-parent dyads (53.8% girls, mean child age 11.2 ± 0.8 years; mean parental age 40.5 ± 5.1 years). To examine the association between individual and family environmental factors (i.e. independent variables) and television/computer time (i.e. dependent variables) in each country, multilevel regression analyses were performed using MLwiN 2.22, adjusting for children's sex and age. In all countries, children reported more television and/or computer time if children and their parents thought that the maximum recommended level for watching television and/or using the computer was higher and if children had a higher preference for television watching and/or computer use and a lower self-efficacy to control television watching and/or computer use. Most physical and economic environmental variables were not significantly associated with television or computer time. Slightly more individual factors were related to children's computer time and more parental social environmental factors to children's television time. We also found different correlates across countries: parental co-participation in television watching was significantly positively associated with children's television time in all countries, except for Greece. A higher level of parental television and computer time was only associated with a higher level of children's television and computer time in Hungary. Having rules regarding children's television time was related to less television time in all countries, except for Belgium and Norway. Most evidence was found for an association between screen time and individual and parental social environmental factors, which means that future interventions aiming to reduce screen time should focus on children's individual beliefs and habits as well as parental social factors. As we identified some different correlates for television and computer time and across countries, cross-European interventions could make small adaptations per specific screen time activity and lay different emphases per country.

  13. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. Parallel processing is an effective way to reduce computation time, and a parallel processing architecture is developed here for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.
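
    For orientation, the sketch below shows the serial form of the computation the paper parallelizes: solving the inverse differential kinematic equation J(q) q_dot = x_dot for the joint rates. It is a minimal NumPy illustration with a placeholder Jacobian, not the paper's systolic-array design.

    ```python
    # Minimal sketch (not the paper's systolic-array design): the serial form of
    # the inverse differential kinematics that the proposed architecture
    # parallelizes. The Jacobian values here are placeholders; a real PUMA model
    # would compute J(q) from the arm's kinematic parameters.
    import numpy as np

    def joint_rates(jacobian: np.ndarray, ee_velocity: np.ndarray) -> np.ndarray:
        """Solve J(q) * q_dot = x_dot for the joint rates q_dot."""
        # Solving the linear system is preferred over forming the explicit inverse.
        return np.linalg.solve(jacobian, ee_velocity)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        J = rng.standard_normal((6, 6))                        # placeholder 6x6 Jacobian
        x_dot = np.array([0.1, 0.0, -0.05, 0.0, 0.02, 0.0])    # desired end-effector twist
        print(joint_rates(J, x_dot))
    ```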

  14. The Overdominance of Computers

    ERIC Educational Resources Information Center

    Monke, Lowell W.

    2006-01-01

    Most schools are unwilling to consider decreasing computer use at school because they fear that without screen time, students will not be prepared for the demands of a high-tech 21st century. Monke argues that having young children spend a significant amount of time on computers in school is harmful, particularly when children spend so much…

  15. SU-E-T-222: Computational Optimization of Monte Carlo Simulation On 4D Treatment Planning Using the Cloud Computing Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, J

    Purpose: This study evaluated the efficiency of 4D lung radiation treatment planning using Monte Carlo simulation on the cloud. The EGSnrc Monte Carlo code was used in dose calculation on the 4D-CT image set. Methods: The 4D lung radiation treatment plan was created by the DOSCTP linked to the cloud, based on the Amazon elastic compute cloud platform. Dose calculation was carried out by Monte Carlo simulation on the 4D-CT image set on the cloud, and results were sent to the FFD4D image deformation program for dose reconstruction. The dependence of computing time for the treatment plan on the number of compute nodes was optimized with variations of the number of CT image sets in the breathing cycle and the dose reconstruction time of the FFD4D. Results: It is found that the dependence of computing time on the number of compute nodes was affected by the diminishing return of the number of nodes used in Monte Carlo simulation. Moreover, the performance of the 4D treatment planning could be optimized by using fewer than 10 compute nodes on the cloud. The effects of the number of image sets and dose reconstruction time on the dependence of computing time on the number of nodes were not significant when more than 15 compute nodes were used in Monte Carlo simulations. Conclusion: The issue of long computing time in 4D treatment planning, requiring Monte Carlo dose calculations in all CT image sets in the breathing cycle, can be solved using cloud computing technology. It is concluded that the optimized number of compute nodes selected in simulation should be between 5 and 15, as the dependence of computing time on the number of nodes is significant.
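
    The diminishing-return behaviour described above can be illustrated with a simple assumed scaling model in which only the Monte Carlo dose calculation parallelizes across cloud nodes while dose reconstruction remains a fixed serial cost; the numbers below are illustrative, not the study's measurements.

    ```python
    # Illustrative scaling model (assumed, not from the study): Monte Carlo dose
    # calculation parallelizes across cloud nodes, while dose reconstruction is a
    # fixed serial cost, which produces the diminishing returns described above.
    def total_plan_time(nodes: int, mc_time_single_node: float = 600.0,
                        reconstruction_time: float = 60.0) -> float:
        return mc_time_single_node / nodes + reconstruction_time

    if __name__ == "__main__":
        for n in (1, 5, 10, 15, 20):
            print(f"{n:2d} nodes -> {total_plan_time(n):6.1f} min (illustrative)")
    ```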

  16. Daily computer usage correlated with undergraduate students' musculoskeletal symptoms.

    PubMed

    Chang, Che-Hsu Joe; Amick, Benjamin C; Menendez, Cammie Chaumont; Katz, Jeffrey N; Johnson, Peter W; Robertson, Michelle; Dennerlein, Jack Tigh

    2007-06-01

    A pilot prospective study was performed to examine the relationships between daily computer usage time and musculoskeletal symptoms in undergraduate students. For three separate 1-week study periods distributed over a semester, 27 students reported body part-specific musculoskeletal symptoms three to five times daily. Daily computer usage time for the 24-hr period preceding each symptom report was calculated from computer input device activities measured directly by software loaded on each participant's primary computer. General Estimating Equation models tested the relationships between daily computer usage and symptom reporting. Daily computer usage longer than 3 hr was significantly associated with an odds ratio of 1.50 (1.01-2.25) for reporting symptoms. Odds of reporting symptoms also increased with quartiles of daily exposure. These data suggest a potential dose-response relationship between daily computer usage time and musculoskeletal symptoms.

  17. The relationship between playing computer or video games with mental health and social relationships among students in guidance schools, Kermanshah.

    PubMed

    Reshadat, S; Ghasemi, S R; Ahmadian, M; RajabiGilan, N

    2014-01-09

    Computer or video games are a popular recreational activity and playing them may constitute a large part of leisure time. This cross-sectional study aimed to evaluate the relationship of playing computer or video games with mental health and social relationships among students in guidance schools in Kermanshah, Islamic Republic of Iran, in 2012. Our total sample was 573 students and our tools were the General Health Questionnaire (GHQ-28) and social relationships questionnaires. Survey respondents reported spending an average of 71.07 (SD 72.1) min/day on computer or video games. There was a significant relationship between time spent playing games and general mental health (P < 0.04) and depression (P < 0.03). Among all participants, there was also a significant difference in social relationships and their subscales, including trans-local relationships (P < 0.0001) and association relationships (P < 0.01), between those who did and did not play computer or video games. There was also a significant relationship between social relationships and time spent playing games (P < 0.02) and its dimensions, except for family relationships.

  18. Incorporating the gas analyzer response time in gas exchange computations.

    PubMed

    Mitchell, R R

    1979-11-01

    A simple method for including the gas analyzer response time in the breath-by-breath computation of gas exchange rates is described. The method uses a difference equation form of a model for the gas analyzer in the computation of oxygen uptake and carbon dioxide production and avoids a numerical differentiation required to correct the gas fraction wave forms. The effect of not accounting for analyzer response time is shown to be a 20% underestimation in gas exchange rate. The present method accurately measures gas exchange rate, is relatively insensitive to measurement errors in the analyzer time constant, and does not significantly increase the computation time.
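
    A minimal sketch of the kind of model the abstract describes is given below: a first-order analyzer lag written as a difference equation. The time constant and waveform are assumed for illustration and are not the paper's data.

    ```python
    # Sketch of a first-order gas-analyzer model in difference-equation form (an
    # assumed illustration of the kind of model the abstract describes; the time
    # constant and signals are made up, not the paper's data).
    import numpy as np

    def analyzer_response(true_fraction: np.ndarray, dt: float, tau: float) -> np.ndarray:
        """Apply a first-order lag F_m[k] = F_m[k-1] + (dt/tau) * (F_true[k] - F_m[k-1])."""
        measured = np.empty_like(true_fraction)
        measured[0] = true_fraction[0]
        alpha = dt / tau
        for k in range(1, len(true_fraction)):
            measured[k] = measured[k - 1] + alpha * (true_fraction[k] - measured[k - 1])
        return measured

    if __name__ == "__main__":
        dt, tau = 0.01, 0.09                       # 10 ms sampling, 90 ms analyzer time constant
        t = np.arange(0, 4, dt)
        true_o2 = 0.16 + 0.02 * (np.sin(2 * np.pi * t / 4) > 0)   # toy breath-by-breath waveform
        meas_o2 = analyzer_response(true_o2, dt, tau)
        print(f"peak true {true_o2.max():.3f} vs peak measured {meas_o2.max():.3f}")
    ```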

  19. CUDA-based real time surgery simulation.

    PubMed

    Liu, Youquan; De, Suvranu

    2008-01-01

    In this paper we present a general software platform that enables real time surgery simulation on the newly available compute unified device architecture (CUDA)from NVIDIA. CUDA-enabled GPUs harness the power of 128 processors which allow data parallel computations. Compared to the previous GPGPU, it is significantly more flexible with a C language interface. We report implementation of both collision detection and consequent deformation computation algorithms. Our test results indicate that the CUDA enables a twenty times speedup for collision detection and about fifteen times speedup for deformation computation on an Intel Core 2 Quad 2.66 GHz machine with GeForce 8800 GTX.
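
    The collision-detection stage lends itself to the data-parallel formulation the abstract exploits on the GPU. The sketch below illustrates the same brute-force pairwise structure with NumPy broadcasting as a CPU stand-in; the paper's CUDA kernels and data structures are not reproduced.

    ```python
    # Data-parallel collision check, sketched with NumPy broadcasting as a CPU
    # stand-in for the GPU kernels described above (an assumption for
    # illustration; the paper's CUDA implementation is not reproduced here).
    import numpy as np

    def sphere_collisions(centers_a, radii_a, centers_b, radii_b):
        """Return a boolean matrix: True where sphere i of set A overlaps sphere j of set B."""
        diff = centers_a[:, None, :] - centers_b[None, :, :]      # pairwise displacement
        dist2 = np.einsum("ijk,ijk->ij", diff, diff)              # squared distances
        rsum = radii_a[:, None] + radii_b[None, :]
        return dist2 <= rsum ** 2

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        tool = rng.random((64, 3))
        organ = rng.random((4096, 3))
        hits = sphere_collisions(tool, np.full(64, 0.02), organ, np.full(4096, 0.01))
        print("colliding pairs:", int(hits.sum()))
    ```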

  20. Arranging computer architectures to create higher-performance controllers

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    1988-01-01

    Techniques for integrating microprocessors, array processors, and other intelligent devices in control systems are reviewed, with an emphasis on the (re)arrangement of components to form distributed or parallel processing systems. Consideration is given to the selection of the host microprocessor, increasing the power and/or memory capacity of the host, multitasking software for the host, array processors to reduce computation time, the allocation of real-time and non-real-time events to different computer subsystems, intelligent devices to share the computational burden for real-time events, and intelligent interfaces to increase communication speeds. The case of a helicopter vibration-suppression and stabilization controller is analyzed as an example, and significant improvements in computation and throughput rates are demonstrated.

  1. A tool for modeling concurrent real-time computation

    NASA Technical Reports Server (NTRS)

    Sharma, D. D.; Huang, Shie-Rei; Bhatt, Rahul; Sridharan, N. S.

    1990-01-01

    Real-time computation is a significant area of research in general, and in AI in particular. The complexity of practical real-time problems demands use of knowledge-based problem solving techniques while satisfying real-time performance constraints. Since the demands of a complex real-time problem cannot be predicted (owing to the dynamic nature of the environment) powerful dynamic resource control techniques are needed to monitor and control the performance. A real-time computation model for a real-time tool, an implementation of the QP-Net simulator on a Symbolics machine, and an implementation on a Butterfly multiprocessor machine are briefly described.

  2. The relationship between TV/computer time and adolescents' health-promoting behavior: a secondary data analysis.

    PubMed

    Chen, Mei-Yen; Liou, Yiing-Mei; Wu, Jen-Yee

    2008-03-01

    Television and computers provide significant benefits for learning about the world. Some studies have linked excessive television (TV) watching or computer game playing to poorer health status or unhealthy behaviors among adolescents. However, evidence on the relationship between watching TV or playing computer games and adolescents' adoption of health-promoting behavior is limited. This study aimed to discover the relationship between time spent watching TV and on leisure use of computers and adolescents' health-promoting behavior, and associated factors. This paper used secondary data analysis from part of a health promotion project in Taoyuan County, Taiwan. A cross-sectional design was used and purposive sampling was conducted among adolescents in the original project. A total of 660 participants answered the questions appropriately for this work between January and June 2004. Findings showed that the mean age of the respondents was 15.0 +/- 1.7 years. The mean numbers of TV watching hours were 2.28 and 4.07 on weekdays and weekends respectively. The mean hours of leisure (non-academic) computer use were 1.64 and 3.38 on weekdays and weekends respectively. Results indicated that adolescents spent significant time watching TV and using the computer, which was negatively associated with adopting health-promoting behaviors such as life appreciation, health responsibility, social support and exercise behavior. Moreover, being a boy, being overweight, living in a rural area, and being a middle-school student were significantly associated with spending long periods watching TV and using the computer. Therefore, primary health care providers should record the TV and non-academic computer time of youths when conducting health promotion programs, and educate parents on how to become good and healthy electronic media users.

  3. The expanded role of computers in Space Station Freedom real-time operations

    NASA Technical Reports Server (NTRS)

    Crawford, R. Paul; Cannon, Kathleen V.

    1990-01-01

    The challenges that NASA and its international partners face in their real-time operation of the Space Station Freedom necessitate an increased role on the part of computers. In building the operational concepts concerning the role of the computer, the Space Station program is using lessons learned experience from past programs, knowledge of the needs of future space programs, and technical advances in the computer industry. The computer is expected to contribute most significantly in real-time operations by forming a versatile operating architecture, a responsive operations tool set, and an environment that promotes effective and efficient utilization of Space Station Freedom resources.

  4. Specialized computer architectures for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general-purpose computers, in terms of both dollar expenditure and elapsed time. Today's computing technology will support a program designed to create specialized computing facilities to be dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems that will have a significant impact on the choice of computer architecture for a specialized facility are reviewed.

  5. Health information technology and physician-patient interactions: impact of computers on communication during outpatient primary care visits.

    PubMed

    Hsu, John; Huang, Jie; Fung, Vicki; Robertson, Nan; Jimison, Holly; Frankel, Richard

    2005-01-01

    The aim of this study was to evaluate the impact of introducing health information technology (HIT) on physician-patient interactions during outpatient visits. This was a longitudinal pre-post study: two months before and one and seven months after introduction of examination room computers. Patient questionnaires (n = 313) after primary care visits with physicians (n = 8) within an integrated delivery system. There were three patient satisfaction domains: (1) satisfaction with visit components, (2) comprehension of the visit, and (3) perceptions of the physician's use of the computer. Patients reported that physicians used computers in 82.3% of visits. Compared with baseline, overall patient satisfaction with visits increased seven months after the introduction of computers (odds ratio [OR] = 1.50; 95% confidence interval [CI]: 1.01-2.22), as did satisfaction with physicians' familiarity with patients (OR = 1.60, 95% CI: 1.01-2.52), communication about medical issues (OR = 1.61; 95% CI: 1.05-2.47), and comprehension of decisions made during the visit (OR = 1.63; 95% CI: 1.06-2.50). In contrast, there were no significant changes in patient satisfaction with comprehension of self-care responsibilities, communication about psychosocial issues, or available visit time. Seven months post-introduction, patients were more likely to report that the computer helped the visit run in a more timely manner (OR = 1.76; 95% CI: 1.28-2.42) compared with the first month after introduction. There were no other significant changes in patient perceptions of the computer use over time. The examination room computers appeared to have positive effects on physician-patient interactions related to medical communication without significant negative effects on other areas such as time available for patient concerns. Further study is needed to better understand HIT use during outpatient visits.

  6. PACE 2: Pricing and Cost Estimating Handbook

    NASA Technical Reports Server (NTRS)

    Stewart, R. D.; Shepherd, T.

    1977-01-01

    An automatic data processing system to be used for the preparation of industrial engineering type manhour and material cost estimates has been established. This computer system has evolved into a highly versatile and highly flexible tool which significantly reduces computation time, eliminates computational errors, and reduces typing and reproduction time for estimators and pricers since all mathematical and clerical functions are automatic once basic inputs are derived.

  7. WINCADRE (COMPUTER-AIDED DATA REVIEW AND EVALUATION)

    EPA Science Inventory

    WinCADRE (Computer-Aided Data Review and Evaluation) is a Windows-based program designed for computer-assisted data validation. WinCADRE is a powerful tool which significantly decreases data validation turnaround time. The electronic-data-deliverable format has been designed ...

  8. Computing Finite-Time Lyapunov Exponents with Optimally Time Dependent Reduction

    NASA Astrophysics Data System (ADS)

    Babaee, Hessam; Farazmand, Mohammad; Sapsis, Themis; Haller, George

    2016-11-01

    We present a method to compute Finite-Time Lyapunov Exponents (FTLE) of a dynamical system using the Optimally Time-Dependent (OTD) reduction recently introduced by H. Babaee and T. P. Sapsis. The OTD modes are a finite set of time-dependent, orthonormal basis elements {u_i(x, t)}_{i=1}^{N} that capture the directions associated with transient instabilities. The evolution equation of the OTD modes is derived from a minimization principle that optimally approximates the most unstable directions over finite times. To compute the FTLE, we evolve a single OTD mode along with the nonlinear dynamics. We approximate the FTLE from the reduced system obtained by projecting the instantaneous linearized dynamics onto the OTD mode. This results in a significant reduction in computational cost compared to conventional methods for computing FTLE. We demonstrate the efficiency of our method for double-gyre and ABC flows. ARO project 66710-EG-YIP.
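
    For reference, the conventional FTLE that the OTD approach approximates is computed from the gradient of the flow map. The sketch below evaluates it at a single point of the double-gyre benchmark named in the abstract, using textbook parameter values assumed for illustration; it does not implement the paper's OTD reduction.

    ```python
    # Standard finite-time Lyapunov exponent from the flow-map gradient, shown for
    # the double-gyre benchmark. This is the conventional definition the OTD
    # approach approximates more cheaply; parameters (A, eps, omega, T) are the
    # usual textbook choices, assumed here rather than taken from the paper.
    import numpy as np
    from scipy.integrate import solve_ivp

    A, EPS, OMEGA = 0.1, 0.25, 2 * np.pi / 10.0

    def double_gyre(t, xy):
        x, y = xy
        a = EPS * np.sin(OMEGA * t)
        b = 1.0 - 2.0 * a
        f = a * x**2 + b * x
        dfdx = 2.0 * a * x + b
        u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
        v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
        return [u, v]

    def flow_map(x0, y0, t0=0.0, T=15.0):
        sol = solve_ivp(double_gyre, (t0, t0 + T), [x0, y0], rtol=1e-8, atol=1e-8)
        return sol.y[:, -1]

    def ftle(x0, y0, T=15.0, h=1e-4):
        # Finite-difference gradient of the flow map, then its largest singular value.
        fx1 = flow_map(x0 + h, y0, T=T); fx0 = flow_map(x0 - h, y0, T=T)
        fy1 = flow_map(x0, y0 + h, T=T); fy0 = flow_map(x0, y0 - h, T=T)
        grad = np.column_stack([(fx1 - fx0) / (2 * h), (fy1 - fy0) / (2 * h)])
        smax = np.linalg.norm(grad, 2)          # spectral norm = largest singular value
        return np.log(smax) / abs(T)

    if __name__ == "__main__":
        print(f"FTLE at (1.0, 0.5): {ftle(1.0, 0.5):.3f}")
    ```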

  9. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing.

    PubMed

    Xu, Jason; Minin, Vladimir N

    2015-07-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
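
    The sparsity assumption at the heart of the compressed sensing step can be illustrated in isolation: a sparse, nonnegative vector is recovered from a small number of random linear measurements by l1 minimization. The sketch below is a toy example with made-up sizes and does not reproduce the paper's generating function machinery.

    ```python
    # Toy illustration of the sparsity assumption only (not the paper's generating
    # function method): recover a sparse, nonnegative probability-like vector from
    # a few random linear measurements via basis pursuit, solved as a linear
    # program. Sizes and the measurement matrix are made up for the example.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(42)
    n, m, k = 200, 60, 5                      # ambient size, measurements, nonzeros

    x_true = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x_true[support] = rng.dirichlet(np.ones(k))           # sparse vector summing to 1

    A = rng.standard_normal((m, n)) / np.sqrt(m)          # random measurement matrix
    b = A @ x_true

    # Basis pursuit for a nonnegative signal: minimize sum(x) subject to A x = b, x >= 0.
    res = linprog(c=np.ones(n), A_eq=A, b_eq=b, bounds=[(0, None)] * n, method="highs")
    print("recovery error:", np.linalg.norm(res.x - x_true))
    ```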

  10. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    PubMed Central

    Xu, Jason; Minin, Vladimir N.

    2016-01-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377

  11. Girls' physical activity and sedentary behaviors: Does sexual maturation matter? A cross-sectional study with HBSC 2010 Portuguese survey.

    PubMed

    Marques, Adilson; Branquinho, Cátia; De Matos, Margarida Gaspar

    2016-07-01

    The purpose of this study was to examine the relationship between girls' sexual maturation (age of menarche) and physical activity and sedentary behaviors. Data were collected from a national representative sample of girls in 2010 (pre-menarcheal girls n = 583, post-menarcheal girls n = 741). Physical activity (times/week and hours/week) and screen-based sedentary time (minutes/day) including television/video/DVD watching, playing videogames, and computer use were self-reported. Pre-menarcheal girls engaged significantly more times in physical activity in the last 7 days than post-menarcheal girls (3.5 ± 1.9 times/week vs. 3.0 ± 1.7 times/week, P < 0.001). There was no significant difference between pre-menarcheal and post-menarcheal girls in time (hours/week) spent in physical activity. Post-menarcheal girls spent significantly more minutes per day than pre-menarcheal girls watching TV, playing videogames, and using computers on weekdays (TV: 165.2 ± 105.8 vs. 136.0 ± 106.3, P < 0.001; videogames 72.0 ± 84.8 vs. 60.3 ± 78.9, P = 0.015; computer: 123.3 ± 103.9 vs. 82.8 ± 95.8, P < 0.001) and on weekends (TV: 249.0 ± 116.2 vs. 209.3 ± 124.8, P < 0.001; videogames: 123.0 ± 114.0 vs. 104.7 ± 103.5, P = 0.020; computer: 177.0 ± 122.2 vs. 119.7 ± 112.7, P < 0.001). After adjusting analyses for age, BMI, and socioeconomic status, differences were still significant for physical activity and for computer use. Specific interventions should be designed for girls to increase their physical activity participation and decrease time spent on the computer, for post-menarcheal girls in particular. Am. J. Hum. Biol. 28:471-475, 2016. © 2015 Wiley Periodicals, Inc.

  12. Elucidating Reaction Mechanisms on Quantum Computers

    NASA Astrophysics Data System (ADS)

    Wiebe, Nathan; Reiher, Markus; Svore, Krysta; Wecker, Dave; Troyer, Matthias

    We show how a quantum computer can be employed to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical-computer simulations for such problems, to significantly increase their accuracy and enable hitherto intractable simulations. Detailed resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. This demonstrates that quantum computers will realistically be able to tackle important problems in chemistry that are both scientifically and economically significant.

  13. WINCADRE INORGANIC (WINDOWS COMPUTER-AIDED DATA REVIEW AND EVALUATION)

    EPA Science Inventory

    WinCADRE (Computer-Aided Data Review and Evaluation) is a Windows-based program designed for computer-assisted data validation. WinCADRE is a powerful tool which significantly decreases data validation turnaround time. The electronic-data-deliverable format has been designed in...

  14. The Case for Modular Redundancy in Large-Scale High Performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L

    2009-01-01

    Recent investigations into the resilience of large-scale high-performance computing (HPC) systems showed a continuous trend of decreasing reliability and availability. Newly installed systems have a lower mean-time to failure (MTTF) and a higher mean-time to recover (MTTR) than their predecessors. Modular redundancy is being used in many mission-critical systems today to provide resilience, such as aerospace and command and control systems. The primary argument against modular redundancy for resilience in HPC has always been that the capability of an HPC system, and the respective return on investment, would be significantly reduced. We argue that modular redundancy can significantly increase compute node availability, as it removes the impact of scale from single compute node MTTR. We further argue that single compute nodes can be much less reliable, and therefore less expensive, and still be highly available, if their MTTR/MTTF ratio is maintained.
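
    The trade-off argued here can be made concrete with standard steady-state availability arithmetic (an illustration with assumed numbers, not the paper's analysis): a node's availability is MTTF / (MTTF + MTTR), and a dual-redundant pair, assuming independent failures, is unavailable only when both replicas are down at once.

    ```python
    # Illustrative availability arithmetic using the standard steady-state
    # formulas (assumed numbers; not the paper's analysis). Independent failures
    # are assumed for the duplex case.
    def availability(mttf_hours: float, mttr_hours: float) -> float:
        return mttf_hours / (mttf_hours + mttr_hours)

    def duplex_availability(mttf_hours: float, mttr_hours: float) -> float:
        a = availability(mttf_hours, mttr_hours)
        return 1.0 - (1.0 - a) ** 2        # pair fails only if both replicas are down

    if __name__ == "__main__":
        # A deliberately unreliable, cheap node: 500 h MTTF, 5 h MTTR.
        print(f"single node: {availability(500, 5):.4f}")
        print(f"duplex pair: {duplex_availability(500, 5):.6f}")
    ```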

  15. Sedentary behavior, physical activity, and concentrations of insulin among US adults.

    PubMed

    Ford, Earl S; Li, Chaoyang; Zhao, Guixiang; Pearson, William S; Tsai, James; Churilla, James R

    2010-09-01

    Time spent watching television has been linked to obesity, metabolic syndrome, and diabetes, all conditions characterized to some degree by hyperinsulinemia and insulin resistance. However, limited evidence relates screen time (watching television or using a computer) directly to concentrations of insulin. We examined the cross-sectional associations between time spent watching television or using a computer, physical activity, and serum concentrations of insulin using data from 2800 participants aged at least 20 years of the 2003-2006 National Health and Nutrition Examination Survey. The amount of time spent watching television and using a computer as well as physical activity was self-reported. The unadjusted geometric mean concentration of insulin increased from 6.2 microU/mL among participants who did not watch television to 10.0 microU/mL among those who watched television for 5 or more hours per day (P = .001). After adjustment for age, sex, race or ethnicity, educational status, concentration of cotinine, alcohol intake, physical activity, waist circumference, and body mass index using multiple linear regression analysis, the log-transformed concentrations of insulin were significantly and positively associated with time spent watching television (P = < .001). Reported time spent using a computer was significantly associated with log-transformed concentrations of insulin before but not after accounting for waist circumference and body mass index. Leisure-time physical activity but not transportation or household physical activity was significantly and inversely associated with log-transformed concentrations of insulin. Sedentary behavior, particularly the amount of time spent watching television, may be an important modifiable determinant of concentrations of insulin. Published by Elsevier Inc.

  16. The electromagnetic modeling of thin apertures using the finite-difference time-domain technique

    NASA Technical Reports Server (NTRS)

    Demarest, Kenneth R.

    1987-01-01

    A technique which computes transient electromagnetic responses of narrow apertures in complex conducting scatterers was implemented as an extension of previously developed Finite-Difference Time-Domain (FDTD) computer codes. Although these apertures are narrow with respect to the wavelengths contained within the power spectrum of excitation, this technique does not require significantly more computer resources to attain the increased resolution at the apertures. In the report, an analytical technique which utilizes Babinet's principle to model the apertures is developed, and an FDTD computer code which utilizes this technique is described.
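
    As background, the sketch below shows the basic 1D FDTD leapfrog update that aperture techniques of this kind extend; the thin-aperture model based on Babinet's principle is not reproduced here.

    ```python
    # Minimal 1D FDTD leapfrog update (free space, Gaussian source, simple fixed
    # boundaries), included only to illustrate the base scheme that the
    # thin-aperture extension builds on; the Babinet's-principle aperture model
    # itself is not shown here.
    import numpy as np

    nz, nt = 400, 800
    ez = np.zeros(nz)          # electric field samples
    hy = np.zeros(nz - 1)      # magnetic field samples (staggered half-cell)
    courant = 0.5              # normalized update coefficient (Courant number)

    for n in range(nt):
        hy += courant * (ez[1:] - ez[:-1])                 # H update (half step)
        ez[1:-1] += courant * (hy[1:] - hy[:-1])           # E update (half step)
        ez[50] += np.exp(-0.5 * ((n - 60) / 15.0) ** 2)    # soft Gaussian source

    print("peak |Ez| after run:", float(np.abs(ez).max()))
    ```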

  17. "Micro" Politics: Mapping the Origins of Schools Computing as a Field of Education Policy

    ERIC Educational Resources Information Center

    Selwyn, Neil

    2013-01-01

    This paper examines the emergence of schools "micro-computing" in the UK between 1977 and 1984--a period of significant educational, technological and political change. During this time, computing developed rapidly from a niche activity in a few select schools to the state subsidized purchasing of a "computer in every school"…

  18. Computer-mediated support group intervention for parents.

    PubMed

    Bragadóttir, Helga

    2008-01-01

    The purpose of this study was to evaluate the feasibility of a computer-mediated support group (CMSG) intervention for parents whose children had been diagnosed with cancer. An evaluative one-group, before-and-after research design. A CMSG, an unstructured listserve group where participants used their E-mail for communication, was conducted over a 4-month period. Participation in the CMSG was offered to parents in Iceland whose children had completed cancer treatment in the past 5 years. Outcome measures were done: before the intervention (Time 1), after 2 months of intervention (Time 2) and after 4 months of intervention (Time 3) when the project ended. Measures included: demographic and background variables; health related vulnerability factors of parents: anxiety, depression, somatization, and stress; perceived mutual support; and use of the CMSG. Data were collected from November 2002 to June 2003. Twenty-one of 58 eligible parents participated in the study, with 71% retention rate for both post-tests. Mothers' depression decreased significantly from Time 2 to Time 3 (p<.03). Fathers' anxiety decreased significantly from Time 1 to Time 3 (p<.01). Fathers' stress decreased significantly from Time 2 to Time 3 (p<.02). To some extent, mothers and fathers perceived mutual support from participating in the CMSG. Both mothers and fathers used the CMSG by reading messages. Messages were primarily written by mothers. Study findings support further development of CMSGs for parents whose children have been diagnosed with cancer. Using computer technology for support is particularly useful for dispersed populations and groups that have restrictions on their time. Computer-mediated support groups have been shown to be a valuable addition to, or substitute for, a traditional face-to-face mutual support group and might suit both genders equally.

  19. Preferred computer activities among individuals with dementia: a pilot study.

    PubMed

    Tak, Sunghee H; Zhang, Hongmei; Hong, Song Hee

    2015-03-01

    Computers offer new activities that are easily accessible, cognitively stimulating, and enjoyable for individuals with dementia. The current descriptive study examined preferred computer activities among nursing home residents with different severity levels of dementia. A secondary data analysis was conducted using activity observation logs from 15 study participants with dementia (severe = 115 logs, moderate = 234 logs, and mild = 124 logs) who participated in a computer activity program. Significant differences existed in preferred computer activities among groups with different severity levels of dementia. Participants with severe dementia spent significantly more time watching slide shows with music than those with both mild and moderate dementia (F [2,12] = 9.72, p = 0.003). Preference in playing games also differed significantly across the three groups. It is critical to consider individuals' interests and functional abilities when computer activities are provided for individuals with dementia. A practice guideline for tailoring computer activities is detailed. Copyright 2015, SLACK Incorporated.

  20. Evaluating the impact of computer-generated rounding reports on physician workflow in the nursing home: a feasibility time-motion study.

    PubMed

    Thorpe-Jamison, Patrice T; Culley, Colleen M; Perera, Subashan; Handler, Steven M

    2013-05-01

    To determine the feasibility and impact of a computer-generated rounding report on physician rounding time and perceived barriers to providing clinical care in the nursing home (NH) setting. Three NHs located in Pittsburgh, PA. Ten attending NH physicians. Time-motion method to record the time taken to gather data (pre-rounding), to evaluate patients (rounding), and document their findings/develop an assessment and plan (post-rounding). Additionally, surveys were used to determine the physicians' perception of barriers to providing optimal clinical care, as well as physician satisfaction before and after the use of a computer-generated rounding report. Ten physicians were observed during half-day sessions both before and 4 weeks after they were introduced to a computer-generated rounding report. A total of 69 distinct patients were evaluated during the 20 physician observation sessions. Each physician evaluated, on average, four patients before implementation and three patients after implementation. The observations showed a significant increase (P = .03) in the pre-rounding time, and no significant difference in the rounding (P = .09) or post-rounding times (P = .29). Physicians reported that information was more accessible (P = .03) following the implementation of the computer-generated rounding report. Most (80%) physicians stated that they would prefer to use the computer-generated rounding report rather than the paper-based process. The present study provides preliminary data suggesting that the use of a computer-generated rounding report can decrease some perceived barriers to providing optimal care in the NH. Although the rounding report did not improve rounding time efficiency, most NH physicians would prefer to use the computer-generated report rather than the current paper-based process. Improving the accuracy and harmonization of medication information with the electronic medication administration record and rounding reports, as well as improving facility network speeds might improve the effectiveness of this technology. Copyright © 2013 American Medical Directors Association, Inc. Published by Elsevier Inc. All rights reserved.

  1. Health Information Technology and Physician-Patient Interactions: Impact of Computers on Communication during Outpatient Primary Care Visits

    PubMed Central

    Hsu, John; Huang, Jie; Fung, Vicki; Robertson, Nan; Jimison, Holly; Frankel, Richard

    2005-01-01

    Objective: The aim of this study was to evaluate the impact of introducing health information technology (HIT) on physician-patient interactions during outpatient visits. Design: This was a longitudinal pre-post study: two months before and one and seven months after introduction of examination room computers. Patient questionnaires (n = 313) after primary care visits with physicians (n = 8) within an integrated delivery system. There were three patient satisfaction domains: (1) satisfaction with visit components, (2) comprehension of the visit, and (3) perceptions of the physician's use of the computer. Results: Patients reported that physicians used computers in 82.3% of visits. Compared with baseline, overall patient satisfaction with visits increased seven months after the introduction of computers (odds ratio [OR] = 1.50; 95% confidence interval [CI]: 1.01–2.22), as did satisfaction with physicians' familiarity with patients (OR = 1.60, 95% CI: 1.01–2.52), communication about medical issues (OR = 1.61; 95% CI: 1.05–2.47), and comprehension of decisions made during the visit (OR = 1.63; 95% CI: 1.06–2.50). In contrast, there were no significant changes in patient satisfaction with comprehension of self-care responsibilities, communication about psychosocial issues, or available visit time. Seven months post-introduction, patients were more likely to report that the computer helped the visit run in a more timely manner (OR = 1.76; 95% CI: 1.28–2.42) compared with the first month after introduction. There were no other significant changes in patient perceptions of the computer use over time. Conclusion: The examination room computers appeared to have positive effects on physician-patient interactions related to medical communication without significant negative effects on other areas such as time available for patient concerns. Further study is needed to better understand HIT use during outpatient visits. PMID:15802484

  2. Computational Methods for Stability and Control (COMSAC): The Time Has Come

    NASA Technical Reports Server (NTRS)

    Hall, Robert M.; Biedron, Robert T.; Ball, Douglas N.; Bogue, David R.; Chung, James; Green, Bradford E.; Grismer, Matthew J.; Brooks, Gregory P.; Chambers, Joseph R.

    2005-01-01

    Powerful computational fluid dynamics (CFD) tools have emerged that appear to offer significant benefits as an adjunct to the experimental methods used by the stability and control community to predict aerodynamic parameters. The decreasing cost and increasing availability of computing hours are making these applications increasingly viable as time goes on. This paper summarizes the efforts of four organizations to utilize high-end CFD tools to address the challenges of the stability and control arena. General motivation and the backdrop for these efforts are summarized, along with examples of current applications.

  3. The relationship between computer games and quality of life in adolescents.

    PubMed

    Dolatabadi, Nayereh Kasiri; Eslami, Ahmad Ali; Mostafavi, Firooze; Hassanzade, Akbar; Moradi, Azam

    2013-01-01

    The time adolescents spend playing computer games is growing rapidly. This popular phenomenon can cause physical and psychosocial issues in them. Therefore, this study examined the relationship between computer games and quality of life domains in adolescents aged 12-15 years. In a cross-sectional study using the 2-stage stratified cluster sampling method, 444 male and female students in Borkhar were selected. The data collection tools consisted of 1) the World Health Organization Quality Of Life - BREF questionnaire and 2) a personal information questionnaire. The data were analyzed by Pearson correlation, Spearman correlation, chi-square, independent t-tests and analysis of covariance. The total mean score of quality of life in students was 67.11±13.34. The results showed a significant relationship between the age of starting to play games and the overall quality of life score and its four domains (range r=-0.13 to -0.18). The mean overall quality of life score in computer game users was 68.27±13.03 while it was 64.81±13.69 among those who did not play computer games, and the difference was significant (P=0.01). There were significant differences in the environmental and mental health domains between the two groups (P<0.05). However, there was no significant relationship between BMI and either the time spent on or the type of computer games. Playing computer games for a short time under parental supervision can have positive effects on quality of life in adolescents. However, spending long hours playing computer games may have negative long-term effects.

  4. The feasibility of using computer graphics in environmental evaluations : interim report, documenting historic site locations using computer graphics.

    DOT National Transportation Integrated Search

    1981-01-01

    This report describes a method for locating historic site information using a computer graphics program. If adopted for use by the Virginia Department of Highways and Transportation, this method should significantly reduce the time now required to de...

  5. Far-field radiation patterns of aperture antennas by the Winograd Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Heisler, R.

    1978-01-01

    A more time-efficient algorithm for computing the discrete Fourier transform, the Winograd Fourier transform (WFT), is described. The WFT algorithm is compared with other transform algorithms. Results indicate that the WFT appears to be a very successful algorithm for antenna analysis. Significant savings in CPU time will improve computer turnaround time and circumvent the need to resort to weekend runs.
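
    The Winograd transform itself is not available in standard numerical libraries, but the underlying antenna calculation is easy to illustrate: under the Fraunhofer approximation the far-field pattern is proportional to the Fourier transform of the aperture field, so any fast DFT can stand in for the WFT step. The sketch below uses NumPy's FFT on a made-up, uniformly illuminated aperture; the function name and parameters are illustrative, not from the report.

    ```python
    import numpy as np

    def far_field_pattern(aperture_field, dx, n_pad=4096):
        """Far-field pattern (Fraunhofer approximation) of a 1D aperture:
        proportional to the Fourier transform of the aperture field.
        Returns sin(theta) values (for unit wavelength) and pattern in dB."""
        padded = np.zeros(n_pad, dtype=complex)
        padded[:aperture_field.size] = aperture_field
        spectrum = np.fft.fftshift(np.fft.fft(padded))
        power = np.abs(spectrum) ** 2
        pattern_db = 10.0 * np.log10(np.maximum(power / power.max(), 1e-12))
        sin_theta = np.fft.fftshift(np.fft.fftfreq(n_pad, d=dx))  # lambda = 1
        return sin_theta, pattern_db

    # Uniformly illuminated 64-sample aperture (0.5-wavelength spacing):
    # the classic sinc-squared pattern with roughly -13 dB first sidelobes.
    sin_theta, pattern_db = far_field_pattern(np.ones(64), dx=0.5)
    peak_index = np.argmax(pattern_db)
    print(f"main beam at sin(theta) = {sin_theta[peak_index]:.3f}")
    ```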

  6. Evaluation of E-Rat, a Computer-based Rat Dissection in Terms of Student Learning Outcomes.

    ERIC Educational Resources Information Center

    Predavec, Martin

    2001-01-01

    Presents a study that used computer-based rat anatomy to compare student learning outcomes from computer-based instruction with a conventional dissection. Indicates that there was a significant relationship between the time spent on both classes and the marks gained. Shows that computer-based instruction can be a viable alternative to the use of…

  7. A Detailed Analysis over Some Important Issues towards Using Computer Technology into the EFL Classrooms

    ERIC Educational Resources Information Center

    Gilakjani, Abbas Pourhosein

    2014-01-01

    Computer technology has changed the ways we work, learn, interact and spend our leisure time. Computer technology has changed every aspect of our daily life--how and where we get our news, how we order goods and services, and how we communicate. This study investigates some of the significant issues concerning the use of computer technology…

  8. Fast algorithms for computing phylogenetic divergence time.

    PubMed

    Crosby, Ralph W; Williams, Tiffani L

    2017-12-06

    The inference of species divergence time is a key step in most phylogenetic studies. Methods have been available for the last ten years to perform the inference, but the performance of the methods does not yet scale well to studies with hundreds of taxa and thousands of DNA base pairs. For example, a study of 349 primate taxa was estimated to require over 9 months of processing time. In this work, we present a new algorithm, AncestralAge, that significantly improves the performance of the divergence time process. As part of AncestralAge, we demonstrate a new method for the computation of phylogenetic likelihood, and our experiments show a 90% improvement in likelihood computation time on the aforementioned dataset of 349 primate taxa with over 60,000 DNA base pairs. Additionally, we show that our new method for the computation of the Bayesian prior on node ages reduces the running time for this computation on the 349-taxa dataset by 99%. Through the use of these new algorithms, we open up the ability to perform divergence time inference on large phylogenetic studies.

  9. Parallel algorithms for mapping pipelined and parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
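
    The paper's improved O(nm log m) algorithm is not reproduced here, but the underlying contiguous-mapping problem is easy to state in code: assign m pipeline modules, in order, to n processors so that the maximum per-processor load (the pipeline bottleneck) is minimized. The sketch below is a simple O(nm^2) dynamic program for that problem, offered only as a baseline under assumed module costs.

    ```python
    import functools

    def min_bottleneck_mapping(loads, n_procs):
        """Assign pipeline modules (in order) to n_procs contiguous groups,
        minimizing the maximum per-processor load (the pipeline bottleneck).
        Simple O(n * m^2) dynamic program -- a baseline, not the faster
        algorithms discussed in the paper."""
        m = len(loads)
        prefix = [0]
        for w in loads:
            prefix.append(prefix[-1] + w)

        @functools.lru_cache(maxsize=None)
        def best(j, k):
            # Bottleneck of mapping the first j modules onto k processors.
            if k == 1:
                return prefix[j]
            if j < k:
                return float("inf")
            return min(max(best(i, k - 1), prefix[j] - prefix[i])
                       for i in range(k - 1, j))

        return best(m, n_procs)

    # Eight modules with uneven costs mapped onto three processors.
    print(min_bottleneck_mapping([4, 2, 7, 1, 3, 8, 2, 5], 3))   # -> 13
    ```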

  10. Easing The Calculation Of Bolt-Circle Coordinates

    NASA Technical Reports Server (NTRS)

    Burley, Richard K.

    1995-01-01

    Bolt Circle Calculation (BOLT-CALC) computer program used to reduce significant time consumed in manually computing trigonometry of rectangular Cartesian coordinates of holes in bolt circle as shown on blueprint or drawing. Eliminates risk of computational errors, particularly in cases involving many holes or in cases in which coordinates expressed to many significant digits. Program assists in many practical situations arising in machine shops. Written in BASIC. Also successfully compiled and implemented by use of Microsoft's QuickBasic v4.0.
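
    BOLT-CALC itself is written in BASIC and is not reproduced here; the trigonometry it automates is shown in the short sketch below, with hypothetical parameter names (center, bolt-circle diameter, hole count, and starting angle).

    ```python
    import math

    def bolt_circle_coordinates(center_x, center_y, diameter, num_holes,
                                start_angle_deg=0.0):
        """Cartesian coordinates of equally spaced holes on a bolt circle.
        Angles are measured counterclockwise from the +X axis."""
        radius = diameter / 2.0
        coords = []
        for k in range(num_holes):
            angle = math.radians(start_angle_deg) + 2.0 * math.pi * k / num_holes
            coords.append((round(center_x + radius * math.cos(angle), 4),
                           round(center_y + radius * math.sin(angle), 4)))
        return coords

    # Six holes on a 5.000-inch bolt circle centered at the origin,
    # first hole at 15 degrees.
    for x, y in bolt_circle_coordinates(0.0, 0.0, 5.0, 6, 15.0):
        print(f"X={x:8.4f}  Y={y:8.4f}")
    ```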

  11. Experimental and Modeling Studies of Plasma Injection by an Electrothermal Igniter Into a Solid Propellant Gun Charge

    DTIC Science & Technology

    2006-06-01

    Computed and measured pressure traces show significant pressure wave action and oscillations at early times. The simulations seem to overpredict both the early-time peak pressure at the rear...

  12. Aerothermodynamic Design Sensitivities for a Reacting Gas Flow Solver on an Unstructured Mesh Using a Discrete Adjoint Formulation

    NASA Astrophysics Data System (ADS)

    Thompson, Kyle Bonner

    An algorithm is described to efficiently compute aerothermodynamic design sensitivities using a decoupled variable set. In a conventional approach to computing design sensitivities for reacting flows, the species continuity equations are fully coupled to the conservation laws for momentum and energy. In this algorithm, the species continuity equations are solved separately from the mixture continuity, momentum, and total energy equations. This decoupling simplifies the implicit system, so that the flow solver can be made significantly more efficient, with very little penalty on overall scheme robustness. Most importantly, the computational cost of the point implicit relaxation is shown to scale linearly with the number of species for the decoupled system, whereas the fully coupled approach scales quadratically. Also, the decoupled method significantly reduces the cost in wall time and memory in comparison to the fully coupled approach. This decoupled approach for computing design sensitivities with the adjoint system is demonstrated for inviscid flow in chemical non-equilibrium around a re-entry vehicle with a retro-firing annular nozzle. The sensitivities of the surface temperature and mass flow rate through the nozzle plenum are computed with respect to plenum conditions and verified against sensitivities computed using a complex-variable finite-difference approach. The decoupled scheme significantly reduces the computational time and memory required to complete the optimization, making this an attractive method for high-fidelity design of hypersonic vehicles.

  13. Accelerating sino-atrium computer simulations with graphic processing units.

    PubMed

    Zhang, Hong; Xiao, Zheng; Lin, Shien-fong

    2015-01-01

    Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphic Processing Units (GPUs) is proposed in a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partition. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased 62% with respect to a serial program running on CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations.
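
    The paper's CUDA kernels and detailed SAN/atrial cell models are not reproduced here. The sketch below illustrates, in serial Python, the operator-splitting structure the authors parallelize: a per-cell reaction (membrane) step, which is the part that maps naturally onto one GPU thread per cell, followed by a diffusion step coupling neighbouring cells. The placeholder ionic model and parameter values are assumptions for illustration only.

    ```python
    import numpy as np

    def reaction(v, dt):
        # Placeholder ionic model: a cubic FitzHugh-Nagumo-like term stands in
        # for the detailed SAN/atrial membrane kinetics used in the paper.
        return v + dt * (v * (1.0 - v) * (v - 0.2))

    def diffusion(v, dt, d, dx):
        # Second-order central difference Laplacian along the cell strand.
        lap = np.zeros_like(v)
        lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
        return v + dt * d * lap

    def step(v, dt, d, dx):
        # Operator splitting: the (embarrassingly parallel) reaction step first,
        # then the diffusion coupling between neighbouring cells.
        return diffusion(reaction(v, dt), dt, d, dx)

    v = np.zeros(530)          # 500 SAN-like cells plus 30 atrial-like cells
    v[:10] = 1.0               # stimulate one end of the strand
    for _ in range(2000):
        v = step(v, dt=0.01, d=0.1, dx=1.0)
    print(v[:5], v[-5:])       # excitation front has propagated part way
    ```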

  14. Accelerating the discovery of space-time patterns of infectious diseases using parallel computing.

    PubMed

    Hohl, Alexander; Delmelle, Eric; Tang, Wenwu; Casas, Irene

    2016-11-01

    Infectious diseases have complex transmission cycles, and effective public health responses require the ability to monitor outbreaks in a timely manner. Space-time statistics facilitate the discovery of disease dynamics including rate of spread and seasonal cyclic patterns, but are computationally demanding, especially for datasets of increasing size, diversity and availability. High-performance computing reduces the effort required to identify these patterns, however heterogeneity in the data must be accounted for. We develop an adaptive space-time domain decomposition approach for parallel computation of the space-time kernel density. We apply our methodology to individual reported dengue cases from 2010 to 2011 in the city of Cali, Colombia. The parallel implementation reaches significant speedup compared to sequential counterparts. Density values are visualized in an interactive 3D environment, which facilitates the identification and communication of uneven space-time distribution of disease events. Our framework has the potential to enhance the timely monitoring of infectious diseases. Copyright © 2016 Elsevier Ltd. All rights reserved.
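
    The adaptive space-time domain decomposition itself is not reproduced here. The sketch below shows a minimal space-time kernel density estimate evaluated on a grid, with the grid split into chunks across a process pool; the event data, kernels, and bandwidths are hypothetical stand-ins for the dengue case data described in the abstract.

    ```python
    import numpy as np
    from multiprocessing import Pool

    # Hypothetical event data: (x, y, t) for reported cases.
    rng = np.random.default_rng(0)
    EVENTS = rng.uniform(0.0, 100.0, size=(5000, 3))
    HS, HT = 5.0, 7.0  # spatial and temporal bandwidths (assumed values)

    def stkde_chunk(grid_points):
        """Unnormalized space-time kernel density at each (x, y, t) grid point:
        product of an Epanechnikov-style spatial kernel and temporal kernel."""
        out = np.zeros(len(grid_points))
        for i, (gx, gy, gt) in enumerate(grid_points):
            ds2 = ((EVENTS[:, 0] - gx) ** 2 + (EVENTS[:, 1] - gy) ** 2) / HS**2
            dt2 = ((EVENTS[:, 2] - gt) ** 2) / HT**2
            ks = np.where(ds2 < 1.0, 1.0 - ds2, 0.0)
            kt = np.where(dt2 < 1.0, 1.0 - dt2, 0.0)
            out[i] = np.sum(ks * kt)
        return out

    if __name__ == "__main__":
        xs, ys, ts = np.meshgrid(np.linspace(0, 100, 20),
                                 np.linspace(0, 100, 20),
                                 np.linspace(0, 100, 10))
        grid = np.column_stack([xs.ravel(), ys.ravel(), ts.ravel()])
        with Pool(4) as pool:                    # domain decomposition by chunks
            parts = pool.map(stkde_chunk, np.array_split(grid, 4))
        density = np.concatenate(parts)
        print(density.max())
    ```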

  15. A 3D staggered-grid finite difference scheme for poroelastic wave equation

    NASA Astrophysics Data System (ADS)

    Zhang, Yijie; Gao, Jinghuai

    2014-10-01

    Three dimensional numerical modeling has been a viable tool for understanding wave propagation in real media. The poroelastic media can better describe the phenomena of hydrocarbon reservoirs than acoustic and elastic media. However, the numerical modeling in 3D poroelastic media demands significantly more computational capacity, including both computational time and memory. In this paper, we present a 3D poroelastic staggered-grid finite difference (SFD) scheme. During the procedure, parallel computing is implemented to reduce the computational time. Parallelization is based on domain decomposition, and communication between processors is performed using message passing interface (MPI). Parallel analysis shows that the parallelized SFD scheme significantly improves the simulation efficiency and 3D decomposition in domain is the most efficient. We also analyze the numerical dispersion and stability condition of the 3D poroelastic SFD method. Numerical results show that the 3D numerical simulation can provide a real description of wave propagation.
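
    The 3D poroelastic scheme is far too large to reproduce, but the MPI domain-decomposition pattern it relies on can be shown in a 1D toy: each rank owns a strip of the grid and exchanges ghost (halo) cells with its neighbours before every stencil update. The sketch below assumes mpi4py is installed and uses an arbitrary smoothing stencil purely to exercise the communication pattern.

    ```python
    # Run with e.g.:  mpiexec -n 4 python halo_demo.py
    # 1D toy of the domain-decomposition/halo-exchange pattern used by the
    # 3D poroelastic SFD scheme (not the scheme itself).
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_local = 100                       # interior points per rank
    u = np.zeros(n_local + 2)           # one ghost cell on each side
    u[1:-1] = rank                      # dummy initial field

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    for step in range(10):
        # Exchange ghost cells with neighbours, then apply a stencil update.
        comm.Sendrecv(u[1:2], left, recvbuf=u[-1:], source=right)
        comm.Sendrecv(u[-2:-1], right, recvbuf=u[0:1], source=left)
        u[1:-1] = 0.5 * u[1:-1] + 0.25 * (u[:-2] + u[2:])

    print(f"rank {rank}: mean interior value {u[1:-1].mean():.3f}")
    ```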

  16. Floating-Point Modules Targeted for Use with RC Compilation Tools

    NASA Technical Reports Server (NTRS)

    Sahin, Ibrahin; Gloster, Clay S.

    2000-01-01

    Reconfigurable Computing (RC) has emerged as a viable computing solution for computationally intensive applications. Several applications have been mapped to RC systems, and in most cases they provided the smallest published execution time. Although RC systems offer significant performance advantages over general-purpose processors, they require more application development time than general-purpose processors. This increased development time of RC systems provides the motivation to develop an optimized module library with an assembly language instruction format interface for use with future RC systems that will reduce development time significantly. In this paper, we present area/performance metrics for several different types of floating point (FP) modules that can be utilized to develop complex FP applications. These modules are highly pipelined and optimized for both speed and area. Using these modules, an example application, FP matrix multiplication, is also presented. Our results and experiences show that, with these modules, an 8-10X speedup over general-purpose processors can be achieved.

  17. Constructing Precisely Computing Networks with Biophysical Spiking Neurons.

    PubMed

    Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T

    2015-07-15

    While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks including irregular, Poisson-like spike times, and a tight balance between excitation and inhibition. These results significantly increase the biological plausibility of the spike-based approach to network computation, and uncover how several components of biological networks may work together to efficiently carry out computation. Copyright © 2015 the authors 0270-6474/15/3510112-23$15.00/0.

  18. Speedup computation of HD-sEMG signals using a motor unit-specific electrical source model.

    PubMed

    Carriou, Vincent; Boudaoud, Sofiane; Laforet, Jeremy

    2018-01-23

    Nowadays, bio-reliable modeling of muscle contraction is becoming more accurate and complex. This increasing complexity induces a significant increase in computation time, which prevents the use of this model in certain applications and studies. Accordingly, the aim of this work is to significantly reduce the computation time of high-density surface electromyogram (HD-sEMG) generation. This is done through a new model of the motor unit (MU)-specific electrical source based on the fibers composing the MU. In order to assess the efficiency of this approach, we computed the normalized root mean square error (NRMSE) between several simulations of single generated MU action potentials (MUAPs) using the usual fiber electrical sources and the MU-specific electrical source. This NRMSE was computed for five different simulation sets wherein hundreds of MUAPs are generated and summed into HD-sEMG signals. The obtained results display less than 2% error on the generated signals compared to the same signals generated with fiber electrical sources. Moreover, the computation time of the HD-sEMG signal generation model is reduced by about 90% compared to the fiber electrical source model. Using this model with MU electrical sources, we can simulate HD-sEMG signals of a physiological muscle (hundreds of MUs) in less than an hour on a classical workstation. Graphical Abstract: Overview of the simulation of HD-sEMG signals using the fiber scale and the MU scale. Upscaling the electrical source to the MU scale reduces the computation time by 90%, inducing only small deviations in the simulated HD-sEMG signals.
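
    As a small illustration, the NRMSE comparison between a fibre-scale and an MU-scale simulation can be computed as in the sketch below; the signals and the normalization convention (peak-to-peak range of the reference) are assumptions, since the paper's exact definition is not quoted in the abstract.

    ```python
    import numpy as np

    def nrmse(reference, approximation):
        """Normalized root mean square error between a reference signal
        (e.g., a fibre-scale simulation) and an approximation (MU-scale).
        Normalization by the peak-to-peak range of the reference is one
        common convention; the paper may use a different one."""
        err = np.sqrt(np.mean((reference - approximation) ** 2))
        return err / (reference.max() - reference.min())

    t = np.linspace(0.0, 0.05, 2048)                    # 50 ms MUAP window
    fibre_scale = np.sin(2 * np.pi * 80 * t) * np.exp(-t / 0.01)
    mu_scale = fibre_scale + 0.01 * np.random.default_rng(1).standard_normal(t.size)
    print(f"NRMSE = {100 * nrmse(fibre_scale, mu_scale):.2f} %")
    ```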

  19. A Review of High-Performance Computational Strategies for Modeling and Imaging of Electromagnetic Induction Data

    NASA Astrophysics Data System (ADS)

    Newman, Gregory A.

    2014-01-01

    Many geoscientific applications exploit electrostatic and electromagnetic fields to interrogate and map subsurface electrical resistivity—an important geophysical attribute for characterizing mineral, energy, and water resources. In complex three-dimensional geologies, where many of these resources remain to be found, resistivity mapping requires large-scale modeling and imaging capabilities, as well as the ability to treat significant data volumes, which can easily overwhelm single-core and modest multicore computing hardware. To treat such problems requires large-scale parallel computational resources, necessary for reducing the time to solution to a time frame acceptable to the exploration process. The recognition that significant parallel computing processes must be brought to bear on these problems gives rise to choices that must be made in parallel computing hardware and software. In this review, some of these choices are presented, along with the resulting trade-offs. We also discuss future trends in high-performance computing and the anticipated impact on electromagnetic (EM) geophysics. Topics discussed in this review article include a survey of parallel computing platforms, from graphics processing units to multicore CPUs with a fast interconnect, along with parallel solvers and associated solver libraries effective for inductive EM modeling and imaging.

  20. The relationship between computer games and quality of life in adolescents

    PubMed Central

    Dolatabadi, Nayereh Kasiri; Eslami, Ahmad Ali; Mostafavi, Firooze; Hassanzade, Akbar; Moradi, Azam

    2013-01-01

    Background: The amount of time teenagers spend playing computer games is growing rapidly. This popular phenomenon can cause physical and psychosocial problems for them. Therefore, this study examined the relationship between computer games and quality of life domains in adolescents aged 12-15 years. Materials and Methods: In a cross-sectional study using the 2-stage stratified cluster sampling method, 444 male and female students in Borkhar were selected. The data collection tool consisted of 1) the World Health Organization Quality Of Life – BREF questionnaire and 2) a personal information questionnaire. The data were analyzed by Pearson correlation, Spearman correlation, chi-square, independent t-tests and analysis of covariance. Findings: The total mean score of quality of life in students was 67.11±13.34. The results showed a significant relationship between the age of starting to play games and the overall quality of life score and its four domains (range r=–0.13 to –0.18). The mean overall quality of life score in computer game users was 68.27±13.03, while it was 64.81±13.69 among those who did not play computer games, and the difference was significant (P=0.01). There were significant differences in the environmental and mental health domains between the two groups (P<0.05). However, there was no significant relationship between BMI and either the time spent playing or the type of computer games. Conclusion: Playing computer games for a short time under parental supervision can have positive effects on quality of life in adolescents. However, spending long hours playing computer games may have negative long-term effects. PMID:24083270

  1. Association between neighborhood socioeconomic status and screen time among pre-school children: a cross-sectional study

    PubMed Central

    2010-01-01

    Background Sedentary behavior is considered a separate construct from physical activity and engaging in sedentary behaviors results in health effects independent of physical activity levels. A major source of sedentary behavior in children is time spent viewing TV or movies, playing video games, and using computers. To date no study has examined the impact of neighborhood socioeconomic status (SES) on pre-school children's screen time behavior. Methods Proxy reports of weekday and weekend screen time (TV/movies, video games, and computer use) were completed by 1633 parents on their 4-5 year-old children in Edmonton, Alberta between November, 2005 and August, 2007. Postal codes were used to classify neighborhoods into low, medium, or high SES. Multiple linear and logistic regression models were conducted to examine relationships between screen time and neighborhood SES. Results Girls living in low SES neighborhoods engaged in significantly more weekly overall screen time and TV/movie minutes compared to girls living in high SES neighborhoods. The same relationship was not observed in boys. Children living in low SES neighborhoods were significantly more likely to be video game users and less likely to be computer users compared to children living in high SES neighborhoods. Also, children living in medium SES neighborhoods were significantly less likely to be computer users compared to children living in high SES neighborhoods. Conclusions Some consideration should be given to providing alternative activity opportunities for children, especially girls who live in lower SES neighborhoods. Also, future research should continue to investigate the independent effects of neighborhood SES on screen time as well as the potential mediating variables for this relationship. PMID:20573262

  2. Computer Simulations Improve University Instructional Laboratories

    PubMed Central

    2004-01-01

    Laboratory classes are commonplace and essential in biology departments but can sometimes be cumbersome, unreliable, and a drain on time and resources. As university intakes increase, pressure on budgets and staff time can often lead to reduction in practical class provision. Frequently, the ability to use laboratory equipment, mix solutions, and manipulate test animals are essential learning outcomes, and “wet” laboratory classes are thus appropriate. In others, however, interpretation and manipulation of the data are the primary learning outcomes, and here, computer-based simulations can provide a cheaper, easier, and less time- and labor-intensive alternative. We report the evaluation of two computer-based simulations of practical exercises: the first in chromosome analysis, the second in bioinformatics. Simulations can provide significant time savings to students (by a factor of four in our first case study) without affecting learning, as measured by performance in assessment. Moreover, under certain circumstances, performance can be improved by the use of simulations (by 7% in our second case study). We concluded that the introduction of these simulations can significantly enhance student learning where consideration of the learning outcomes indicates that it might be appropriate. In addition, they can offer significant benefits to teaching staff. PMID:15592599

  3. Comparing portable computers with bedside computers when administering medications using bedside medication verification.

    PubMed

    Ludwig-Beymer, Patti; Williams, Phillip; Stimac, Ellen

    2012-01-01

    This research examined bedside medication verification administration in 2 adult critical care units, using portable computers and permanent bedside computers. There were no differences in the number of near-miss errors, the time to administer the medications, or nurse perception of ease of medication administration, care of patients, or reliability of technology. The percentage of medications scanned was significantly higher with the use of permanent bedside computers, and nurses using permanent bedside computers were more likely to agree that the computer was always available.

  4. Simulation of a master-slave event set processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Comfort, J.C.

    1984-03-01

    Event set manipulation may consume a considerable amount of the computation time spent in performing a discrete-event simulation. One way of minimizing this time is to allow event set processing to proceed in parallel with the remainder of the simulation computation. The paper describes a multiprocessor simulation computer, in which all non-event set processing is performed by the principal processor (called the host). Event set processing is coordinated by a front end processor (the master) and actually performed by several other functionally identical processors (the slaves). A trace-driven simulation program modeling this system was constructed, and was run with trace output taken from two different simulation programs. Output from this simulation suggests that a significant reduction in run time may be realized by this approach. Sensitivity analysis was performed on the significant parameters to the system (number of slave processors, relative processor speeds, and interprocessor communication times). A comparison between actual and simulation run times for a one-processor system was used to assist in the validation of the simulation. 7 references.
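
    For readers unfamiliar with what "event set processing" involves, the sketch below shows the two key operations (schedule and extract-earliest) in a minimal single-process event set built on Python's heapq; this is only an illustration of the workload that the master-slave processors offload, not a model of the hardware described in the report.

    ```python
    import heapq
    import itertools

    # Minimal single-process event set: the insert/extract-earliest operations
    # shown here are the work that the master-slave design offloads so it can
    # overlap with the host's non-event-set computation.
    class EventSet:
        def __init__(self):
            self._heap = []
            self._seq = itertools.count()   # tie-breaker for equal timestamps

        def schedule(self, time, event):
            heapq.heappush(self._heap, (time, next(self._seq), event))

        def next_event(self):
            time, _, event = heapq.heappop(self._heap)
            return time, event

    # Tiny toy driver: alternate arrivals and departures with fixed delays.
    events = EventSet()
    events.schedule(0.0, "arrival")
    clock, served = 0.0, 0
    while served < 5:
        clock, kind = events.next_event()
        if kind == "arrival":
            events.schedule(clock + 1.0, "arrival")     # next arrival
            events.schedule(clock + 0.7, "departure")   # service completion
        else:
            served += 1
    print(f"simulated time after 5 departures: {clock:.1f}")
    ```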

  5. Computed Tomography Measuring Inside Machines

    NASA Technical Reports Server (NTRS)

    Wozniak, James F.; Scudder, Henry J.; Anders, Jeffrey E.

    1995-01-01

    Computed tomography applied to obtain approximate measurements of radial distances from centerline of turbopump to leading edges of diffuser vanes in turbopump. Use of computed tomography has significance beyond turbopump application: example of general concept of measuring internal dimensions of assembly of parts without having to perform time-consuming task of taking assembly apart and measuring internal parts on coordinate-measuring machine.

  6. Practical Algorithms for the Longest Common Extension Problem

    NASA Astrophysics Data System (ADS)

    Ilie, Lucian; Tinta, Liviu

    The Longest Common Extension problem considers a string s and computes, for each of a number of pairs (i,j), the longest substring of s that starts at both i and j. It appears as a subproblem in many fundamental string problems and can be solved by linear-time preprocessing of the string that allows (worst-case) constant-time computation for each pair. The two known approaches use powerful algorithms: either constant-time computation of the Lowest Common Ancestor in trees or constant-time computation of Range Minimum Queries (RMQ) in arrays. We show here that, from a practical point of view, such complicated approaches are not needed. We give two very simple algorithms for this problem that require no preprocessing. The first needs only the string and is significantly faster than all previous algorithms on average. The second combines the first with a direct RMQ computation on the Longest Common Prefix array. It takes advantage of the superior speed of the cache memory and is the fastest on virtually all inputs.
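
    The first of the two algorithms, direct character-by-character scanning with no preprocessing, is short enough to sketch; the function name and example string below are illustrative.

    ```python
    def lce(s, i, j):
        """Longest common extension: length of the longest substring of s
        starting at both positions i and j.  Direct scan, no preprocessing --
        the approach the paper argues is fast enough on average."""
        k = 0
        n = len(s)
        while i + k < n and j + k < n and s[i + k] == s[j + k]:
            k += 1
        return k

    s = "abracadabra"
    print(lce(s, 0, 7))   # "abra..." vs "abra" -> 4
    print(lce(s, 1, 3))   # "brac..." vs "acad..." -> 0
    ```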

  7. Evaluation of an F100 multivariable control using a real-time engine simulation

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.; Skira, C.; Soeder, J. F.

    1977-01-01

    A multivariable control design for the F100 turbofan engine was evaluated, as part of the F100 multivariable control synthesis (MVCS) program. The evaluation utilized a real-time, hybrid computer simulation of the engine and a digital computer implementation of the control. Significant results of the evaluation are presented and recommendations concerning future engine testing of the control are made.

  8. An assessment of the real-time application capabilities of the SIFT computer system

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1982-01-01

    The real-time capabilities of the SIFT computer system, a highly reliable multicomputer architecture developed to support the flight controls of a relaxed static stability aircraft, are discussed. The SIFT computer system was designed to meet extremely high reliability requirements and to facilitate a formal proof of its correctness. Although SIFT represents a significant achievement in fault-tolerant system research, it presents an unusual and restrictive interface to its users. The characteristics of the user interface and its impact on application system design are assessed.

  9. Comparative Analysis Between Computed and Conventional Inferior Alveolar Nerve Block Techniques.

    PubMed

    Araújo, Gabriela Madeira; Barbalho, Jimmy Charles Melo; Dias, Tasiana Guedes de Souza; Santos, Thiago de Santana; Vasconcellos, Ricardo José de Holanda; de Morais, Hécio Henrique Araújo

    2015-11-01

    The aim of this randomized, double-blind, controlled trial was to compare the computed and conventional inferior alveolar nerve block techniques in symmetrically positioned inferior third molars. Both computed and conventional anesthetic techniques were performed in 29 healthy patients (58 surgeries) aged between 18 and 40 years. The anesthetic of choice was 2% lidocaine with 1:200,000 epinephrine. The Visual Analogue Scale assessed the pain variable after anesthetic infiltration. Patient satisfaction was evaluated using the Likert Scale. Heart and respiratory rates, mean time to perform the technique, and the need for additional anesthesia were also evaluated. Pain variable means were higher for the conventional technique than for the computed technique, 3.45 ± 2.73 and 2.86 ± 1.96, respectively, but no statistically significant differences were found (P > 0.05). Patient satisfaction showed no statistically significant differences. The average runtimes of the computed and conventional techniques were 3.85 and 1.61 minutes, respectively, a statistically significant difference (P < 0.001). The computed anesthetic technique showed lower mean pain perception, but did not show statistically significant differences when contrasted with the conventional technique.

  10. A maximum entropy reconstruction technique for tomographic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Bilsky, A. V.; Lozhkin, V. A.; Markovich, D. M.; Tokarev, M. P.

    2013-04-01

    This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents the theoretical computation performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and validation of synthetic images demonstrate significant computational time reduction. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART.
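
    The published MENT algorithm is not reproduced here; the sketch below shows a generic multiplicative algebraic (MART-style) update on a toy under-determined system, to illustrate the family of entropy-motivated, non-negativity-preserving reconstructions that MENT, SMART, and MART belong to. The toy projection geometry is invented for illustration.

    ```python
    import numpy as np

    def mart(A, b, n_iter=50, relaxation=1.0):
        """Multiplicative algebraic reconstruction (MART-style sketch).
        A: (n_rays, n_voxels) projection matrix, b: measured line integrals.
        Multiplicative updates keep intensities non-negative, which is one
        reason entropy-based methods suit tomographic PIV."""
        x = np.ones(A.shape[1])
        for _ in range(n_iter):
            for i in range(A.shape[0]):
                proj = A[i] @ x
                if proj > 0 and b[i] > 0:
                    x *= (b[i] / proj) ** (relaxation * A[i])
                elif b[i] == 0:
                    x[A[i] > 0] = 0.0   # zero rays force intersected voxels to zero
        return x

    # Toy 2x2 "volume" observed along rows and columns.  The system is
    # under-determined (as in tomo-PIV), so the reconstruction matches the
    # measured projections but need not equal the exact truth.
    truth = np.array([1.0, 0.0, 0.0, 2.0])
    A = np.array([[1, 1, 0, 0],     # row sums
                  [0, 0, 1, 1],
                  [1, 0, 1, 0],     # column sums
                  [0, 1, 0, 1]], dtype=float)
    print(np.round(mart(A, A @ truth), 3))
    ```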

  11. Algorithms of GPU-enabled reactive force field (ReaxFF) molecular dynamics.

    PubMed

    Zheng, Mo; Li, Xiaoxia; Guo, Li

    2013-04-01

    Reactive force field (ReaxFF), a recent and novel bond order potential, allows for reactive molecular dynamics (ReaxFF MD) simulations for modeling larger and more complex molecular systems involving chemical reactions when compared with computation-intensive quantum mechanical methods. However, ReaxFF MD can be approximately 10-50 times slower than classical MD due to its explicit modeling of bond forming and breaking, the dynamic charge equilibration at each time-step, and its time-step being one order of magnitude smaller than that of classical MD, all of which pose significant computational challenges to reaching spatio-temporal scales of nanometers and nanoseconds. The very recent advances in graphics processing units (GPUs) provide not only highly favorable performance for GPU-enabled MD programs compared with CPU implementations, but also an opportunity to cope with the computing power and memory demands that ReaxFF MD imposes on computer hardware. In this paper, we present the algorithms of GMD-Reax, the first GPU-enabled ReaxFF MD program, with significantly improved performance surpassing CPU implementations on desktop workstations. The performance of GMD-Reax has been benchmarked on a PC equipped with a NVIDIA C2050 GPU for coal pyrolysis simulation systems with atoms ranging from 1378 to 27,283. GMD-Reax achieved speedups as high as 12 times faster than Duin et al.'s FORTRAN codes in Lammps on 8 CPU cores and 6 times faster than the Lammps C codes based on PuReMD in terms of the simulation time per time-step averaged over 100 steps. GMD-Reax could be used as a new and efficient computational tool for exploring very complex molecular reactions via ReaxFF MD simulation on desktop workstations. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Numerical algorithm comparison for the accurate and efficient computation of high-incidence vortical flow

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    1991-01-01

    Computations from two Navier-Stokes codes, NSS and F3D, are presented for a tangent-ogive-cylinder body at high angle of attack. Features of this steady flow include a pair of primary vortices on the leeward side of the body as well as secondary vortices. The topological and physical plausibility of this vortical structure is discussed. The accuracy of these codes is assessed by comparison of the numerical solutions with experimental data. The effects of turbulence model, numerical dissipation, and grid refinement are presented. The overall efficiency of these codes is also assessed by examining their convergence rates, computational time per time step, and maximum allowable time step for time-accurate computations. Overall, the numerical results from both codes compared equally well with experimental data; however, the NSS code was found to be significantly more efficient than the F3D code.

  13. Computer measurement of particle sizes in electron microscope images

    NASA Technical Reports Server (NTRS)

    Hall, E. L.; Thompson, W. B.; Varsi, G.; Gauldin, R.

    1976-01-01

    Computer image processing techniques have been applied to particle counting and sizing in electron microscope images. Distributions of particle sizes were computed for several images and compared to manually computed distributions. The results of these experiments indicate that automatic particle counting within a reasonable error and computer processing time is feasible. The significance of the results is that the tedious task of manually counting a large number of particles can be eliminated while still providing the scientist with accurate results.
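
    The 1976 system itself is not available, but the basic threshold-label-measure pipeline behind automated particle counting and sizing can be sketched with scipy.ndimage on a synthetic image; the threshold and blob parameters below are arbitrary.

    ```python
    import numpy as np
    from scipy import ndimage

    def count_and_size_particles(image, threshold):
        """Threshold an image, label connected components, and return the
        area (in pixels) of each detected particle -- the basic pipeline
        behind automated particle counting and sizing."""
        binary = image > threshold
        labels, n_particles = ndimage.label(binary)
        areas = ndimage.sum(binary, labels, index=np.arange(1, n_particles + 1))
        return n_particles, np.asarray(areas)

    # Synthetic "micrograph": background noise plus a few bright blobs.
    rng = np.random.default_rng(2)
    img = rng.normal(0.0, 0.1, size=(200, 200))
    yy, xx = np.ogrid[:200, :200]
    for cy, cx, r in [(50, 60, 6), (120, 40, 10), (160, 150, 4)]:
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r * r] += 1.0

    n, areas = count_and_size_particles(img, threshold=0.5)
    print(n, sorted(areas.astype(int)))
    ```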

  14. Testing for nonlinearity in time series: The method of surrogate data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theiler, J.; Galdrikian, B.; Longtin, A.

    1991-01-01

    We describe a statistical approach for identifying nonlinearity in time series; in particular, we want to avoid claims of chaos when simpler models (such as linearly correlated noise) can explain the data. The method requires a careful statement of the null hypothesis which characterizes a candidate linear process, the generation of an ensemble of "surrogate" data sets which are similar to the original time series but consistent with the null hypothesis, and the computation of a discriminating statistic for the original and for each of the surrogate data sets. The idea is to test the original time series against the null hypothesis by checking whether the discriminating statistic computed for the original time series differs significantly from the statistics computed for each of the surrogate sets. We present algorithms for generating surrogate data under various null hypotheses, and we show the results of numerical experiments on artificial data using correlation dimension, Lyapunov exponent, and forecasting error as discriminating statistics. Finally, we consider a number of experimental time series -- including sunspots, electroencephalogram (EEG) signals, and fluid convection -- and evaluate the statistical significance of the evidence for nonlinear structure in each case. 56 refs., 8 figs.
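
    A minimal version of the procedure, phase-randomized surrogates plus a single discriminating statistic, is sketched below. The statistic used here (skewness of the increments, a crude time-asymmetry measure) and the test series are stand-ins; the paper's experiments use correlation dimension, Lyapunov exponents, and forecasting error.

    ```python
    import numpy as np

    def phase_randomized_surrogate(x, rng):
        """Surrogate series with the same power spectrum (hence the same linear
        autocorrelation) as x, but with randomized Fourier phases."""
        spectrum = np.fft.rfft(x)
        phases = rng.uniform(0.0, 2.0 * np.pi, size=spectrum.size)
        phases[0] = 0.0                       # keep the mean unchanged
        return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(x))

    def discriminating_statistic(x):
        # Crude time-asymmetry measure: skewness of the increments.
        d = np.diff(x)
        return np.mean(((d - d.mean()) / d.std()) ** 3)

    rng = np.random.default_rng(3)
    t = np.arange(4096)
    # A time-irreversible test series (slow ramps, sudden drops) plus noise:
    # a linear Gaussian process cannot reproduce this asymmetry.
    series = (0.01 * t) % 1.0 + 0.1 * rng.standard_normal(t.size)

    s0 = discriminating_statistic(series)
    surrogate_stats = [discriminating_statistic(phase_randomized_surrogate(series, rng))
                       for _ in range(99)]
    n_below = sum(s < s0 for s in surrogate_stats)
    # If the original lies outside the surrogate distribution (rank 0 or 99),
    # the linear null hypothesis is rejected at roughly the 2% level (two-sided).
    print(f"original: {s0:.2f}, surrogates below it: {n_below}/99")
    ```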

  15. Optimum space shuttle launch times relative to natural environment

    NASA Technical Reports Server (NTRS)

    King, R. L.

    1977-01-01

    Three sets of meteorological criteria were analyzed to determine the probabilities of favorable launch and landing conditions. Probabilities were computed for every 3 hours on a yearly basis using 14 years of weather data. These temporal probability distributions, applicable to the three sets of weather criteria encompassing benign, moderate and severe weather conditions, were computed for both Kennedy Space Center (KSC) and Edwards Air Force Base. In addition, conditional probabilities were computed for unfavorable weather conditions occurring after a delay which may or may not be due to weather conditions. Also, for KSC, the probabilities of favorable landing conditions at various times after favorable launch conditions have prevailed have been computed so that mission probabilities may be more accurately computed for those time periods when persistence strongly correlates weather conditions. Moreover, the probabilities and conditional probabilities of the occurrence of both favorable and unfavorable events for each individual criterion were computed to indicate the significance of each weather element to the overall result.
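
    The report's climatological tables are not reproduced here, but the kind of conditional-probability computation it describes is simple to sketch from a 3-hourly boolean record of whether the launch criteria are met; the synthetic "weather" series and 70% favorable threshold below are assumptions chosen only to show how persistence raises the conditional probability.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    # Hypothetical 14-year record sampled every 3 hours: True = criteria met.
    # A slowly varying random-walk "weather" process supplies the persistence
    # that the report exploits for launch/landing planning.
    weather = np.cumsum(rng.standard_normal(14 * 365 * 8)) * 0.1
    favorable = weather < np.quantile(weather, 0.7)

    p_favorable = favorable.mean()

    lag = 8          # 8 samples x 3 h = 24 h later (e.g., launch -> landing window)
    both = favorable[:-lag] & favorable[lag:]
    p_conditional = both.sum() / favorable[:-lag].sum()

    print(f"P(favorable)                            = {p_favorable:.2f}")
    print(f"P(favorable 24 h later | favorable now) = {p_conditional:.2f}")
    ```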

  16. Use of television, videogames, and computer among children and adolescents in Italy.

    PubMed

    Patriarca, Alessandro; Di Giuseppe, Gabriella; Albano, Luciana; Marinelli, Paolo; Angelillo, Italo F

    2009-05-13

    This survey determined the practices about television (video inclusive), videogames, and computer use in children and adolescents in Italy. A self-administered anonymous questionnaire covered socio-demographics; behaviour about television, videogames, computer, and sports; parental control over television, videogames, and computer. Overall, 54.1% and 61% always ate lunch or dinner in front of the television, 89.5% had a television in the bedroom while 52.5% of them always watched television there, and 49% indicated that parents controlled the content of what was watched on television. The overall mean length of time daily spent on television viewing (2.8 hours) and the frequency of watching for at least two hours per day (74.9%) were significantly associated with older age, always ate lunch or dinner while watching television, spent more time playing videogames and using computer. Those with parents from a lower socio-economic level were also more likely to spend more minutes viewing television. Two-thirds played videogames for 1.6 daily hours and more time was spent by those younger, males, with parents that do not control them, who watched more television, and who spent more time at the computer. The computer was used by 85% of the sample for 1.6 daily hours and those older, with a computer in the bedroom, with a higher number of computers in home, who view more television and play videogames were more likely to use the computer. Immediate and comprehensive actions are needed in order to diminish time spent at the television, videogames, and computer.

  17. Use of television, videogames, and computer among children and adolescents in Italy

    PubMed Central

    Patriarca, Alessandro; Di Giuseppe, Gabriella; Albano, Luciana; Marinelli, Paolo; Angelillo, Italo F

    2009-01-01

    Background This survey determined the practices about television (video inclusive), videogames, and computer use in children and adolescents in Italy. Methods A self-administered anonymous questionnaire covered socio-demographics; behaviour about television, videogames, computer, and sports; parental control over television, videogames, and computer. Results Overall, 54.1% and 61% always ate lunch or dinner in front of the television, 89.5% had a television in the bedroom while 52.5% of them always watched television there, and 49% indicated that parents controlled the content of what was watched on television. The overall mean length of time daily spent on television viewing (2.8 hours) and the frequency of watching for at least two hours per day (74.9%) were significantly associated with older age, always ate lunch or dinner while watching television, spent more time playing videogames and using computer. Those with parents from a lower socio-economic level were also more likely to spend more minutes viewing television. Two-thirds played videogames for 1.6 daily hours and more time was spent by those younger, males, with parents that do not control them, who watched more television, and who spent more time at the computer. The computer was used by 85% of the sample for 1.6 daily hours and those older, with a computer in the bedroom, with a higher number of computers in home, who view more television and play videogames were more likely to use the computer. Conclusion Immediate and comprehensive actions are needed in order to diminish time spent at the television, videogames, and computer. PMID:19439070

  18. Talking with the alien: interaction with computers in the GP consultation.

    PubMed

    Dowell, Anthony; Stubbe, Maria; Scott-Dowell, Kathy; Macdonald, Lindsay; Dew, Kevin

    2013-01-01

    This study examines New Zealand GPs' interaction with computers in routine consultations. Twenty-eight video-recorded consultations from 10 GPs were analysed in micro-detail to explore: (i) how doctors divide their time and attention between computer and patient; (ii) the different roles ascribed to the computer; and (iii) how computer use influences the interactional flow of the consultation. All GPs engaged with the computer in some way for at least 20% of each consultation, and on average spent 12% of time totally focussed on the computer. Patterns of use varied; most GPs inputted all or most notes during the consultation, but a few set aside dedicated time afterwards. The computer acted as an additional participant enacting roles like information repository and legitimiser of decisions. Computer use also altered some of the normal 'rules of engagement' between doctor and patient. Long silences and turning away interrupted the smooth flow of conversation, but various 'multitasking' strategies allowed GPs to remain engaged with patients during episodes of computer use (e.g. signposting, online commentary, verbalising while typing, social chat). Conclusions were that use of computers has many benefits but also significantly influences the fine detail of the GP consultation. Doctors must consciously develop strategies to manage this impact.

  19. The position of a standard optical computer mouse affects cardiorespiratory responses during the operation of a computer under time constraints.

    PubMed

    Sako, Shunji; Sugiura, Hiromichi; Tanoue, Hironori; Kojima, Makoto; Kono, Mitsunobu; Inaba, Ryoichi

    2014-08-01

    This study investigated the association between task-induced stress and fatigue by examining the cardiovascular responses of subjects using different mouse positions while operating a computer under time constraints. Sixteen young, healthy men participated in the study, which examined the use of optical mouse devices affixed to laptop computers. Two mouse positions were investigated: (1) the distal position (DP), in which the subjects place their forearms on the desk accompanied by the abduction and flexion of their shoulder joints, and (2) the proximal position (PP), in which the subjects place only their wrists on the desk without using an armrest. The subjects continued each task for 16 min. We assessed differences in several characteristics according to mouse position, including expired gas values, autonomic nerve activities (based on cardiorespiratory responses), operating efficiencies (based on word counts), and fatigue levels (based on the visual analog scale - VAS). Oxygen consumption (VO(2)), the ratio of inspiration time to respiration time (T(i)/T(total)), respiratory rate (RR), minute ventilation (VE), and the ratio of expiration to inspiration (Te/T(i)) were significantly lower when the participants were performing the task in the DP than those obtained in the PP. Tidal volume (VT), carbon dioxide output rates (VCO(2)/VE), and oxygen extraction fractions (VO(2)/VE) were significantly higher for the DP than they were for the PP. No significant difference in VAS was observed between the positions; however, as the task progressed, autonomic nerve activities were lower and operating efficiencies were significantly higher for the DP than they were for the PP. Our results suggest that the DP has fewer effects on cardiorespiratory functions, causes lower levels of sympathetic nerve activity and mental stress, and produces a higher total workload than the PP. This suggests that the DP is preferable to the PP when operating a computer.

  20. Computer multitasking with Desqview 386 in a family practice.

    PubMed Central

    Davis, A E

    1990-01-01

    Computers are now widely used in medical practice for accounting and secretarial tasks. However, it has been much more difficult to use computers in more physician-related activities of daily practice. I investigated the Desqview multitasking system on a 386 computer as a solution to this problem. Physician-directed tasks of management of patient charts, retrieval of reference information, word processing, appointment scheduling and office organization were each managed by separate programs. Desqview allowed instantaneous switching back and forth between the various programs. I compared the time and cost savings and the need for physician input between Desqview 386, a 386 computer alone and an older, XT computer. Desqview significantly simplified the use of computer programs for medical information management and minimized the necessity for physician intervention. The time saved was 15 minutes per day; the costs saved were estimated to be $5000 annually. PMID:2383848

  1. Look-ahead Dynamic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-20

    The look-ahead dynamic simulation software system incorporates high-performance parallel computing technologies, significantly reduces the solution time for each transient simulation case, and brings dynamic simulation analysis into on-line applications to enable more transparency for better reliability and asset utilization. It takes a snapshot of the current power grid status, performs the system dynamic simulation in parallel, and outputs the transient response of the power system in real time.

  2. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
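
    The full USGS procedure (log transforms, bias-correction factors, and the MSPE acceptance criterion) is not reproduced here; the sketch below only illustrates the two regression forms named in the text, simple linear (SSC on turbidity) and multiple linear (SSC on turbidity and streamflow), and the subsequent load computation using the standard 0.0027 unit-conversion coefficient for concentration in mg/L and streamflow in ft^3/s. The calibration samples are made up.

    ```python
    import numpy as np

    # Hypothetical calibration samples: turbidity (FNU), streamflow (ft^3/s),
    # and measured suspended-sediment concentration (mg/L).
    turb = np.array([12.0, 35.0, 80.0, 150.0, 300.0, 420.0, 610.0])
    flow = np.array([110.0, 180.0, 350.0, 600.0, 1500.0, 2100.0, 3300.0])
    ssc  = np.array([15.0, 40.0, 95.0, 170.0, 380.0, 520.0, 760.0])

    # Simple linear regression: SSC = b0 + b1 * turbidity.
    X1 = np.column_stack([np.ones_like(turb), turb])
    b_simple, *_ = np.linalg.lstsq(X1, ssc, rcond=None)

    # Multiple linear regression adding streamflow as a second explanatory variable.
    X2 = np.column_stack([np.ones_like(turb), turb, flow])
    b_multi, *_ = np.linalg.lstsq(X2, ssc, rcond=None)

    # Apply the simple model to a continuous turbidity/streamflow time series and
    # compute load (tons/day) with the standard 0.0027 conversion coefficient.
    turb_ts = np.array([20.0, 60.0, 240.0, 500.0])
    flow_ts = np.array([130.0, 300.0, 1200.0, 2600.0])
    ssc_ts  = b_simple[0] + b_simple[1] * turb_ts
    load_ts = 0.0027 * ssc_ts * flow_ts
    print(np.round(ssc_ts, 1), np.round(load_ts, 1))
    ```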

  3. Visual Fatigue Induced by Viewing a Tablet Computer with a High-resolution Display.

    PubMed

    Kim, Dong Ju; Lim, Chi Yeon; Gu, Namyi; Park, Choul Yong

    2017-10-01

    In the present study, the visual discomfort induced by smart mobile devices was assessed in normal and healthy adults. Fifty-nine volunteers (age, 38.16 ± 10.23 years; male : female = 19 : 40) were exposed to tablet computer screen stimuli (iPad Air, Apple Inc.) for 1 hour. Participants watched a movie or played a computer game on the tablet computer. Visual fatigue and discomfort were assessed using an asthenopia questionnaire, tear film break-up time, and total ocular wavefront aberration before and after viewing smart mobile devices. Based on the questionnaire, viewing smart mobile devices for 1 hour significantly increased mean total asthenopia score from 19.59 ± 8.58 to 22.68 ± 9.39 (p < 0.001). Specifically, the scores for five items (tired eyes, sore/aching eyes, irritated eyes, watery eyes, and hot/burning eye) were significantly increased by viewing smart mobile devices. Tear film break-up time significantly decreased from 5.09 ± 1.52 seconds to 4.63 ± 1.34 seconds (p = 0.003). However, total ocular wavefront aberration was unchanged. Visual fatigue and discomfort were significantly induced by viewing smart mobile devices, even though the devices were equipped with state-of-the-art display technology. © 2017 The Korean Ophthalmological Society

  4. Visual Fatigue Induced by Viewing a Tablet Computer with a High-resolution Display

    PubMed Central

    Kim, Dong Ju; Lim, Chi-Yeon; Gu, Namyi

    2017-01-01

    Purpose In the present study, the visual discomfort induced by smart mobile devices was assessed in normal and healthy adults. Methods Fifty-nine volunteers (age, 38.16 ± 10.23 years; male : female = 19 : 40) were exposed to tablet computer screen stimuli (iPad Air, Apple Inc.) for 1 hour. Participants watched a movie or played a computer game on the tablet computer. Visual fatigue and discomfort were assessed using an asthenopia questionnaire, tear film break-up time, and total ocular wavefront aberration before and after viewing smart mobile devices. Results Based on the questionnaire, viewing smart mobile devices for 1 hour significantly increased mean total asthenopia score from 19.59 ± 8.58 to 22.68 ± 9.39 (p < 0.001). Specifically, the scores for five items (tired eyes, sore/aching eyes, irritated eyes, watery eyes, and hot/burning eye) were significantly increased by viewing smart mobile devices. Tear film break-up time significantly decreased from 5.09 ± 1.52 seconds to 4.63 ± 1.34 seconds (p = 0.003). However, total ocular wavefront aberration was unchanged. Conclusions Visual fatigue and discomfort were significantly induced by viewing smart mobile devices, even though the devices were equipped with state-of-the-art display technology. PMID:28914003

  5. New reflective symmetry design capability in the JPL-IDEAS Structure Optimization Program

    NASA Technical Reports Server (NTRS)

    Strain, D.; Levy, R.

    1986-01-01

    The JPL-IDEAS antenna structure analysis and design optimization computer program was modified to process half structure models of symmetric structures subjected to arbitrary external static loads, synthesize the performance, and optimize the design of the full structure. Significant savings in computation time and cost (more than 50%) were achieved compared to the cost of full model computer runs. The addition of the new reflective symmetry analysis design capabilities to the IDEAS program allows processing of structure models whose size would otherwise prevent automated design optimization. The new program produced synthesized full model iterative design results identical to those of actual full model program executions at substantially reduced cost, time, and computer storage.

  6. Quantitative colorectal cancer perfusion measurement using dynamic contrast-enhanced multidetector-row computed tomography: effect of acquisition time and implications for protocols.

    PubMed

    Goh, Vicky; Halligan, Steve; Hugill, Jo-Ann; Gartner, Louise; Bartram, Clive I

    2005-01-01

    To determine the effect of acquisition time on quantitative colorectal cancer perfusion measurement. Dynamic contrast-enhanced computed tomography (CT) was performed prospectively in 10 patients with histologically proven colorectal cancer using 4-detector row CT (Lightspeed Plus; GE Healthcare Technologies, Waukesha, WI). Tumor blood flow, blood volume, mean transit time, and permeability were assessed for 3 acquisition times (45, 65, and 130 seconds). Mean values for all 4 perfusion parameters for each acquisition time were compared using the paired t test. Significant differences in permeability values were noted between acquisitions of 45 seconds and 65 and 130 seconds, respectively (P=0.02, P=0.007). There was no significant difference for values of blood volume, blood flow, and mean transit time between any of the acquisition times. Scan acquisitions of 45 seconds are too short for reliable permeability measurement in the abdomen. Longer acquisition times are required.

  7. Microscopes and computers combined for analysis of chromosomes

    NASA Technical Reports Server (NTRS)

    Butler, J. W.; Butler, M. K.; Stroud, A. N.

    1969-01-01

    Scanning machine CHLOE, developed for photographic use, is combined with a digital computer to obtain quantitative and statistically significant data on chromosome shapes, distribution, density, and pairing. CHLOE permits data about a chromosome complement to be acquired two times faster than by manual pairing.

  8. Evaluation of a continuous-rotation, high-speed scanning protocol for micro-computed tomography.

    PubMed

    Kerl, Hans Ulrich; Isaza, Cristina T; Boll, Hanne; Schambach, Sebastian J; Nolte, Ingo S; Groden, Christoph; Brockmann, Marc A

    2011-01-01

    Micro-computed tomography is used frequently in preclinical in vivo research. Limiting factors are radiation dose and long scan times. The purpose of the study was to compare a standard step-and-shoot to a continuous-rotation, high-speed scanning protocol. Micro-computed tomography of a lead grid phantom and a rat femur was performed using a step-and-shoot and a continuous-rotation protocol. Detail discriminability and image quality were assessed by 3 radiologists. The signal-to-noise ratio and the modulation transfer function were calculated, and volumetric analyses of the femur were performed. The radiation dose of the scan protocols was measured using thermoluminescence dosimeters. The 40-second continuous-rotation protocol allowed a detail discriminability comparable to the step-and-shoot protocol at significantly lower radiation doses. No marked differences in volumetric or qualitative analyses were observed. Continuous-rotation micro-computed tomography significantly reduces scanning time and radiation dose without relevantly reducing image quality compared with a normal step-and-shoot protocol.

  9. Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers

    NASA Technical Reports Server (NTRS)

    Guru Prasad, K.; Kane, J. H.

    1992-01-01

    The computational advantages associated with the utilization of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach, employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis, can be competitive with, if not superior to, approaches involving direct solvers.
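    A minimal sketch of the reuse idea described above, not the authors' BEA code: the solution of the unperturbed system supplies the starting guess, and the preconditioner built for it is reused, when the perturbed system is solved iteratively. The matrices, perturbation, and solver choices below are illustrative assumptions.

    ```python
    # Sketch only: reuse the previous solution (warm start) and the previously
    # factored ILU preconditioner when re-solving a slightly perturbed system.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 500
    rng = np.random.default_rng(0)
    A0 = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = rng.standard_normal(n)

    ilu = spla.spilu(A0)                                 # preconditioner built once
    M = spla.LinearOperator((n, n), matvec=ilu.solve)
    x0, _ = spla.gmres(A0, b, M=M)                       # baseline analysis

    # "Reanalysis": the operator changes slightly (e.g., a perturbed shape).
    A1 = A0 + sp.diags(0.01 * rng.standard_normal(n), 0, format="csc")
    x1, _ = spla.gmres(A1, b, x0=x0, M=M)                # old solution + old preconditioner
    ```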

  10. Using a Cloud Computing System to Reduce Door-to-Balloon Time in Acute ST-Elevation Myocardial Infarction Patients Transferred for Percutaneous Coronary Intervention.

    PubMed

    Ho, Chi-Kung; Chen, Fu-Cheng; Chen, Yung-Lung; Wang, Hui-Ting; Lee, Chien-Ho; Chung, Wen-Jung; Lin, Cheng-Jui; Hsueh, Shu-Kai; Hung, Shin-Chiang; Wu, Kuan-Han; Liu, Chu-Feng; Kung, Chia-Te; Cheng, Cheng-I

    2017-01-01

    This study evaluated the impact on clinical outcomes using a cloud computing system to reduce percutaneous coronary intervention hospital door-to-balloon (DTB) time for ST segment elevation myocardial infarction (STEMI). A total of 369 patients before and after implementation of the transfer protocol were enrolled. Of these patients, 262 were transferred through protocol while the other 107 patients were transferred through the traditional referral process. There were no significant differences in DTB time, pain to door of STEMI receiving center arrival time, and pain to balloon time between the two groups. Pain to electrocardiography time in patients with Killip I/II and catheterization laboratory to balloon time in patients with Killip III/IV were significantly reduced in transferred through protocol group compared to in traditional referral process group (both p < 0.05). There were also no remarkable differences in the complication rate and 30-day mortality between two groups. The multivariate analysis revealed that the independent predictors of 30-day mortality were elderly patients, advanced Killip score, and higher level of troponin-I. This study showed that patients transferred through our present protocol could reduce pain to electrocardiography and catheterization laboratory to balloon time in Killip I/II and III/IV patients separately. However, this study showed that using a cloud computing system in our present protocol did not reduce DTB time.

  11. An Approach for Dynamic Grids

    NASA Technical Reports Server (NTRS)

    Slater, John W.; Liou, Meng-Sing; Hindman, Richard G.

    1994-01-01

    An approach is presented for the generation of two-dimensional, structured, dynamic grids. The grid motion may be due to the motion of the boundaries of the computational domain or to the adaptation of the grid to the transient, physical solution. A time-dependent grid is computed through the time integration of the grid speeds which are computed from a system of grid speed equations. The grid speed equations are derived from the time-differentiation of the grid equations so as to ensure that the dynamic grid maintains the desired qualities of the static grid. The grid equations are the Euler-Lagrange equations derived from a variational statement for the grid. The dynamic grid method is demonstrated for a model problem involving boundary motion, an inviscid flow in a converging-diverging nozzle during startup, and a viscous flow over a flat plate with an impinging shock wave. It is shown that the approach is more accurate for transient flows than an approach in which the grid speeds are computed using a finite difference with respect to time of the grid. However, the approach requires significantly more computational effort.
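    The contrast drawn above between evaluating grid speeds from a grid-speed relation and finite-differencing the grid in time can be seen in a one-dimensional toy case; the grid law and boundary motion below are invented for illustration.

    ```python
    # 1D illustration (not the paper's method): nodes follow a moving right
    # boundary, x_i(t) = (i/N) * R(t). "Exact" grid speeds come from
    # differentiating the grid law, dx_i/dt = (i/N) * dR/dt; the alternative
    # backward-differences node positions between time levels.
    import numpy as np

    N, dt = 10, 0.05
    R = lambda t: 1.0 + 0.2 * np.sin(2.0 * np.pi * t)      # boundary motion
    dRdt = lambda t: 0.4 * np.pi * np.cos(2.0 * np.pi * t)

    i = np.arange(N + 1)
    t = 0.3
    x_now, x_prev = (i / N) * R(t), (i / N) * R(t - dt)

    speed_exact = (i / N) * dRdt(t)          # from the grid-speed relation
    speed_fd = (x_now - x_prev) / dt         # finite difference in time

    print(np.max(np.abs(speed_exact - speed_fd)))   # first-order lag of the FD speeds
    ```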

  12. Next Steps in Network Time Synchronization For Navy Shipboard Applications

    DTIC Science & Technology

    2008-12-01

    From the 40th Annual Precise Time and Time Interval (PTTI) Meeting. ...dynamic manner than in previous designs. This new paradigm creates significant network time synchronization challenges. The Navy has been deploying the Network Time Protocol (NTP) in shipboard computing infrastructures to meet the current network time synchronization requirements...

  13. Computationally-Efficient Minimum-Time Aircraft Routes in the Presence of Winds

    NASA Technical Reports Server (NTRS)

    Jardin, Matthew R.

    2004-01-01

    A computationally efficient algorithm for minimizing the flight time of an aircraft in a variable wind field has been invented. The algorithm, referred to as Neighboring Optimal Wind Routing (NOWR), is based upon neighboring-optimal-control (NOC) concepts and achieves minimum-time paths by adjusting aircraft heading according to wind conditions at an arbitrary number of wind measurement points along the flight route. The NOWR algorithm may either be used in a fast-time mode to compute minimum- time routes prior to flight, or may be used in a feedback mode to adjust aircraft heading in real-time. By traveling minimum-time routes instead of direct great-circle (direct) routes, flights across the United States can save an average of about 7 minutes, and as much as one hour of flight time during periods of strong jet-stream winds. The neighboring optimal routes computed via the NOWR technique have been shown to be within 1.5 percent of the absolute minimum-time routes for flights across the continental United States. On a typical 450-MHz Sun Ultra workstation, the NOWR algorithm produces complete minimum-time routes in less than 40 milliseconds. This corresponds to a rate of 25 optimal routes per second. The closest comparable optimization technique runs approximately 10 times slower. Airlines currently use various trial-and-error search techniques to determine which of a set of commonly traveled routes will minimize flight time. These algorithms are too computationally expensive for use in real-time systems, or in systems where many optimal routes need to be computed in a short amount of time. Instead of operating in real-time, airlines will typically plan a trajectory several hours in advance using wind forecasts. If winds change significantly from forecasts, the resulting flights will no longer be minimum-time. The need for a computationally efficient wind-optimal routing algorithm is even greater in the case of new air-traffic-control automation concepts. For air-traffic-control automation, thousands of wind-optimal routes may need to be computed and checked for conflicts in just a few minutes. These factors motivated the need for a more efficient wind-optimal routing algorithm.
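    The NOWR gains themselves are not reproduced here, but the basic wind-triangle correction that any wind-aware routing scheme applies at each wind measurement point can be sketched as follows; the function name and sample numbers are assumptions.

    ```python
    # Not the NOWR algorithm itself, just the wind-triangle step it builds on:
    # given the desired ground-track course, true airspeed, and the local wind,
    # compute the heading (crab angle) that keeps the aircraft on that track.
    import math

    def heading_for_track(course_deg, tas, wind_from_deg, wind_speed):
        """Heading (deg) that holds `course_deg` at true airspeed `tas`."""
        wind_to = math.radians(wind_from_deg + 180.0)        # direction wind blows toward
        course = math.radians(course_deg)
        crosswind = wind_speed * math.sin(wind_to - course)  # component across the track
        crab = math.asin(crosswind / tas)                    # requires tas > |crosswind|
        return math.degrees(course - crab) % 360.0

    # 90 kt wind from the west on a northbound leg flown at 450 kt true airspeed:
    print(round(heading_for_track(0.0, 450.0, 270.0, 90.0), 1))   # about 348.5 deg
    ```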

  14. Time-Dependent Computed Tomographic Perfusion Thresholds for Patients With Acute Ischemic Stroke.

    PubMed

    d'Esterre, Christopher D; Boesen, Mari E; Ahn, Seong Hwan; Pordeli, Pooneh; Najm, Mohamed; Minhas, Priyanka; Davari, Paniz; Fainardi, Enrico; Rubiera, Marta; Khaw, Alexander V; Zini, Andrea; Frayne, Richard; Hill, Michael D; Demchuk, Andrew M; Sajobi, Tolulope T; Forkert, Nils D; Goyal, Mayank; Lee, Ting Y; Menon, Bijoy K

    2015-12-01

    Among patients with acute ischemic stroke, we determine computed tomographic perfusion (CTP) thresholds associated with follow-up infarction at different stroke onset-to-CTP and CTP-to-reperfusion times. Acute ischemic stroke patients with occlusion on computed tomographic angiography were acutely imaged with CTP. Noncontrast computed tomography and magnetic resonance diffusion-weighted imaging between 24 and 48 hours were used to delineate follow-up infarction. Reperfusion was assessed on conventional angiogram or 4-hour repeat computed tomographic angiography. Tmax, cerebral blood flow, and cerebral blood volume derived from delay-insensitive CTP postprocessing were analyzed using receiver-operator characteristic curves to derive optimal thresholds for combined patient data (pooled analysis) and individual patients (patient-level analysis) based on time from stroke onset-to-CTP and CTP-to-reperfusion. One-way ANOVA and locally weighted scatterplot smoothing regression were used to test whether the derived optimal CTP thresholds were different by time. One hundred and thirty-two patients were included. Tmax thresholds of >16.2 and >15.8 s and absolute cerebral blood flow thresholds of <8.9 and <7.4 mL·min(-1)·100 g(-1) were associated with infarct if reperfused <90 min from CTP with onset <180 min. The discriminative ability of cerebral blood volume was modest. No statistically significant relationship was noted between stroke onset-to-CTP time and the optimal CTP thresholds for all parameters based on discrete or continuous time analysis (P>0.05). A statistically significant relationship existed between CTP-to-reperfusion time and the optimal thresholds for cerebral blood flow (P<0.001; r=0.59 and 0.77 for gray and white matter, respectively) and Tmax (P<0.001; r=-0.68 and -0.60 for gray and white matter, respectively) parameters. Optimal CTP thresholds associated with follow-up infarction depend on time from imaging to reperfusion. © 2015 American Heart Association, Inc.

  15. Price and cost estimation

    NASA Technical Reports Server (NTRS)

    Stewart, R. D.

    1979-01-01

    Price and Cost Estimating Program (PACE II) was developed to prepare man-hour and material cost estimates. Versatile and flexible tool significantly reduces computation time and errors and reduces typing and reproduction time involved in preparation of cost estimates.

  16. Chimera grids in the simulation of three-dimensional flowfields in turbine-blade-coolant passages

    NASA Technical Reports Server (NTRS)

    Stephens, M. A.; Rimlinger, M. J.; Shih, T. I.-P.; Civinskas, K. C.

    1993-01-01

    When computing flows inside geometrically complex turbine-blade coolant passages, the structure of the grid system used can affect significantly the overall time and cost required to obtain solutions. This paper addresses this issue while evaluating and developing computational tools for the design and analysis of coolant-passages, and is divided into two parts. In the first part, the various types of structured and unstructured grids are compared in relation to their ability to provide solutions in a timely and cost-effective manner. This comparison shows that the overlapping structured grids, known as Chimera grids, can rival and in some instances exceed the cost-effectiveness of unstructured grids in terms of both the man hours needed to generate grids and the amount of computer memory and CPU time needed to obtain solutions. In the second part, a computational tool utilizing Chimera grids was used to compute the flow and heat transfer in two different turbine-blade coolant passages that contain baffles and numerous pin fins. These computations showed the versatility and flexibility offered by Chimera grids.

  17. Brain-Inspired Photonic Signal Processor for Generating Periodic Patterns and Emulating Chaotic Systems

    NASA Astrophysics Data System (ADS)

    Antonik, Piotr; Haelterman, Marc; Massar, Serge

    2017-05-01

    Reservoir computing is a bioinspired computing paradigm for processing time-dependent signals. Its hardware implementations have received much attention because of their simplicity and remarkable performance on a series of benchmark tasks. In previous experiments, the output was uncoupled from the system and, in most cases, simply computed off-line on a postprocessing computer. However, numerical investigations have shown that feeding the output back into the reservoir opens the possibility of long-horizon time-series forecasting. Here, we present a photonic reservoir computer with output feedback, and we demonstrate its capacity to generate periodic time series and to emulate chaotic systems. We study in detail the effect of experimental noise on system performance. In the case of chaotic systems, we introduce several metrics, based on standard signal-processing techniques, to evaluate the quality of the emulation. Our work significantly enlarges the range of tasks that can be solved by hardware reservoir computers and, therefore, the range of applications they could potentially tackle. It also raises interesting questions in nonlinear dynamics and chaos theory.
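    A software analogue of the output-feedback idea, not the photonic implementation: an echo state network is trained by teacher forcing and then closed on its own output to generate a periodic signal. Reservoir size, the target wave, and the plain least-squares readout are arbitrary choices; in practice a ridge-regularized readout is usually more stable.

    ```python
    # Echo state network with output feedback (software sketch, not the hardware):
    # teacher-force the reservoir on a sine wave, fit a linear readout, then feed
    # the readout's own output back in and let the system free-run.
    import numpy as np

    rng = np.random.default_rng(1)
    N, steps, washout = 200, 2000, 200
    W = rng.standard_normal((N, N)) * 0.1
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))            # spectral radius below 1
    W_fb = rng.uniform(-1.0, 1.0, N)                     # output-feedback weights
    target = np.sin(2 * np.pi * np.arange(steps + 1) / 50.0)

    x, states = np.zeros(N), []
    for t in range(steps):                               # teacher-forcing phase
        x = np.tanh(W @ x + W_fb * target[t])
        states.append(x.copy())
    X = np.array(states[washout:])
    W_out = np.linalg.lstsq(X, target[washout + 1:steps + 1], rcond=None)[0]

    y, generated = x @ W_out, []                         # close the loop and free-run
    for _ in range(200):
        x = np.tanh(W @ x + W_fb * y)
        y = x @ W_out
        generated.append(y)
    ```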

  18. Computational Systems Toxicology: recapitulating the logistical dynamics of cellular response networks in virtual tissue models (Eurotox_2017)

    EPA Science Inventory

    Translating in vitro data and biological information into a predictive model for human toxicity poses a significant challenge. This is especially true for complex adaptive systems such as the embryo where cellular dynamics are precisely orchestrated in space and time. Computer ce...

  19. Conformational dynamics of proanthocyanidins: physical and computational approaches

    Treesearch

    Fred L. Tobiason; Richard W. Hemingway; T. Hatano

    1998-01-01

    The interaction of plant polyphenols with proteins accounts for a good part of their commercial (e.g., leather manufacture) and biological (e.g., antimicrobial activity) significance. The interplay between observations of physical data such as crystal structure, NMR analyses, and time-resolved fluorescence with results of computational chemistry approaches has been...

  20. Adaptation of a program for nonlinear finite element analysis to the CDC STAR 100 computer

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Ogilvie, P. L.

    1978-01-01

    The conversion of a nonlinear finite element program to the CDC STAR 100 pipeline computer is discussed. The program, called DYCAST, was developed for the crash simulation of structures. Initial results with the STAR 100 computer indicated that significant gains in computation time are possible for operations on global arrays. However, for element-level computations that do not lend themselves easily to long vector processing, the STAR 100 was slower than comparable scalar computers. On this basis it is concluded that in order for pipeline computers to impact the economic feasibility of large nonlinear analyses, it is absolutely essential that algorithms be devised to improve the efficiency of element-level computations.

  1. Extending the length and time scales of Gram-Schmidt Lyapunov vector computations

    NASA Astrophysics Data System (ADS)

    Costa, Anthony B.; Green, Jason R.

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram-Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N^2 with the particle count N. This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram-Schmidt vectors. The first is a distributed-memory message-passing method using ScaLAPACK. The second uses the newly released MAGMA library for GPUs. We compare the performance of both codes for Lennard-Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To the best of our knowledge, these are the largest systems for which the Gram-Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
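    The QR/Gram-Schmidt step that these codes accelerate can be written down for a toy two-dimensional map; the Hénon map below is an illustrative stand-in for the large molecular-dynamics tangent space.

    ```python
    # Serial reference for the QR / Gram-Schmidt step the paper accelerates:
    # Lyapunov exponents of the Henon map from repeated QR factorization of the
    # running product of Jacobians (the large-N MD case replaces these 2x2 matrices).
    import numpy as np

    a, b = 1.4, 0.3

    def step(v):
        x, y = v
        return np.array([1.0 - a * x * x + y, b * x])

    def jacobian(v):
        x, _ = v
        return np.array([[-2.0 * a * x, 1.0], [b, 0.0]])

    v = np.array([0.1, 0.1])
    Q = np.eye(2)
    log_r = np.zeros(2)
    n_steps = 100_000
    for _ in range(n_steps):
        Q, R = np.linalg.qr(jacobian(v) @ Q)   # re-orthonormalize the tangent vectors
        log_r += np.log(np.abs(np.diag(R)))
        v = step(v)

    print(log_r / n_steps)   # roughly [0.42, -1.62] for the classical Henon map
    ```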

  2. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.

    1985-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
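    A toy Monte Carlo in the spirit of the simulation described above shows how the average stall per access grows with the bank reservation time; the bank count, reservation times, and access pattern are invented parameters.

    ```python
    # Toy Monte Carlo of memory bank contention (parameters invented): random
    # strided accesses hit B interleaved banks; a bank stays busy for
    # `reservation` CPU ticks, and the pipeline stalls when it hits a busy bank.
    import random

    def contention(banks=64, reservation=16, accesses=100_000, stride=None):
        busy_until = [0] * banks          # tick at which each bank becomes free
        tick, stalls, addr = 0, 0, 0
        for _ in range(accesses):
            addr += stride if stride else random.randrange(1, 1024)
            bank = addr % banks
            if busy_until[bank] > tick:   # bank still reserved: stall until it frees
                stalls += busy_until[bank] - tick
                tick = busy_until[bank]
            busy_until[bank] = tick + reservation
            tick += 1                     # otherwise one access issued per tick
        return stalls / accesses

    for r in (4, 8, 16, 32):              # longer reservation -> sharply more stalls
        print(r, round(contention(reservation=r), 2))
    ```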

  3. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1987-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.

  4. Experience with a sophisticated computer based authoring system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, P.R.

    1984-04-01

    In the November 1982 issue of ADCIS SIG CBT Newsletter the editor arrives at two conclusions regarding Computer Based Authoring Systems (CBAS): (1) CBAS drastically reduces programming time and the need for expert programmers, and (2) CBAS appears to have minimal impact on initial lesson design. Both of these comments have significant impact on any Cost-Benefit analysis for Computer-Based Training. The first tends to improve cost-effectiveness but only toward the limits imposed by the second. Westinghouse Hanford Company (WHC) recently purchased a sophisticated CBAS, the WISE/SMART system from Wicat (Orem, UT), for use in the Nuclear Power Industry. This report details our experience with this system relative to Items (1) and (2) above; lesson design time will be compared with lesson input time. Also provided will be the WHC experience in the use of subject matter experts (though computer neophytes) for the design and inputting of CBT materials.

  5. Computer-Based Mathematics Instructions for Engineering Students

    NASA Technical Reports Server (NTRS)

    Khan, Mustaq A.; Wall, Curtiss E.

    1996-01-01

    Almost every engineering course involves mathematics in one form or another. The analytical process of developing mathematical models is very important for engineering students. However, the computational process involved in the solution of some mathematical problems may be very tedious and time consuming. There is a significant amount of mathematical software such as Mathematica, Mathcad, and Maple designed to aid in the solution of these instructional problems. The use of these packages in classroom teaching can greatly enhance understanding, and save time. Integration of computer technology in mathematics classes, without de-emphasizing the traditional analytical aspects of teaching, has proven very successful and is becoming almost essential. Sample computer laboratory modules are developed for presentation in the classroom setting. This is accomplished through the use of overhead projectors linked to graphing calculators and computers. Model problems are carefully selected from different areas.

  6. Lifestyle and health-related quality of life: a cross-sectional study among civil servants in China.

    PubMed

    Xu, Jun; Qiu, Jincai; Chen, Jie; Zou, Liai; Feng, Liyi; Lu, Yan; Wei, Qian; Zhang, Jinhua

    2012-05-04

    Health-related quality of life (HRQoL) has been increasingly acknowledged as a valid and appropriate indicator of public health and chronic morbidity. However, limited research was conducted among Chinese civil servants owing to their different lifestyle. The aim of the study was to evaluate the HRQoL among Chinese civil servants and to identify factors that might be associated with their HRQoL. A cross-sectional study was conducted to investigate the HRQoL of 15,000 civil servants in China using stratified random sampling methods. Independent-Samples t-Test, one-way ANOVA, and multiple stepwise regression were used to analyse the influencing factors and the HRQoL of the civil servants. A univariate analysis showed that there were significant differences among physical component summary (PCS), mental component summary (MCS), and TS between lifestyle factors, such as smoking, drinking alcohol, having breakfast, sleep time, physical exercise, work time, operating computers, and sedentariness (P < 0.05). Multiple stepwise regressions showed that there were significant differences among TS between lifestyle factors, such as breakfast, sleep time, physical exercise, operating computers, sedentariness, work time, and drinking (P < 0.05). In this study, using Short Form 36 items (SF-36), we assessed the association of HRQoL with lifestyle factors, including smoking, drinking alcohol, having breakfast, sleep time, physical exercise, work time, operating computers, and sedentariness in China. The performance of the questionnaire in the large-scale survey is satisfactory and provides a broad picture of the HRQoL status in Chinese civil servants. Our results indicate that lifestyle factors such as smoking, drinking alcohol, having breakfast, sleep time, physical exercise, work time, operating computers, and sedentariness affect the HRQoL of civil servants in China.

  7. Lifestyle and health-related quality of life: A cross-sectional study among civil servants in China

    PubMed Central

    2012-01-01

    Background Health-related quality of life (HRQoL) has been increasingly acknowledged as a valid and appropriate indicator of public health and chronic morbidity. However, limited research was conducted among Chinese civil servants owing to their different lifestyle. The aim of the study was to evaluate the HRQoL among Chinese civil servants and to identify factors that might be associated with their HRQoL. Methods A cross-sectional study was conducted to investigate the HRQoL of 15,000 civil servants in China using stratified random sampling methods. Independent-Samples t-Test, one-way ANOVA, and multiple stepwise regression were used to analyse the influencing factors and the HRQoL of the civil servants. Results A univariate analysis showed that there were significant differences among physical component summary (PCS), mental component summary (MCS), and TS between lifestyle factors, such as smoking, drinking alcohol, having breakfast, sleep time, physical exercise, work time, operating computers, and sedentariness (P < 0.05). Multiple stepwise regressions showed that there were significant differences among TS between lifestyle factors, such as breakfast, sleep time, physical exercise, operating computers, sedentariness, work time, and drinking (P < 0.05). Conclusion In this study, using Short Form 36 items (SF-36), we assessed the association of HRQoL with lifestyle factors, including smoking, drinking alcohol, having breakfast, sleep time, physical exercise, work time, operating computers, and sedentariness in China. The performance of the questionnaire in the large-scale survey is satisfactory and provides a broad picture of the HRQoL status in Chinese civil servants. Our results indicate that lifestyle factors such as smoking, drinking alcohol, having breakfast, sleep time, physical exercise, work time, operating computers, and sedentariness affect the HRQoL of civil servants in China. PMID:22559315

  8. A window-based time series feature extraction method.

    PubMed

    Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife

    2017-10-01

    This study proposes a robust similarity score-based time series feature extraction method that is termed as Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and three precordial leads from a publicly available electrocardiogram (ECG) dataset. This is followed by comparing WTC in terms of predictive accuracy and computational complexity with shapelet transform and fast shapelet transform (which constitutes an accelerated variant of the shapelet transform). The results indicate that WTC achieves a slightly higher classification performance with significantly lower execution time when compared to its shapelet-based alternatives. With respect to its interpretable features, WTC has a potential to enable medical experts to explore definitive common trends in novel datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.
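    The published WTC similarity score is not reproduced here; the following is a generic sketch of the windowed feature-extraction pattern it follows, where the window width, step, and the features themselves are arbitrary choices.

    ```python
    # Generic window-based feature sketch (not the published WTC score): slide a
    # fixed-length window over a series and emit a small, interpretable feature
    # vector per window; a classifier or distance measure can then use these.
    import numpy as np

    def window_features(series, width=50, step=10):
        feats = []
        for start in range(0, len(series) - width + 1, step):
            w = np.asarray(series[start:start + width], dtype=float)
            feats.append([w.mean(), w.std(), w.max() - w.min(),
                          np.argmax(w) / width])          # where the peak falls
        return np.array(feats)

    t = np.linspace(0, 1, 500)
    ap_like = np.exp(-8 * t) * (t > 0.05)                 # crude AP-shaped trace
    print(window_features(ap_like).shape)                 # (windows, features)
    ```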

  9. Quantum computing: a prime modality in neurosurgery's future.

    PubMed

    Lee, Brian; Liu, Charles Y; Apuzzo, Michael L J

    2012-11-01

    With each significant development in the field of neurosurgery, our dependence on computers, small and large, has continuously increased. From something as mundane as bipolar cautery to sophisticated intraoperative navigation with real-time magnetic resonance imaging-assisted surgical guidance, both technologies, however simple or complex, require computational processing power to function. The next frontier for neurosurgery involves developing a greater understanding of the brain and furthering our capabilities as surgeons to directly affect brain circuitry and function. This has come in the form of implantable devices that can electronically and nondestructively influence the cortex and nuclei with the purpose of restoring neuronal function and improving quality of life. We are now transitioning from devices that are turned on and left alone, such as vagus nerve stimulators and deep brain stimulators, to "smart" devices that can listen and react to the body as the situation may dictate. The development of quantum computers and their potential to be thousands, if not millions, of times faster than current "classical" computers, will significantly affect the neurosciences, especially the field of neurorehabilitation and neuromodulation. Quantum computers may advance our understanding of the neural code and, in turn, better develop and program implantable neural devices. When quantum computers reach the point where we can actually implant such devices in patients, the possibilities of what can be done to interface and restore neural function will be limitless. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Dry eye syndrome among computer users

    NASA Astrophysics Data System (ADS)

    Gajta, Aurora; Turkoanje, Daniela; Malaescu, Iosif; Marin, Catalin-Nicolae; Koos, Marie-Jeanne; Jelicic, Biljana; Milutinovic, Vuk

    2015-12-01

    Dry eye syndrome is characterized by eye irritation due to changes of the tear film. Symptoms include itching, foreign body sensations, mucous discharge and transitory vision blurring. Less common symptoms include photophobia and eye tiredness. The aim of the work was to determine the quality of the tear film and the potential risk of ocular dryness in persons who spend more than 8 hours using computers, and possible correlations between severity of symptoms (dry eye symptom anamnesis) and clinical signs assessed by: Schirmer test I, TBUT (tear break-up time), and TFT (tear ferning test). The results show that subjects using computers have significantly shorter TBUT (less than 5 s for 56% of subjects and less than 10 s for 37% of subjects), and TFT type II/III in 50% of subjects and type III in 31% of subjects, when compared to computer non-users (TFT type I and II was present in 85.71% of subjects). Visual display terminal use of more than 8 hours daily has been identified as a significant risk factor for dry eye. All persons who spend substantial time using computers are advised to use artificial tear drops in order to minimize the symptoms of dry eye syndrome and prevent serious complications.

  11. Evaluation of user input methods for manipulating a tablet personal computer in sterile techniques.

    PubMed

    Yamada, Akira; Komatsu, Daisuke; Suzuki, Takeshi; Kurozumi, Masahiro; Fujinaga, Yasunari; Ueda, Kazuhiko; Kadoya, Masumi

    2017-02-01

    To determine a quick and accurate user input method for manipulating tablet personal computers (PCs) in sterile techniques. We evaluated three different manipulation methods, (1) Computer mouse and sterile system drape, (2) Fingers and sterile system drape, and (3) Digitizer stylus and sterile ultrasound probe cover with a pinhole, in terms of the central processing unit (CPU) performance, manipulation performance, and contactlessness. A significant decrease in CPU score ([Formula: see text]) and an increase in CPU temperature ([Formula: see text]) were observed when a system drape was used. The respective mean times taken to select a target image from an image series (ST) and the mean times for measuring points on an image (MT) were [Formula: see text] and [Formula: see text] s for the computer mouse method, [Formula: see text] and [Formula: see text] s for the finger method, and [Formula: see text] and [Formula: see text] s for the digitizer stylus method, respectively. The ST for the finger method was significantly longer than for the digitizer stylus method ([Formula: see text]). The MT for the computer mouse method was significantly longer than for the digitizer stylus method ([Formula: see text]). The mean success rate for measuring points on an image was significantly lower for the finger method when the diameter of the target was equal to or smaller than 8 mm than for the other methods. No significant difference in the adenosine triphosphate amount at the surface of the tablet PC was observed before, during, or after manipulation via the digitizer stylus method while wearing starch-powdered sterile gloves ([Formula: see text]). Quick and accurate manipulation of tablet PCs in sterile techniques without CPU load is feasible using a digitizer stylus and sterile ultrasound probe cover with a pinhole.

  12. Decoherence in adiabatic quantum computation

    NASA Astrophysics Data System (ADS)

    Albash, Tameem; Lidar, Daniel A.

    2015-06-01

    Recent experiments with increasingly larger numbers of qubits have sparked renewed interest in adiabatic quantum computation, and in particular quantum annealing. A central question that is repeatedly asked is whether quantum features of the evolution can survive over the long time scales used for quantum annealing relative to standard measures of the decoherence time. We reconsider the role of decoherence in adiabatic quantum computation and quantum annealing using the adiabatic quantum master-equation formalism. We restrict ourselves to the weak-coupling and singular-coupling limits, which correspond to decoherence in the energy eigenbasis and in the computational basis, respectively. We demonstrate that decoherence in the instantaneous energy eigenbasis does not necessarily detrimentally affect adiabatic quantum computation, and in particular that a short single-qubit T2 time need not imply adverse consequences for the success of the quantum adiabatic algorithm. We further demonstrate that boundary cancellation methods, designed to improve the fidelity of adiabatic quantum computing in the closed-system setting, remain beneficial in the open-system setting. To address the high computational cost of master-equation simulations, we also demonstrate that a quantum Monte Carlo algorithm that explicitly accounts for a thermal bosonic bath can be used to interpolate between classical and quantum annealing. Our study highlights and clarifies the significantly different role played by decoherence in the adiabatic and circuit models of quantum computing.

  13. [Musculoskeletal disorders among university student computer users].

    PubMed

    Lorusso, A; Bruno, S; L'Abbate, N

    2009-01-01

    Musculoskeletal disorders are a common problem among computer users. Many epidemiological studies have shown that ergonomic factors and aspects of work organization play an important role in the development of these disorders. We carried out a cross-sectional survey to estimate the prevalence of musculoskeletal symptoms among university students using personal computers and to investigate the features of occupational exposure and the prevalence of symptoms throughout the study course. Another objective was to assess the students' level of knowledge of computer ergonomics and the relevant health risks. A questionnaire was distributed to 183 students attending the lectures for second and fourth year courses of the Faculty of Architecture. Data concerning personal characteristics, ergonomic and organizational aspects of computer use, and the presence of musculoskeletal symptoms in the neck and upper limbs were collected. Exposure to risk factors such as daily duration of computer use, time spent at the computer without breaks, duration of mouse use and poor workstation ergonomics was significantly higher among students of the fourth year course. Neck pain was the most commonly reported symptom (69%), followed by hand/wrist (53%), shoulder (49%) and arm (8%) pain. The prevalence of symptoms in the neck and hand/wrist area was significantly higher in the students of the fourth year course. In our survey we found a high prevalence of musculoskeletal symptoms among university students using computers for long time periods on a daily basis. Exposure to computer-related ergonomic and organizational risk factors, and the prevalence of musculoskeletal symptoms, both seem to increase significantly throughout the study course. Furthermore, we found that the level of perception of computer-related health risks among the students was low. Our findings suggest the need for preventive intervention consisting of education in computer ergonomics.

  14. Parenting style, the home environment, and screen time of 5-year-old children; the 'be active, eat right' study.

    PubMed

    Veldhuis, Lydian; van Grieken, Amy; Renders, Carry M; Hirasing, Remy A; Raat, Hein

    2014-01-01

    The global increase in childhood overweight and obesity has been ascribed partly to increases in children's screen time. Parents have a large influence on their children's screen time. Studies investigating parenting and early childhood screen time are limited. In this study, we investigated associations of parenting style and the social and physical home environment on watching TV and using computers or game consoles among 5-year-old children. This study uses baseline data concerning 5-year-old children (n = 3067) collected for the 'Be active, eat right' study. Children of parents with a higher score on the parenting style dimension involvement, were more likely to spend >30 min/day on computers or game consoles. Overall, families with an authoritative or authoritarian parenting style had lower percentages of children's screen time compared to families with an indulgent or neglectful style, but no significant difference in OR was found. In families with rules about screen time, children were less likely to watch TV>2 hrs/day and more likely to spend >30 min/day on computers or game consoles. The number of TVs and computers or game consoles in the household was positively associated with screen time, and children with a TV or computer or game console in their bedroom were more likely to watch TV>2 hrs/day or spend >30 min/day on computers or game consoles. The magnitude of the association between parenting style and screen time of 5-year-olds was found to be relatively modest. The associations found between the social and physical environment and children's screen time are independent of parenting style. Interventions to reduce children's screen time might be most effective when they support parents specifically with introducing family rules related to screen time and prevent the presence of a TV or computer or game console in the child's room.

  15. Parenting Style, the Home Environment, and Screen Time of 5-Year-Old Children; The ‘Be Active, Eat Right’ Study

    PubMed Central

    Veldhuis, Lydian; van Grieken, Amy; Renders, Carry M.; HiraSing, Remy A.; Raat, Hein

    2014-01-01

    Introduction The global increase in childhood overweight and obesity has been ascribed partly to increases in children's screen time. Parents have a large influence on their children's screen time. Studies investigating parenting and early childhood screen time are limited. In this study, we investigated associations of parenting style and the social and physical home environment on watching TV and using computers or game consoles among 5-year-old children. Methods This study uses baseline data concerning 5-year-old children (n = 3067) collected for the ‘Be active, eat right’ study. Results Children of parents with a higher score on the parenting style dimension involvement, were more likely to spend >30 min/day on computers or game consoles. Overall, families with an authoritative or authoritarian parenting style had lower percentages of children's screen time compared to families with an indulgent or neglectful style, but no significant difference in OR was found. In families with rules about screen time, children were less likely to watch TV>2 hrs/day and more likely to spend >30 min/day on computers or game consoles. The number of TVs and computers or game consoles in the household was positively associated with screen time, and children with a TV or computer or game console in their bedroom were more likely to watch TV>2 hrs/day or spend >30 min/day on computers or game consoles. Conclusion The magnitude of the association between parenting style and screen time of 5-year-olds was found to be relatively modest. The associations found between the social and physical environment and children's screen time are independent of parenting style. Interventions to reduce children's screen time might be most effective when they support parents specifically with introducing family rules related to screen time and prevent the presence of a TV or computer or game console in the child's room. PMID:24533092

  16. Using a Cloud Computing System to Reduce Door-to-Balloon Time in Acute ST-Elevation Myocardial Infarction Patients Transferred for Percutaneous Coronary Intervention

    PubMed Central

    Ho, Chi-Kung; Wang, Hui-Ting; Lee, Chien-Ho; Chung, Wen-Jung; Lin, Cheng-Jui; Hsueh, Shu-Kai; Hung, Shin-Chiang; Wu, Kuan-Han; Liu, Chu-Feng; Kung, Chia-Te

    2017-01-01

    Background This study evaluated the impact on clinical outcomes using a cloud computing system to reduce percutaneous coronary intervention hospital door-to-balloon (DTB) time for ST segment elevation myocardial infarction (STEMI). Methods A total of 369 patients before and after implementation of the transfer protocol were enrolled. Of these patients, 262 were transferred through protocol while the other 107 patients were transferred through the traditional referral process. Results There were no significant differences in DTB time, pain to door of STEMI receiving center arrival time, and pain to balloon time between the two groups. Pain to electrocardiography time in patients with Killip I/II and catheterization laboratory to balloon time in patients with Killip III/IV were significantly reduced in transferred through protocol group compared to in traditional referral process group (both p < 0.05). There were also no remarkable differences in the complication rate and 30-day mortality between two groups. The multivariate analysis revealed that the independent predictors of 30-day mortality were elderly patients, advanced Killip score, and higher level of troponin-I. Conclusions This study showed that patients transferred through our present protocol could reduce pain to electrocardiography and catheterization laboratory to balloon time in Killip I/II and III/IV patients separately. However, this study showed that using a cloud computing system in our present protocol did not reduce DTB time. PMID:28900621

  17. Preoperative computed tomography angiography for planning DIEP flap breast reconstruction reduces operative time and overall complications

    PubMed Central

    Rozen, Warren Matthew; Chowdhry, Muhammad; Band, Bassam; Ramakrishnan, Venkat V.; Griffiths, Matthew

    2016-01-01

    Background The approach and operative techniques associated with breast reconstruction have steadily been refined since its inception, with abdominal perforator-based flaps becoming the gold standard reconstructive option for women undergoing breast cancer surgery. The current study comprises a cohort of 632 patients, in whom specific operative times are recorded by a blinded observer, and aims to address the potential benefits seen with the use of computer tomography (CT) scanning preoperatively on operative outcomes, complications and surgical times. Methods A prospectively recorded, retrospective review was undertaken of patients undergoing autologous breast reconstruction with a DIEP flap at the St Andrews Centre over a 4-year period from 2010 to 2014. Computed tomography angiography (CTA) scanning of patients began in September 2012 and thus 2 time periods were compared: 2 years prior to the use of CTA scans and 2 years afterwards. For all patients, key variables were collected including patient demographics, operative times, flap harvest time, pedicle length, surgeon experience and complications. Results In group 1, comprising patients within the period prior to CTA scans, 265 patients underwent 312 flaps; whilst in group 2, the immediately following 2 years, 275 patients had 320 flaps. The use of preoperative CTA scans demonstrated a significant reduction in flap harvest time of 13 minutes (P<0.013). This significant time saving was seen in all flap modifications: unilateral, bilateral and bipedicled DIEP flaps. The greatest time saving was seen in bipedicle flaps, with a 35-minute time saving. The return to theatre rate significantly dropped from 11.2% to 6.9% following the use of CTA scans, but there was no difference in the total failure rate. Conclusions The study has demonstrated both a benefit to flap harvest time as well as overall operative times when using preoperative CTA. The use of CTA was associated with a significant reduction in complications requiring a return to theatre in the immediate postoperative period. Modern scanners and techniques can reduce the level of ionising radiation, facilitating patients being able to benefit from the advantages that this preoperative planning can convey. PMID:27047777

  18. A parallel computing engine for a class of time critical processes.

    PubMed

    Nabhan, T M; Zomaya, A Y

    1997-01-01

    This paper focuses on the efficient parallel implementation of systems of a numerically intensive nature over loosely coupled multiprocessor architectures. These analytical models are of significant importance to many real-time systems that have to meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and the near-optimal scheduling of numerical models over the different cooperating processors of the parallel computer. First, the analytical system is efficiently coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a nonanalytical cost function that properly considers the computation capability of the processors, the network topology, the communication time, and congestion possibilities. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
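    A compact sketch of the annealing-based mapping idea described above follows; the cost function here is a simplified stand-in for the paper's nonanalytical cost, and the task and communication weights are invented.

    ```python
    # Simulated-annealing mapping of task-graph vertices onto processors, trading
    # off per-processor load imbalance against cut (inter-processor) communication.
    import math
    import random

    def cost(assign, work, comm, n_proc, alpha=1.0):
        load = [0.0] * n_proc
        for task, p in enumerate(assign):
            load[p] += work[task]
        cut = sum(w for (i, j), w in comm.items() if assign[i] != assign[j])
        return max(load) + alpha * cut

    def anneal(work, comm, n_proc, temp=10.0, cooling=0.999, iters=20_000):
        assign = [random.randrange(n_proc) for _ in work]
        best, best_cost = assign[:], cost(assign, work, comm, n_proc)
        cur_cost = best_cost
        for _ in range(iters):
            task, old = random.randrange(len(work)), None
            old = assign[task]
            assign[task] = random.randrange(n_proc)
            new_cost = cost(assign, work, comm, n_proc)
            if new_cost <= cur_cost or random.random() < math.exp((cur_cost - new_cost) / temp):
                cur_cost = new_cost
                if new_cost < best_cost:
                    best, best_cost = assign[:], new_cost
            else:
                assign[task] = old                    # reject the move
            temp *= cooling
        return best, best_cost

    work = [3, 1, 4, 1, 5, 9, 2, 6]                   # per-task compute weights
    comm = {(0, 1): 2, (1, 2): 1, (2, 3): 3, (4, 5): 2, (5, 6): 1, (6, 7): 4}
    print(anneal(work, comm, n_proc=3))
    ```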

  19. Automatic Grading of 3D Computer Animation Laboratory Assignments

    ERIC Educational Resources Information Center

    Lamberti, Fabrizio; Sanna, Andrea; Paravati, Gianluca; Carlevaris, Gilles

    2014-01-01

    Assessment is a delicate task in the overall teaching process because it may require significant time and may be prone to subjectivity. Subjectivity is especially true for disciplines in which perceptual factors play a key role in the evaluation. In previous decades, computer-based assessment techniques were developed to address the…

  20. Point Cloud-Based Automatic Assessment of 3D Computer Animation Courseworks

    ERIC Educational Resources Information Center

    Paravati, Gianluca; Lamberti, Fabrizio; Gatteschi, Valentina; Demartini, Claudio; Montuschi, Paolo

    2017-01-01

    Computer-supported assessment tools can bring significant benefits to both students and teachers. When integrated in traditional education workflows, they may help to reduce the time required to perform the evaluation and consolidate the perception of fairness of the overall process. When integrated within on-line intelligent tutoring systems,…

  1. Decision support in psychiatry – a comparison between the diagnostic outcomes using a computerized decision support system versus manual diagnosis

    PubMed Central

    Bergman, Lars G; Fors, Uno GH

    2008-01-01

    Background Correct diagnosis in psychiatry may be improved by novel diagnostic procedures. Computerized Decision Support Systems (CDSS) are suggested to be able to improve diagnostic procedures, but some studies indicate possible problems. Therefore, it could be important to investigate CDSS systems with regard to their feasibility to improve diagnostic procedures as well as to save time. Methods This study was undertaken to compare the traditional 'paper and pencil' diagnostic method SCID1 with the computer-aided diagnostic system CB-SCID1 to ascertain processing time and accuracy of diagnoses suggested. Sixty-three clinicians volunteered to participate in the study and to solve two paper-based cases using either the CDSS or the manual method. Results No major difference between paper and pencil and computer-supported diagnosis was found. Where a difference was found it was in favour of paper and pencil. For example, a significantly shorter time was found for paper and pencil for the difficult case, as compared to computer support. A significantly higher number of correct diagnoses were found in the difficult case for the diagnosis 'Depression' using the paper and pencil method. Although a majority of the clinicians found the computer method supportive and easy to use, it took a longer time and yielded fewer correct diagnoses than with paper and pencil. Conclusion This study could not detect any major difference in diagnostic outcome between traditional paper and pencil methods and computer support for psychiatric diagnosis. Where there were significant differences, traditional paper and pencil methods were better than the tested CDSS and thus we conclude that CDSS for diagnostic procedures may interfere with diagnosis accuracy. A limitation was that most clinicians had not previously used the CDSS system under study. The results of this study, however, confirm that CDSS development for diagnostic purposes in psychiatry has much to deal with before it can be used for routine clinical purposes. PMID:18261222

  2. Whole-body computed tomography in trauma patients: optimization of the patient scanning position significantly shortens examination time while maintaining diagnostic image quality.

    PubMed

    Hickethier, Tilman; Mammadov, Kamal; Baeßler, Bettina; Lichtenstein, Thorsten; Hinkelbein, Jochen; Smith, Lucy; Plum, Patrick Sven; Chon, Seung-Hun; Maintz, David; Chang, De-Hua

    2018-01-01

    The study was conducted to compare examination time and artifact vulnerability of whole-body computed tomographies (wbCTs) for trauma patients using conventional or optimized patient positioning. Examination time was measured in 100 patients scanned with conventional protocol (Group A: arms positioned alongside the body for head and neck imaging and over the head for trunk imaging) and 100 patients scanned with optimized protocol (Group B: arms flexed on a chest pillow without repositioning). Additionally, influence of two different scanning protocols on image quality in the most relevant body regions was assessed by two blinded readers. Total wbCT duration was about 35% or 3:46 min shorter in B than in A. Artifacts in aorta (27 vs 6%), liver (40 vs 8%) and spleen (27 vs 5%) occurred significantly more often in B than in A. No incident of non-diagnostic image quality was reported, and no significant differences for lungs and spine were found. An optimized wbCT positioning protocol for trauma patients allows a significant reduction of examination time while still maintaining diagnostic image quality.

  3. A fast algorithm for vertex-frequency representations of signals on graphs

    PubMed Central

    Jestrović, Iva; Coyle, James L.; Sejdić, Ervin

    2016-01-01

    The windowed Fourier transform (short time Fourier transform) and the S-transform are widely used signal processing tools for extracting frequency information from non-stationary signals. Previously, the windowed Fourier transform had been adopted for signals on graphs and has been shown to be very useful for extracting vertex-frequency information from graphs. However, high computational complexity makes these algorithms impractical. We sought to develop a fast windowed graph Fourier transform and a fast graph S-transform requiring significantly shorter computation time. The proposed schemes have been tested with synthetic test graph signals and real graph signals derived from electroencephalography recordings made during swallowing. The results showed that the proposed schemes provide significantly lower computation time in comparison with the standard windowed graph Fourier transform and the fast graph S-transform. Also, the results showed that noise has no effect on the results of the algorithm for the fast windowed graph Fourier transform or on the graph S-transform. Finally, we showed that graphs can be reconstructed from the vertex-frequency representations obtained with the proposed algorithms. PMID:28479645
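    The common building block of both transforms, the graph Fourier transform obtained from the Laplacian eigendecomposition, can be written in a few lines; the small path graph and signal below are arbitrary, and the paper's contribution is computing windowed versions of this far more quickly.

    ```python
    # Graph Fourier transform from the eigendecomposition of the combinatorial
    # Laplacian of a small path graph (demo graph and signal are assumptions).
    import numpy as np

    A = np.zeros((6, 6))
    for i in range(5):
        A[i, i + 1] = A[i + 1, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A                 # combinatorial Laplacian

    eigvals, U = np.linalg.eigh(L)                 # graph "frequencies" and modes

    signal = np.array([0.0, 1.0, 0.0, -1.0, 0.0, 1.0])
    spectrum = U.T @ signal                        # forward graph Fourier transform
    recovered = U @ spectrum                       # inverse transform

    print(np.allclose(recovered, signal))          # exact reconstruction
    ```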

  4. Effects of brain-computer interface-based functional electrical stimulation on brain activation in stroke patients: a pilot randomized controlled trial.

    PubMed

    Chung, EunJung; Kim, Jung-Hee; Park, Dae-Sung; Lee, Byoung-Hee

    2015-03-01

    [Purpose] This study sought to determine the effects of brain-computer interface-based functional electrical stimulation (BCI-FES) on brain activation in patients with stroke. [Subjects] The subjects were randomized to a BCI-FES group (n=5) or a functional electrical stimulation (FES) group (n=5). [Methods] Patients in the BCI-FES group received ankle dorsiflexion training with FES for 30 minutes per day, 5 times, under the brain-computer interface-based program. The FES group received ankle dorsiflexion training with FES for the same amount of time. [Results] The BCI-FES group demonstrated significant differences in the frontopolar regions 1 and 2 attention indexes, and frontopolar 1 activation index. The FES group demonstrated no significant differences. There were significant differences in the frontopolar 1 region activation index between the two groups after the interventions. [Conclusion] The results of this study suggest that BCI-FES training may be more effective in stimulating brain activation than FES training alone in patients recovering from stroke.

  5. Flexible structure control experiments using a real-time workstation for computer-aided control engineering

    NASA Technical Reports Server (NTRS)

    Stieber, Michael E.

    1989-01-01

    A Real-Time Workstation for Computer-Aided Control Engineering has been developed jointly by the Communications Research Centre (CRC) and Ruhr-Universitaet Bochum (RUB), West Germany. The system is presently used for the development and experimental verification of control techniques for large space systems with significant structural flexibility. The Real-Time Workstation is essentially an implementation of RUB's extensive Computer-Aided Control Engineering package KEDDC on an INTEL micro-computer running under the RMS real-time operating system. The portable system supports system identification, analysis, control design and simulation, as well as the immediate implementation and test of control systems. The Real-Time Workstation is currently being used by CRC to study control/structure interaction on a ground-based structure called DAISY, whose design was inspired by a reflector antenna. DAISY emulates the dynamics of a large flexible spacecraft with the following characteristics: rigid body modes, many clustered vibration modes with low frequencies and extremely low damping. The Real-Time Workstation was found to be a very powerful tool for experimental studies, supporting control design and simulation, and conducting and evaluating tests within one integrated environment.

  6. Computing at the speed limit (supercomputers)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernhard, R.

    1982-07-01

    The author discusses how unheralded efforts in the United States, mainly in universities, have removed major stumbling blocks to building cost-effective superfast computers for scientific and engineering applications within five years. These computers would have sustained speeds of billions of floating-point operations per second (flops), whereas with the fastest machines today the top sustained speed is only 25 million flops, with bursts to 160 megaflops. Cost-effective superfast machines can be built because of advances in very large-scale integration and the special software needed to program the new machines. VLSI greatly reduces the cost per unit of computing power. The development of such computers would come at an opportune time. Although the US leads the world in large-scale computer technology, its supremacy is now threatened, not surprisingly, by the Japanese. Publicized reports indicate that the Japanese government is funding a cooperative effort by commercial computer manufacturers to develop superfast computers, about 1000 times faster than modern supercomputers. The US computer industry, by contrast, has balked at attempting to boost computer power so sharply because of the uncertain market for the machines and the failure of similar projects in the past to show significant results.

  7. Dual computer monitors to increase efficiency of conducting systematic reviews.

    PubMed

    Wang, Zhen; Asi, Noor; Elraiyah, Tarig A; Abu Dabrh, Abd Moain; Undavalli, Chaitanya; Glasziou, Paul; Montori, Victor; Murad, Mohammad Hassan

    2014-12-01

    Systematic reviews (SRs) are the cornerstone of evidence-based medicine. In this study, we evaluated the effectiveness of using two computer screens on the efficiency of conducting SRs. A cohort of reviewers before and after using dual monitors were compared with a control group that did not use dual monitors. The outcomes were time spent for abstract screening, full-text screening and data extraction, and inter-rater agreement. We adopted multivariate difference-in-differences linear regression models. A total of 60 SRs conducted by 54 reviewers were included in this analysis. We found a significant reduction of 23.81 minutes per article in data extraction in the intervention group relative to the control group (95% confidence interval: -46.03, -1.58, P = 0.04), which was a 36.85% reduction in time. There was no significant difference in time spent on abstract screening, full-text screening, or inter-rater agreement between the two groups. Using dual monitors when conducting SRs is associated with significant reduction of time spent on data extraction. No significant difference was observed on time spent on abstract screening or full-text screening. Using dual monitors is one strategy that may improve the efficiency of conducting SRs. Copyright © 2014 Elsevier Inc. All rights reserved.
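    The difference-in-differences idea used above can be illustrated with plain least squares; the numbers below are invented, not the study's data, and the coefficient on the group-by-period interaction is the estimated effect of the dual-monitor setup.

    ```python
    # Difference-in-differences via ordinary least squares (made-up numbers): the
    # interaction coefficient estimates the change in per-article extraction time
    # attributable to the intervention, net of the control group's trend.
    import numpy as np

    minutes = np.array([70, 68, 72, 71, 69, 73, 66, 44, 47, 45, 65, 64, 67, 66])
    group   = np.array([ 1,  1,  1,  0,  0,  0,  1,  1,  1,  1,  0,  0,  0,  0])
    period  = np.array([ 0,  0,  0,  0,  0,  0,  0,  1,  1,  1,  1,  1,  1,  1])
    X = np.column_stack([np.ones_like(group), group, period, group * period])

    beta, *_ = np.linalg.lstsq(X, minutes, rcond=None)
    print(f"DiD estimate: {beta[3]:.1f} minutes per article")
    ```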

  8. Materials constitutive models for nonlinear analysis of thermally cycled structures

    NASA Technical Reports Server (NTRS)

    Kaufman, A.; Hunt, L. E.

    1982-01-01

    Effects of inelastic materials models on computed stress-strain solutions for thermally loaded structures were studied by performing nonlinear (elastoplastic creep) and elastic structural analyses on a prismatic, double edge wedge specimen of IN 100 alloy that was subjected to thermal cycling in fluidized beds. Four incremental plasticity creep models (isotropic, kinematic, combined isotropic kinematic, and combined plus transient creep) were exercised for the problem by using the MARC nonlinear, finite element computer program. Maximum total strain ranges computed from the elastic and nonlinear analyses agreed within 5 percent. Mean cyclic stresses, inelastic strain ranges, and inelastic work were significantly affected by the choice of inelastic constitutive model. The computing time per cycle for the nonlinear analyses was more than five times that required for the elastic analysis.

  9. Forward Period Analysis Method of the Periodic Hamiltonian System.

    PubMed

    Wang, Pengfei

    2016-01-01

    Using the forward period analysis (FPA), we obtain the period of a Morse oscillator and mathematical pendulum system, with an accuracy of 100 significant digits. From these results, the long-term [0, 10^60] (time unit) solutions, ranging from the Planck time to the age of the universe, are computed reliably and quickly with a parallel multiple-precision Taylor series (PMT) scheme. The application of FPA to periodic systems can greatly reduce the computation time of long-term reliable simulations. This scheme provides an efficient way to generate reference solutions, against which long-term simulations using other schemes can be tested.

  10. Spectral decontamination of a real-time helicopter simulation

    NASA Technical Reports Server (NTRS)

    Mcfarland, R. E.

    1983-01-01

    Nonlinear mathematical models of a rotor system, referred to as rotating blade-element models, produce steady-state, high-frequency harmonics of significant magnitude. In a discrete simulation model, certain of these harmonics may be incompatible with realistic real-time computational constraints because of their aliasing into the operational low-pass region. However, the energy in an aliased harmonic may be suppressed by increasing the computation rate of an isolated, causal nonlinearity and using an appropriate filter. This decontamination technique is applied to Sikorsky's real-time model of the Black Hawk helicopter, as supplied to NASA for handling-qualities investigations.

  11. [Relationship between employees' management factor of visual display terminal (VDT) work time and 28-item General Health Questionnaire (GHQ-28) at one Japanese IT company's computer worksite].

    PubMed

    Sugimura, Hisamichi; Horiguchi, Itsuko; Shimizu, Takashi; Marui, Eiji

    2007-09-01

    We studied 1365 male workers at a Japanese computer worksite in 2004 to determine the relationship between employees' time management factor of visual display terminal (VDT) work and General Health Questionnaire (GHQ) score. We developed questionnaires concerning age, management factor of VDT work time (total daily VDT work time, duration of continuous work), other work-related conditions (commuting time, job rank, type of job, hours of monthly overtime), lifestyle (smoking, alcohol consumption, exercise, having breakfast, sleeping hours), and the Japanese version of 28-item General Health Questionnaire (GHQ). Multivariate logistic regression analyses were performed to estimate the odds ratios (ORs) of the high-GHQ groups (>6.0) associated with age and the time management factor of VDT work. Multivariate logistic regression analyses indicated lower ORs for certain groups: workers older than 50 years had a significantly lower OR than those younger than 30 years; workers sleeping less than 6 h showed a lower OR than those sleeping more than 6 h. In contrast, significantly higher ORs were shown for workers with continuous work durations of more than 3 h compared with those with less than 1 h, for those doing VDT work of more than 7.5 h/day compared with those doing less than 4.5 h/day, and for those with more than 25 h/mo of overtime compared with those with less. Male Japanese computer workers' GHQ scores are significantly associated with time management factors of VDT work.

  12. Impact of increasing social media use on sitting time and body mass index.

    PubMed

    Alley, Stephanie; Wellens, Pauline; Schoeppe, Stephanie; de Vries, Hein; Rebar, Amanda L; Short, Camille E; Duncan, Mitch J; Vandelanotte, Corneel

    2017-08-01

    Issue addressed Sedentary behaviours, in particular sitting, increase the risk of cardiovascular disease, type 2 diabetes, obesity and poorer mental health status. In Australia, 70% of adults sit for more than 8 h per day. The use of social media applications (e.g. Facebook, Twitter, and Instagram) is on the rise; however, no studies have explored the association of social media use with sitting time and body mass index (BMI). Methods Cross-sectional self-report data on demographics, BMI and sitting time were collected from 1140 participants in the 2013 Queensland Social Survey. Generalised linear models were used to estimate associations of a social media score calculated from social media use, perceived importance of social media, and number of social media contacts with sitting time and BMI. Results Participants with a high social media score had significantly greater sitting times while using a computer in leisure time and significantly greater total sitting time on non-workdays. However, no associations were found between social media score and sitting to view TV, use motorised transport, work or participate in other leisure activities; or total workday sitting time, total sitting time or BMI. Conclusions These results indicate that social media use is associated with increased sitting time while using a computer, and total sitting time on non-workdays. So what? The rise in social media use may have a negative impact on health by contributing to computer sitting and total sitting time on non-workdays. Future longitudinal research with a representative sample and objective sitting measures is needed to confirm findings.

  13. Does familiarity with computers affect computerized neuropsychological test performance?

    PubMed

    Iverson, Grant L; Brooks, Brian L; Ashton, V Lynn; Johnson, Lynda G; Gualtieri, C Thomas

    2009-07-01

    The purpose of this study was to determine whether self-reported computer familiarity is related to performance on computerized neurocognitive testing. Participants were 130 healthy adults who self-reported whether their computer use was "some" (n = 65) or "frequent" (n = 65). The two groups were individually matched on age, education, sex, and race. All completed the CNS Vital Signs (Gualtieri & Johnson, 2006b) computerized neurocognitive battery. There were significant differences on 6 of the 23 scores, including scores derived from the Symbol-Digit Coding Test, Stroop Test, and the Shifting Attention Test. The two groups were also significantly different on the Psychomotor Speed (Cohen's d = 0.37), Reaction Time (d = 0.68), Complex Attention (d = 0.40), and Cognitive Flexibility (d = 0.64) domain scores. People with "frequent" computer use performed better than people with "some" computer use on some tests requiring rapid visual scanning and keyboard work.

  14. Interactive vs passive screen time and nighttime sleep duration among school-aged children.

    PubMed

    Yland, Jennifer; Guan, Stanford; Emanuele, Erin; Hale, Lauren

    2015-09-01

    Insufficient sleep among school-aged children is a growing concern, as numerous studies have shown that chronic short sleep duration increases the risk of poor academic performance and specific adverse health outcomes. We examined the association between weekday nighttime sleep duration and 3 types of screen exposure: television, computer use, and video gaming. We used age 9 data from an ethnically diverse national birth cohort study, the Fragile Families and Child Wellbeing Study, to assess the association between screen time and sleep duration among 9-year-olds, using screen time data reported by both the child (n = 3269) and by the child's primary caregiver (n= 2770). Within the child-reported models, children who watched more than 2 hours of television per day had shorter sleep duration by approximately 11 minutes per night compared to those who watched less than 2 hours of television (β = -0.18; P < .001). Using the caregiver-reported models, both television and computer use were associated with reduced sleep duration. For both child- and parent-reported screen time measures, we did not find statistically significant differences in effect size across various types of screen time. Screen time from televisions and computers is associated with reduced sleep duration among 9-year-olds, using 2 sources of estimates of screen time exposure (child and parent reports). No specific type or use of screen time resulted in significantly shorter sleep duration than another, suggesting that caution should be advised against excessive use of all screens.

  15. Near-real-time simulations of bioelectric activity in small mammalian hearts using graphical processing units

    PubMed Central

    Vigmond, Edward J.; Boyle, Patrick M.; Leon, L. Joshua; Plank, Gernot

    2014-01-01

    Simulations of cardiac bioelectric phenomena remain a significant challenge despite continual advancements in computational machinery. Spanning large temporal and spatial ranges demands millions of nodes to accurately depict geometry, and a comparable number of timesteps to capture dynamics. This study explores a new hardware computing paradigm, the graphics processing unit (GPU), to accelerate cardiac models, and analyzes results in the context of simulating a small mammalian heart in real time. The ODEs associated with membrane ionic flow were computed on traditional CPU and compared to GPU performance, for one to four parallel processing units. The scalability of solving the PDE responsible for tissue coupling was examined on a cluster using up to 128 cores. Results indicate that the GPU implementation was between 9 and 17 times faster than the CPU implementation and scaled similarly. Solving the PDE was still 160 times slower than real time. PMID:19964295
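
    The per-node membrane ODE work described above is embarrassingly parallel within a time step, which is what makes it a good fit for GPUs. Below is a minimal CPU-side sketch using the FitzHugh-Nagumo model as a stand-in for the paper's ionic models; the node count, parameters and stimulus are illustrative only.

```python
import numpy as np

# Vectorised per-node membrane update with the FitzHugh-Nagumo model, a simple
# stand-in for the ionic models in the paper. Each node's ODE step is
# independent, which is why this portion of a cardiac simulation maps well to
# GPUs. Node count, parameters and stimulus are illustrative.
n_nodes = 250_000
dt, steps = 0.02, 500
v = np.full(n_nodes, -1.0)       # membrane variable
w = np.zeros(n_nodes)            # recovery variable
stim = np.zeros(n_nodes)
stim[:1000] = 0.5                # stimulate a small patch of nodes (assumed)

a, b, eps = 0.7, 0.8, 0.08
for _ in range(steps):
    dv = v - v**3 / 3.0 - w + stim
    dw = eps * (v + a - b * w)
    v += dt * dv                 # forward-Euler step, one per node
    w += dt * dw
print("mean membrane variable:", v.mean())
```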

  16. Target Identification Using Harmonic Wavelet Based ISAR Imaging

    NASA Astrophysics Data System (ADS)

    Shreyamsha Kumar, B. K.; Prabhakar, B.; Suryanarayana, K.; Thilagavathi, V.; Rajagopal, R.

    2006-12-01

    A new approach has been proposed to reduce the computations involved in ISAR imaging, which uses a harmonic wavelet (HW)-based time-frequency representation (TFR). Since the HW-based TFR falls into the category of nonparametric time-frequency (T-F) analysis tools, it is computationally efficient compared to parametric T-F analysis tools such as the adaptive joint time-frequency transform (AJTFT), the adaptive wavelet transform (AWT), and the evolutionary AWT (EAWT). Further, the performance of the proposed method of ISAR imaging is compared with ISAR imaging by other nonparametric T-F analysis tools such as the short-time Fourier transform (STFT) and the Choi-Williams distribution (CWD). In ISAR imaging, the use of HW-based TFR provides similar or better results with a significant (92%) computational advantage compared to that obtained by CWD. The ISAR images thus obtained are identified using a neural network-based classification scheme with a feature set invariant to translation, rotation, and scaling.

  17. Evaluation of a finite multipole expansion technique for the computation of electrostatic potentials of dibenzo-p-dioxins and related systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, J.S.; Grice, M.E.; Politzer, P.

    1990-01-01

    The electrostatic potential V(r) that the nuclei and electrons of a molecule create in the surrounding space is well established as a guide in the study of molecular reactivity, and particularly, of biological recognition processes. Its rigorous computation is, however, very demanding of computer time for large molecules, such as those of interest in recognition interactions. The authors have accordingly investigated the use of an approximate finite multicenter multipole expansion technique to determine its applicability for producing reliable electrostatic potentials of dibenzo-p-dioxins and related molecules, with significantly reduced amounts of computer time, at distances of interest in recognition studies. A comparative analysis of the potentials of three dibenzo-p-dioxins and a substituted naphthalene molecule computed using both the multipole expansion technique and GAUSSIAN 82 at the STO-5G level has been carried out. Overall, they found that regions of negative and positive V(r) at 1.75 A above the molecular plane are very well reproduced by the multipole expansion technique, with up to a twenty-fold improvement in computer time.
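
    As a rough illustration of building V(r) from atom-centred contributions, the sketch below evaluates a monopole-only (point-charge) approximation on a grid 1.75 A above a hypothetical molecular plane. A real multicenter multipole expansion adds dipole and higher terms at each centre; the coordinates and charges here are invented, not taken from the study.

```python
import numpy as np

# Crude monopole-only analogue of a multicentre expansion: the potential on a
# grid 1.75 A above a hypothetical molecular plane is taken as a sum of
# atom-centred point-charge terms, V(r) ~ sum_i q_i / |r - r_i| (constant
# prefactor omitted). Real multipole expansions add dipole and higher terms at
# each centre. Coordinates and charges below are invented.
atom_xyz = np.array([[0.0, 0.0, 0.0],
                     [1.4, 0.0, 0.0],
                     [0.7, 1.2, 0.0]])      # angstroms, hypothetical centres
charges = np.array([-0.3, 0.1, 0.2])        # hypothetical partial charges

x = np.linspace(-2.0, 3.0, 50)
y = np.linspace(-2.0, 3.0, 50)
X, Y = np.meshgrid(x, y)
grid = np.stack([X.ravel(), Y.ravel(), np.full(X.size, 1.75)], axis=1)

# Distances from every grid point to every atomic centre.
r = np.linalg.norm(grid[:, None, :] - atom_xyz[None, :, :], axis=2)
V = (charges / r).sum(axis=1).reshape(X.shape)
print("most negative / most positive V on the plane:", V.min(), V.max())
```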

  18. Efficient Computation of Closed-loop Frequency Response for Large Order Flexible Systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Giesy, Daniel P.

    1997-01-01

    An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, a speed-up of almost two orders of magnitude was observed while accuracy improved by up to 5 decimal places.
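
    The linear-in-system-size cost per frequency point follows from the diagonal structure of the plant in normal-mode coordinates: each mode contributes one scalar second-order term to the transfer matrix. The sketch below shows an open-loop modal frequency sweep with made-up modal data; it is not the authors' code and omits the closed-loop bookkeeping.

```python
import numpy as np

# Frequency response of a lightly damped flexible structure in normal-mode
# coordinates: each mode contributes one scalar second-order term, so the cost
# per frequency point grows linearly with the number of modes instead of
# requiring a dense (jwI - A)^{-1} solve. All modal data below are made up.
rng = np.random.default_rng(1)
n_modes, n_in, n_out = 703, 2, 3
wn = np.sort(rng.uniform(1.0, 500.0, n_modes))   # natural frequencies (rad/s)
zeta = np.full(n_modes, 0.005)                   # light modal damping
B = rng.normal(size=(n_modes, n_in))             # modal input gains
C = rng.normal(size=(n_out, n_modes))            # modal output gains

def freq_response(w):
    den = wn**2 - w**2 + 2j * zeta * wn * w      # one complex scalar per mode
    return (C / den) @ B                         # n_out x n_in transfer matrix

w_grid = np.linspace(0.1, 600.0, 2000)
H = np.array([freq_response(w) for w in w_grid])
print(H.shape)                                   # (2000, 3, 2)
```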

  19. Upper extremities and spinal musculoskeletal disorders and risk factors in students using computers

    PubMed Central

    Calik, Bilge Basakci; Yagci, Nesrin; Gursoy, Suleyman; Zencir, Mehmet

    2014-01-01

    Objective: To examine the effects of computer usage on the musculoskeletal system discomforts (MSD) of Turkish university students, the possible risk factors and study interference (SI). Methods: The study comprised a total of 871 students. Demographic information was recorded and the Student Specific Cornell Musculoskeletal Discomfort Questionnaire (SsCMDQ) was used to evaluate musculoskeletal system discomforts. Results: The neck, lower back and upper back were determined to be the most affected areas, with SI percentages of 21.6%, 19.3% and 16.3% respectively. Daily computer usage time was significantly associated with lower back discomfort and total usage time with neck discomfort, while being female and being below the age of 21 years carried an increased risk (p<0.05). Conclusions: The neck, lower back and upper back were found to be the most affected areas due to computer usage in university students. Risk factors for MSD were daily and total computer usage time, female gender and age below 21 years, and these were deemed to cause study interference. PMID:25674139

  20. Computation of Asteroid Proper Elements on the Grid

    NASA Astrophysics Data System (ADS)

    Novakovic, B.; Balaz, A.; Knezevic, Z.; Potocnik, M.

    2009-12-01

    A procedure of gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time-consuming computations and make them more efficient is justified by the large increase of observational data expected from the next-generation all-sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids have been derived since the Grid infrastructure was first used for this purpose. The average time for the catalog updates is significantly shortened with respect to the time needed with stand-alone workstations. We also present the basics of Grid computing, the concepts of Grid middleware and its Workload Management System. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of a comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for future work.

  1. Gigaflop architecture, a hardware perspective

    NASA Technical Reports Server (NTRS)

    Feierbach, G. F.

    1978-01-01

    Any super computer built in the early 1980s will use components that are available by fall 1978. The architecture of such a system cannot depart radically from current super computers if the software experience painfully acquired from these computers in the 70's is to apply. Given the above constraints, 10 billion floating point operations per second (BFLOPS) are attainable and a problem memory of 512 million (64 bit) words could be supported by the technology of the time. In contrast to this, industry is likely to respond with commercially available machines with a performance of less than 150 MFLOPS. This is due to self-imposed constraints on the manufacturers to provide upward compatible architectures (same instruction set) and systems which can be sold in significant volumes. Since this computing speed is inadequate to meet the demands of computational fluid dynamics, a special processor is required. Issues which are felt to be significant in the pursuit of maximum compute capability in this special processor are discussed.

  2. Screen media time usage of 12-16 year-old Spanish school adolescents: Effects of personal and socioeconomic factors, season and type of day.

    PubMed

    Devís-Devís, José; Peiró-Velert, Carmen; Beltrán-Carrillo, Vicente J; Tomás, José Manuel

    2009-04-01

    This study examined screen media time usage (SMTU) and its association with personal and socioeconomic factors, as well as the effect of season and type of day, in a Spanish sample of 12-16 year-old school adolescents (N=323). The research design was a cross-sectional survey, in which an interviewer-administered recall questionnaire was used. Statistical analyses included repeated measures analyses of variance, analysis of covariance and structural equation models. Results showed an average of 2.52 h per day of total SMTU and partial times of 1.73 h per day in TV viewing, 0.27 h per day in computer/videogames, and 0.52 h per day in mobile use. Four significant predictors of SMTU emerged. Firstly, the type of school was associated with the three media of our study; in particular, students from state/public schools spent more time on them than their private school counterparts. Secondly, older adolescents (14-16 years old) were more likely to use computers/videogames and mobile phones than younger adolescents. Thirdly, the greater the accessibility to household technology, the more probable it was that computer/videogames and mobile phones were used. Finally, boys spent significantly more time on mobile phones than girls. Additionally, results revealed that adolescents seemed to consume more TV and computer/videogames in autumn than in winter, and more TV and mobile phones on weekends than on weekdays, especially among state school students. Findings from this study contribute to the existing knowledge on adolescents' SMTU patterns that can be transferred to families and policies.

  3. Application of ubiquitous computing in personal health monitoring systems.

    PubMed

    Kunze, C; Grossmann, U; Stork, W; Müller-Glaser, K D

    2002-01-01

    A possibility to significantly reduce the costs of public health systems is to increasingly use information technology. The Laboratory for Information Processing Technology (ITIV) at the University of Karlsruhe is developing a personal health monitoring system, which should improve health care and at the same time reduce costs by combining micro-technological smart sensors with personalized, mobile computing systems. In this paper we present how ubiquitous computing theory can be applied in the health-care domain.

  4. Optimization of a centrifugal compressor impeller using CFD: the choice of simulation model parameters

    NASA Astrophysics Data System (ADS)

    Neverov, V. V.; Kozhukhov, Y. V.; Yablokov, A. M.; Lebedev, A. A.

    2017-08-01

    Nowadays, optimization using computational fluid dynamics (CFD) plays an important role in the design process of turbomachines. However, for successful and productive optimization it is necessary to define a simulation model correctly and rationally. The article deals with the choice of grid and computational domain parameters for optimization of centrifugal compressor impellers using computational fluid dynamics. Searching for and applying optimal parameters of the grid model, the computational domain and the solver settings allows engineers to carry out high-accuracy modelling and to use computational capability effectively. The presented research was conducted using the Numeca Fine/Turbo package with the Spalart-Allmaras and Shear Stress Transport turbulence models. Two radial impellers were investigated: a high-pressure impeller at ψT=0.71 and a low-pressure impeller at ψT=0.43. The following parameters of the computational model were considered: the location of inlet and outlet boundaries, type of mesh topology, size of mesh and mesh parameter y+. Results of the investigation demonstrate that the choice of optimal parameters leads to a significant reduction of the computational time. Optimal parameters, in comparison with non-optimal but visually similar parameters, can reduce the calculation time by up to a factor of 4. Besides, it is established that some parameters have a major impact on the results of the modelling.

  5. Investigating the influence of eating habits, body weight and television programme preferences on television viewing time and domestic computer usage.

    PubMed

    Raptou, Elena; Papastefanou, Georgios; Mattas, Konstadinos

    2017-01-01

    The present study explored the influence of eating habits, body weight and television programme preference on television viewing time and domestic computer usage, after adjusting for sociodemographic characteristics and home media environment indicators. In addition, potential substitution or complementarity in screen time was investigated. Individual level data were collected via questionnaires that were administered to a random sample of 2,946 Germans. The econometric analysis employed a seemingly unrelated bivariate ordered probit model to conjointly estimate television viewing time and time engaged in domestic computer usage. Television viewing and domestic computer usage represent two independent behaviours in both genders and across all age groups. Dietary habits have a significant impact on television watching with less healthy food choices associated with increasing television viewing time. Body weight is found to be positively correlated with television screen time in both men and women, and overweight individuals have a higher propensity for heavy television viewing. Similar results were obtained for age groups where an increasing body mass index (BMI) in adults over 24 years old is more likely to be positively associated with a higher duration of television watching. With respect to dietary habits of domestic computer users, participants aged over 24 years of both genders seem to adopt more healthy dietary patterns. A downward trend in the BMI of domestic computer users was observed in women and adults aged 25-60 years. On the contrary, young domestic computer users 18-24 years old have a higher body weight than non-users. Television programme preferences also affect television screen time with clear differences to be observed between genders and across different age groups. In order to reduce total screen time, health interventions should target different types of screen viewing audiences separately.

  6. Finite-Element Methods for Real-Time Simulation of Surgery

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay

    2003-01-01

    Two finite-element methods have been developed for mathematical modeling of the time-dependent behaviors of deformable objects and, more specifically, the mechanical responses of soft tissues and organs in contact with surgical tools. These methods may afford the computational efficiency needed to satisfy the requirement to obtain computational results in real time for simulating surgical procedures as described in Simulation System for Training in Laparoscopic Surgery (NPO-21192) on page 31 in this issue of NASA Tech Briefs. Simulation of the behavior of soft tissue in real time is a challenging problem because of the complexity of soft-tissue mechanics. The responses of soft tissues are characterized by nonlinearities and by spatial inhomogeneities and rate and time dependences of material properties. Finite-element methods seem promising for integrating these characteristics of tissues into computational models of organs, but they demand much central-processing-unit (CPU) time and memory, and the demand increases with the number of nodes and degrees of freedom in a given finite-element model. Hence, as finite-element models become more realistic, it becomes more difficult to compute solutions in real time. In both of the present methods, one uses approximate mathematical models, trading some accuracy for computational efficiency and thereby increasing the feasibility of attaining real-time update rates. The first of these methods is based on modal analysis. In this method, one reduces the number of differential equations by selecting only the most significant vibration modes of an object (typically, a suitable number of the lowest-frequency modes) for computing deformations of the object in response to applied forces.
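
    The modal-analysis idea, retaining only the lowest-frequency modes and integrating a much smaller set of equations, can be sketched on a generic mass-spring chain standing in for a finite-element tissue model; the geometry, forcing and mode count below are arbitrary assumptions, not the authors' implementation.

```python
import numpy as np

# Modal-truncation sketch on a fixed-free mass-spring chain (a stand-in for a
# finite-element tissue model): keep only the lowest-frequency modes and
# integrate a much smaller set of equations.
n, k, m = 200, 1000.0, 1.0
K = np.zeros((n, n))
for i in range(n):
    K[i, i] = 2.0 * k
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k
K[-1, -1] = k                                  # free end
Minv_sqrt = 1.0 / np.sqrt(m)                   # diagonal mass matrix m*I

# Eigenmodes of M^{-1/2} K M^{-1/2}; eigenvalues are squared natural frequencies.
omega2_all, V = np.linalg.eigh(Minv_sqrt * K * Minv_sqrt)
n_keep = 10                                    # retain the 10 softest modes
Phi = Minv_sqrt * V[:, :n_keep]                # mass-normalised mode shapes
omega2 = omega2_all[:n_keep]

# Reduced undamped dynamics under a constant tip force f: q'' = Phi^T f - omega^2 q.
f = np.zeros(n)
f[-1] = 1.0
q = np.zeros(n_keep)
qd = np.zeros(n_keep)
dt = 1e-3
for _ in range(5000):
    qd += dt * (Phi.T @ f - omega2 * q)        # semi-implicit Euler step
    q += dt * qd
u = Phi @ q                                    # approximate nodal displacements
print("tip displacement after 5 s:", u[-1])
```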

  7. Decomposition method for fast computation of gigapixel-sized Fresnel holograms on a graphics processing unit cluster.

    PubMed

    Jackin, Boaz Jessie; Watanabe, Shinpei; Ootsu, Kanemitsu; Ohkawa, Takeshi; Yokota, Takashi; Hayasaki, Yoshio; Yatagai, Toyohiko; Baba, Takanobu

    2018-04-20

    A parallel computation method for large-size Fresnel computer-generated hologram (CGH) is reported. The method was introduced by us in an earlier report as a technique for calculating Fourier CGH from 2D object data. In this paper we extend the method to compute Fresnel CGH from 3D object data. The scale of the computation problem is also expanded to 2 gigapixels, making it closer to real application requirements. The significant feature of the reported method is its ability to avoid communication overhead and thereby fully utilize the computing power of parallel devices. The method exhibits three layers of parallelism that favor small to large scale parallel computing machines. Simulation and optical experiments were conducted to demonstrate the workability and to evaluate the efficiency of the proposed technique. A two-times improvement in computation speed has been achieved compared to the conventional method, on a 16-node cluster (one GPU per node) utilizing only one layer of parallelism. A 20-times improvement in computation speed has been estimated utilizing two layers of parallelism on a very large-scale parallel machine with 16 nodes, where each node has 16 GPUs.

  8. GPU-computing in econophysics and statistical physics

    NASA Astrophysics Data System (ADS)

    Preis, T.

    2011-03-01

    A recent trend in computer science and related fields is general purpose computing on graphics processing units (GPUs), which can yield impressive performance. With multiple cores connected by high memory bandwidth, today's GPUs offer resources for non-graphics parallel processing. This article provides a brief introduction into the field of GPU computing and includes examples. In particular computationally expensive analyses employed in financial market context are coded on a graphics card architecture which leads to a significant reduction of computing time. In order to demonstrate the wide range of possible applications, a standard model in statistical physics - the Ising model - is ported to a graphics card architecture as well, resulting in large speedup values.
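
    A CPU-side sketch of the checkerboard Metropolis update for the 2-D Ising model is given below: sites of one sublattice colour have no neighbours of the same colour, so they can be updated simultaneously, which is the structure that maps naturally onto a graphics card. This NumPy version only illustrates the scheme and is not the article's GPU code; lattice size and temperature are arbitrary.

```python
import numpy as np

# Checkerboard Metropolis sweep for the 2-D Ising model: sites of one colour
# have no neighbours of the same colour, so each sublattice can be updated in
# parallel. Pure NumPy here; the article ports this pattern to a graphics card.
rng = np.random.default_rng(0)
L, T, J = 128, 2.3, 1.0
spins = rng.choice([-1, 1], size=(L, L))
ix, iy = np.indices((L, L))
sublattices = [(ix + iy) % 2 == c for c in (0, 1)]

def sweep(s):
    for mask in sublattices:
        nn = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
              np.roll(s, 1, 1) + np.roll(s, -1, 1))     # periodic neighbours
        dE = 2.0 * J * s * nn                            # energy cost of flipping
        flip = mask & (rng.random((L, L)) < np.exp(-dE / T))
        s[flip] *= -1
    return s

for _ in range(200):
    spins = sweep(spins)
print("magnetisation per spin:", spins.mean())
```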

  9. Linear static structural and vibration analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.

    1993-01-01

    Parallel computers offer the oppurtunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively-parallel computers hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e. models for High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.

  10. Critical Thinking Outcomes of Computer-Assisted Instruction versus Written Nursing Process.

    ERIC Educational Resources Information Center

    Saucier, Bonnie L.; Stevens, Kathleen R.; Williams, Gail B.

    2000-01-01

    Nursing students (n=43) who used clinical case studies via computer-assisted instruction (CAI) were compared with 37 who used the written nursing process (WNP). California Critical Thinking Skills Test results did not show significant increases in critical thinking. The WNP method was more time consuming; the CAI group was more satisfied. Use of…

  11. A Computational Approach to Quantifiers as an Explanation for Some Language Impairments in Schizophrenia

    ERIC Educational Resources Information Center

    Zajenkowski, Marcin; Styla, Rafal; Szymanik, Jakub

    2011-01-01

    We compared the processing of natural language quantifiers in a group of patients with schizophrenia and a healthy control group. In both groups, the difficulty of the quantifiers was consistent with computational predictions, and patients with schizophrenia took more time to solve the problems. However, they were significantly less accurate only…

  12. Convolution of large 3D images on GPU and its decomposition

    NASA Astrophysics Data System (ADS)

    Karas, Pavel; Svoboda, David

    2011-12-01

    In this article, we propose a method for computing convolution of large 3D images. The convolution is performed in a frequency domain using a convolution theorem. The algorithm is accelerated on a graphics card by means of the CUDA parallel computing model. Convolution is decomposed in a frequency domain using the decimation in frequency algorithm. We pay attention to keeping our approach efficient in terms of both time and memory consumption and also in terms of memory transfers between CPU and GPU, which have a significant influence on overall computational time. We also study the implementation on multiple GPUs and compare the results between the multi-GPU and multi-CPU implementations.
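
    The core operation, convolution via the convolution theorem, can be sketched in a few lines on the CPU with NumPy FFTs; the GPU acceleration and frequency-domain decomposition described in the article are not reproduced here, and the array sizes are toy values.

```python
import numpy as np

# FFT-based 3-D convolution via the convolution theorem (CPU sketch with NumPy;
# the article performs the same operation on GPUs and decomposes it further).
# Zero-padding to the full linear size avoids circular wrap-around.
rng = np.random.default_rng(0)
image = rng.random((64, 64, 64))
kernel = rng.random((9, 9, 9))

full_shape = [s + t - 1 for s, t in zip(image.shape, kernel.shape)]
F_img = np.fft.rfftn(image, s=full_shape)
F_ker = np.fft.rfftn(kernel, s=full_shape)
conv_full = np.fft.irfftn(F_img * F_ker, s=full_shape)

# Crop back to the 'same' region centred on the original image.
start = [t // 2 for t in kernel.shape]
conv_same = conv_full[tuple(slice(s, s + d) for s, d in zip(start, image.shape))]
print(conv_same.shape)   # (64, 64, 64)
```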

  13. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Anthony B., E-mail: acosta@northwestern.edu; Green, Jason R., E-mail: jason.green@umb.edu; Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N^2 with the particle count. This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
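
    The underlying algorithm is the classic QR (Gram–Schmidt) scheme for Lyapunov exponents: evolve a set of tangent vectors with the Jacobian, re-orthonormalise them with a QR factorisation, and average the logarithms of the diagonal of R. The sketch below applies it to the Henon map rather than a Lennard-Jones fluid, so it illustrates the recipe, not the paper's Scalapack/MAGMA implementations.

```python
import numpy as np

# Classic QR (Gram-Schmidt) scheme for Lyapunov exponents, shown on the Henon
# map instead of a Lennard-Jones fluid: push a set of tangent vectors through
# the Jacobian, re-orthonormalise with QR, and average log|diag(R)|.
a, b = 1.4, 0.3
x, y = 0.1, 0.1
Q = np.eye(2)
log_sums = np.zeros(2)
steps = 100_000
for _ in range(steps):
    J = np.array([[-2.0 * a * x, 1.0],
                  [b,            0.0]])      # Jacobian at the current state
    Q, R = np.linalg.qr(J @ Q)               # re-orthonormalise tangent vectors
    log_sums += np.log(np.abs(np.diag(R)))
    x, y = 1.0 - a * x**2 + y, b * x         # Henon map step

print("Lyapunov exponents:", log_sums / steps)   # roughly (0.42, -1.62)
```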

  14. Computational system identification of continuous-time nonlinear systems using approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Krishnanathan, Kirubhakaran; Anderson, Sean R.; Billings, Stephen A.; Kadirkamanathan, Visakan

    2016-11-01

    In this paper, we derive a system identification framework for continuous-time nonlinear systems, for the first time using a simulation-focused computational Bayesian approach. Simulation approaches to nonlinear system identification have been shown to outperform regression methods under certain conditions, such as non-persistently exciting inputs and fast-sampling. We use the approximate Bayesian computation (ABC) algorithm to perform simulation-based inference of model parameters. The framework has the following main advantages: (1) parameter distributions are intrinsically generated, giving the user a clear description of uncertainty, (2) the simulation approach avoids the difficult problem of estimating signal derivatives as is common with other continuous-time methods, and (3) as noted above, the simulation approach improves identification under conditions of non-persistently exciting inputs and fast-sampling. Term selection is performed by judging parameter significance using parameter distributions that are intrinsically generated as part of the ABC procedure. The results from a numerical example demonstrate that the method performs well in noisy scenarios, especially in comparison to competing techniques that rely on signal derivative estimation.
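
    A minimal ABC rejection sketch conveys the simulation-based inference idea: draw parameters from a prior, simulate the continuous-time model, and keep draws whose simulated output lies close to the data. The first-order system, prior, tolerance and distance below are assumptions chosen for illustration and are not the model class or algorithm settings used in the paper.

```python
import numpy as np

# ABC rejection sketch for a continuous-time model, dx/dt = -theta * x, chosen
# for illustration only: draw theta from the prior, simulate the model, and
# keep draws whose simulated trajectory lies within a tolerance of the data.
rng = np.random.default_rng(0)
dt = 0.01
t = np.arange(0.0, 5.0, dt)

def simulate(theta):
    # Forward-Euler simulation of dx/dt = -theta * x; theta may be an array.
    theta = np.atleast_1d(np.asarray(theta, dtype=float))
    x = np.empty((theta.size, t.size))
    x[:, 0] = 1.0
    for k in range(1, t.size):
        x[:, k] = x[:, k - 1] * (1.0 - dt * theta)
    return x

theta_true = 0.8
data = simulate(theta_true)[0] + rng.normal(0.0, 0.02, t.size)  # noisy observations

thetas = rng.uniform(0.0, 3.0, 20_000)                 # prior draws
dists = np.linalg.norm(simulate(thetas) - data, axis=1)
accepted = thetas[dists < 0.5]                         # ABC rejection step
print("accepted draws:", accepted.size,
      "posterior mean:", accepted.mean(), "sd:", accepted.std())
```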

  15. A GPU-Based Architecture for Real-Time Data Assessment at Synchrotron Experiments

    NASA Astrophysics Data System (ADS)

    Chilingaryan, Suren; Mirone, Alessandro; Hammersley, Andrew; Ferrero, Claudio; Helfen, Lukas; Kopmann, Andreas; Rolo, Tomy dos Santos; Vagovic, Patrik

    2011-08-01

    Advances in digital detector technology are presently leading to rapidly increasing data rates in imaging experiments. Using fast two-dimensional detectors in computed tomography, the data acquisition can be much faster than the reconstruction if no adequate measures are taken, especially when a high photon flux at synchrotron sources is used. We have optimized the reconstruction software employed at the micro-tomography beamlines of our synchrotron facilities to use the computational power of modern graphics cards. The main paradigm of our approach is the full utilization of all system resources. We use a pipelined architecture, where the GPUs are used as compute coprocessors to reconstruct slices, while the CPUs are preparing the next ones. Special attention is devoted to minimizing data transfers between the host and GPU memory and to executing memory transfers in parallel with the computations. We were able to reduce the reconstruction time by a factor of 30 and process a typical data set of 20 GB in 40 seconds. The time needed for the first evaluation of the reconstructed sample is reduced significantly and quasi real-time visualization is now possible.

  16. Screen time and physical violence in 10 to 16-year-old Canadian youth.

    PubMed

    Janssen, Ian; Boyce, William F; Pickett, William

    2012-04-01

    To examine the independent associations between television, computer, and video game use with physical violence in youth. The study population consisted of a representative cross-sectional sample of 9,672 Canadian youth in grades 6-10 and a 1-year longitudinal sample of 1,861 youth in grades 9-10. The number of weekly hours watching television, playing video games, and using a computer was determined. Violence was defined as engagement in ≥2 physical fights in the previous year and/or perpetration of ≥2-3 monthly episodes of physical bullying. Logistic regression was used to examine associations. In the cross-sectional sample, computer use was associated with violence independent of television and video game use. Video game use was associated with violence in girls but not boys. Television use was not associated with violence after controlling for the other screen time measures. In the longitudinal sample, video game use was a significant predictor of violence after controlling for the other screen time measures. Computer and video game use were the screen time measures most strongly related to violence in this large sample of youth.

  17. Reciprocating vs Rotary Instrumentation in Pediatric Endodontics: Cone Beam Computed Tomographic Analysis of Deciduous Root Canals using Two Single-file Systems.

    PubMed

    Prabhakar, Attiguppe R; Yavagal, Chandrashekar; Dixit, Kratika; Naik, Saraswathi V

    2016-01-01

    Primary root canals are considered to be most challenging due to their complex anatomy. "Wave one" and "one shape" are single-file systems with reciprocating and rotary motion respectively. The aim of this study was to evaluate and compare dentin thickness, centering ability, canal transportation, and instrumentation time of wave one and one shape files in primary root canals using a cone beam computed tomographic (CBCT) analysis. This is an experimental, in vitro study comparing the two groups. A total of 24 extracted human primary teeth with minimum 7 mm root length were included in the study. Cone beam computed tomographic images were taken before and after the instrumentation for each group. Dentin thickness, centering ability, canal transportation, and instrumentation times were evaluated for each group. A significant difference was found in instrumentation time and canal transportation measures between the two groups. Wave one showed less canal transportation as compared with one shape, and the mean instrumentation time of wave one was significantly less than one shape. Reciprocating single-file systems was found to be faster with much less procedural errors and can hence be recommended for shaping the root canals of primary teeth. How to cite this article: Prabhakar AR, Yavagal C, Dixit K, Naik SV. Reciprocating vs Rotary Instrumentation in Pediatric Endodontics: Cone Beam Computed Tomographic Analysis of Deciduous Root Canals using Two Single-File Systems. Int J Clin Pediatr Dent 2016;9(1):45-49.

  18. EPIBLASTER-fast exhaustive two-locus epistasis detection strategy using graphical processing units

    PubMed Central

    Kam-Thong, Tony; Czamara, Darina; Tsuda, Koji; Borgwardt, Karsten; Lewis, Cathryn M; Erhardt-Lehmann, Angelika; Hemmer, Bernhard; Rieckmann, Peter; Daake, Markus; Weber, Frank; Wolf, Christiane; Ziegler, Andreas; Pütz, Benno; Holsboer, Florian; Schölkopf, Bernhard; Müller-Myhsok, Bertram

    2011-01-01

    Detection of epistatic interaction between loci has been postulated to provide a more in-depth understanding of the complex biological and biochemical pathways underlying human diseases. Studying the interaction between two loci is the natural progression following traditional and well-established single locus analysis. However, the added costs and time duration required for the computation involved have thus far deterred researchers from pursuing a genome-wide analysis of epistasis. In this paper, we propose a method allowing such analysis to be conducted very rapidly. The method, dubbed EPIBLASTER, is applicable to case–control studies and consists of a two-step process in which the difference in Pearson's correlation coefficients is computed between controls and cases across all possible SNP pairs as an indication of significant interaction warranting further analysis. For the subset of interactions deemed potentially significant, a second-stage analysis is performed using the likelihood ratio test from the logistic regression to obtain the P-value for the estimated coefficients of the individual effects and the interaction term. The algorithm is implemented using the parallel computational capability of commercially available graphical processing units to greatly reduce the computation time involved. In the current setup and example data sets (211 cases, 222 controls, 299468 SNPs; and 601 cases, 825 controls, 291095 SNPs), this coefficient evaluation stage can be completed in roughly 1 day. Our method allows for exhaustive and rapid detection of significant SNP pair interactions without imposing significant marginal effects of the single loci involved in the pair. PMID:21150885
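
    The first EPIBLASTER stage can be sketched compactly: z-score the genotype matrices of cases and controls, form the SNP-by-SNP Pearson correlation matrices as scaled cross-products, and rank pairs by the difference. The toy data sizes below are far smaller than the genome-wide setting described in the paper, and the GPU kernels of the actual implementation are not shown.

```python
import numpy as np

# Sketch of the EPIBLASTER screening stage: z-score the genotype matrices for
# cases and controls, form SNP-by-SNP Pearson correlation matrices as scaled
# cross-products, and rank pairs by the difference. Toy random data and sizes.
rng = np.random.default_rng(0)
n_cases, n_controls, n_snps = 200, 220, 500
cases = rng.integers(0, 3, (n_cases, n_snps)).astype(float)      # genotypes 0/1/2
controls = rng.integers(0, 3, (n_controls, n_snps)).astype(float)

def corr(G):
    Z = (G - G.mean(axis=0)) / G.std(axis=0)
    return (Z.T @ Z) / G.shape[0]              # SNP x SNP correlation matrix

delta = corr(cases) - corr(controls)
iu = np.triu_indices(n_snps, k=1)              # each unordered SNP pair once
order = np.argsort(-np.abs(delta[iu]))
top_pairs = [(int(iu[0][i]), int(iu[1][i]), float(delta[iu][i])) for i in order[:5]]
print(top_pairs)                               # candidates for the second stage
```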

  19. Brief communication: patient satisfaction with the use of tablet computers: a pilot study in two outpatient home dialysis clinics.

    PubMed

    Schick-Makaroff, Kara; Molzahn, Anita

    2014-01-01

    Electronic capture of patients' reports of their health is significant in clinical nephrology research because health-related quality of life (HRQOL) for patients with end-stage renal disease is compromised and assessment by patients of their HRQOL in practice is relatively uncommon. The purpose of this study was to evaluate patient satisfaction with and time involved in administering HRQOL and symptom assessment measures using tablet computers in two outpatient home dialysis clinics. A cross-sectional observational study design was employed. The study was conducted in two home dialysis clinics. Fifty-six patients participated in the study; 35 males (63%) and 21 females (37%) with a mean age of 66 ± 12 (36-90 years old) were included. Forty-nine participants were on peritoneal dialysis (87%), 6 on home hemodialysis (11%), and 1 on nocturnal home hemodialysis (2%). Measures included the Kidney Disease Quality of Life-36 (KDQOL-36), the Edmonton Symptom Assessment Scale (ESAS) and Participant's Level of Satisfaction in Using a Tablet Computer. Using a tablet computer, participants completed the three measures. Descriptive statistics and bivariate correlations were calculated. Participants' satisfaction with use of the tablet computer was high; 66% were "very satisfied", 7% "satisfied", 2% "slightly satisfied", and 18% "neutral". On the 7-point Likert-type scale, the mean satisfaction score was 5.11 (SD = 1.6). Mean time to complete the measures was: Level of Satisfaction 1.15 minutes (SD = 0.41), ESAS 2.55 minutes (SD = 1.04), and KDQOL 9.56 minutes (SD = 2.03); the mean time to complete all three instruments was 13.19 minutes (SD = 2.42). There were no significant correlations between level of satisfaction and age, gender, HRQOL, time taken to complete surveys, computer experience, or comfort with technology. Comfort with technology and computer experience were highly correlated, r = .7, p (one-tailed) < 0.01. Limitations include lack of generalizability because of a small self-selected sample of relatively healthy patients and a lack of psychometric testing on the measure of satisfaction. Participants were satisfied with the platform and the time involved for completion of instruments was modest. Routine use of HRQOL measures for clinical purposes may be facilitated through use of tablet computers.

  20. Cross-sectional associations of total sitting and leisure screen time with cardiometabolic risk in adults. Results from the HUNT Study, Norway.

    PubMed

    Chau, Josephine Y; Grunseit, Anne; Midthjell, Kristian; Holmen, Jostein; Holmen, Turid L; Bauman, Adrian E; van der Ploeg, Hidde P

    2014-01-01

    To examine associations of total sitting time, TV-viewing and leisure-time computer use with cardiometabolic risk biomarkers in adults. Population based cross-sectional study. Waist circumference, BMI, total cholesterol, HDL cholesterol, blood pressure, non-fasting glucose, gamma glutamyltransferase (GGT) and triglycerides were measured in 48,882 adults aged 20 years or older from the Nord-Trøndelag Health Study 2006-2008 (HUNT3). Adjusted multiple regression models were used to test for associations between these biomarkers and self-reported total sitting time, TV-viewing and leisure-time computer use in the whole sample and by cardiometabolic disease status sub-groups. In the whole sample, reporting total sitting time ≥10 h/day was associated with poorer BMI, waist circumference, total cholesterol, HDL cholesterol, diastolic blood pressure, systolic blood pressure, non-fasting glucose, GGT and triglyceride levels compared to those reporting total sitting time <4h/day (all p<0.05). TV-viewing ≥4 h/day was associated with poorer BMI, waist circumference, total cholesterol, HDL cholesterol, systolic blood pressure, GGT and triglycerides compared to TV-viewing <1h/day (all p<0.05). Leisure-time computer use ≥1 h/day was associated with poorer BMI, total cholesterol, diastolic blood pressure, GGT and triglycerides compared with those reporting no leisure-time computing. Sub-group analyses by cardiometabolic disease status showed similar patterns in participants free of cardiometabolic disease, while similar albeit non-significant patterns were observed in those with cardiometabolic disease. Total sitting time, TV-viewing and leisure-time computer use are associated with poorer cardiometabolic risk profiles in adults. Reducing sedentary behaviour throughout the day and limiting TV-viewing and leisure-time computer use may have health benefits. Copyright © 2013 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  1. Parallel implementation and evaluation of motion estimation system algorithms on a distributed memory multiprocessor using knowledge based mappings

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Several techniques to perform static and dynamic load balancing techniques for vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same, or have similar computational characteristics. These techniques are evaluated by applying them on a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and the overhead of using these techniques is minimal.

  2. Solution of 3-dimensional time-dependent viscous flows. Part 3: Application to turbulent and unsteady flows

    NASA Technical Reports Server (NTRS)

    Weinberg, B. C.; Mcdonald, H.

    1982-01-01

    A numerical scheme is developed for solving the time dependent, three dimensional compressible viscous flow equations to be used as an aid in the design of helicopter rotors. In order to further investigate the numerical procedure, the computer code developed to solve an approximate form of the three dimensional unsteady Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is tested. Results of calculations are presented for several two dimensional boundary layer flows including steady turbulent and unsteady laminar cases. A comparison of fourth order and second order solutions indicate that increased accuracy can be obtained without any significant increases in cost (run time). The results of the computations also indicate that the computer code can be applied to more complex flows such as those encountered on rotating airfoils. The geometry of a symmetric NACA four digit airfoil is considered and the appropriate geometrical properties are computed.

  3. The effect of monitor raster latency on VEPs, ERPs and Brain-Computer Interface performance.

    PubMed

    Nagel, Sebastian; Dreher, Werner; Rosenstiel, Wolfgang; Spüler, Martin

    2018-02-01

    Visual neuroscience experiments and Brain-Computer Interface (BCI) control often require strict timings in a millisecond scale. As most experiments are performed using a personal computer (PC), the latencies that are introduced by the setup should be taken into account and be corrected. As a standard computer monitor uses a rastering to update each line of the image sequentially, this causes a monitor raster latency which depends on the position, on the monitor and the refresh rate. We technically measured the raster latencies of different monitors and present the effects on visual evoked potentials (VEPs) and event-related potentials (ERPs). Additionally we present a method for correcting the monitor raster latency and analyzed the performance difference of a code-modulated VEP BCI speller by correcting the latency. There are currently no other methods validating the effects of monitor raster latency on VEPs and ERPs. The timings of VEPs and ERPs are directly affected by the raster latency. Furthermore, correcting the raster latency resulted in a significant reduction of the target prediction error from 7.98% to 4.61% and also in a more reliable classification of targets by significantly increasing the distance between the most probable and the second most probable target by 18.23%. The monitor raster latency affects the timings of VEPs and ERPs, and correcting resulted in a significant error reduction of 42.23%. It is recommend to correct the raster latency for an increased BCI performance and methodical correctness. Copyright © 2017 Elsevier B.V. All rights reserved.
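
    A simplified model of the raster latency is that the display refreshes line by line over one refresh interval, so a stimulus drawn further down the screen appears later by roughly (line / total lines) x (1 / refresh rate). The sketch below applies this approximation to shift event timestamps; the numbers are illustrative, and the paper relies on technically measured latencies rather than this idealisation.

```python
import numpy as np

# Simplified raster-latency model: the image is drawn line by line over one
# refresh interval, so a stimulus on a lower raster line appears later. The
# correction below shifts nominal event timestamps by that delay. Values are
# illustrative, not the measurements reported in the paper.
refresh_hz = 60.0
lines_total = 1080
frame_ms = 1000.0 / refresh_hz                    # ~16.7 ms per refresh

def raster_latency_ms(line):
    # Approximate latency of a stimulus drawn on a given raster line.
    return frame_ms * line / lines_total

stimulus_lines = np.array([100, 540, 1000])       # vertical positions on screen
onsets_ms = np.array([1000.0, 2000.0, 3000.0])    # nominal stimulus onsets
corrected = onsets_ms + raster_latency_ms(stimulus_lines)
print(corrected - onsets_ms)                      # ~[1.5, 8.3, 15.4] ms delays
```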

  4. Framework for architecture-independent run-time reconfigurable applications

    NASA Astrophysics Data System (ADS)

    Lehn, David I.; Hudson, Rhett D.; Athanas, Peter M.

    2000-10-01

    Configurable Computing Machines (CCMs) have emerged as a technology with the computational benefits of custom ASICs as well as the flexibility and reconfigurability of general-purpose microprocessors. Significant effort from the research community has focused on techniques to move this reconfigurability from a rapid application development tool to a run-time tool. This requires the ability to change the hardware design while the application is executing and is known as Run-Time Reconfiguration (RTR). Widespread acceptance of run-time reconfigurable custom computing depends upon the existence of high-level automated design tools. Such tools must reduce the designer's effort to port applications between different platforms as the architecture, hardware, and software evolve. A Java implementation of a high-level application framework, called Janus, is presented here. In this environment, developers create Java classes that describe the structural behavior of an application. The framework allows hardware and software modules to be freely mixed and interchanged. A compilation phase of the development process analyzes the structure of the application and adapts it to the target platform. Janus is capable of structuring the run-time behavior of an application to take advantage of the memory and computational resources available.

  5. Comparison of the different approaches to generate holograms from data acquired with a Kinect sensor

    NASA Astrophysics Data System (ADS)

    Kang, Ji-Hoon; Leportier, Thibault; Ju, Byeong-Kwon; Song, Jin Dong; Lee, Kwang-Hoon; Park, Min-Chul

    2017-05-01

    Data of real scenes acquired in real-time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point cloud approach is that computation process is well established since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation enables to reduce the number of elements necessary to represent the object. Then, even though the computation time for the contribution of a single element increases compared to a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex since propagation of elemental polygons between non-parallel planes should be implemented. Finally, since a depth map of the scene is acquired at the same time than the intensity image, a depth layer approach can also be adopted. This technique is appropriate for a fast computation since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with depth layer approach is convenient for real time applications, but point cloud method is more appropriate when high resolution is needed. In this study, since Kinect can be used to obtain both point cloud and depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
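
    The depth-layer route mentioned above reduces to plane-to-plane propagation, which the angular spectrum method handles with two FFTs per layer. Below is a minimal sketch that propagates two hypothetical depth layers to the hologram plane and sums the fields; the wavelength, pixel pitch and layer depths are assumed values, not those of the Kinect setup in the study.

```python
import numpy as np

# Depth-layer hologram sketch: each depth slice is propagated to the hologram
# plane with the angular spectrum method (two FFTs per layer) and the fields
# are summed. Wavelength, pixel pitch and layer depths are assumed values.
wavelength = 532e-9          # m
pitch = 8e-6                 # hologram-plane pixel pitch, m
N = 512

fx = np.fft.fftfreq(N, d=pitch)
FX, FY = np.meshgrid(fx, fx)
arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent part

def propagate(field, z):
    # Angular spectrum propagation of a complex field over distance z (m).
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

# Two hypothetical depth layers, e.g. segmented from a Kinect depth map.
layer_near = np.zeros((N, N), dtype=complex)
layer_near[200:260, 200:260] = 1.0
layer_far = np.zeros((N, N), dtype=complex)
layer_far[300:360, 100:160] = 1.0

field = propagate(layer_near, 0.03) + propagate(layer_far, 0.05)
hologram = np.angle(field)   # e.g. a phase-only hologram
print(hologram.shape)
```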

  6. The diagnostic value and accuracy of conjunctival impression cytology, dry eye symptomatology, and routine tear function tests in computer users.

    PubMed

    Bhargava, Rahul; Kumar, Prachi; Kaur, Avinash; Kumar, Manjushri; Mishra, Anurag

    2014-07-01

    To compare the diagnostic value and accuracy of the dry eye scoring system (DESS), conjunctival impression cytology (CIC), tear film breakup time (TBUT), and Schirmer's test in computer users. A case-control study was done at two referral eye centers. Eyes of 344 computer users were compared to 371 eyes of age and sex matched controls. The dry eye questionnaire (DESS) was administered to both groups and they further underwent measurement of TBUT, Schirmer's, and CIC. Correlation analysis was performed between DESS, CIC, TBUT, and Schirmer's test scores. A Pearson's coefficient of the linear expression (R²) of 0.5 or more was considered statistically significant. The mean age in cases (26.05 ± 4.06 years) was comparable to controls (25.67 ± 3.65 years) (P = 0.465). The mean symptom score in computer users was significantly higher as compared to controls (P < 0.001). Mean TBUT, Schirmer's test values, and goblet cell density were significantly reduced in computer users (P < 0.001). TBUT, Schirmer's, and CIC were abnormal in 48.5%, 29.1%, and 38.4% of symptomatic computer users respectively, as compared to 8%, 6.7%, and 7.3% of symptomatic controls respectively. On correlation analysis, there was a significant (inverse) association of dry eye symptoms (DESS) with TBUT and CIC scores (R² > 0.5), in contrast to Schirmer's scores (R² < 0.5). Duration of computer usage had a significant effect on dry eye symptom severity, TBUT, and CIC scores as compared to Schirmer's test.

  7. Associations between parental rules, style of communication and children's screen time.

    PubMed

    Bjelland, Mona; Soenens, Bart; Bere, Elling; Kovács, Éva; Lien, Nanna; Maes, Lea; Manios, Yannis; Moschonis, George; te Velde, Saskia J

    2015-10-01

    Research suggests an inverse association between parental rules and screen time in pre-adolescents, and that parents' style of communication with their children is related to the children's time spent watching TV. The aims of this study were to examine associations of parental rules and parental style of communication with children's screen time and perceived excessive screen time in five European countries. UP4FUN was a multi-centre, cluster randomised controlled trial with pre- and post-test measurements in each of five countries: Belgium, Germany, Greece, Hungary and Norway. Questionnaires were completed by the children at school and the parent questionnaire was brought home. Three structural equation models were tested based on measures of screen time and parental style of communication from the pre-test questionnaires. Of the 152 schools invited, 62 (41 %) schools agreed to participate. In total 3325 children (average age 11.2 years and 51 % girls) and 3038 parents (81 % mothers) completed the pre-test questionnaire. The average TV/DVD times across the countries were between 1.5 and 1.8 h/day, while less time was used for computer/games console (0.9-1.4 h/day). The children's perceived parental style of communication was quite consistent for TV/DVD and computer/games console. The presence of rules was significantly associated with less time watching TV/DVD and use of computer/games console time. Moreover, the use of an autonomy-supportive style was negatively related to both time watching TV/DVD and use of computer/games console time. The use of a controlling style was related positively to perceived excessive time used on TV/DVD and excessive time used on computer/games console. With a few exceptions, results were similar across the five countries. This study suggests that an autonomy-supportive style of communicating rules for TV/DVD or computer/games console use is negatively related to children's time watching TV/DVD and use of computer/games console time. In contrast, a controlling style is associated with more screen time and with more perceived excessive screen time in particular. Longitudinal research is needed to further examine effects of parental style of communication on children's screen time as well as possible reciprocal effects. International Standard Randomized Controlled Trial Number Register, registration number: ISRCTN34562078. Date applied: 29/07/2011, date assigned: 11/10/2011.

  8. A new system of computer-assisted navigation leading to reduction in operating time in uncemented total hip replacement in a matched population.

    PubMed

    Chaudhry, Fouad A; Ismail, Sanaa Z; Davis, Edward T

    2018-05-01

    Computer-assisted navigation techniques are used to optimise component placement and alignment in total hip replacement. The technology has developed over the last 10 years, but despite its advantages only 0.3% of all total hip replacements in England and Wales are performed using computer navigation. One of the reasons for this is that computer-assisted technology increases operative time. A new method of pelvic registration has been developed that does not require registration of the anterior pelvic plane (BrainLab hip 6.0), and it has been shown to improve the accuracy of THR. The purpose of this study was to determine whether the new method reduces the operating time. This was a retrospective analysis comparing operating time in computer-navigated primary uncemented total hip replacement using two methods of registration. Group 1 included 128 cases performed using BrainLab versions 2.1-5.1, which relied on acquisition of the anterior pelvic plane for registration. Group 2 included 128 cases performed using the newest navigation software, BrainLab hip 6.0 (registration possible with the patient in the lateral decubitus position). The operating time was 65.79 (40-98) minutes with the old method of registration and 50.87 (33-74) minutes with the new method. This difference was statistically significant. The body mass index (BMI) was comparable in both groups. The study supports the use of the new method of registration for improving the operating time in computer-navigated primary uncemented total hip replacement.

  9. Feedback Inhibition Shapes Emergent Computational Properties of Cortical Microcircuit Motifs.

    PubMed

    Jonke, Zeno; Legenstein, Robert; Habenschuss, Stefan; Maass, Wolfgang

    2017-08-30

    Cortical microcircuits are very complex networks, but they are composed of a relatively small number of stereotypical motifs. Hence, one strategy for throwing light on the computational function of cortical microcircuits is to analyze emergent computational properties of these stereotypical microcircuit motifs. We address here the question of how spike timing-dependent plasticity shapes the computational properties of one motif that has frequently been studied experimentally: interconnected populations of pyramidal cells and parvalbumin-positive inhibitory cells in layer 2/3. Experimental studies suggest that these inhibitory neurons exert some form of divisive inhibition on the pyramidal cells. We show that this data-based form of feedback inhibition, which is softer than that of the winner-take-all models commonly considered in theoretical analyses, contributes to the emergence of an important computational function through spike timing-dependent plasticity: the capability to disentangle superimposed firing patterns in upstream networks, and to represent their information content through a sparse assembly code. SIGNIFICANCE STATEMENT We analyze emergent computational properties of a ubiquitous cortical microcircuit motif: populations of pyramidal cells that are densely interconnected with inhibitory neurons. Simulations of this model predict that sparse assembly codes emerge in this microcircuit motif under spike timing-dependent plasticity. Furthermore, we show that different assemblies will represent different hidden sources of upstream firing activity. Hence, we propose that spike timing-dependent plasticity enables this microcircuit motif to perform a fundamental computational operation on neural activity patterns. Copyright © 2017 the authors 0270-6474/17/378511-13$15.00/0.

  10. Smart Sampling and HPC-based Probabilistic Look-ahead Contingency Analysis Implementation and its Evaluation with Real-world Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Etingov, Pavel V.; Ren, Huiying

    This paper describes a probabilistic look-ahead contingency analysis application that incorporates smart sampling and high-performance computing (HPC) techniques. Smart sampling techniques are implemented to effectively represent the structure and statistical characteristics of uncertainty introduced by different sources in the power system. They can significantly reduce the data set size required for multiple look-ahead contingency analyses, and therefore reduce the time required to compute them. High-performance computing (HPC) techniques are used to further reduce computational time. These two techniques enable a predictive capability that forecasts the impact of various uncertainties on potential transmission limit violations. The developed package has been tested with real-world data from the Bonneville Power Administration. Case study results are presented to demonstrate the performance of the applications developed.
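
    The abstract does not specify which smart-sampling algorithm is used, so the sketch below is only a generic illustration of the idea of scenario reduction: draw many Monte Carlo samples of the correlated uncertainties, then keep a small weighted set of representative scenarios for the expensive look-ahead contingency runs. All numbers and names are hypothetical.

```python
# Illustrative sketch only: reduce a large set of Monte Carlo uncertainty
# scenarios to a small representative set before contingency analysis.
# The paper's actual smart-sampling method is not specified here;
# k-means scenario reduction is one common stand-in.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Draw correlated samples of, e.g., wind and load forecast errors (MW).
mean = np.zeros(2)
cov = [[400.0, 120.0], [120.0, 225.0]]
raw_scenarios = rng.multivariate_normal(mean, cov, size=10_000)

# Cluster to a handful of representative scenarios with probability weights.
k = 20
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(raw_scenarios)
representatives = km.cluster_centers_
weights = np.bincount(km.labels_, minlength=k) / len(raw_scenarios)

# Only the representative scenarios would be passed to the (expensive)
# look-ahead contingency analysis, e.g. in parallel across HPC workers.
print(f"{len(raw_scenarios)} raw draws reduced to {k} weighted scenarios "
      f"(weights sum to {weights.sum():.2f})")
```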

  11. Temporal and spatial organization of doctors' computer usage in a UK hospital department.

    PubMed

    Martins, H M G; Nightingale, P; Jones, M R

    2005-06-01

    This paper describes the use of an application accessible via distributed desktop computing and wireless mobile devices in a specialist department of a UK acute hospital. Data (application logs, in-depth interviews, and ethnographic observation) were simultaneously collected to study doctors' work via this application, when and where they accessed different areas of it, and from what computing devices. These show that the application is widely used, but in significantly different ways over time and space. For example, physicians and surgeons differ in how they use the application and in their choice of mobile or desktop computing. Consultants and junior doctors in the same teams also seem to access different sources of patient information, at different times, and from different locations. Mobile technology was used almost exclusively during the morning by groups of clinicians, predominantly for ward rounds.

  12. Assessment of time-dependent density functional theory with the restricted excitation space approximation for excited state calculations of large systems

    NASA Astrophysics Data System (ADS)

    Hanson-Heine, Magnus W. D.; George, Michael W.; Besley, Nicholas A.

    2018-06-01

    The restricted excitation subspace approximation is explored as a basis to reduce the memory storage required in linear response time-dependent density functional theory (TDDFT) calculations within the Tamm-Dancoff approximation. It is shown that excluding the core orbitals and up to 70% of the virtual orbitals in the construction of the excitation subspace does not result in significant changes in computed UV/vis spectra for large molecules. The reduced size of the excitation subspace greatly reduces the size of the subspace vectors that need to be stored when using the Davidson procedure to determine the eigenvalues of the TDDFT equations. Furthermore, additional screening of the two-electron integrals in combination with a reduction in the size of the numerical integration grid used in the TDDFT calculation leads to significant computational savings. The use of these approximations represents a simple approach to extend TDDFT to the study of large systems and make the calculations increasingly tractable using modest computing resources.
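
    A back-of-the-envelope sketch of why the restriction helps, assuming hypothetical orbital counts: each Davidson subspace vector carries one amplitude per occupied-virtual excitation pair, so dropping the core orbitals and 70% of the virtual orbitals shrinks the vectors, and hence their storage, by the same factor.

```python
# Counting sketch: size of the single-excitation space in TDDFT/TDA
# before and after restricting the excitation subspace.
# Orbital counts below are invented for illustration.
n_occ, n_virt, n_core = 120, 1000, 20          # hypothetical molecule
full_dim = n_occ * n_virt                       # occupied -> virtual pairs

keep_occ = n_occ - n_core                       # drop core orbitals
keep_virt = int(n_virt * 0.30)                  # drop ~70% of virtuals
restricted_dim = keep_occ * keep_virt

# Each Davidson subspace vector has one amplitude per excitation pair,
# so vector storage shrinks by the same ratio.
print(f"full space:       {full_dim:>8d} amplitudes per vector")
print(f"restricted space: {restricted_dim:>8d} amplitudes per vector")
print(f"storage reduction: {full_dim / restricted_dim:.1f}x")
```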

  13. Elimination sequence optimization for SPAR

    NASA Technical Reports Server (NTRS)

    Hogan, Harry A.

    1986-01-01

    SPAR is a large-scale computer program for finite element structural analysis. The program allows user specification of the order in which the joints of a structure are to be eliminated, since this order can have significant influence on solution performance, in terms of both storage requirements and computer time. An efficient elimination sequence can improve performance by over 50% for some problems. Obtaining such sequences, however, requires the expertise of an experienced user and can take hours of tedious effort to effect. Thus, an automatic elimination sequence optimizer would enhance productivity by reducing the analysts' problem definition time and by lowering computer costs. Two possible methods for automating the elimination sequence specification were examined. Several algorithms based on graph theory representations of sparse matrices were studied, with mixed results. Significant improvement in program performance was achieved, but sequencing by an experienced user still yields substantially better results. The initial results provide encouraging evidence that the potential benefits of such an automatic sequencer would be well worth the effort.

  14. A review on quantum search algorithms

    NASA Astrophysics Data System (ADS)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has a significant speed advantage over classical computation. This is evident from early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up database search, which is important in computer science because it appears as a subroutine in many important algorithms. Grover's quantum database search finds the target element in an unsorted database in a time quadratically faster than a classical computer. We review Grover's quantum search algorithms for a single target element and for multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
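
    For reference, the quadratic speedup can be stated with the standard textbook expressions (generic results, not specific to this review) for M marked items in an unsorted database of N entries:

```latex
% Success probability after k Grover iterations, with \sin\theta = \sqrt{M/N}:
P_{\mathrm{success}}(k) = \sin^{2}\!\bigl((2k+1)\,\theta\bigr),
\qquad
k_{\mathrm{opt}} \approx \frac{\pi}{4}\sqrt{\frac{N}{M}} ,
```

    so on the order of sqrt(N/M) oracle queries suffice, versus the roughly N/M queries an exhaustive classical search needs on average.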

  15. Hidden Markov induced Dynamic Bayesian Network for recovering time evolving gene regulatory networks

    NASA Astrophysics Data System (ADS)

    Zhu, Shijia; Wang, Yadong

    2015-12-01

    Dynamic Bayesian Networks (DBN) have been widely used to recover gene regulatory relationships from time-series data in computational systems biology. Their standard assumption is 'stationarity', and therefore several research efforts have recently been proposed to relax this restriction. However, those methods suffer from three challenges: long running time, low accuracy and reliance on parameter settings. To address these problems, we propose a novel non-stationary DBN model by extending each hidden node of a Hidden Markov Model into a DBN (called HMDBN), which properly handles the underlying time-evolving networks. Correspondingly, an improved structural EM algorithm is proposed to learn the HMDBN. It dramatically reduces the search space, thereby substantially improving computational efficiency. Additionally, we derived a novel generalized Bayesian Information Criterion under the non-stationary assumption (called BWBIC), which can significantly improve reconstruction accuracy and largely reduce over-fitting. Moreover, the re-estimation formulas for all parameters of our model are derived, enabling us to avoid reliance on parameter settings. Compared to the state-of-the-art methods, the experimental evaluation of our proposed method on both synthetic and real biological data demonstrates consistently high prediction accuracy and significantly improved computational efficiency, even with no prior knowledge or parameter tuning.

  16. User Language Considerations in Military Human-Computer Interface Design

    DTIC Science & Technology

    1988-06-30

    [OCR-garbled DTIC report documentation page. The recoverable content indicates the report details soldier language and cultural issues of possible relevance to US military effectiveness, with sections including Implications of Bilingualism, Stress Effects, Significance for the US Military, Bilingualism and the Human-Computer Interface, and computer-specific considerations.]

  17. Bringing MapReduce Closer To Data With Active Drives

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Prathapan, S.; Warmka, R.; Wyatt, B.; Halem, M.; Trantham, J. D.; Markey, C. A.

    2017-12-01

    Moving computation closer to the data location has been a much theorized improvement to computation for decades. The increase in processor performance, the decrease in processor size and power requirement combined with the increase in data intensive computing has created a push to move computation as close to data as possible. We will show the next logical step in this evolution in computing: moving computation directly to storage. Hypothetical systems, known as Active Drives, have been proposed as early as 1998. These Active Drives would have a general-purpose CPU on each disk allowing for computations to be performed on them without the need to transfer the data to the computer over the system bus or via a network. We will utilize Seagate's Active Drives to perform general purpose parallel computing using the MapReduce programming model directly on each drive. We will detail how the MapReduce programming model can be adapted to the Active Drive compute model to perform general purpose computing with comparable results to traditional MapReduce computations performed via Hadoop. We will show how an Active Drive based approach significantly reduces the amount of data leaving the drive when performing several common algorithms: subsetting and gridding. We will show that an Active Drive based design significantly improves data transfer speeds into and out of drives compared to Hadoop's HDFS while at the same time keeping comparable compute speeds as Hadoop.
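
    As a purely illustrative sketch (this is not Seagate's Active Drive API, and all function names are invented), the subsetting and gridding algorithms mentioned above fit the map/reduce pattern naturally: the map step filters records to a region of interest and the reduce step aggregates them onto grid cells, so only the small gridded result ever has to leave the drive.

```python
# Illustration only (not Seagate's Active Drive API): expressing spatial
# subsetting and gridding as map/reduce so that only reduced results,
# not raw records, leave the drive.
from collections import defaultdict

def map_subset(record, bbox):
    """Emit (grid_cell, value) only for records inside the bounding box."""
    lat, lon, value = record
    lat_min, lat_max, lon_min, lon_max = bbox
    if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
        cell = (round(lat), round(lon))     # 1-degree grid cell
        yield cell, value

def reduce_grid(pairs):
    """Average values per grid cell -- the only data sent off the drive."""
    sums, counts = defaultdict(float), defaultdict(int)
    for cell, value in pairs:
        sums[cell] += value
        counts[cell] += 1
    return {cell: sums[cell] / counts[cell] for cell in sums}

records = [(39.2, -76.7, 281.5), (39.4, -76.9, 283.1), (10.0, 5.0, 300.0)]
bbox = (38.0, 40.0, -78.0, -75.0)
pairs = (kv for rec in records for kv in map_subset(rec, bbox))
print(reduce_grid(pairs))   # e.g. {(39, -77): 282.3}
```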

  18. Computer content analysis of schizophrenic speech: a preliminary report.

    PubMed

    Tucker, G J; Rosenberg, S D

    1975-06-01

    Computer analysis significantly differentiated the thematic content of the free speech of 10 schizophrenic patients from that of 10 nonschizophrenic patients and from the content of transcripts of dream material from 10 normal subjects. Schizophrenic patients used the thematic categories in factor 1 (the "schizophrenic factor") 3 times more frequently than the nonschizophrenics and 10 times more frequently than the normal subjects (p < .01). In general, the language content of the schizophrenic patients mirrored an almost agitated attempt to locate oneself in time and space and to defend against internal discomfort and confusion. The authors discuss the implications of this study for future research.

  19. Micro computed tomography evaluation of the Self-adjusting file and ProTaper Universal system on curved mandibular molars.

    PubMed

    Serefoglu, Burcu; Piskin, Beyser

    2017-09-26

    The aim of this investigation was to compare the cleaning and shaping efficiency of the Self-Adjusting File and ProTaper, and to assess the correlation between root canal curvature and working time in mandibular molars using micro-computed tomography. Twenty extracted mandibular molars were instrumented with ProTaper and the Self-Adjusting File, and the total working time was measured in the mesial canals. The changes in canal volume, surface area and structure model index, transportation, uninstrumented area, and the correlation between working time and curvature were analyzed. Although no statistically significant difference was observed between the two systems in distal canals (p > 0.05), a significantly higher amount of removed dentin volume and a lower uninstrumented area were obtained with ProTaper in mesial canals (p < 0.0001). A correlation between working time and canal curvature was also observed in mesial canals for both groups (SAF: r² = 0.792, p < 0.0004; PTU: r² = 0.9098, p < 0.0001).

  20. A parallel algorithm for the initial screening of space debris collisions prediction using the SGP4/SDP4 models and GPU acceleration

    NASA Astrophysics Data System (ADS)

    Lin, Mingpei; Xu, Ming; Fu, Xiaoyu

    2017-05-01

    Currently, a tremendous amount of space debris in Earth's orbit imperils operational spacecraft. It is essential to undertake risk assessments of collisions and predict dangerous encounters in space. However, collision predictions for an enormous amount of space debris give rise to large-scale computations. In this paper, a parallel algorithm is established on the Compute Unified Device Architecture (CUDA) platform of NVIDIA Corporation for collision prediction. According to the parallel structure of NVIDIA graphics processors, a block decomposition strategy is adopted in the algorithm. Space debris is divided into batches, and the computation and data transfer operations of adjacent batches overlap. As a consequence, the latency to access shared memory during the entire computing process is significantly reduced, and a higher computing speed is reached. Theoretically, a simulation of collision prediction for space debris of any amount and for any time span can be executed. To verify this algorithm, a simulation example including 1382 pieces of debris, whose operational time scales vary from 1 min to 3 days, is conducted on Tesla C2075 of NVIDIA. The simulation results demonstrate that with the same computational accuracy as that of a CPU, the computing speed of the parallel algorithm on a GPU is 30 times that on a CPU. Based on this algorithm, collision prediction of over 150 Chinese spacecraft for a time span of 3 days can be completed in less than 3 h on a single computer, which meets the timeliness requirement of the initial screening task. Furthermore, the algorithm can be adapted for multiple tasks, including particle filtration, constellation design, and Monte-Carlo simulation of an orbital computation.
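
    The CUDA implementation itself is not reproduced here; the following CPU/NumPy sketch only illustrates the batching idea behind the initial screening, with random positions standing in for SGP4/SDP4-propagated states and an invented distance threshold.

```python
# CPU/NumPy illustration of batched all-pairs screening (not the paper's
# CUDA code; positions are random stand-ins for SGP4/SDP4 output).
import numpy as np

rng = np.random.default_rng(1)
n_objects, n_epochs = 2000, 24
# Hypothetical propagated positions, shape (epochs, objects, 3), in km.
pos = rng.uniform(-7000, 7000, size=(n_epochs, n_objects, 3))

threshold_km = 50.0  # coarse screening threshold (invented)
batch = 500          # process objects in batches to bound memory per step
flagged = set()

for t in range(n_epochs):
    p = pos[t]
    for start in range(0, n_objects, batch):
        block = p[start:start + batch]                      # (b, 3)
        d = np.linalg.norm(block[:, None, :] - p[None, :, :], axis=-1)
        i_idx, j_idx = np.nonzero(d < threshold_km)
        for i, j in zip(i_idx + start, j_idx):
            if i < j:                                        # unique pairs
                flagged.add((int(i), int(j)))

print(f"{len(flagged)} candidate close-approach pairs flagged for refinement")
```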

  1. High-speed multiple sequence alignment on a reconfigurable platform.

    PubMed

    Oliver, Tim; Schmidt, Bertil; Maskell, Douglas; Nathan, Darran; Clemens, Ralf

    2006-01-01

    Progressive alignment is a widely used approach to compute multiple sequence alignments (MSAs). However, aligning several hundred sequences by popular progressive alignment tools requires hours on sequential computers. Due to the rapid growth of sequence databases biologists have to compute MSAs in a far shorter time. In this paper we present a new approach to MSA on reconfigurable hardware platforms to gain high performance at low cost. We have constructed a linear systolic array to perform pairwise sequence distance computations using dynamic programming. This results in an implementation with significant runtime savings on a standard FPGA.
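
    For orientation, a minimal software version of the stage the systolic array accelerates is sketched below: an O(mn) dynamic-programming distance computed for every pair of sequences, which progressive aligners then use to build the guide tree. This is a generic sketch, not the FPGA design.

```python
# Minimal software sketch of the stage the systolic array accelerates:
# dynamic-programming distances between every pair of sequences, which
# progressive aligners use to build the guide tree.
from itertools import combinations

def edit_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # (mis)match
        prev = curr
    return prev[-1]

seqs = {"s1": "ACCGT", "s2": "ACGT", "s3": "TCCGT"}
dist = {(x, y): edit_distance(seqs[x], seqs[y]) for x, y in combinations(seqs, 2)}
print(dist)   # feeds guide-tree construction, e.g. neighbour joining
```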

  2. A class of parallel algorithms for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms with topological variation on a two-dimensional processor array, with nearest-neighbor connection, and with cardinality variation on a linear processor array. An efficient parallel/pipeline algorithm for the linear array was also developed, but at significantly higher efficiency.

  3. Computer Activities for Persons With Dementia.

    PubMed

    Tak, Sunghee H; Zhang, Hongmei; Patel, Hetal; Hong, Song Hee

    2015-06-01

    The study examined participant's experience and individual characteristics during a 7-week computer activity program for persons with dementia. The descriptive study with mixed methods design collected 612 observational logs of computer sessions from 27 study participants, including individual interviews before and after the program. Quantitative data analysis included descriptive statistics, correlational coefficients, t-test, and chi-square. Content analysis was used to analyze qualitative data. Each participant averaged 23 sessions and 591min for 7 weeks. Computer activities included slide shows with music, games, internet use, and emailing. On average, they had a high score of intensity in engagement per session. Women attended significantly more sessions than men. Higher education level was associated with a higher number of different activities used per session and more time spent on online games. Older participants felt more tired. Feeling tired was significantly correlated with a higher number of weeks with only one session attendance per week. More anticholinergic medications taken by participants were significantly associated with a higher percentage of sessions with disengagement. The findings were significant at p < .05. Qualitative content analysis indicated tailoring computer activities appropriate to individual's needs and functioning is critical. All participants needed technical assistance. A framework for tailoring computer activities may provide guidance on developing and maintaining treatment fidelity of tailored computer activity interventions among persons with dementia. Practice guidelines and education protocols may assist caregivers and service providers to integrate computer activities into homes and aging services settings. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. Trends in screen time on week and weekend days in a representative sample of Southern Brazil students.

    PubMed

    Lopes, Adair S; Silva, Kelly S; Barbosa Filho, Valter C; Bezerra, Jorge; de Oliveira, Elusa S A; Nahas, Markus V

    2014-12-01

    Economic and technological improvements can help increase screen time use among adolescents, but evidence in developing countries is scarce. The aim of this study was to examine changes in TV watching and computer/video game use patterns on week and weekend days after a decade (2001 and 2011), among students in Santa Catarina, southern Brazil. A comparative analysis of two cross-sectional surveys that included 5 028 and 6 529 students in 2001 and 2011, respectively, aged 15-19 years. The screen time use indicators were self-reported. 95% Confidence intervals were used to compare the prevalence rates. All analyses were separated by gender. After a decade, there was a significant increase in computer/video game use. Inversely, a significant reduction in TV watching was observed, with a similar magnitude to the change in computer/video game use. The worst trends were identified on weekend days. The decrease in TV watching after a decade appears to be compensated by the increase in computer/video game use, both in boys and girls. Interventions are needed to reduce the negative impact of technological improvements in the lifestyles of young people, especially on weekend days. © The Author 2014. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Adiabatic gate teleportation.

    PubMed

    Bacon, Dave; Flammia, Steven T

    2009-09-18

    The difficulty in producing precisely timed and controlled quantum gates is a significant source of error in many physical implementations of quantum computers. Here we introduce a simple universal primitive, adiabatic gate teleportation, which is robust to timing errors and many control errors and maintains a constant energy gap throughout the computation above a degenerate ground state space. This construction allows for geometric robustness based upon the control of two independent qubit interactions. Further, our piecewise adiabatic evolution easily relates to the quantum circuit model, enabling the use of standard methods from fault-tolerance theory for establishing thresholds.

  6. Partitioning medical image databases for content-based queries on a Grid.

    PubMed

    Montagnat, J; Breton, V; E Magnin, I

    2005-01-01

    In this paper we study the impact of executing a medical image database query application on the grid. For lowering the total computation time, the image database is partitioned into subsets to be processed on different grid nodes. A theoretical model of the application complexity and estimates of the grid execution overhead are used to efficiently partition the database. We show results demonstrating that smart partitioning of the database can lead to significant improvements in terms of total computation time. Grids are promising for content-based image retrieval in medical databases.
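
    The paper's cost model is not reproduced here; the toy calculation below, with invented timing parameters, shows the kind of trade-off such a model captures: more grid jobs mean fewer images per job but more scheduling and staging overhead, so there is an optimal partition count.

```python
# Toy version of the partitioning trade-off. All cost parameters are
# invented for illustration, not taken from the paper.
import math

N_IMAGES = 30_000      # images in the database (invented)
T_IMAGE = 0.8          # seconds of processing per image (invented)
T_SUBMIT = 5.0         # serial job submission cost per job (invented)
T_STAGE = 60.0         # per-run data staging overhead (invented)
MAX_JOBS = 256

def makespan(n_jobs):
    """Wall-clock estimate when the database is split evenly over n_jobs nodes."""
    images_per_job = math.ceil(N_IMAGES / n_jobs)
    return n_jobs * T_SUBMIT + T_STAGE + images_per_job * T_IMAGE

best = min(range(1, MAX_JOBS + 1), key=makespan)
print(f"best partitioning: {best} jobs, "
      f"makespan ~{makespan(best) / 60:.1f} min "
      f"(vs ~{makespan(1) / 3600:.1f} h unpartitioned)")
```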

  7. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.

    Cloud computing is a promising technology to manage and improve utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving there is no need to unnecessarily operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on in heavy load cases to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is the significant server setup cost and activation time. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows estimation of a number of performance measures.
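
    The paper's analysis is analytic (steady-state probabilities of the hysteresis queue); the sketch below only illustrates the control rule being modelled, with invented thresholds: servers are added above a high queue-length threshold, removed below a lower one, and newly added servers become useful only after a setup delay.

```python
# Illustration of the hysteresis rule modelled in the paper: servers are
# switched on above a high threshold and off below a lower one, and a
# newly activated server only becomes useful after a setup delay.
# All parameter values are invented for illustration.
from collections import deque

UP_THRESHOLD = 40      # queue length that triggers adding a server
DOWN_THRESHOLD = 10    # queue length that allows removing a server
SETUP_TICKS = 5        # activation (setup) time in scheduler ticks
MIN_SERVERS, MAX_SERVERS = 1, 16

active = 2
pending = deque()          # setup timers of servers still warming up

def scale(queue_len):
    """One scheduler tick of the hysteresis policy."""
    global active
    # finish setups that have completed
    while pending and pending[0] <= 0:
        pending.popleft()
        active += 1
    for i in range(len(pending)):
        pending[i] -= 1
    # hysteresis: the two thresholds differ, so brief load spikes or dips
    # do not cause the system to flap servers on and off
    if queue_len > UP_THRESHOLD and active + len(pending) < MAX_SERVERS:
        pending.append(SETUP_TICKS)
    elif queue_len < DOWN_THRESHOLD and active > MIN_SERVERS:
        active -= 1
    return active

for q in [5, 12, 55, 60, 48, 42, 30, 8, 6]:
    print(f"queue={q:3d}  active_servers={scale(q)}")
```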

  8. Accuracy and time requirements of a bar-code inventory system for medical supplies.

    PubMed

    Hanson, L B; Weinswig, M H; De Muth, J E

    1988-02-01

    The effects of implementing a bar-code system for issuing medical supplies to nursing units at a university teaching hospital were evaluated. Data on the time required to issue medical supplies to three nursing units at a 480-bed, tertiary-care teaching hospital were collected (1) before the bar-code system was implemented (i.e., when the manual system was in use), (2) one month after implementation, and (3) four months after implementation. At the same times, the accuracy of the central supply perpetual inventory was monitored using 15 selected items. One-way analysis of variance tests were done to determine any significant differences between the bar-code and manual systems. Using the bar-code system took longer than using the manual system because of a significant difference in the time required for order entry into the computer. Multiple-use requirements of the central supply computer system made entering bar-code data a much slower process. There was, however, a significant improvement in the accuracy of the perpetual inventory. Using the bar-code system for issuing medical supplies to the nursing units takes longer than using the manual system. However, the accuracy of the perpetual inventory was significantly improved with the implementation of the bar-code system.

  9. The home electronic media environment and parental safety concerns: relationships with outdoor time after school and over the weekend among 9-11 year old children.

    PubMed

    Wilkie, Hannah J; Standage, Martyn; Gillison, Fiona B; Cumming, Sean P; Katzmarzyk, Peter T

    2018-04-05

    Time spent outdoors is associated with higher physical activity levels among children, yet it may be threatened by parental safety concerns and the attraction of indoor sedentary pursuits. The purpose of this study was to explore the relationships between these factors and outdoor time during children's discretionary periods (i.e., after school and over the weekend). Data from 462 children aged 9-11 years old were analysed using generalised linear mixed models. The odds of spending > 1 h outdoors after school, and > 2 h outdoors on a weekend were computed, according to demographic variables, screen-based behaviours, media access, and parental safety concerns. Interactions with sex and socioeconomic status (SES) were explored. Boys, low SES participants, and children who played on their computer for < 2 h on a school day had higher odds of spending > 1 h outside after school than girls, high SES children and those playing on a computer for ≥2 h, respectively. Counterintuitive results were found for access to media devices and crime-related safety concerns as both of these were positively associated with time spent outdoors after school. A significant interaction for traffic-related concerns*sex was found; higher road safety concerns were associated with lower odds of outdoor time after school in boys only. Age was associated with weekend outdoor time, which interacted with sex and SES; older children were more likely to spend > 2 h outside on weekends but this was only significant among girls and high SES participants. Our results suggest that specific groups of children are less likely to spend their free time outside, and it would seem that only prolonged recreational computer use has a negative association with children's outdoor time after school. Further research is needed to explore potential underlying mechanisms, and parental safety concerns in more detail.

  10. Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Giesy, Daniel P.

    1998-01-01

    An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies with worst case errors being many orders of magnitude times the correct values.
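
    The linear cost per frequency point can be seen from the modal form of the plant transfer function; assuming a standard modal state-space model (a generic sketch, not necessarily the paper's exact formulation), each mode contributes an independent second-order term:

```latex
% Modal (normal-mode) form of the open-loop plant frequency response:
H(\omega) \;=\; D + C\,(j\omega I - A)^{-1} B
          \;=\; D + \sum_{k=1}^{n}
          \frac{c_k\, b_k^{\mathsf T}}
               {\omega_k^{2} - \omega^{2} + 2 j\, \zeta_k\, \omega_k\, \omega}
```

    where omega_k, zeta_k, b_k and c_k are the natural frequency, damping ratio, and input/output influence vectors of mode k, so evaluating H(omega) costs O(n) for n modes instead of requiring a full matrix factorization at every frequency.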

  11. Linux Makes the Grade: An Open Source Solution That's Time Has Come

    ERIC Educational Resources Information Center

    Houston, Melissa

    2007-01-01

    In 2001, Indiana officials at the Department of Education were taking stock. The schools had an excellent network infrastructure and had installed significant numbers of computers for 1 million public school enrollees. Yet students were spending less than an hour a week on the computer. It was then that state officials knew each student needed a…

  12. High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs

    NASA Technical Reports Server (NTRS)

    Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.

    2014-01-01

    This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) time-stepping Runge-Kutta (RK) schemes which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burger's equation and 2D advection equation, respectively. The method uses two different summation by parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to 1D Burger's equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solving methods rather than the standard iterative methods currently used.
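
    The specific 6-stage, fourth-order coefficients are not reproduced here, but the generic form of an additive IMEX Runge-Kutta step for du/dt = F_E(u) + F_I(u), with the stiff part F_I treated implicitly and (as is common for such schemes) shared weights b_i, is:

```latex
% Generic s-stage additive IMEX Runge-Kutta step for du/dt = F_E(u) + F_I(u):
U^{(i)} = u^{n}
        + \Delta t \sum_{j=1}^{i-1} a^{E}_{ij}\, F_E\bigl(U^{(j)}\bigr)
        + \Delta t \sum_{j=1}^{i}   a^{I}_{ij}\, F_I\bigl(U^{(j)}\bigr),
\qquad i = 1,\dots,s,
\\[4pt]
u^{n+1} = u^{n}
        + \Delta t \sum_{i=1}^{s} b_i \Bigl[ F_E\bigl(U^{(i)}\bigr) + F_I\bigl(U^{(i)}\bigr) \Bigr].
```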

  13. Montage Version 3.0

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Katz, Daniel; Prince, Thomas; Berriman, Graham; Good, John; Laity, Anastasia

    2006-01-01

    The final version (3.0) of the Montage software has been released. To recapitulate from previous NASA Tech Briefs articles about Montage: This software generates custom, science-grade mosaics of astronomical images on demand from input files that comply with the Flexible Image Transport System (FITS) standard and contain image data registered on projections that comply with the World Coordinate System (WCS) standards. This software can be executed on single-processor computers, multi-processor computers, and such networks of geographically dispersed computers as the National Science Foundation's TeraGrid or NASA's Information Power Grid. The primary advantage of running Montage in a grid environment is that computations can be done on a remote supercomputer for efficiency. Multiple computers at different sites can be used for different parts of a computation, a significant advantage in cases of computations for large mosaics that demand more processor time than is available at any one site. Version 3.0 incorporates several improvements over prior versions. The most significant improvement is that this version is accessible to scientists located anywhere, through operational Web services that provide access to data from several large astronomical surveys and construct mosaics on either local workstations or remote computational grids as needed.

  14. Computer and Internet use among Undergraduate Medical Students in Iran

    PubMed Central

    Ayatollahi, Ali; Ayatollahi, Jamshid; Ayatollahi, Fatemeh; Ayatollahi, Reza; Shahcheraghi, Seyed Hossein

    2014-01-01

    Objective: Although computer technologies are now widely used in medicine, little is known about its use among medical students in Iran. The aim of this study was to determine the competence and access to computer and internet among the medical students. Methods: In this descriptive study, medical students of Shahid Sadoughi University of Medical Science, Yazd, Iran from the fifth years were asked to answer a questionnaire during a time-tabled lecture slot. The chi-square test was used to compare the frequency of computer and internet use between the two genders, and the level of statistical significance for all test was set at 0.05. Results: All the students have a personal computer and internet access. There were no statistically significant differences between men and women for the computer and internet access, use wireless device to access internet, having laptop and e-mail address and the difficulties encountered using internet. The main reason for less utilization of internet was slow speed of data transfer. Conclusions: Because of the wide range of computer skills and internet information among medical students in our institution, a single computer and internet course for all students would not be useful nor would it be accepted. PMID:25225525

  15. Provider-Independent Use of the Cloud

    NASA Astrophysics Data System (ADS)

    Harmer, Terence; Wright, Peter; Cunningham, Christina; Perrott, Ron

    Utility computing offers researchers and businesses the potential of significant cost-savings, making it possible for them to match the cost of their computing and storage to their demand for such resources. A utility compute provider enables the purchase of compute infrastructures on-demand; when a user requires computing resources a provider will provision a resource for them and charge them only for their period of use of that resource. There has been a significant growth in the number of cloud computing resource providers and each has a different resource usage model, application process and application programming interface (API); developing generic multi-resource-provider applications is thus difficult and time consuming. We have developed an abstraction layer that provides a single resource usage model, user authentication model and API for compute providers that enables cloud-provider neutral applications to be developed. In this paper we outline the issues in using external resource providers, give examples of using a number of the most popular cloud providers and provide examples of developing provider neutral applications. In addition, we discuss the development of the API to create a generic provisioning model based on a common architecture for cloud computing providers.
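
    The authors' actual API is not shown in the abstract; the sketch below is a hypothetical illustration of the abstraction-layer pattern it describes: applications code against one provider-neutral interface, and thin adapters map it onto each provider's own usage model. Class and method names are invented.

```python
# Hypothetical sketch of a provider-neutral abstraction layer: applications
# code against one interface, and thin adapters translate to each cloud
# provider's own API. Names are illustrative, not the authors' actual API.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ResourceSpec:
    cores: int
    memory_gb: int
    hours: float

class ComputeProvider(ABC):
    @abstractmethod
    def provision(self, spec: ResourceSpec) -> str:
        """Start a resource and return a provider-neutral handle."""

    @abstractmethod
    def terminate(self, handle: str) -> None: ...

    @abstractmethod
    def estimate_cost(self, spec: ResourceSpec) -> float: ...

class ProviderA(ComputeProvider):          # adapter over one vendor's API
    def provision(self, spec): return f"a-{spec.cores}c"
    def terminate(self, handle): pass
    def estimate_cost(self, spec): return 0.09 * spec.cores * spec.hours

class ProviderB(ComputeProvider):          # adapter over another vendor's API
    def provision(self, spec): return f"b-{spec.cores}c"
    def terminate(self, handle): pass
    def estimate_cost(self, spec): return 0.07 * spec.cores * spec.hours

def cheapest(providers, spec):
    """Provider-neutral application logic: pick the cheapest offer."""
    return min(providers, key=lambda p: p.estimate_cost(spec))

spec = ResourceSpec(cores=8, memory_gb=32, hours=4)
print(type(cheapest([ProviderA(), ProviderB()], spec)).__name__)  # ProviderB
```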

  16. Interactive vs passive screen time and nighttime sleep duration among school-aged children

    PubMed Central

    Yland, Jennifer; Guan, Stanford; Emanuele, Erin; Hale, Lauren

    2016-01-01

    Background Insufficient sleep among school-aged children is a growing concern, as numerous studies have shown that chronic short sleep duration increases the risk of poor academic performance and specific adverse health outcomes. We examined the association between weekday nighttime sleep duration and 3 types of screen exposure: television, computer use, and video gaming. Methods We used age 9 data from an ethnically diverse national birth cohort study, the Fragile Families and Child Wellbeing Study, to assess the association between screen time and sleep duration among 9-year-olds, using screen time data reported by both the child (n = 3269) and by the child's primary caregiver (n= 2770). Results Within the child-reported models, children who watched more than 2 hours of television per day had shorter sleep duration by approximately 11 minutes per night compared to those who watched less than 2 hours of television (β = −0.18; P < .001). Using the caregiver-reported models, both television and computer use were associated with reduced sleep duration. For both child- and parent-reported screen time measures, we did not find statistically significant differences in effect size across various types of screen time. Conclusions Screen time from televisions and computers is associated with reduced sleep duration among 9-year-olds, using 2 sources of estimates of screen time exposure (child and parent reports). No specific type or use of screen time resulted in significantly shorter sleep duration than another, suggesting that caution should be advised against excessive use of all screens. PMID:27540566

  17. The effect of boundary shape and minima selection on single limb stance postural stability.

    PubMed

    Cobb, Stephen C; Joshi, Mukta N; Bazett-Jones, David M; Earl-Boehm, Jennifer E

    2012-11-01

    The effect of time-to-boundary minima selection and stability limit definition was investigated during eyes open and eyes closed condition single-limb stance postural stability. Anteroposterior and mediolateral time-to-boundary were computed using the mean and standard deviation (SD) of all time-to-boundary minima during a trial, and the mean and SD of only the 10 absolute time-to-boundary minima. Time-to-boundary with rectangular, trapezoidal, and multisegmented polygon defined stability limits were also calculated. Spearman's rank correlation coefficient test results revealed significant medium-large correlations between anteroposterior and mediolateral time-to-boundary scores calculated using both the mean and SD of the 10 absolute time-to-boundary minima and of all the time-to-boundary minima. Friedman test results revealed significant mediolateral time-to-boundary differences between boundary shape definitions. Follow-up Wilcoxon signed rank test results revealed significant differences between the rectangular boundary shape and both the trapezoidal and multisegmented polygon shapes during the eyes open and eyes closed conditions when both the mean and the SD of the time-to-boundary minima were used to represent postural stability. Significant differences were also revealed between the trapezoidal and multisegmented polygon definitions during the eyes open condition when the SD of the time-to-boundary minima was used to represent postural stability. Based on these findings, the overall results (i.e., stable versus unstable participants or groups) of studies computing postural stability using different minima selection can be compared. With respect to boundary shape, the trapezoid or multisegmented polygon shapes may be more appropriate than the rectangular shape as they more closely represent the anatomical shape of the stance foot.
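
    As a simplified, one-dimensional illustration of the quantities being compared (synthetic data, not the authors' processing pipeline), time-to-boundary divides the remaining distance to the stability limit by the instantaneous velocity toward it, and the two minima-selection rules are then just different summaries of the resulting minima:

```python
# One-dimensional sketch of time-to-boundary (TTB) and the two minima
# selection rules compared in the study. Data are synthetic; the real
# analysis uses anteroposterior/mediolateral COP within a 2-D boundary.
import numpy as np

rng = np.random.default_rng(3)
fs = 100.0                                   # sampling rate (Hz)
cop = np.cumsum(rng.normal(0, 0.02, 2000))   # centre-of-pressure trace (cm)
cop -= cop.mean()
boundary = 5.0                               # distance to stability limit (cm)

vel = np.gradient(cop) * fs
# time to reach the boundary in the current direction of travel
with np.errstate(divide="ignore", invalid="ignore"):
    ttb = np.where(vel > 0, (boundary - cop) / vel,
          np.where(vel < 0, (-boundary - cop) / vel, np.inf))

# local minima of the TTB time series
local_min = ttb[1:-1][(ttb[1:-1] < ttb[:-2]) & (ttb[1:-1] < ttb[2:])]

all_mean, all_sd = local_min.mean(), local_min.std()
ten = np.sort(local_min)[:10]                 # the 10 absolute minima
ten_mean, ten_sd = ten.mean(), ten.std()
print(f"all minima : mean={all_mean:.2f}s sd={all_sd:.2f}s (n={local_min.size})")
print(f"10 minima  : mean={ten_mean:.2f}s sd={ten_sd:.2f}s")
```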

  18. Precision of computer-assisted core decompression drilling of the knee.

    PubMed

    Beckmann, J; Goetz, J; Bäthis, H; Kalteis, T; Grifka, J; Perlick, L

    2006-06-01

    Core decompression by exact drilling into the ischemic areas is the treatment of choice in early stages of osteonecrosis of the femoral condyle. Computer-aided surgery might enhance the precision of the drilling and lower the radiation exposure time for both staff and patients. The aim of this study was to evaluate the precision of the fluoroscopically based VectorVision navigation system in an in vitro model. Thirty sawbones were prepared with a defect filled with a radiopaque gypsum sphere mimicking the osteonecrosis. Twenty sawbones were drilled under guidance of the intraoperative navigation system VectorVision (BrainLAB, Munich, Germany), and ten sawbones were drilled under fluoroscopic control only. A statistically significant difference was found in the distance to the desired mid-point of the lesion, with a mean distance of 0.58 mm in the navigated group and 0.98 mm in the control group. Significant differences were also found in the number of drilling corrections and in the radiation time needed. The fluoroscopy-based VectorVision navigation system shows high feasibility and precision of computer-guided drilling with a simultaneous reduction of radiation time, and could therefore be integrated into clinical routine.

  19. Wrist Hypothermia Related to Continuous Work with a Computer Mouse: A Digital Infrared Imaging Pilot Study

    PubMed Central

    Reste, Jelena; Zvagule, Tija; Kurjane, Natalja; Martinsone, Zanna; Martinsone, Inese; Seile, Anita; Vanadzins, Ivars

    2015-01-01

    Computer work is characterized by sedentary static workload with low-intensity energy metabolism. The aim of our study was to evaluate the dynamics of skin surface temperature in the hand during prolonged computer mouse work under different ergonomic setups. Digital infrared imaging of the right forearm and wrist was performed during three hours of continuous computer work (measured at the start and every 15 minutes thereafter) in a laboratory with controlled ambient conditions. Four people participated in the study. Three different ergonomic computer mouse setups were tested on three different days (horizontal computer mouse without mouse pad; horizontal computer mouse with mouse pad and padded wrist support; vertical computer mouse without mouse pad). The study revealed a significantly strong negative correlation between the temperature of the dorsal surface of the wrist and time spent working with a computer mouse. Hand skin temperature decreased markedly after one hour of continuous computer mouse work. Vertical computer mouse work preserved more stable and higher temperatures of the wrist (>30 °C), while continuous use of a horizontal mouse for more than two hours caused an extremely low temperature (<28 °C) in distal parts of the hand. The preliminary observational findings indicate the significant effect of the duration and ergonomics of computer mouse work on the development of hand hypothermia. PMID:26262633

  20. Wrist Hypothermia Related to Continuous Work with a Computer Mouse: A Digital Infrared Imaging Pilot Study.

    PubMed

    Reste, Jelena; Zvagule, Tija; Kurjane, Natalja; Martinsone, Zanna; Martinsone, Inese; Seile, Anita; Vanadzins, Ivars

    2015-08-07

    Computer work is characterized by sedentary static workload with low-intensity energy metabolism. The aim of our study was to evaluate the dynamics of skin surface temperature in the hand during prolonged computer mouse work under different ergonomic setups. Digital infrared imaging of the right forearm and wrist was performed during three hours of continuous computer work (measured at the start and every 15 minutes thereafter) in a laboratory with controlled ambient conditions. Four people participated in the study. Three different ergonomic computer mouse setups were tested on three different days (horizontal computer mouse without mouse pad; horizontal computer mouse with mouse pad and padded wrist support; vertical computer mouse without mouse pad). The study revealed a significantly strong negative correlation between the temperature of the dorsal surface of the wrist and time spent working with a computer mouse. Hand skin temperature decreased markedly after one hour of continuous computer mouse work. Vertical computer mouse work preserved more stable and higher temperatures of the wrist (>30 °C), while continuous use of a horizontal mouse for more than two hours caused an extremely low temperature (<28 °C) in distal parts of the hand. The preliminary observational findings indicate the significant effect of the duration and ergonomics of computer mouse work on the development of hand hypothermia.

  1. Evaluation of RAPID for a UNF cask benchmark problem

    NASA Astrophysics Data System (ADS)

    Mascolino, Valerio; Haghighat, Alireza; Roskoff, Nathan J.

    2017-09-01

    This paper examines the accuracy and performance of the RAPID (Real-time Analysis for Particle transport and In-situ Detection) code system for the simulation of a used nuclear fuel (UNF) cask. RAPID is capable of determining eigenvalue, subcritical multiplication, and pin-wise, axially-dependent fission density throughout a UNF cask. We study the source convergence based on the analysis of the different parameters used in an eigenvalue calculation in the MCNP Monte Carlo code. For this study, we consider a single assembly surrounded by absorbing plates with reflective boundary conditions. Based on the best combination of eigenvalue parameters, a reference MCNP solution for the single assembly is obtained. RAPID results are in excellent agreement with the reference MCNP solutions, while requiring significantly less computation time (i.e., minutes vs. days). A similar set of eigenvalue parameters is used to obtain a reference MCNP solution for the whole UNF cask. Because of time limitations, the MCNP results near the cask boundaries have significant uncertainties. Except for these, the RAPID results are in excellent agreement with the MCNP predictions, and its computation time is significantly lower: 35 seconds on 1 core versus 9.5 days on 16 cores.

  2. Elucidating reaction mechanisms on quantum computers.

    PubMed

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M; Wecker, Dave; Troyer, Matthias

    2017-07-18

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.

  3. Elucidating reaction mechanisms on quantum computers

    PubMed Central

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M.; Wecker, Dave; Troyer, Matthias

    2017-01-01

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources. PMID:28674011

  4. Elucidating reaction mechanisms on quantum computers

    NASA Astrophysics Data System (ADS)

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M.; Wecker, Dave; Troyer, Matthias

    2017-07-01

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.

    In this study, methods are addressed to reduce the computational time to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared, including the reciprocity method, importance sampling, weight windows and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10⁵ when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.

  6. Optimum space shuttle launch times relative to natural environment

    NASA Technical Reports Server (NTRS)

    King, R. L.

    1977-01-01

    The probabilities of favorable and unfavorable weather conditions for launch and landing of the STS under different criteria were computed for every three hours on a yearly basis using 14 years of weather data. These temporal probability distributions were considered for three sets of weather criteria encompassing benign, moderate and severe weather conditions for both Kennedy Space Center and Edwards Air Force Base. In addition, the conditional probabilities were computed for unfavorable weather conditions occurring after a delay which may or may not be due to weather conditions. Also computed for KSC were the probabilities of favorable landing conditions at various times after favorable launch conditions had prevailed. The probabilities were computed to indicate the significance of each weather element to the overall result.

  7. Dynamic Computation Offloading for Low-Power Wearable Health Monitoring Systems.

    PubMed

    Kalantarian, Haik; Sideris, Costas; Mortazavi, Bobak; Alshurafa, Nabil; Sarrafzadeh, Majid

    2017-03-01

    The objective of this paper is to describe and evaluate an algorithm to reduce power usage and increase battery lifetime for wearable health-monitoring devices. We describe a novel dynamic computation offloading scheme for real-time wearable health monitoring devices that adjusts the partitioning of data processing between the wearable device and mobile application as a function of desired classification accuracy. By making the correct offloading decision based on current system parameters, we show that we are able to reduce system power by as much as 20%. We demonstrate that computation offloading can be applied to real-time monitoring systems, and yields significant power savings. Making correct offloading decisions for health monitoring devices can extend battery life and improve adherence.
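
    The abstract describes the decision rule only qualitatively; the sketch below shows the generic shape of such a rule with invented energy numbers: estimate the energy of classifying on the wearable at the required accuracy versus the radio energy of shipping the data to the phone, and pick whichever is cheaper.

```python
# Generic shape of a dynamic offloading decision (invented cost numbers,
# not the parameters from the paper): estimate the energy of processing a
# window of sensor data locally vs sending it to the phone, given the
# classification accuracy the application currently requires.
def local_energy_mj(accuracy):
    """Higher accuracy -> heavier on-device classifier (invented model)."""
    return 3.0 + 40.0 * accuracy ** 4

def offload_energy_mj(payload_bytes, radio_mj_per_byte=0.006, radio_wakeup_mj=8.0):
    """Radio cost of shipping the features/raw window to the phone."""
    return radio_wakeup_mj + radio_mj_per_byte * payload_bytes

def decide(accuracy_target, payload_bytes):
    local = local_energy_mj(accuracy_target)
    remote = offload_energy_mj(payload_bytes)
    return ("offload" if remote < local else "on-device"), local, remote

for acc, size in [(0.70, 1200), (0.95, 1200), (0.95, 6000)]:
    choice, local, remote = decide(acc, size)
    print(f"accuracy={acc:.2f} payload={size}B -> {choice} "
          f"(local {local:.1f} mJ, offload {remote:.1f} mJ)")
```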

  8. Adaptive compressive ghost imaging based on wavelet trees and sparse representation.

    PubMed

    Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie

    2014-03-24

    Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we both theoretically and experimentally demonstrate a method that combines the advantages of both adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.

  9. Use of information and communication technology and prevalence of overweight and obesity among adolescents.

    PubMed

    Kautiainen, S; Koivusilta, L; Lintonen, T; Virtanen, S M; Rimpelä, A

    2005-08-01

    The prevalence of overweight and obesity has increased among children and adolescents, as well as among adults, and television viewing has been suggested as one cause. Playing digital games (video, computer and console games), or using computer may be other sedentary behaviors related to the development of overweight and obesity. To study the relationships of times spent on viewing television, playing digital games and using computer to overweight among Finnish adolescents. Mailed cross-sectional survey. Nationally representative samples of 14-, 16-, and 18-y-old (N=6515, response rate 70%) in 2001. Overweight and obesity were assessed by body mass index (BMI). The respondents reported times spent daily on viewing television, playing digital games (video, computer and console games) and using computer (for e-mail, writing and surfing). Data on timing of biological maturation, intensity of weekly physical activity and family's socio economic status were taken into account in the statistical analyses. Increased times spent on viewing television and using computer were associated with increased prevalence of overweight (obesity inclusive) among girls: compared to girls viewing television <1 h daily, the adjusted odds ratio (OR) for being overweight was 1.4 when spending 1-3 h, and 2.0 when spending > or =4 h daily on viewing television. In girls using computer > or =1 h daily, the OR for being overweight was 1.5 compared to girls using computer <1 h daily. The results were similar in boys, although not statistically significant. Time spent on playing digital games was not associated with overweight. Overweight was associated with using information and communication technology (ICT), but only with certain forms of ICT. Increased use of ICT may be one factor explaining the increased prevalence of overweight and obesity at the population level, at least in girls. Playing digital games was not related to overweight, perhaps by virtue of game playing being less sedentary or related to a different lifestyle than viewing television and using computer.

  10. The effects of feedback on computer workstation posture habits.

    PubMed

    Epstein, Rhonda; Colford, Sean; Epstein, Ethan; Loye, Brandon; Walsh, Michael

    2012-01-01

    Repetitive stress injuries (RSI) and musculoskeletal disorders in the United States and worldwide are increasing at an alarming rate due to the advent of ubiquitous computer usage. Factors that lead to computer-related musculoskeletal disorders (MSD) include inadequately designed workstations, poor posture, and lack of knowledge about proper ergonomics and use habits. Studies have documented the negative impact of improper posture and the MSD seen in students and office workers due to frequent computer usage. Determine if the frequency (single vs. continuous reminder) and/or use of feedback affects posture at a computer workstation. Observations of posture habits were made in three local schools and one local company. Feedback effects were tested on the students (ages 10-15). Real-time feedback was given in two studies. In one study, instructions and a verbal reminder were given to students and in a second study, a prototype 'Posture Pad' was developed to provide continuous feedback to the user. Verbal reminders to sit correctly led to transient improvement of posture. Use of the 'Posture Pad' resulted in significant improvement in posture with subjects exhibiting correct posture 98 ± 5% of the time. Real-time feedback about how one is sitting is an effective mechanism for non-transient improvement of posture at computer workstations.

  11. Optimal pre-scheduling of problem remappings

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Saltz, Joel H.

    1987-01-01

    A large class of scientific computational problems can be characterized as a sequence of steps where a significant amount of computation occurs at each step, but the work performed at each step is not necessarily identical. Two good examples of this type of computation are: (1) regridding methods which change the problem discretization during the course of the computation, and (2) methods for solving sparse triangular systems of linear equations. Recent work has investigated a means of mapping such computations onto parallel processors; the method defines a family of static mappings with differing degrees of importance placed on the conflicting goals of good load balance and low communication/synchronization overhead. The performance tradeoffs are controllable by adjusting the parameters of the mapping method. To achieve good performance it may be necessary to dynamically change these parameters at run-time, but such changes can impose additional costs. If the computation's behavior can be determined prior to its execution, it is possible to construct an optimal parameter schedule using a low-order-polynomial-time dynamic programming algorithm. Since the dynamic programming algorithm can be expensive, the effect of a linear-time scheduling heuristic is also studied on one of the model problems, and the heuristic is shown to be effective and nearly optimal.
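
    The dynamic-programming idea can be sketched as follows, assuming a known per-step execution cost for each candidate mapping parameter and a fixed cost for switching parameters between steps; the costs below are made up for illustration.

    ```python
    # Hedged sketch of the dynamic-programming idea: given a per-step execution
    # cost for each candidate mapping parameter and a fixed cost for switching
    # parameters between steps, find the cheapest parameter schedule. The costs
    # below are made up for illustration.
    def optimal_schedule(step_cost, switch_cost):
        """step_cost[t][p] = cost of executing step t under mapping parameter p."""
        n_steps, n_params = len(step_cost), len(step_cost[0])
        INF = float("inf")
        best = list(step_cost[0])            # best[p]: cheapest cost of step 0 ending with p
        choice = [[None] * n_params]         # choice[t][p]: parameter used at step t-1 on the best path
        for t in range(1, n_steps):
            new_best, new_choice = [INF] * n_params, [0] * n_params
            for p in range(n_params):
                for q in range(n_params):
                    c = best[q] + step_cost[t][p] + (switch_cost if q != p else 0.0)
                    if c < new_best[p]:
                        new_best[p], new_choice[p] = c, q
            best = new_best
            choice.append(new_choice)
        # Backtrack the cheapest schedule.
        p = min(range(n_params), key=lambda i: best[i])
        total, schedule = best[p], [p]
        for t in range(n_steps - 1, 0, -1):
            p = choice[t][p]
            schedule.append(p)
        return total, schedule[::-1]

    # Parameter 1 is cheaper later on, but only worth switching to if held long enough.
    cost = [[4, 6], [4, 6], [9, 3], [9, 3]]
    print(optimal_schedule(cost, switch_cost=2.0))   # -> (16.0, [0, 0, 1, 1])
    ```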

  12. Longitudinal effects of college type and selectivity on degrees conferred upon undergraduate females in physical science, life science, math and computer science, and social science

    NASA Astrophysics Data System (ADS)

    Stevens, Stacy Mckimm

    There has been much research to suggest that a single-sex college experience for female undergraduate students can increase self-confidence and leadership ability during the college years and beyond. The results of previous studies also suggest that these students achieve in the workforce and enter graduate school at higher rates than their female peers graduating from coeducational institutions. However, some researchers have questioned these findings, suggesting that it is the selectivity level of the colleges rather than the gender composition of the student body that causes these differences. The purpose of this study was to justify the continuation of single-sex educational opportunities for females at the post-secondary level by examining the effects that college selectivity, college type, and time have on the rate of undergraduate females pursuing majors in non-traditional fields. The study examined the percentage of physical science, life science, math and computer science, and social science degrees conferred upon females graduating from women's colleges from 1985-2001, as compared to those at comparable coeducational colleges. Sampling for this study consisted of 42 liberal arts women's (n = 21) and coeducational (n = 21) colleges. Variables included the type of college, the selectivity level of the college, and the effect of time on the percentage of female graduates. Doubly multivariate repeated measures analysis of variance testing revealed significant main effects for college selectivity on social science graduates, and time on both life science and math and computer science graduates. Significant interaction was also found between the college type and time on social science graduates, as well as the college type, selectivity level, and time on math and computer science graduates. Implications of the results and suggestions for further research are discussed.

  13. Visualization of Pulsar Search Data

    NASA Astrophysics Data System (ADS)

    Foster, R. S.; Wolszczan, A.

    1993-05-01

    The search for periodic signals from rotating neutron stars or pulsars has been a computationally taxing problem for astronomers for more than twenty-five years. Over this time interval, increases in computational capability have allowed ever more sensitive searches, covering a larger parameter space. The volume of input data and the general presence of radio frequency interference typically produce numerous spurious signals. Visualization of the search output and enhanced real-time processing of significant candidate events allow the pulsar searcher to optimally process and search for new radio pulsars. The pulsar search algorithm and visualization system presented in this paper currently runs on serial RISC-based workstations, a traditional vector-based supercomputer, and a massively parallel computer. The serial software algorithm and its modifications for massively parallel computing are described. Four successive searches for millisecond-period radio pulsars using the Arecibo telescope at 430 MHz have resulted in the successful detection of new long-period and millisecond-period radio pulsars.
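
    The core of such a search is a Fourier-domain periodicity test; a minimal sketch (with a synthetic pulse train plus noise and an arbitrary significance threshold, not the authors' pipeline) is shown below.

    ```python
    # Hedged sketch of the core periodicity test: Fourier-transform a time series
    # and flag the strongest spectral peaks as pulsar candidates. The injected
    # pulse train, noise level, and threshold are illustrative, not survey data.
    import numpy as np

    fs, t_obs, f_pulse = 1000.0, 10.0, 37.3        # sample rate (Hz), span (s), spin frequency (Hz)
    t = np.arange(0.0, t_obs, 1.0 / fs)
    rng = np.random.default_rng(1)
    pulses = (np.sin(2 * np.pi * f_pulse * t) > 0.95).astype(float)   # ~10% duty-cycle pulse train
    series = pulses + rng.standard_normal(t.size)

    power = np.abs(np.fft.rfft(series - series.mean())) ** 2
    freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

    threshold = power.mean() + 8.0 * power.std()   # crude significance cut
    candidates = freqs[power > threshold]          # fundamental plus a few harmonics
    print("candidate frequencies (Hz):", np.round(candidates, 2))
    ```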

  14. Validating the Accuracy of Reaction Time Assessment on Computer-Based Tablet Devices.

    PubMed

    Schatz, Philip; Ybarra, Vincent; Leitner, Donald

    2015-08-01

    Computer-based assessment has evolved to tablet-based devices. Despite the availability of tablets and "apps," there is limited research validating their use. We documented timing delays between stimulus presentation and (simulated) touch response on iOS devices (3rd- and 4th-generation Apple iPads) and Android devices (Kindle Fire, Google Nexus, Samsung Galaxy) at response intervals of 100, 250, 500, and 1,000 milliseconds (ms). Results showed significantly greater timing error on Google Nexus and Samsung tablets (81-97 ms) than on Kindle Fire and Apple iPads (27-33 ms). Within Apple devices, iOS 7 obtained significantly lower timing error than iOS 6. Simple reaction time (RT) trials (250 ms) on tablet devices represent 12% to 40% error (30-100 ms), depending on the device, which decreases considerably for choice RT trials (3-5% error at 1,000 ms). Results raise implications for using the same device for serial clinical assessment of RT using tablets, as well as the need for calibration of software and hardware. © The Author(s) 2015.

  15. SU-E-J-91: FFT Based Medical Image Registration Using a Graphics Processing Unit (GPU).

    PubMed

    Luce, J; Hoggarth, M; Lin, J; Block, A; Roeske, J

    2012-06-01

    To evaluate the efficiency gains obtained from using a Graphics Processing Unit (GPU) to perform a Fourier Transform (FT) based image registration. Fourier-based image registration involves obtaining the FT of the component images, and analyzing them in Fourier space to determine the translations and rotations of one image set relative to another. An important property of FT registration is that by enlarging the images (adding additional pixels), one can obtain translations and rotations with sub-pixel resolution. The expense, however, is an increased computational time. GPUs may decrease the computational time associated with FT image registration by taking advantage of their parallel architecture to perform matrix computations much more efficiently than a Central Processor Unit (CPU). In order to evaluate the computational gains produced by a GPU, images with known translational shifts were utilized. A program was written in the Interactive Data Language (IDL; Exelis, Boulder, CO) to perform CPU-based calculations. Subsequently, the program was modified using GPU bindings (Tech-X, Boulder, CO) to perform GPU-based computation on the same system. Multiple image sizes were used, ranging from 256×256 to 2304×2304. The times required to complete the full algorithm on the CPU and GPU were benchmarked and the speed increase was defined as the ratio of the CPU-to-GPU computational time. The ratio of the CPU-to-GPU time was greater than 1.0 for all images, which indicates the GPU is performing the algorithm faster than the CPU. The smallest improvement, a 1.21 ratio, was found with the smallest image size of 256×256, and the largest speedup, a 4.25 ratio, was observed with the largest image size of 2304×2304. GPU programming resulted in a significant decrease in computational time associated with a FT image registration algorithm. The inclusion of the GPU may provide near real-time, sub-pixel registration capability. © 2012 American Association of Physicists in Medicine.
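
    The translation-recovery step of FT-based registration can be sketched with phase correlation in plain NumPy; a GPU version would route the same FFTs through a GPU array library (an assumption, not the authors' IDL implementation), and the sub-pixel refinement by image enlargement described above is omitted here.

    ```python
    # Hedged sketch of FT-based translation registration via phase correlation,
    # on the CPU with NumPy. Image content and the known shift are synthetic.
    import numpy as np

    def phase_correlation_shift(ref, moving):
        """Estimate the integer (dy, dx) shift of `moving` relative to `ref`."""
        F_ref, F_mov = np.fft.fft2(ref), np.fft.fft2(moving)
        cross = F_mov * np.conj(F_ref)
        cross /= np.abs(cross) + 1e-12                   # keep phase only
        corr = np.abs(np.fft.ifft2(cross))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        if dy > ref.shape[0] // 2: dy -= ref.shape[0]    # wrap to signed shifts
        if dx > ref.shape[1] // 2: dx -= ref.shape[1]
        return int(dy), int(dx)

    rng = np.random.default_rng(0)
    ref = rng.random((256, 256))
    moving = np.roll(ref, shift=(7, -12), axis=(0, 1))   # known translation
    print(phase_correlation_shift(ref, moving))          # -> (7, -12)
    ```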

  16. Efficient storage, computation, and exposure of computer-generated holograms by electron-beam lithography.

    PubMed

    Newman, D M; Hawley, R W; Goeckel, D L; Crawford, R D; Abraham, S; Gallagher, N C

    1993-05-10

    An efficient storage format was developed for computer-generated holograms for use in electron-beam lithography. This method employs run-length encoding and Lempel-Ziv-Welch compression and succeeds in exposing holograms that were previously infeasible owing to the hologram's tremendous pattern-data file size. These holograms also require significant computation; thus the algorithm was implemented on a parallel computer, which improved performance by 2 orders of magnitude. The decompression algorithm was integrated into the Cambridge electron-beam machine's front-end processor. Although this provides a much-needed capability, some hardware enhancements will be required in the future to overcome inadequacies in the current front-end processor that result in a lengthy exposure time.
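
    The run-length-encoding stage lends itself to a very small sketch (illustrative only; the LZW pass that follows it and the e-beam pattern format are not shown).

    ```python
    # Hedged sketch of run-length encoding for binary pattern rows with long
    # constant runs, the kind of data the abstract describes. LZW compression
    # of the RLE stream is not shown.
    def rle_encode(bits):
        """Encode a sequence of 0/1 pixels as (value, run_length) pairs."""
        runs, prev, count = [], bits[0], 0
        for b in bits:
            if b == prev:
                count += 1
            else:
                runs.append((prev, count))
                prev, count = b, 1
        runs.append((prev, count))
        return runs

    def rle_decode(runs):
        return [value for value, length in runs for _ in range(length)]

    row = [0] * 40 + [1] * 12 + [0] * 200 + [1] * 4
    encoded = rle_encode(row)
    print(encoded)                      # [(0, 40), (1, 12), (0, 200), (1, 4)]
    assert rle_decode(encoded) == row
    ```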

  17. Non-invasive brain stimulation and computational models in post-stroke aphasic patients: single session of transcranial magnetic stimulation and transcranial direct current stimulation. A randomized clinical trial.

    PubMed

    Santos, Michele Devido Dos; Cavenaghi, Vitor Breseghello; Mac-Kay, Ana Paula Machado Goyano; Serafim, Vitor; Venturi, Alexandre; Truong, Dennis Quangvinh; Huang, Yu; Boggio, Paulo Sérgio; Fregni, Felipe; Simis, Marcel; Bikson, Marom; Gagliardi, Rubens José

    2017-01-01

    Patients undergoing the same neuromodulation protocol may present different responses. Computational models may help in understanding such differences. The aims of this study were, firstly, to compare the performance of aphasic patients in naming tasks before and after one session of transcranial direct current stimulation (tDCS), transcranial magnetic stimulation (TMS) and sham, and analyze the results between these neuromodulation techniques; and secondly, through a computational model of the cortex and surrounding tissues, to assess current flow distribution and responses among patients who received tDCS and presented different levels of results from naming tasks. Prospective, descriptive, qualitative and quantitative, double-blind, randomized and placebo-controlled study conducted at Faculdade de Ciências Médicas da Santa Casa de São Paulo. Patients with aphasia received one session of tDCS, TMS or sham stimulation. The time taken to name pictures and the response time were evaluated before and after neuromodulation. Selected patients from the first intervention underwent a computational model stimulation procedure that simulated tDCS. The results did not indicate any statistically significant differences from before to after the stimulation. The computational models showed different current flow distributions. The present study did not show any statistically significant difference between tDCS, TMS and sham stimulation regarding naming tasks. The patients' responses to the computational model showed different patterns of current distribution.

  18. GPU accelerated dynamic functional connectivity analysis for functional MRI data.

    PubMed

    Akgün, Devrim; Sakoğlu, Ünal; Esquivel, Johnny; Adinoff, Bryon; Mete, Mutlu

    2015-07-01

    Recent advances in multi-core processors and graphics card based computational technologies have paved the way for an improved and dynamic utilization of parallel computing techniques. Numerous applications have been implemented for the acceleration of computationally-intensive problems in various computational science fields including bioinformatics, in which big data problems are prevalent. In neuroimaging, dynamic functional connectivity (DFC) analysis is a computationally demanding method used to investigate dynamic functional interactions among different brain regions or networks identified with functional magnetic resonance imaging (fMRI) data. In this study, we implemented and analyzed a parallel DFC algorithm based on thread-based and block-based approaches. The thread-based approach was designed to parallelize DFC computations and was implemented in both Open Multi-Processing (OpenMP) and Compute Unified Device Architecture (CUDA) programming platforms. Another approach developed in this study to better utilize CUDA architecture is the block-based approach, where parallelization involves smaller parts of fMRI time-courses obtained by sliding-windows. Experimental results showed that the proposed parallel design solutions enabled by the GPUs significantly reduce the computation time for DFC analysis. The multicore implementation using OpenMP on an 8-core processor provides up to 7.7× speed-up. The GPU implementation using CUDA yielded substantial accelerations ranging from 18.5× to 157× speed-up once thread-based and block-based approaches were combined in the analysis. The proposed parallel programming solutions showed that multi-core processor and CUDA-supported GPU implementations accelerated the DFC analyses significantly. The developed algorithms make DFC analyses more practical for multi-subject studies with more dynamic analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
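
    The quantity being parallelized is the sliding-window correlation between regional time courses; a serial NumPy sketch of that kernel (synthetic data, illustrative window and step sizes) is shown below. The GPU code evaluates many such window/region pairs concurrently.

    ```python
    # Hedged sketch of the kernel that DFC analysis parallelizes: the
    # sliding-window Pearson correlation between two regional fMRI time courses.
    import numpy as np

    def sliding_window_correlation(x, y, window, step=1):
        starts = range(0, len(x) - window + 1, step)
        return np.array([np.corrcoef(x[s:s + window], y[s:s + window])[0, 1] for s in starts])

    rng = np.random.default_rng(0)
    n = 300                                              # time points (TRs)
    x = rng.standard_normal(n)
    y = np.where(np.arange(n) < 150, x, -x) + 0.5 * rng.standard_normal(n)   # coupling flips sign mid-scan
    dfc = sliding_window_correlation(x, y, window=40, step=5)
    print(np.round(dfc[:3], 2), "...", np.round(dfc[-3:], 2))   # positive early, negative late
    ```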

  19. Comparative randomised active drug controlled clinical trial of a herbal eye drop in computer vision syndrome.

    PubMed

    Chatterjee, Pranab Kr; Bairagi, Debasis; Roy, Sudipta; Majumder, Nilay Kr; Paul, Ratish Ch; Bagchi, Sunil Ch

    2005-07-01

    A comparative double-blind placebo-controlled clinical trial of a herbal eye drop (itone) was conducted to find out its efficacy and safety in 120 patients with computer vision syndrome. Patients using computers for more than 3 hours continuously per day having symptoms of watering, redness, asthenia, irritation, foreign body sensation and signs of conjunctival hyperaemia, corneal filaments and mucus were studied. One hundred and twenty patients were randomly given either placebo, tears substitute (tears plus) or itone in identical vials with specific code number and were instructed to put one drop four times daily for 6 weeks. Subjective and objective assessments were done at bi-weekly intervals. In computer vision syndrome both subjective and objective improvements were noticed with itone drops. Itone drop was found significantly better than placebo (p<0.01) and almost identical results were observed with tears plus (difference was not statistically significant). Itone is considered to be a useful drug in computer vision syndrome.

  20. Validity of the modified RULA for computer workers and reliability of one observation compared to six.

    PubMed

    Levanon, Yafa; Lerman, Yehuda; Gefen, Amit; Ratzon, Navah Z

    2014-01-01

    Awkward body posture while typing is associated with musculoskeletal disorders (MSDs). Valid rapid assessment of computer workers' body posture is essential for the prevention of MSD among this large population. This study aimed to examine the validity of the modified rapid upper limb assessment (mRULA) which adjusted the rapid upper limb assessment (RULA) for computer workers. Moreover, this study examines whether one observation during a working day is sufficient or more observations are needed. A total of 29 right-handed computer workers were recruited. RULA and mRULA were conducted. The observations were then repeated six times at one-hour intervals. A significant moderate correlation (r = 0.6 and r = 0.7 for mouse and keyboard, respectively) was found between the assessments. No significant differences were found between one observation and six observations per working day. The mRULA was found to be valid for the assessment of computer workers, and one observation was sufficient to assess the work-related risk factor.

  1. Transferring ecosystem simulation codes to supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1995-01-01

    Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectorizing and 'in-line' capabilities. The code runs at only about 5 percent of the Cray's peak speed because it ineffectively uses the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.

  2. Prospects for Finite-Difference Time-Domain (FDTD) Computational Electrodynamics

    NASA Astrophysics Data System (ADS)

    Taflove, Allen

    2002-08-01

    FDTD is the most powerful numerical solution of Maxwell's equations for structures having internal details. Relative to moment-method and finite-element techniques, FDTD can accurately model such problems with 100-times more field unknowns and with nonlinear and/or time-variable parameters. Hundreds of FDTD theory and applications papers are published each year. Currently, there are at least 18 commercial FDTD software packages for solving problems in: defense (especially vulnerability to electromagnetic pulse and high-power microwaves); design of antennas and microwave devices/circuits; electromagnetic compatibility; bioelectromagnetics (especially assessment of cellphone-generated RF absorption in human tissues); signal integrity in computer interconnects; and design of micro-photonic devices (especially photonic bandgap waveguides, microcavities, and lasers). This paper explores emerging prospects for FDTD computational electromagnetics brought about by continuing advances in computer capabilities and FDTD algorithms. We conclude that advances already in place point toward the usage by 2015 of ultralarge-scale (up to 1E11 field unknowns) FDTD electromagnetic wave models covering the frequency range from about 0.1 Hz to 1E17 Hz. We expect that this will yield significant benefits for our society in areas as diverse as computing, telecommunications, defense, and public health and safety.
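
    At its core, FDTD marches interleaved E and H field updates in time; a minimal 1D sketch in normalized units (illustrative grid, source, and step count, far removed from the large-scale 3D solvers discussed here) is:

    ```python
    # Hedged sketch of a 1D FDTD (Yee) update loop in normalized units
    # (c = eps = mu = 1), with a soft Gaussian source and PEC ends.
    import numpy as np

    nx, nt = 400, 600
    c, dx = 1.0, 1.0
    dt = 0.5 * dx / c                      # satisfies the 1D Courant condition
    ez = np.zeros(nx)
    hy = np.zeros(nx - 1)

    for n in range(nt):
        hy += (dt / dx) * (ez[1:] - ez[:-1])          # update H from the curl of E
        ez[1:-1] += (dt / dx) * (hy[1:] - hy[:-1])    # update E from the curl of H
        ez[50] += np.exp(-((n - 30) / 10.0) ** 2)     # soft Gaussian source

    print("peak |Ez| after propagation:", float(np.abs(ez).max()))
    ```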

  3. A preliminary design for flight testing the FINDS algorithm

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.

    1986-01-01

    This report presents a preliminary design for flight testing the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a target flight computer. The FINDS software was ported onto the target flight computer by reducing the code size by 65%. Several modifications were made to the computational algorithms resulting in a near real-time execution speed. Finally, a new failure detection strategy was developed resulting in a significant improvement in the detection time performance. In particular, low level MLS, IMU and IAS sensor failures are detected instantaneously with the new detection strategy, while accelerometer and the rate gyro failures are detected within the minimum time allowed by the information generated in the sensor residuals based on the point mass equations of motion. All of the results have been demonstrated by using five minutes of sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment.

  4. Reconfigurable Computing As an Enabling Technology for Single-Photon-Counting Laser Altimetry

    NASA Technical Reports Server (NTRS)

    Powell, Wesley; Hicks, Edward; Pinchinat, Maxime; Dabney, Philip; McGarry, Jan; Murray, Paul

    2003-01-01

    Single-photon-counting laser altimetry is a new measurement technique offering significant advantages in vertical resolution, reducing instrument size, mass, and power, and reducing laser complexity as compared to analog or threshold detection laser altimetry techniques. However, these improvements come at the cost of a dramatically increased requirement for onboard real-time data processing. Reconfigurable computing has been shown to offer considerable performance advantages in performing this processing. These advantages have been demonstrated on the Multi-KiloHertz Micro-Laser Altimeter (MMLA), an aircraft based single-photon-counting laser altimeter developed by NASA Goddard Space Flight Center with several potential spaceflight applications. This paper describes how reconfigurable computing technology was employed to perform MMLA data processing in real-time under realistic operating constraints, along with the results observed. This paper also expands on these prior results to identify concepts for using reconfigurable computing to enable spaceflight single-photon-counting laser altimeter instruments.

  5. An automatic step adjustment method for average power analysis technique used in fiber amplifiers

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Ming

    2006-04-01

    An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two unique merits, higher-order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared to the APA technique, the proposed method increases the computing speed by more than a hundredfold at the same error level. By computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude at the same amplifying section number. The proposed method has the capacity to rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.
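
    The general idea of automatic step adjustment can be sketched with generic step-doubling error control applied to a simple saturable-gain power equation; the model and tolerances below are illustrative stand-ins, not the paper's amplifier equations.

    ```python
    # Hedged sketch: march a power-evolution ODE along the fiber, compare one
    # full RK4 step against two half steps, and grow or shrink dz to hold a
    # local error target. Model and numbers are illustrative.
    import numpy as np

    def dPdz(P, g0=2.0, P_sat=5.0, alpha=0.05):
        return (g0 / (1.0 + P / P_sat) - alpha) * P      # saturable gain minus loss

    def rk4_step(P, dz):
        k1 = dPdz(P)
        k2 = dPdz(P + 0.5 * dz * k1)
        k3 = dPdz(P + 0.5 * dz * k2)
        k4 = dPdz(P + dz * k3)
        return P + dz * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

    def integrate_adaptive(P0, L, dz=0.5, tol=1e-6):
        z, P, accepted = 0.0, P0, 0
        while L - z > 1e-9:
            dz = min(dz, L - z)
            full = rk4_step(P, dz)
            half = rk4_step(rk4_step(P, 0.5 * dz), 0.5 * dz)
            if abs(full - half) <= tol:                  # accept and try a larger step
                z, P, accepted = z + dz, half, accepted + 1
                dz *= 1.5
            else:                                        # reject and shrink the step
                dz *= 0.5
        return P, accepted

    P_out, n_steps = integrate_adaptive(P0=0.001, L=20.0)
    print(f"output power {P_out:.3f} (arb. units) after {n_steps} accepted steps")
    ```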

  6. The computational challenges of Earth-system science.

    PubMed

    O'Neill, Alan; Steenman-Clark, Lois

    2002-06-15

    The Earth system--comprising atmosphere, ocean, land, cryosphere and biosphere--is an immensely complex system, involving processes and interactions on a wide range of space- and time-scales. To understand and predict the evolution of the Earth system is one of the greatest challenges of modern science, with success likely to bring enormous societal benefits. High-performance computing, along with the wealth of new observational data, is revolutionizing our ability to simulate the Earth system with computer models that link the different components of the system together. There are, however, considerable scientific and technical challenges to be overcome. This paper will consider four of them: complexity, spatial resolution, inherent uncertainty and time-scales. Meeting these challenges requires a significant increase in the power of high-performance computers. The benefits of being able to make reliable predictions about the evolution of the Earth system should, on their own, amply repay this investment.

  7. Accelerated computer generated holography using sparse bases in the STFT domain.

    PubMed

    Blinder, David; Schelkens, Peter

    2018-01-22

    Computer-generated holography at high resolutions is a computationally intensive task. Efficient algorithms are needed to generate holograms at acceptable speeds, especially for real-time and interactive applications such as holographic displays. We propose a novel technique to generate holograms using a sparse basis representation in the short-time Fourier space combined with a wavefront-recording plane placed in the middle of the 3D object. By computing the point spread functions in the transform domain, we update only a small subset of the precomputed largest-magnitude coefficients to significantly accelerate the algorithm over conventional look-up table methods. We implement the algorithm on a GPU, and report a speedup factor of over 30. We show that this transform is superior to wavelet-based approaches, and show quantitative and qualitative improvements over the state-of-the-art WASABI method; we report accuracy gains of 2 dB PSNR, as well as improved view preservation.

  8. A comparison between anisotropic analytical and multigrid superposition dose calculation algorithms in radiotherapy treatment planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Vincent W.C., E-mail: htvinwu@polyu.edu.hk; Tse, Teddy K.H.; Ho, Cola L.M.

    2013-07-01

    Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires a relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) by the Eclipse treatment planning system and multigrid superposition (MGS) by the XiO treatment planning system are 2 commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using their respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) soft tissue-bone boundary (Soft/Bone), (5) soft tissue-air boundary (Soft/Air), and (6) bone-air boundary (Bone/Air), were measured and compared using the mean absolute percentage error (MAPE), which was a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6%±2.1) were significantly lower than those of AAA (3.7%±2.5) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required a longer computation time.
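
    The comparison metric itself is easy to reproduce; a small sketch of the MAPE calculation with made-up dose values (not data from the study) is:

    ```python
    # Hedged sketch of the mean absolute percentage error (MAPE) of an
    # algorithm's point doses against Monte Carlo reference doses. The numbers
    # below are invented purely to show the calculation.
    import numpy as np

    def mape(algorithm_dose, monte_carlo_dose):
        """Mean absolute percentage deviation from the Monte Carlo reference."""
        deviation = 100.0 * (algorithm_dose - monte_carlo_dose) / monte_carlo_dose
        return np.mean(np.abs(deviation))

    mc  = np.array([2.00, 1.85, 0.95, 1.40, 1.10])   # reference doses (Gy) at reference points
    aaa = np.array([2.06, 1.80, 0.91, 1.45, 1.13])
    mgs = np.array([2.03, 1.83, 0.93, 1.42, 1.12])
    print(f"AAA MAPE: {mape(aaa, mc):.1f}%   MGS MAPE: {mape(mgs, mc):.1f}%")
    ```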

  9. A Gaussian mixture model based adaptive classifier for fNIRS brain-computer interfaces and its testing via simulation

    NASA Astrophysics Data System (ADS)

    Li, Zheng; Jiang, Yi-han; Duan, Lian; Zhu, Chao-zhe

    2017-08-01

    Objective. Functional near infra-red spectroscopy (fNIRS) is a promising brain imaging technology for brain-computer interfaces (BCI). Future clinical uses of fNIRS will likely require operation over long time spans, during which neural activation patterns may change. However, current decoders for fNIRS signals are not designed to handle changing activation patterns. The objective of this study is to test via simulations a new adaptive decoder for fNIRS signals, the Gaussian mixture model adaptive classifier (GMMAC). Approach. GMMAC can simultaneously classify and track activation pattern changes without the need for ground-truth labels. This adaptive classifier uses computationally efficient variational Bayesian inference to label new data points and update mixture model parameters, using the previous model parameters as priors. We test GMMAC in simulations in which neural activation patterns change over time and compare to static decoders and unsupervised adaptive linear discriminant analysis classifiers. Main results. Our simulation experiments show GMMAC can accurately decode under time-varying activation patterns: shifts of activation region, expansions of activation region, and combined contractions and shifts of activation region. Furthermore, the experiments show the proposed method can track the changing shape of the activation region. Compared to prior work, GMMAC performed significantly better than the other unsupervised adaptive classifiers on a difficult activation pattern change simulation: 99% versus  <54% in two-choice classification accuracy. Significance. We believe GMMAC will be useful for clinical fNIRS-based brain-computer interfaces, including neurofeedback training systems, where operation over long time spans is required.
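
    A loose analogue of the adaptive idea can be sketched with scikit-learn by warm-starting a Gaussian mixture fit on each new block of unlabeled feature vectors so the components track a drifting activation pattern; this is not the paper's variational-Bayes update, and the data are synthetic.

    ```python
    # Hedged analogue of an adaptive mixture classifier: refit on each block of
    # unlabeled features, warm-started from the previous fit, so the components
    # follow a slowly drifting class mean. Synthetic fNIRS-like features.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    gmm = GaussianMixture(n_components=2, warm_start=True, random_state=0)

    mean_a, mean_b = np.array([0.0, 0.0]), np.array([3.0, 3.0])
    for block in range(5):
        # Two classes of feature vectors; class-A mean drifts a little each block.
        a = rng.normal(mean_a + 0.3 * block, 1.0, size=(100, 2))
        b = rng.normal(mean_b, 1.0, size=(100, 2))
        X = np.vstack([a, b])
        gmm.fit(X)                                  # warm start reuses the previous parameters
        labels = gmm.predict(X)                     # unsupervised class assignments
        print(f"block {block}: component means {np.round(gmm.means_, 2).tolist()}")
    ```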

  10. Reciprocating vs Rotary Instrumentation in Pediatric Endodontics: Cone Beam Computed Tomographic Analysis of Deciduous Root Canals using Two Single-file Systems

    PubMed Central

    Prabhakar, Attiguppe R; Yavagal, Chandrashekar; Naik, Saraswathi V

    2016-01-01

    Background: Primary root canals are considered to be most challenging due to their complex anatomy. "Wave one" and "one shape" are single-file systems with reciprocating and rotary motion respectively. The aim of this study was to evaluate and compare dentin thickness, centering ability, canal transportation, and instrumentation time of wave one and one shape files in primary root canals using a cone beam computed tomographic (CBCT) analysis. Study design: This is an experimental, in vitro study comparing the two groups. Materials and methods: A total of 24 extracted human primary teeth with minimum 7 mm root length were included in the study. Cone beam computed tomographic images were taken before and after the instrumentation for each group. Dentin thickness, centering ability, canal transportation, and instrumentation times were evaluated for each group. Results: A significant difference was found in instrumentation time and canal transportation measures between the two groups. Wave one showed less canal transportation as compared with one shape, and the mean instrumentation time of wave one was significantly less than that of one shape. Conclusion: The reciprocating single-file system was found to be faster, with far fewer procedural errors, and can hence be recommended for shaping the root canals of primary teeth. PMID:27274155

  11. Parallel Computer System for 3D Visualization Stereo on GPU

    NASA Astrophysics Data System (ADS)

    Al-Oraiqat, Anas M.; Zori, Sergii A.

    2018-03-01

    This paper proposes the organization of a parallel computer system based on a Graphics Processing Unit (GPU) for 3D stereo image synthesis. The development is based on the modified ray tracing method developed by the authors for fast search of ray intersections with scene objects. The system allows a significant increase in productivity for 3D stereo synthesis of photorealistic quality. The generalized procedure of 3D stereo image synthesis on the Graphics Processing Unit/Graphics Processing Clusters (GPU/GPC) is proposed. The efficiency of the proposed solutions by GPU implementation is compared with single-threaded and multithreaded implementations on the CPU. The achieved average acceleration in multi-thread implementation on the test GPU and CPU is about 7.5 and 1.6 times, respectively. Studying the influence of the size and configuration of the computational Compute Unified Device Architecture (CUDA) network on the computational speed shows the importance of their correct selection. The obtained experimental estimations can be significantly improved by new GPUs with a large number of processing cores and multiprocessors, as well as an optimized configuration of the computing CUDA network.

  12. Bridging the Hardware-Software Gap: A Proof Carrying Approach for Computer Systems Trust Evaluation (5.3.5)

    DTIC Science & Technology

    2017-08-22

    ... supply chain has significantly lowered the design cost and shortened the time-to-market (TTM) of integrated circuits (ICs) in the electronic industry. Over the ... semiconductor companies have focused on high-profit phases such as design, marketing, and sales and have outsourced chip manufacturing, wafer fabrication ...

  13. Soft tissue deformation estimation by spatio-temporal Kalman filter finite element method.

    PubMed

    Yarahmadian, Mehran; Zhong, Yongmin; Gu, Chengfan; Shin, Jaehyun

    2018-01-01

    Soft tissue modeling plays an important role in the development of surgical training simulators as well as in robot-assisted minimally invasive surgeries. It has been known that while the traditional Finite Element Method (FEM) promises the accurate modeling of soft tissue deformation, it still suffers from a slow computational process. This paper presents a Kalman filter finite element method to model soft tissue deformation in real time without sacrificing the traditional FEM accuracy. The proposed method employs the FEM equilibrium equation and formulates it as a filtering process to estimate soft tissue behavior using real-time measurement data. The model is temporally discretized using the Newmark method and further formulated as the system state equation. Simulation results demonstrate that the computational time of KF-FEM is approximately 10 times shorter than the traditional FEM and it is still as accurate as the traditional FEM. The normalized root-mean-square error of the proposed KF-FEM in reference to the traditional FEM is computed as 0.0116. It is concluded that the proposed method significantly improves the computational performance of the traditional FEM without sacrificing FEM accuracy. The proposed method also filters noises involved in system state and measurement data.
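
    The filtering idea in isolation can be sketched with a linear Kalman filter tracking a single nodal displacement and velocity from noisy measurements; the 1-DOF mass-spring-damper state model below is a stand-in for the Newmark-discretized FEM system used in the paper.

    ```python
    # Hedged sketch of a linear Kalman filter estimating displacement/velocity
    # of one node from noisy displacement measurements. Model parameters and
    # noise covariances are illustrative.
    import numpy as np

    dt, m, k, c = 0.01, 1.0, 50.0, 2.0
    # Discretized transition for x = [displacement, velocity] (semi-implicit Euler).
    F = np.array([[1.0, dt],
                  [-k / m * dt, 1.0 - c / m * dt]])
    H = np.array([[1.0, 0.0]])                      # only displacement is measured
    Q = 1e-5 * np.eye(2)                            # process noise covariance
    R = np.array([[1e-3]])                          # measurement noise covariance

    x, P = np.zeros((2, 1)), np.eye(2)
    true_x = np.array([[0.05], [0.0]])              # true initial displacement 5 cm
    rng = np.random.default_rng(0)

    for _ in range(500):
        true_x = F @ true_x
        z = H @ true_x + rng.normal(0.0, np.sqrt(R[0, 0]))   # noisy measurement
        # Predict.
        x, P = F @ x, F @ P @ F.T + Q
        # Update.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P

    print("estimated vs true displacement:", float(x[0, 0]), float(true_x[0, 0]))
    ```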

  14. Stability-Constrained Aerodynamic Shape Optimization with Applications to Flying Wings

    NASA Astrophysics Data System (ADS)

    Mader, Charles Alexander

    A set of techniques is developed that allows the incorporation of flight dynamics metrics as an additional discipline in a high-fidelity aerodynamic optimization. Specifically, techniques for including static stability constraints and handling qualities constraints in a high-fidelity aerodynamic optimization are demonstrated. These constraints are developed from stability derivative information calculated using high-fidelity computational fluid dynamics (CFD). Two techniques are explored for computing the stability derivatives from CFD. One technique uses an automatic differentiation adjoint technique (ADjoint) to efficiently and accurately compute a full set of static and dynamic stability derivatives from a single steady solution. The other technique uses a linear regression method to compute the stability derivatives from a quasi-unsteady time-spectral CFD solution, allowing for the computation of static, dynamic and transient stability derivatives. Based on the characteristics of the two methods, the time-spectral technique is selected for further development, incorporated into an optimization framework, and used to conduct stability-constrained aerodynamic optimization. This stability-constrained optimization framework is then used to conduct an optimization study of a flying wing configuration. This study shows that stability constraints have a significant impact on the optimal design of flying wings and that, while static stability constraints can often be satisfied by modifying the airfoil profiles of the wing, dynamic stability constraints can require a significant change in the planform of the aircraft in order for the constraints to be satisfied.
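
    The regression step for extracting stability derivatives can be sketched as an ordinary least-squares fit of a sampled moment coefficient against angle of attack and pitch rate; the coefficient history below is synthetic rather than CFD output.

    ```python
    # Hedged sketch of fitting stability derivatives by linear regression.
    # The "measured" pitching-moment history is synthetic, with known
    # derivatives plus noise, purely to show the fit recovering them.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 2.0 * np.pi, 60)
    alpha     = np.deg2rad(2.0) * np.sin(t)          # oscillating angle of attack (rad)
    alpha_dot = np.deg2rad(2.0) * np.cos(t)          # nondimensional pitch-rate stand-in

    Cm_0, Cm_alpha, Cm_q = 0.01, -0.65, -3.2
    Cm = Cm_0 + Cm_alpha * alpha + Cm_q * alpha_dot + 1e-4 * rng.standard_normal(t.size)

    # Least-squares fit of Cm ~ [1, alpha, alpha_dot].
    A = np.column_stack([np.ones_like(t), alpha, alpha_dot])
    coeffs, *_ = np.linalg.lstsq(A, Cm, rcond=None)
    print("fitted Cm_0, Cm_alpha, Cm_q:", np.round(coeffs, 3))
    ```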

  15. Computationally Efficient Multiconfigurational Reactive Molecular Dynamics

    PubMed Central

    Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.

    2012-01-01

    It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions, also sometimes referred to as “multistate”. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924
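
    The multiconfigurational idea in its smallest form is a two-state Hamiltonian whose ground state mixes two bonding topologies; a sketch with illustrative energies and coupling (not parameters from any fitted model) is:

    ```python
    # Hedged sketch of a two-state, EVB-style mixing of bonding topologies:
    # diagonalize a 2x2 state Hamiltonian to get the ground-state energy and
    # the weight of each topology. Numbers are illustrative.
    import numpy as np

    def ground_state(E1, E2, V12):
        """Diagonalize the 2x2 Hamiltonian; return ground energy and mixing weights."""
        H = np.array([[E1, V12],
                      [V12, E2]])
        energies, vectors = np.linalg.eigh(H)
        weights = vectors[:, 0] ** 2                 # population of each bonding topology
        return energies[0], weights

    # Near a transition state the two topologies are nearly degenerate, so the
    # coupling lowers the ground-state energy and the weights approach 50/50.
    E_ground, w = ground_state(E1=-10.0, E2=-9.8, V12=-1.5)
    print(f"ground-state energy {E_ground:.2f}, topology weights {np.round(w, 2)}")
    ```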

  16. Computer versus lecture: a comparison of two methods of teaching oral medication administration in a nursing skills laboratory.

    PubMed

    Jeffries, P R

    2001-10-01

    The purpose of this study was to compare the effectiveness of both an interactive, multimedia CD-ROM and a traditional lecture for teaching oral medication administration to nursing students. A randomized pretest/posttest experimental design was used. Forty-two junior baccalaureate nursing students beginning their fundamentals nursing course were recruited for this study at a large university in the midwestern United States. The students ranged in age from 19 to 45. Seventy-three percent reported having average computer skills and experience, while 15% reported poor to below average skills. Two methods were compared for teaching oral medication administration--a scripted lecture with black and white overhead transparencies, in addition to an 18-minute videotape on medication administration, and an interactive, multimedia CD-ROM program, covering the same content. There were no significant (p < .05) baseline differences between the computer and lecture groups by education or computer skills. Results showed significant differences between the two groups in cognitive gains and student satisfaction (p = .01), with the computer group demonstrating higher student satisfaction and more cognitive gains than the lecture group. The groups were similar in their ability to demonstrate the skill correctly. Importantly, time on task using the CD-ROM was less, with 96% of the learners completing the program in 2 hours or less, compared to 3 hours of class time for the lecture group.

  17. Computational Modeling and High Performance Computing in Advanced Materials Processing, Synthesis, and Design

    DTIC Science & Technology

    2014-12-07

    ... optimal injection time and locations for given process parameters of resin viscosity and preform permeability prior to resin gelation. However, there could be significant variations in these two parameters ... during actual manufacturing due to differences in the resin batches, mixes, temperature, and ambient conditions for viscosity; in the preform rolls ...

  18. Intranasal dexmedetomidine for sedation for pediatric computed tomography imaging.

    PubMed

    Mekitarian Filho, Eduardo; Robinson, Fay; de Carvalho, Werther Brunow; Gilio, Alfredo Elias; Mason, Keira P

    2015-05-01

    This prospective observational pilot study evaluated the aerosolized intranasal route for dexmedetomidine as a safe, effective, and efficient option for infant and pediatric sedation for computed tomography imaging. The mean time to sedation was 13.4 minutes, with excellent image quality and no failed sedations or significant adverse events. Registered with ClinicalTrials.gov: NCT01900405. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. A non-voxel-based broad-beam (NVBB) framework for IMRT treatment planning.

    PubMed

    Lu, Weiguo

    2010-12-07

    We present a novel framework that enables very large scale intensity-modulated radiation therapy (IMRT) planning with limited computation resources, with improvements in cost, plan quality and planning throughput. Current IMRT optimization uses a voxel-based beamlet superposition (VBS) framework that requires pre-calculation and storage of a large amount of beamlet data, resulting in large temporal and spatial complexity. We developed a non-voxel-based broad-beam (NVBB) framework for IMRT capable of direct treatment parameter optimization (DTPO). In this framework, both objective function and derivative are evaluated based on the continuous viewpoint, abandoning 'voxel' and 'beamlet' representations. Thus pre-calculation and storage of beamlets are no longer needed. The NVBB framework has linear complexities (O(N^3)) in both space and time. The low-memory, full-computation and data-parallelization nature of the framework makes it well suited for efficient implementation on the graphics processing unit (GPU). We implemented the NVBB framework and incorporated it with the TomoTherapy treatment planning system (TPS). The new TPS runs on a single workstation with one GPU card (NVBB-GPU). Extensive verification/validation tests were performed in house and via third parties. Benchmarks on dose accuracy, plan quality and throughput were compared with the commercial TomoTherapy TPS that is based on the VBS framework and uses a computer cluster with 14 nodes (VBS-cluster). For all tests, the dose accuracy of these two TPSs is comparable (within 1%). Plan qualities were comparable with no clinically significant difference for most cases except that superior target uniformity was seen in the NVBB-GPU for some cases. However, the planning time using the NVBB-GPU was reduced manyfold relative to the VBS-cluster. In conclusion, we developed a novel NVBB framework for IMRT optimization. The continuous viewpoint and DTPO nature of the algorithm eliminate the need for beamlets and lead to better plan quality. The computation parallelization on a GPU instead of a computer cluster significantly reduces hardware and service costs. Compared with using the current VBS framework on a computer cluster, the planning time is significantly reduced using the NVBB framework on a single workstation with a GPU card.

  20. Computing Bounds on Resource Levels for Flexible Plans

    NASA Technical Reports Server (NTRS)

    Muscvettola, Nicola; Rijsman, David

    2009-01-01

    A new algorithm efficiently computes the tightest exact bound on the levels of resources induced by a flexible activity plan. Tightness of bounds is extremely important for computations involved in planning because tight bounds can save potentially exponential amounts of search (through early backtracking and detection of solutions), relative to looser bounds. The bound computed by the new algorithm, denoted the resource-level envelope, constitutes the measure of maximum and minimum consumption of resources at any time for all fixed-time schedules in the flexible plan. At each time, the envelope guarantees that there are two fixed-time instantiations: one that produces the minimum level and one that produces the maximum level. Therefore, the resource-level envelope is the tightest possible resource-level bound for a flexible plan because any tighter bound would exclude the contribution of at least one fixed-time schedule. If the resource-level envelope can be computed efficiently, one could substitute looser bounds that are currently used in the inner cores of constraint-posting scheduling algorithms, with the potential for great improvements in performance. What is needed to reduce the cost of computation is an algorithm, the measure of complexity of which is no greater than a low-degree polynomial in N (where N is the number of activities). The new algorithm satisfies this need. In this algorithm, the computation of resource-level envelopes is based on a novel combination of (1) the theory of shortest paths in the temporal-constraint network for the flexible plan and (2) the theory of maximum flows for a flow network derived from the temporal and resource constraints. The measure of asymptotic complexity of the algorithm is O(N · maxflow(N)), where O(x) denotes an amount of computing time or a number of arithmetic operations proportional to a number of the order of x and maxflow(N) is the measure of complexity (and thus of cost) of a maximum-flow algorithm applied to an auxiliary flow network of 2N nodes. The algorithm is believed to be efficient in practice; experimental analysis shows the practical cost of maxflow to be as low as O(N^1.5). The algorithm could be enhanced following at least two approaches. In the first approach, incremental subalgorithms for the computation of the envelope could be developed. By use of temporal scanning of the events in the temporal network, it may be possible to significantly reduce the size of the networks on which it is necessary to run the maximum-flow subalgorithm, thereby significantly reducing the time required for envelope calculation. In the second approach, the practical effectiveness of resource envelopes in the inner loops of search algorithms could be tested for multi-capacity resource scheduling. This testing would include inner-loop backtracking and termination tests and variable and value-ordering heuristics that exploit the properties of resource envelopes more directly.
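
    Only the inner maximum-flow subroutine is easy to sketch; the construction of the auxiliary flow network from the temporal and resource constraints, which is the heart of the method, is not reproduced here, and the toy network below is illustrative.

    ```python
    # Hedged sketch of the max-flow subroutine the algorithm calls repeatedly,
    # using networkx. Nodes stand in for resource producers (p*) and consumers
    # (c*); capacities stand in for amounts that can be paired off at a time.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edge("source", "p1", capacity=3)
    G.add_edge("source", "p2", capacity=2)
    G.add_edge("p1", "c1", capacity=3)
    G.add_edge("p2", "c1", capacity=1)
    G.add_edge("p2", "c2", capacity=2)
    G.add_edge("c1", "sink", capacity=4)
    G.add_edge("c2", "sink", capacity=1)

    flow_value, flow_dict = nx.maximum_flow(G, "source", "sink")
    print("max pairable amount:", flow_value)        # feeds into the envelope bound
    ```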

  1. Software Aids Visualization of Computed Unsteady Flow

    NASA Technical Reports Server (NTRS)

    Kao, David; Kenwright, David

    2003-01-01

    Unsteady Flow Analysis Toolkit (UFAT) is a computer program that synthesizes motions of time-dependent flows represented by very large sets of data generated in computational fluid dynamics simulations. Prior to the development of UFAT, it was necessary to rely on static, single-snapshot depictions of time-dependent flows generated by flow-visualization software designed for steady flows. Whereas it typically takes weeks to analyze the results of a large-scale unsteady-flow simulation by use of steady-flow visualization software, the analysis time is reduced to hours when UFAT is used. UFAT can be used to generate graphical objects of flow visualization results using multi-block curvilinear grids in the format of a previously developed NASA data-visualization program, PLOT3D. These graphical objects can be rendered using FAST, another popular flow visualization software developed at NASA. Flow-visualization techniques that can be exploited by use of UFAT include time-dependent tracking of particles, detection of vortex cores, extractions of stream ribbons and surfaces, and tetrahedral decomposition for optimal particle tracking. Unique computational features of UFAT include capabilities for automatic (batch) processing, restart, memory mapping, and parallel processing. These capabilities significantly reduce analysis time and storage requirements, relative to those of prior flow-visualization software. UFAT can be executed on a variety of supercomputers.

  2. Decompression management by 43 models of dive computer: single square-wave exposures to between 15 and 50 metres' depth.

    PubMed

    Sayer, Martin D J; Azzopardi, Elaine; Sieber, Arne

    2014-12-01

    Dive computers are used in some occupational diving sectors to manage decompression but there is little independent assessment of their performance. A significant proportion of occupational diving operations employ single square-wave pressure exposures in support of their work. Single examples of 43 models of dive computer were compressed to five simulated depths between 15 and 50 metres' sea water (msw) and maintained at those depths until they had registered over 30 minutes of decompression. At each depth, and for each model, downloaded data were used to collate the times at which the unit was still registering "no decompression" and the times at which various levels of decompression were indicated or exceeded. Each depth profile was replicated three times for most models. Decompression isopleths for no-stop dives indicated that computers tended to be more conservative than standard decompression tables at depths shallower than 30 msw but less conservative between 30-50 msw. For dives requiring decompression, computers were predominantly more conservative than tables across the whole depth range tested. There was considerable variation between models in the times permitted at all of the depth/decompression combinations. The present study would support the use of some dive computers for controlling single, square-wave diving by some occupational sectors. The choice of which makes and models to use would have to consider their specific dive management characteristics which may additionally be affected by the intended operational depth and whether staged decompression was permitted.

  3. Fast Determination of Distribution-Connected PV Impacts Using a Variable Time-Step Quasi-Static Time-Series Approach: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry

    The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase. Tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors in metrics such as the highest and lowest voltage occurring on the feeder, the number of voltage regulator tap operations, and the total amount of losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.
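
    The variable-time-step idea can be sketched as a loop that steps coarsely while the net load/PV profile changes slowly and refines where it ramps quickly; solve_power_flow() below is a placeholder for a full distribution power-flow solution, not part of any real tool's API, and all numbers are illustrative.

    ```python
    # Hedged sketch of a variable-time-step QSTS loop over one day of profiles.
    import numpy as np

    def solve_power_flow(net_load_kw):
        """Placeholder for a full distribution-circuit power-flow solution."""
        return 1.0 - 4e-6 * net_load_kw            # fake per-unit feeder-end voltage

    minutes = 24 * 60
    t = np.arange(minutes)
    load = 800 + 300 * np.sin(2 * np.pi * (t - 6 * 60) / minutes)            # kW
    pv = np.clip(600 * np.sin(np.pi * (t - 6 * 60) / (12 * 60)), 0, None)    # kW
    pv[720:740] *= 0.2                              # passing cloud causes a sharp ramp
    net = load - pv

    coarse, fine, ramp_threshold = 15, 1, 5.0       # minutes, minutes, kW per minute
    i, solves = 0, 0
    while i < minutes:
        look = min(i + coarse, minutes - 1)
        ramp = abs(net[look] - net[i]) / coarse
        step = fine if ramp > ramp_threshold else coarse   # refine only where the profile ramps fast
        solve_power_flow(net[i])
        solves += 1
        i += step

    print(f"solves with variable steps: {solves}  vs fixed 1-minute steps: {minutes}")
    ```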

  4. Simulation of intelligent object behavior in a virtual reality system

    NASA Astrophysics Data System (ADS)

    Mironov, Sergey F.

    1998-01-01

    This article presents a technique for computer control of power boat movement in real-time marine trainers or arcade games. The author developed and successfully implemented a general technique allowing intelligent navigation of computer-controlled moving objects that proved to be appropriate for real-time applications. This technique covers a significant part of the behavioral tasks that appear in such titles. At the same time, the technique forms part of a more general system that also involves control of less complicated characters of another nature. Being an open system, it can easily be used in action or arcade programming to improve the overall quality of the characters' artificial intelligence.

  5. GPU-based real-time soft tissue deformation with cutting and haptic feedback.

    PubMed

    Courtecuisse, Hadrien; Jung, Hoeryong; Allard, Jérémie; Duriez, Christian; Lee, Doo Yong; Cotin, Stéphane

    2010-12-01

    This article describes a series of contributions in the field of real-time simulation of soft tissue biomechanics. These contributions address various requirements for interactive simulation of complex surgical procedures. In particular, this article presents results in the areas of soft tissue deformation, contact modelling, simulation of cutting, and haptic rendering, which are all relevant to a variety of medical interventions. The contributions described in this article share a common underlying model of deformation and rely on GPU implementations to significantly improve computation times. This consistency in the modelling technique and computational approach ensures coherent results as well as efficient, robust and flexible solutions. Copyright © 2010 Elsevier Ltd. All rights reserved.

  6. Numerical Simulation of Forced and Free-to-Roll Delta-Wing Motions

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.; Schiff, Lewis B.

    1996-01-01

    The three-dimensional, Reynolds-averaged, Navier-Stokes (RANS) equations are used to numerically simulate nonsteady vortical flow about a 65-deg sweep delta wing at 30-deg angle of attack. Two large-amplitude, high-rate, forced-roll motions, and a damped free-to-roll motion are presented. The free-to-roll motion is computed by coupling the time-dependent RANS equations to the flight dynamic equation of motion. The computed results are in good agreement with the forces, moments, and roll-angle time histories. Vortex breakdown is present in each case. Significant time lags in the vortex breakdown motions relative to the body motions strongly influence the dynamic forces and moments.
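
    The coupling loop itself is simple to sketch: advance the roll equation of motion with a rolling moment supplied by the flow solver at each step. The damped restoring-moment function below is an illustrative stand-in for the RANS computation.

    ```python
    # Hedged sketch of coupling the roll equation of motion,
    # I_xx * phi_ddot = L_roll, to a moment supplied by a flow solver.
    # The moment model, inertia, and time step are illustrative.
    import numpy as np

    def rolling_moment(phi, phi_dot):
        """Stand-in for the CFD-computed rolling moment (N*m)."""
        return -40.0 * np.sin(phi) - 6.0 * phi_dot       # restoring + damping, illustrative

    I_xx, dt, n_steps = 5.0, 1e-3, 20000                 # roll inertia (kg*m^2), time step (s)
    phi, phi_dot = np.deg2rad(30.0), 0.0                 # released from a 30-degree bank

    for _ in range(n_steps):
        L_roll = rolling_moment(phi, phi_dot)            # in the real case: one RANS step here
        phi_dot += dt * L_roll / I_xx                    # semi-implicit Euler update
        phi += dt * phi_dot

    print("roll angle after 20 s (deg):", round(float(np.degrees(phi)), 3))   # decays toward zero
    ```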

  7. Characterizing and Mitigating Work Time Inflation in Task Parallel Programs

    DOE PAGES

    Olivier, Stephen L.; de Supinski, Bronis R.; Schulz, Martin; ...

    2013-01-01

    Task parallelism raises the level of abstraction in shared memory parallel programming to simplify the development of complex applications. However, task parallel applications can exhibit poor performance due to thread idleness, scheduling overheads, and work time inflation – additional time spent by threads in a multithreaded computation beyond the time required to perform the same work in a sequential computation. We identify the contributions of each factor to lost efficiency in various task parallel OpenMP applications and diagnose the causes of work time inflation in those applications. Increased data access latency can cause significant work time inflation in NUMA systems. Our locality framework for task parallel OpenMP programs mitigates this cause of work time inflation. Our extensions to the Qthreads library demonstrate that locality-aware scheduling can improve performance up to 3X compared to the Intel OpenMP task scheduler.

  8. Association of sedentary behavior time with ideal cardiovascular health: the ORISCAV-LUX study.

    PubMed

    Crichton, Georgina E; Alkerwi, Ala'a

    2014-01-01

    Recently attention has been drawn to the health impacts of time spent engaging in sedentary behaviors. No studies have examined sedentary behaviors in relation to the newly defined construct of ideal cardiovascular health, which incorporates three health factors (blood pressure, total cholesterol, fasting plasma glucose) and four behaviors (physical activity, smoking, body mass index, diet). The purpose of this study was to examine associations between sedentary behaviors, including sitting time, and time spent viewing television and in front of a computer, with cardiovascular health, in a representative sample of adults from Luxembourg. A cross-sectional analysis of 1262 participants in the Observation of Cardiovascular Risk Factors in Luxembourg study was conducted, who underwent objective cardiovascular health assessments and completed the International Physical Activity Questionnaire. A Cardiovascular Health Score was calculated based on the number of health factors and behaviors at ideal levels. Sitting time on a weekday, television time, and computer time (both on a workday and a day off), were related to the Cardiovascular Health Score. Higher weekday sitting time was significantly associated with a poorer Cardiovascular Health Score (p = 0.002 for linear trend), after full adjustment for age, gender, education, income and occupation. Television time was inversely associated with the Cardiovascular Health Score, on both a workday and a day off (p = 0.002 for both). A similar inverse relationship was observed between the Cardiovascular Health Score and computer time, only on a day off (p = 0.04). Higher time spent sitting, viewing television, and using a computer during a day off may be unfavorably associated with ideal cardiovascular health.
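
    The Cardiovascular Health Score described above is a simple count of metrics at ideal levels. A minimal sketch of such a count is given below; the cut-points follow the American Heart Association's commonly cited ideal-level definitions and are assumptions here, since the study's exact criteria are not reproduced in the abstract.

        def cardiovascular_health_score(bp_sys, bp_dia, total_chol, fasting_glucose,
                                        bmi, smoker, activity_min_week, healthy_diet):
            """Count how many of the seven metrics are at an assumed 'ideal' level (0-7)."""
            ideal = [
                bp_sys < 120 and bp_dia < 80,       # blood pressure (untreated assumed)
                total_chol < 200,                   # total cholesterol, mg/dL
                fasting_glucose < 100,              # fasting plasma glucose, mg/dL
                bmi < 25,                           # body mass index, kg/m^2
                not smoker,                         # non-smoking treated as ideal here
                activity_min_week >= 150,           # moderate physical activity, min/week
                bool(healthy_diet),                 # diet judged healthy by the study's criteria
            ]
            return sum(ideal)

        # Example: a hypothetical participant meeting five of the seven ideal levels.
        print(cardiovascular_health_score(118, 76, 210, 95, 27.0, False, 180, True))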

  9. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available can provide. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) a high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  10. Viscoelastic Finite Difference Modeling Using Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.

    2014-12-01

    Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration like full-waveform inversion. This paper explores the use of Graphics Processing Units (GPU) to compute a time based finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether adopting GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely available 2D software of Bohlen (2002), provided under the GNU General Public License (GPL). This implementation is based on a second order centred differences scheme to approximate time differences and staggered grid schemes with centred difference of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code from Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by the amortization of kernel overheads and of delays introduced by memory transfers to and from the GPU through the PCI-E bus. Those tests indicate that the GPU memory size and the slow memory transfers are the limiting factors of our GPU implementation. These results show the benefits of using GPUs instead of CPUs for time based finite-difference seismic simulations. The reductions in computation time and in hardware costs are significant and open the door for new approaches in seismic inversion.
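
    For orientation, the core of such a solver is a staggered-grid velocity-stress update with centred differences. The sketch below shows a second-order update for a 1D elastic toy problem in NumPy; it is a didactic stand-in, not the GPU/MPI viscoelastic code described above (no memory variables, absorbing boundaries, or OpenCL kernels), and all parameter values are illustrative.

        import numpy as np

        # Illustrative 1D staggered-grid velocity-stress update (elastic, 2nd order in space and time).
        nx, dx, dt, nt = 400, 5.0, 5e-4, 800
        rho, vp = 2000.0, 3000.0            # density (kg/m^3) and P velocity (m/s)
        mu = rho * vp**2                    # modulus for this 1D toy problem
        v = np.zeros(nx)                    # particle velocity at integer grid points
        s = np.zeros(nx)                    # stress at half grid points

        src = nx // 2
        for it in range(nt):
            # update velocity from the stress gradient (staggered centred difference)
            v[1:] += dt / rho * (s[1:] - s[:-1]) / dx
            # update stress from the velocity gradient
            s[:-1] += dt * mu * (v[1:] - v[:-1]) / dx
            # Ricker-like source (25 Hz, delayed by 0.04 s) injected into the stress field
            t = it * dt
            arg = (np.pi * 25.0 * (t - 0.04)) ** 2
            s[src] += (1.0 - 2.0 * arg) * np.exp(-arg)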

  11. A novel recursive Fourier transform for nonuniform sampled signals: application to heart rate variability spectrum estimation.

    PubMed

    Holland, Alexander; Aboy, Mateo

    2009-07-01

    We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT based spectrum estimation with Lomb-Scargle Transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than Nlog(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to comparable estimation performance to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
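
    The central idea, an iterative Fourier update that accepts arbitrarily spaced samples, can be sketched as follows: each new sample at time t_n adds its contribution to all N tracked frequency bins in O(N) work, with no interpolation onto a uniform grid. This is a minimal illustration of the principle only; the published RFT additionally handles windowing/forgetting and normalization, which are omitted here, and the class and variable names are assumptions.

        import numpy as np

        class RecursiveDFT:
            """Maintain N frequency bins for a nonuniformly sampled signal, O(N) per update."""

            def __init__(self, freqs_hz):
                self.freqs = np.asarray(freqs_hz, dtype=float)
                self.coeffs = np.zeros(len(self.freqs), dtype=complex)
                self.count = 0

            def update(self, t, x):
                # add the new sample's contribution to every tracked frequency bin
                self.coeffs += x * np.exp(-2j * np.pi * self.freqs * t)
                self.count += 1

            def power(self):
                # crude periodogram-style power estimate from the accumulated coefficients
                return np.abs(self.coeffs) ** 2 / max(self.count, 1)

        # Example: R-R-interval-like samples at irregular beat times with a 0.1 Hz modulation;
        # the spectral peak should appear near 0.1 Hz.
        rdft = RecursiveDFT(np.linspace(0.0, 0.5, 64))          # 0-0.5 Hz, typical HRV band
        times = np.cumsum(0.8 + 0.05 * np.random.randn(200))    # irregular beat times (s)
        for t in times:
            rdft.update(t, 0.1 * np.sin(2 * np.pi * 0.1 * t))
        print(rdft.freqs[rdft.power().argmax()])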

  12. A robust computational technique for model order reduction of two-time-scale discrete systems via genetic algorithms.

    PubMed

    Alsmadi, Othman M K; Abo-Hammour, Zaer S

    2015-01-01

    A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single input single output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, and simulation results show the potential and advantages of the new approach.

  13. FPGA-based architecture for motion recovering in real-time

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel; Maya-Rueda, Selene E.; Torres-Huitzil, Cesar

    2002-03-01

    A key problem in the computer vision field is the measurement of object motion in a scene. The main goal is to compute an approximation of the 3D motion from the analysis of an image sequence. Once computed, this information can be used as a basis to reach higher level goals in different applications. Motion estimation algorithms pose a significant computational load for sequential processors, limiting their use in practical applications. In this work we propose a hardware architecture for real-time motion estimation based on FPGA technology. The technique used for motion estimation is optical flow, chosen for its accuracy and the density of its velocity estimates, although other techniques are being explored. The architecture is composed of parallel modules working in a pipeline scheme to reach high throughput rates near gigaflops. The modules are organized in a regular structure to provide a high degree of flexibility to cover different applications. Some results will be presented and the real-time performance will be discussed and analyzed. The architecture is prototyped on an FPGA board with a Virtex device interfaced to a digital imager.

  14. Numerical arc segmentation algorithm for a radio conference-NASARC (version 2.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of NASARC software development through October 16, 1987. The Technical Manual describes the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operating instructions. Significant revisions have been incorporated in the Version 2.0 software. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit within the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time effecting an overall reduction in computer run time.

  15. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC, Version 2.0: User's Manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and the NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through October 16, 1987. The technical manual describes the NASARC concept and the algorithms which are used to implement it. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions have been incorporated in the Version 2.0 software over prior versions. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit into the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time reducing computer run time.

  16. Investigation of television transmission using adaptive delta modulation principles

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1976-01-01

    The results are presented of a study on the use of the delta modulator as a digital encoder of television signals. The computer simulation of different delta modulators was studied in order to find a satisfactory delta modulator. After finding a suitable delta modulator algorithm via computer simulation, the results were analyzed and then implemented in hardware to study its ability to encode real time motion pictures from an NTSC format television camera. The effects of channel errors on the delta modulated video signal were tested along with several error correction algorithms via computer simulation. A very high speed delta modulator was built (out of ECL logic), incorporating the most promising of the correction schemes, so that it could be tested on real time motion pictures. Delta modulators were investigated which could achieve significant bandwidth reduction without regard to complexity or speed. The first scheme investigated was a real time frame to frame encoding scheme which required the assembly of fourteen, 131,000 bit long shift registers as well as a high speed delta modulator. The other schemes involved the computer simulation of two dimensional delta modulator algorithms.

  17. Documentation Driven Development for Complex Real-Time Systems

    DTIC Science & Technology

    2004-12-01

    This paper presents a novel approach for development of complex real-time systems, called the documentation-driven development (DDD) approach. This... time systems. DDD will also support automated software generation based on a computational model and some relevant techniques. DDD includes two main... stakeholders to be easily involved in development processes and, therefore, significantly improve the agility of software development for complex real...

  18. Porting of the transfer-matrix method for multilayer thin-film computations on graphics processing units

    NASA Astrophysics Data System (ADS)

    Limmer, Steffen; Fey, Dietmar

    2013-07-01

    Thin-film computations are often a time-consuming task during optical design. An efficient way to accelerate these computations with the help of graphics processing units (GPUs) is described. It turned out that significant speed-ups can be achieved. We investigate the circumstances under which the best speed-up values can be expected. To this end, we compare different GPUs with one another and with a modern CPU. Furthermore, the effect of thickness modulation on the speed-up and the runtime behavior depending on the input data is examined.
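
    As background, the transfer-matrix method evaluates each candidate stack by multiplying one 2x2 characteristic matrix per layer, which is exactly the kind of small, data-parallel arithmetic that maps well to GPUs. The sketch below is a plain CPU/NumPy version for normal incidence (non-absorbing layers, admittances in free-space units); it illustrates the method being ported, not the authors' GPU code, and the example stack is an assumption.

        import numpy as np

        def reflectance(n_layers, d_layers, n_in, n_sub, wavelength):
            """Normal-incidence reflectance of a thin-film stack via 2x2 characteristic matrices."""
            M = np.eye(2, dtype=complex)
            for n, d in zip(n_layers, d_layers):
                delta = 2.0 * np.pi * n * d / wavelength          # phase thickness of the layer
                layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                  [1j * n * np.sin(delta), np.cos(delta)]])
                M = M @ layer
            B, C = M @ np.array([1.0, n_sub])                     # stack "admittance" vector
            r = (n_in * B - C) / (n_in * B + C)                   # amplitude reflection coefficient
            return float(np.abs(r) ** 2)

        # Example: a quarter-wave MgF2 layer (n ~ 1.38) on glass (n ~ 1.52) at 550 nm,
        # which should give a reflectance of roughly 1.3%.
        print(reflectance([1.38], [550.0 / (4 * 1.38)], 1.0, 1.52, 550.0))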

  19. Formal design and verification of a reliable computing platform for real-time control. Phase 2: Results

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Divito, Ben L.

    1992-01-01

    The design and formal verification of the Reliable Computing Platform (RCP), a fault tolerant computing system for digital flight control applications, is presented. The RCP uses N-Modular Redundant (NMR) style redundancy to mask faults and internal majority voting to flush the effects of transient faults. The system is formally specified and verified using the Ehdm verification system. A major goal of this work is to provide the system with significant capability to withstand the effects of High Intensity Radiated Fields (HIRF).

  20. One approach for evaluating the Distributed Computing Design System (DCDS)

    NASA Technical Reports Server (NTRS)

    Ellis, J. T.

    1985-01-01

    The Distributed Computing Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  1. WE-C-217BCD-08: Rapid Monte Carlo Simulations of DQE(f) of Scintillator-Based Detectors.

    PubMed

    Star-Lack, J; Abel, E; Constantin, D; Fahrig, R; Sun, M

    2012-06-01

    Monte Carlo simulations of DQE(f) can greatly aid in the design of scintillator-based detectors by helping optimize key parameters including scintillator material and thickness, pixel size, surface finish, and septa reflectivity. However, the additional optical transport significantly increases simulation times, necessitating a large number of parallel processors to adequately explore the parameter space. To address this limitation, we have optimized the DQE(f) algorithm, reducing simulation times per design iteration to 10 minutes on a single CPU. DQE(f) is proportional to the ratio MTF(f)^2/NPS(f). The LSF-MTF simulation uses a slanted line source and is rapidly performed with relatively few gammas launched. However, the conventional NPS simulation for standard radiation exposure levels requires the acquisition of multiple flood fields (nRun), each requiring billions of input gamma photons (nGamma), many of which will scintillate, thereby producing thousands of optical photons (nOpt) per deposited MeV. The resulting execution time is proportional to the product nRun x nGamma x nOpt. In this investigation, we revisit the theoretical derivation of DQE(f), and reveal significant computation time savings through the optimization of nRun, nGamma, and nOpt. Using GEANT4, we determine optimal values for these three variables for a GOS scintillator-amorphous silicon portal imager. Both isotropic and Mie optical scattering processes were modeled. Simulation results were validated against the literature. We found that, depending on the radiative and optical attenuation properties of the scintillator, the NPS can be accurately computed using values for nGamma below 1000, and values for nOpt below 500/MeV. nRun should remain above 200. Using these parameters, typical computation times for a complete NPS ranged from 2-10 minutes on a single CPU. The number of launched particles and corresponding execution times for a DQE simulation can be dramatically reduced allowing for accurate computation with modest computer hardware. NIH R01 CA138426. Several authors work for Varian Medical Systems. © 2012 American Association of Physicists in Medicine.
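
    The relation the simulation exploits is DQE(f) ∝ MTF(f)^2 / NPS(f); once the MTF and a normalized NPS are available, the frequency-dependent DQE follows from an element-wise ratio. The short sketch below shows only that final combination, using one common normalization convention; the array values, fluence, and mean-signal normalization are illustrative assumptions, not the authors' GEANT4 pipeline.

        import numpy as np

        def dqe(mtf, nps, incident_fluence, mean_signal):
            """Frequency-dependent DQE from MTF and a noise power spectrum (one common convention)."""
            mtf = np.asarray(mtf, dtype=float)
            nps = np.asarray(nps, dtype=float)
            # DQE(f) = (mean signal)^2 * MTF(f)^2 / (q * NPS(f)), q = incident photons per unit area
            return (mean_signal ** 2) * mtf ** 2 / (incident_fluence * nps)

        # Illustrative arrays only: a detector whose MTF falls off while the NPS is nearly white.
        f = np.linspace(0.0, 2.0, 5)                  # spatial frequency, cycles/mm
        mtf = np.exp(-1.2 * f)
        nps = np.full_like(f, 8.3e-6)
        print(dqe(mtf, nps, incident_fluence=2.0e5, mean_signal=1.0))   # DQE(0) comes out near 0.6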

  2. On the use of SSTAs (Significant Sequences of TIR Anomalies) to activate Natural Time Analysis: a long term study on earthquakes (M>4) occurred in Greece during 2004-2013

    NASA Astrophysics Data System (ADS)

    Lisi, Mariano; Tramutoli, Valerio; Eleftheriou, Alexander; Filizzola, Carolina; Genzano, Nicola; Lacava, Teodosio; Paciello, Rossana; Pergola, Nicola; Vallianatos, Filippos

    2017-04-01

    Real-time integration of independent observations is expected to significantly improve our present capability to dynamically assess Seismic Hazard. Specific observations (e.g. an anomaly in one parameter) can be used as a trigger (and/or to establish space/time constraints) for activating the analysis of other independent parameters (e.g. b-value computation or Natural Time Analysis of seismic data) whose systematic computation could otherwise be very computationally expensive or operationally impossible. In the present paper one of these parameters (the Earth's emitted radiation in the Thermal Infra-Red spectral region) has been used to activate the application of Natural Time Analysis of seismic data, in order to verify possible improvements in the forecast of earthquakes (M≥4) that occurred in Greece during 2004-2013. The RST (Robust Satellite Technique) data analysis approach and the RETIRA (Robust Estimator of TIR Anomalies) index were used to preliminarily define, and then to identify, Significant Sequences of TIR Anomalies (SSTAs) in 10 years (2004-2013) of daily TIR images acquired by the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board the Meteosat Second Generation (MSG) satellite. A previous paper showed that in the same period more than 93% of all identified SSTAs occurred in a pre-fixed space-time window around earthquake occurrence times (from 30 days before to 15 days after) and epicenters (within 150 km or the Dobrovolsky distance), with a false positive rate smaller than 7%. In this paper a circular area around the barycenter of the observed Thermal Anomalies (and not just their convolution) has been used to define the area from which to collect the seismic data required for Natural Time Analysis. Fifteen days prior to the date of the first observed Significant Thermal Anomaly (STA) was the starting time used for collecting earthquakes from the catalog. The changes in earthquake forecast quality achieved by using each individual parameter in different configurations, as well as the improvement emerging from their joint use, will be presented with reference to the 10-year study period and to several recent events that occurred in Greece.

  3. Is Virtual Surgical Planning in Orthognathic Surgery Faster Than Conventional Planning? A Time and Workflow Analysis of an Office-Based Workflow for Single- and Double-Jaw Surgery.

    PubMed

    Steinhuber, Thomas; Brunold, Silvia; Gärtner, Catherina; Offermanns, Vincent; Ulmer, Hanno; Ploder, Oliver

    2018-02-01

    The purpose of this study was to measure and compare the working time for virtual surgical planning (VSP) in orthognathic surgery in a largely office-based workflow with that for conventional surgical planning (CSP), regarding the type of surgery, staff involved, and working location. This prospective cohort study included patients treated with orthognathic surgery from May to December 2016. For each patient, both CSP with manual splint fabrication and VSP with fabrication of computer-aided design-computer-aided manufacturing splints were performed. The predictor variables were planning method (CSP or VSP) and type of surgery (single or double jaw), and the outcome was time. Descriptive and analytic statistics, including analysis of variance for repeated measures, were computed. The sample was composed of 40 patients (25 female and 15 male patients; mean age, 24.6 years) treated with single-jaw surgery (n = 18) or double-jaw surgery (n = 22). The mean times for planning single-jaw surgery were 145.5 ± 11.5 minutes for CSP and 109.3 ± 10.8 minutes for VSP, and those for planning double-jaw surgery were 224.1 ± 11.2 minutes and 149.6 ± 15.3 minutes, respectively. Besides the expected result that the working time was shorter for single- versus double-jaw surgery (P < .001), it was shown that VSP shortened the working time significantly versus CSP (P < .001). The reduction of time through VSP was relatively stronger for double-jaw surgery (P < .001 for interaction). All differences between CSP and VSP regarding profession (except for the surgeon's time investment) and location were statistically significant (P < .01). The surgeon's time to plan single-jaw surgery was 37.0 minutes for CSP and 41.2 minutes for VSP; for double-jaw surgery, it was 53.8 minutes and 53.6 minutes, respectively. Office-based VSP for orthognathic surgery was significantly faster for single- and double-jaw surgery. The time investment of the surgeon was equal for both methods, and all other steps of the workflow differed significantly compared with CSP. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  4. Isolation of the ocular surface to treat dysfunctional tear syndrome associated with computer use.

    PubMed

    Yee, Richard W; Sperling, Harry G; Kattek, Ashballa; Paukert, Martin T; Dawson, Kevin; Garcia, Marcie; Hilsenbeck, Susan

    2007-10-01

    Dysfunctional tear syndrome (DTS) associated with computer use is characterized by mild irritation, itching, redness, and intermittent tearing after extended staring. It frequently involves foreign body or sandy sensation, blurring of vision, and fatigue, worsening especially at the end of the day. We undertook a study to determine the effectiveness of periocular isolation using microenvironment glasses (MEGS) alone and in combination with artificial tears in alleviating the symptoms and signs of dry eye related to computer use. At the same time, we evaluated the relative ability of a battery of clinical tests for dry eye to distinguish dry eyes from normal eyes in heavy computer users. Forty adult subjects who used computers 3 hours or more per day were divided into dry eye sufferers and controls based on their scores on the Ocular Surface Disease Index (OSDI). Baseline scores were recorded and ocular surface assessments were made. On four subsequent visits, the subjects played a computer game for 30 minutes in a controlled environment, during which one of four treatment conditions were applied, in random order, to each subject: 1) no treatment, 2) artificial tears, 3) MEGS, and 4) artificial tears combined with MEGS. Immediately after each session, subjects were tested on: a subjective comfort questionnaire, tear breakup time (TBUT), fluorescein staining, lissamine green staining, and conjunctival injection. In this study, a significant correlation was found between cumulative lifetime computer use and ocular surface disorder, as measured by the standardized OSDI index. The experimental and control subjects were significantly different (P<0.05) in the meibomian gland assessment and TBUT; they were consistently different in fluorescein and lissamine green staining, but with P>0.05. Isolation of the ocular surface alone produced significant improvements in comfort scores and TBUT and a consistent trend of improvement in fluorescein staining and lissamine green staining. Isolation plus tears produced a significant improvement in lissamine green staining. The subjective comfort inventory and the TBUT test were most effective in distinguishing between the treatments used. Computer users with ocular surface complaints should have a detailed ocular surface examination and, if symptomatic, they can be effectively treated with isolation of the ocular surface, artificial tears therapy, and effective environmental manipulations.

  5. Fusing human and machine skills for remote robotic operations

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S.; Kim, Won S.; Venema, Steven C.; Bejczy, Antal K.

    1991-01-01

    The question of how computer assists can improve teleoperator trajectory tracking during both free and force-constrained motions is addressed. Computer graphics techniques which enable the human operator to both visualize and predict detailed 3D trajectories in real-time are reported. Man-machine interactive control procedures for better management of manipulator contact forces and positioning are also described. It is found that collectively, these novel advanced teleoperations techniques both enhance system performance and significantly reduce control problems long associated with teleoperations under time delay. Ongoing robotic simulations of the 1984 space shuttle Solar Maximum EVA Repair Mission are briefly described.

  6. Shortest path problem on a grid network with unordered intermediate points

    NASA Astrophysics Data System (ADS)

    Saw, Veekeong; Rahman, Amirah; Eng Ong, Wen

    2017-10-01

    We consider a shortest path problem with a single cost factor on a grid network with unordered intermediate points. A two-stage heuristic algorithm is proposed to find a feasible solution path within a reasonable amount of time. To evaluate the performance of the proposed algorithm, computational experiments are performed on grid maps of varying size and number of intermediate points. Preliminary results for the problem are reported. Numerical comparisons against brute-force search show that the proposed algorithm consistently yields solutions that are within 10% of the optimal solution and uses significantly less computation time.
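
    A two-stage heuristic of this general shape can be sketched as follows: first order the intermediate points greedily (nearest neighbour from the start, by Manhattan distance), then join consecutive points with grid shortest paths (here plain BFS on a 4-connected grid with unit costs). This is only one plausible realisation for illustration; the authors' actual staging, cost model, and grid connectivity may differ.

        from collections import deque

        def bfs_path(grid, start, goal):
            """Shortest path on a 4-connected grid of 0 = free, 1 = blocked cells (BFS)."""
            rows, cols = len(grid), len(grid[0])
            prev, frontier = {start: None}, deque([start])
            while frontier:
                cur = frontier.popleft()
                if cur == goal:
                    break
                r, c = cur
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in prev:
                        prev[(nr, nc)] = cur
                        frontier.append((nr, nc))
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]

        def two_stage_route(grid, start, intermediates):
            """Stage 1: greedy nearest-neighbour ordering. Stage 2: BFS between consecutive points."""
            order, remaining, cur = [], list(intermediates), start
            while remaining:
                nxt = min(remaining, key=lambda p: abs(p[0] - cur[0]) + abs(p[1] - cur[1]))
                order.append(nxt)
                remaining.remove(nxt)
                cur = nxt
            route, cur = [start], start
            for p in order:
                route += bfs_path(grid, cur, p)[1:]
                cur = p
            return route

        grid = [[0] * 8 for _ in range(8)]                        # obstacle-free 8x8 example grid
        print(two_stage_route(grid, (0, 0), [(5, 6), (2, 3), (7, 1)]))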

  7. Design analysis and computer-aided performance evaluation of shuttle orbiter electrical power system. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Studies were conducted to develop appropriate space shuttle electrical power distribution and control (EPDC) subsystem simulation models and to apply the computer simulations to systems analysis of the EPDC. A previously developed software program (SYSTID) was adapted for this purpose. The following objectives were attained: (1) significant enhancement of the SYSTID time domain simulation software, (2) generation of functionally useful shuttle EPDC element models, and (3) illustrative simulation results in the analysis of EPDC performance, under the conditions of fault, current pulse injection due to lightning, and circuit protection sizing and reaction times.

  8. Fourth order scheme for wavelet based solution of Black-Scholes equation

    NASA Astrophysics Data System (ADS)

    Finěk, Václav

    2017-12-01

    The present paper is devoted to the numerical solution of the Black-Scholes equation for pricing European options. We apply the Crank-Nicolson scheme with Richardson extrapolation for time discretization and Hermite cubic spline wavelets with four vanishing moments for space discretization. This scheme is fourth-order accurate in both time and space. Computational results indicate that the Crank-Nicolson scheme with Richardson extrapolation significantly decreases the amount of computational work. We also show numerically that the optimal convergence rate for this scheme is obtained without a startup procedure, despite the data irregularities in the model.
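
    The fourth-order temporal accuracy comes from one Richardson extrapolation step applied to the second-order Crank-Nicolson (trapezoidal) update: because the trapezoidal rule is symmetric, its error expands in even powers of the time step, so combining a full-step and a half-step solution cancels the leading term. Stated with notation assumed here (V for the computed option value), not taken from the paper:

        V(\Delta t) = V_{\text{exact}} + c_2\,\Delta t^{2} + c_4\,\Delta t^{4} + \cdots
        V_{\mathrm{R}} = \frac{4\,V(\Delta t/2) - V(\Delta t)}{3} = V_{\text{exact}} + O(\Delta t^{4})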

  9. An analysis of thermal response factors and how to reduce their computational time requirement

    NASA Technical Reports Server (NTRS)

    Wiese, M. R.

    1982-01-01

    The RESFAC2 version of the Thermal Response Factor Program (RESFAC) is the result of numerous modifications and additions to the original RESFAC. These modifications and additions have significantly reduced the program's computational time requirement. As a result of this work, the program is more efficient and its code is both readable and understandable. This report describes what a thermal response factor is; analyzes the original matrix algebra calculations and root finding techniques; presents a new root finding technique and streamlined matrix algebra; and supplies ten validation cases and their results.

  10. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1990-01-01

    Very little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPs or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) through multi-tasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to a generic oblique-wing aircraft problem on a four-processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.

  11. Enabling Wide-Scale Computer Science Education through Improved Automated Assessment Tools

    NASA Astrophysics Data System (ADS)

    Boe, Bryce A.

    There is a proliferating demand for newly trained computer scientists as the number of computer science related jobs continues to increase. University programs will only be able to train enough new computer scientists to meet this demand when two things happen: when there are more primary and secondary school students interested in computer science, and when university departments have the resources to handle the resulting increase in enrollment. To meet these goals, significant effort is being made to both incorporate computational thinking into existing primary school education, and to support larger university computer science class sizes. We contribute to this effort through the creation and use of improved automated assessment tools. To enable wide-scale computer science education we do two things. First, we create a framework called Hairball to support the static analysis of Scratch programs targeted for fourth, fifth, and sixth grade students. Scratch is a popular building-block language utilized to pique interest in and teach the basics of computer science. We observe that Hairball allows for rapid curriculum alterations and thus contributes to wide-scale deployment of computer science curriculum. Second, we create a real-time feedback and assessment system utilized in university computer science classes to provide better feedback to students while reducing assessment time. Insights from our analysis of student submission data show that modifications to the system configuration support the way students learn and progress through course material, making it possible for instructors to tailor assignments to optimize learning in growing computer science classes.

  12. Large scale cardiac modeling on the Blue Gene supercomputer.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J

    2008-01-01

    Multi-scale, multi-physical heart models have not yet been able to include a high degree of accuracy and resolution with respect to model detail and spatial resolution due to computational limitations of current systems. We propose a framework to compute large scale cardiac models. Decomposition of anatomical data in segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data-set was given by both ventricles of the Visible Female data-set at a 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balances did not differ significantly across computational nodes even though the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall due to the communication overhead. However, the simulation times were reduced from 87 minutes on 512 nodes to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a heart beat.
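
    The decomposition step can be pictured as a weighted recursive bisection: repeatedly split the elements along their longest coordinate axis so that the summed computational weights (e.g. 10 for tissue, 1 for non-tissue) are as equal as possible in the two halves, until one block per compute node remains. The sketch below is a simplified serial illustration of that idea, not the production ORB used on Blue Gene (it ignores communication costs and breaks ties arbitrarily), and all example values are placeholders.

        import numpy as np

        def recursive_bisection(coords, weights, n_parts):
            """Split weighted points into n_parts blocks by weighted median cuts along the longest axis."""
            if n_parts == 1:
                return [np.arange(len(coords))]
            # cut along the axis with the largest spatial extent
            axis = int(np.argmax(coords.max(axis=0) - coords.min(axis=0)))
            order = np.argsort(coords[:, axis])
            cum = np.cumsum(weights[order])
            left_parts = n_parts // 2
            target = cum[-1] * left_parts / n_parts          # weight that should go to the left block
            split = int(np.searchsorted(cum, target)) + 1
            left, right = order[:split], order[split:]
            parts = []
            for idx, k in ((left, left_parts), (right, n_parts - left_parts)):
                for sub in recursive_bisection(coords[idx], weights[idx], k):
                    parts.append(idx[sub])
            return parts

        # Example: 10,000 random "voxels", 20% tissue (weight 10) and 80% background (weight 1).
        rng = np.random.default_rng(0)
        pts = rng.random((10_000, 3))
        w = np.where(rng.random(10_000) < 0.2, 10.0, 1.0)
        blocks = recursive_bisection(pts, w, 8)
        print([round(w[b].sum()) for b in blocks])           # per-node loads should be roughly equal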

  13. Building A Community Focused Data and Modeling Collaborative platform with Hardware Virtualization Technology

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Wang, W.; Melton, F. S.; Votava, P.; Milesi, C.; Hashimoto, H.; Nemani, R. R.; Hiatt, S. H.

    2009-12-01

    As the length and diversity of the global earth observation data records grow, modeling and analyses of biospheric conditions increasingly require multiple terabytes of data from a diversity of models and sensors. With network bandwidth beginning to flatten, transmission of these data from centralized data archives presents an increasing challenge, and costs associated with local storage and management of data and compute resources are often significant for individual research and application development efforts. Sharing community-valued intermediary data sets, results and codes from individual efforts with others that are not in direct funded collaboration can also be a challenge with respect to time, cost and expertise. We propose a modeling, data and knowledge center that houses NASA satellite data, climate data and ancillary data where a focused community may come together to share modeling and analysis codes, scientific results, knowledge and expertise on a centralized platform, named the Ecosystem Modeling Center (EMC). With the recent development of new technologies for secure hardware virtualization, an opportunity exists to create specific modeling, analysis and compute environments that are customizable, “archivable” and transferable. Allowing users to instantiate such environments on large compute infrastructures that are directly connected to large data archives may significantly reduce the costs and time associated with scientific efforts by sparing users from redundantly retrieving and integrating data sets and building modeling and analysis codes. The EMC platform also makes it possible for users to receive indirect assistance from experts through prefabricated compute environments, potentially reducing study “ramp up” times.

  14. A low-cost test-bed for real-time landmark tracking

    NASA Astrophysics Data System (ADS)

    Csaszar, Ambrus; Hanan, Jay C.; Moreels, Pierre; Assad, Christopher

    2007-04-01

    A low-cost vehicle test-bed system was developed to iteratively test, refine and demonstrate navigation algorithms before attempting to transfer the algorithms to more advanced rover prototypes. The platform used here was a modified radio controlled (RC) car. A microcontroller board and onboard laptop computer allow for either autonomous or remote operation via a computer workstation. The sensors onboard the vehicle represent the types currently used on NASA-JPL rover prototypes. For dead-reckoning navigation, optical wheel encoders, a single-axis gyroscope, and a 2-axis accelerometer were used. An ultrasound ranger is available to calculate distance as a substitute for the stereo vision systems presently used on rovers. The prototype also carries a small laptop computer with a USB camera and wireless transmitter to send real-time video to an off-board computer. A real-time user interface was implemented that combines an automatic image feature selector, tracking parameter controls, streaming video viewer, and user-generated or autonomous driving commands. Using the test-bed, real-time landmark tracking was demonstrated by autonomously driving the vehicle through the JPL Mars yard. The algorithms tracked rocks as waypoints, generating coordinates for calculating relative motion and for visually servoing to science targets. A limitation of the current system is serial computing: each additional landmark is tracked in sequence. However, since each landmark is tracked independently, adding targets would not significantly diminish system speed if the tracking were transferred to appropriate parallel hardware.
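
    The dead-reckoning part of such a test-bed reduces to a simple planar pose update: integrate the distance reported by the wheel encoders along the current heading, and integrate the gyroscope's yaw rate to update that heading. A minimal sketch is given below; the function name and the noise-free inputs are assumptions for illustration, not the test-bed's actual software.

        import math

        def dead_reckon(pose, encoder_dist, gyro_yaw_rate, dt):
            """Advance a planar pose (x, y, heading) from wheel-encoder distance and gyro yaw rate."""
            x, y, theta = pose
            theta += gyro_yaw_rate * dt                 # integrate yaw rate into heading (rad)
            x += encoder_dist * math.cos(theta)         # project the travelled distance onto x/y
            y += encoder_dist * math.sin(theta)
            return (x, y, theta)

        # Example: drive forward 0.1 m per step while turning at 0.2 rad/s.
        pose = (0.0, 0.0, 0.0)
        for _ in range(50):
            pose = dead_reckon(pose, encoder_dist=0.1, gyro_yaw_rate=0.2, dt=0.1)
        print(pose)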

  15. BigDebug: Debugging Primitives for Interactive Big Data Processing in Spark.

    PubMed

    Gulzar, Muhammad Ali; Interlandi, Matteo; Yoo, Seunghyun; Tetali, Sai Deep; Condie, Tyson; Millstein, Todd; Kim, Miryung

    2016-05-01

    Developers use cloud computing platforms to process a large quantity of data in parallel when developing big data analytics. Debugging the massive parallel computations that run in today's data-centers is time consuming and error-prone. To address this challenge, we design a set of interactive, real-time debugging primitives for big data processing in Apache Spark, the next generation data-intensive scalable cloud computing platform. This requires re-thinking the notion of step-through debugging in a traditional debugger such as gdb, because pausing the entire computation across distributed worker nodes causes significant delay and naively inspecting millions of records using a watchpoint is too time consuming for an end user. First, BIGDEBUG's simulated breakpoints and on-demand watchpoints allow users to selectively examine distributed, intermediate data on the cloud with little overhead. Second, a user can also pinpoint a crash-inducing record and selectively resume relevant sub-computations after a quick fix. Third, a user can determine the root causes of errors (or delays) at the level of individual records through a fine-grained data provenance capability. Our evaluation shows that BIGDEBUG scales to terabytes and its record-level tracing incurs less than 25% overhead on average. It determines crash culprits orders of magnitude more accurately and provides up to 100% time saving compared to the baseline replay debugger. The results show that BIGDEBUG supports debugging at interactive speeds with minimal performance impact.

  16. Reducing statistical uncertainties in simulated organ doses of phantoms immersed in water

    DOE PAGES

    Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.; ...

    2016-08-13

    In this study, methods are addressed to reduce the computational time to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared including the reciprocity method, importance sampling, weight windows and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10⁵ when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.

  17. The association between acculturation and recreational computer use among Latino adolescents in California.

    PubMed

    Shi, L; van Meijgaard, J; Simon, P

    2012-08-01

    Physical inactivity like recreational computer use is a likely factor in the rising obesity prevalence among Latino adolescents. Using the data from California Health Interview Survey, we test the hypothesis whether acculturation is associated with recreational computer use among Latino adolescents. We run linear regressions of the weekly time spent on recreational computer use among Latino adolescents, stratified first by gender and then by age group (12-14 and 15-17 years). Years living in the United States and language at home are used as key variables for acculturation. For all four sub-populations, living in the United States for less than 5 years is significantly associated with fewer hours on recreational computer use, compared with those US-born. Among female adolescents, those who lived in the United States for 10 years or more spent fewer hours on recreational computer use than those US-born. Among adolescents under 15, speaking English only and speaking English plus another language are both significantly associated with more hours on recreational computer use, compared with those who speak a non-English language at home. Educators and health professionals should heed the Latino adolescents' possible increase in recreational computer use. © 2012 The Authors. Pediatric Obesity © 2012 International Association for the Study of Obesity.

  18. Sockets Manufactured by CAD/CAM Method Have Positive Effects on the Quality of Life of Patients With Transtibial Amputation.

    PubMed

    Karakoç, Mehmet; Batmaz, İbrahim; Sariyildiz, Mustafa Akif; Yazmalar, Levent; Aydin, Abdülkadir; Em, Serda

    2017-08-01

    Patients with amputation need prosthesis to comfortably move around. One of the most important parts of a good prosthesis is the socket. Currently, the most commonly used method is the traditional socket manufacturing method, which involves manual work; however, computer-aided design/computer-aided manufacturing (CAD/CAM) is also being used in the recent years. The present study aimed to investigate the effects of sockets manufactured by traditional and CAD/CAM method on clinical characteristics and quality of life of patients with transtibial amputation. The study included 72 patients with transtibial amputation using prosthesis, 36 of whom had CAD/CAM prosthetic sockets (group 1) and 36 had traditional prosthetic sockets (group 2). Amputation reason, prosthesis lifetime, walking time and distance with prosthesis, pain-free walking time with prosthesis, production time of the prosthesis, and adaptation time to the prosthesis were questioned. Quality of life was assessed using the 36-item Short Form Health Survey questionnaire and the Trinity Amputation and Prosthesis Experience Scales. Walking time and distance and pain-free walking time with prosthesis were significantly better in group 1 than those in group 2. Furthermore, the prosthesis was applied in a significantly shorter time, and socket adaptation time was significantly shorter in group 1. Except emotional role limitation, all 36-item Short Form Healthy Survey questionnaire parameters were significantly better in group 1 than in group 2. Trinity Amputation and Prosthesis Experience Scales activity limitation scores of group 1 were lower, and Trinity Amputation and Prosthesis Experience Scales satisfaction with the prosthesis scores were higher than those in group 2. Our study demonstrated that the sockets manufactured by CAD/CAM methods yield better outcomes in quality of life of patients with transtibial amputation than the sockets manufactured by the traditional method.

  19. EHR use and patient satisfaction: What we learned.

    PubMed

    Farber, Neil J; Liu, Lin; Chen, Yunan; Calvitti, Alan; Street, Richard L; Zuest, Danielle; Bell, Kristin; Gabuzda, Mark; Gray, Barbara; Ashfaq, Shazia; Agha, Zia

    2015-11-01

    Few studies have quantitatively examined the degree to which the use of the computer affects patients' satisfaction with the clinician and the quality of the visit. We conducted a study to examine this association. Twenty-three clinicians (21 internal medicine physicians, 2 nurse practitioners) were recruited from 4 Veteran Affairs Medical Center (VAMC) clinics located in San Diego, Calif. Five to 6 patients for most clinicians (one patient each for 2 of the clinicians) were recruited to participate in a study of patient-physician communication. The clinicians' computer use and the patient-clinician interactions in the exam room were captured in real time via video recordings of the interactions and the computer screen, and through the use of the Morae usability testing software system, which recorded clinician clicks and scrolls on the computer. After the visit, patients were asked to complete a satisfaction survey. The final sample consisted of 126 consultations. Total patient satisfaction (beta=0.014; P=.027) and patient satisfaction with patient-centered communication (beta=0.02; P=.02) were significantly associated with higher clinician “gaze time” at the patient. A higher percentage of gaze time during a visit (controlling for the length of the visit) was significantly associated with greater satisfaction with patient-centered communication (beta=0.628; P=.033). Higher clinician gaze time at the patient predicted greater patient satisfaction. This suggests that clinicians would be well served to refine their multitasking skills so that they communicate in a patient-centered manner while performing necessary computer-related tasks. These findings also have important implications for clinical training with respect to using an electronic health record (EHR) system in ways that do not impede the one-on-one conversation between clinician and patient.

  20. A CPU/MIC Collaborated Parallel Framework for GROMACS on Tianhe-2 Supercomputer.

    PubMed

    Peng, Shaoliang; Yang, Shunyun; Su, Wenhe; Zhang, Xiaoyu; Zhang, Tenglilang; Liu, Weiguo; Zhao, Xingming

    2017-06-16

    Molecular Dynamics (MD) is the simulation of the dynamic behavior of atoms and molecules. As the most popular software for molecular dynamics, GROMACS cannot work on large-scale data because of limited computing resources. In this paper, we propose a CPU and Intel® Xeon Phi Many Integrated Core (MIC) collaborated parallel framework to accelerate GROMACS using the offload mode on a MIC coprocessor, with which the performance of GROMACS is improved significantly, especially with the use of the Tianhe-2 supercomputer. Furthermore, we optimize GROMACS so that it can run on both the CPU and MIC at the same time. In addition, we accelerate multi-node GROMACS so that it can be used in practice. Benchmarking on real data, our accelerated GROMACS performs very well and reduces computation time significantly. Source code: https://github.com/tianhe2/gromacs-mic.

  1. High-Productivity Computing in Computational Physics Education

    NASA Astrophysics Data System (ADS)

    Tel-Zur, Guy

    2011-03-01

    We describe the development of a new course in Computational Physics at Ben-Gurion University. This elective course for 3rd-year undergraduates and MSc students is taught over one semester. Computational Physics is by now well accepted as the Third Pillar of Science. This paper's claim is that modern Computational Physics education should also address High-Productivity Computing. The traditional approach of teaching Computational Physics emphasizes ``Correctness'' and then ``Accuracy''; we add ``Performance.'' Along with topics in Mathematical Methods and case studies in Physics, the course devotes a significant amount of time to ``Mini-Courses'' on topics such as High-Throughput Computing (Condor), Parallel Programming (MPI and OpenMP), How to build a Beowulf, Visualization, and Grid and Cloud Computing. The course intends to teach neither new physics nor new mathematics; rather, it focuses on an integrated approach to solving problems, starting from the physics problem and proceeding through the corresponding mathematical solution, the numerical scheme, and the writing of an efficient computer code to the final analysis and visualization.

  2. Effects of portable computing devices on posture, muscle activation levels and efficiency.

    PubMed

    Werth, Abigail; Babski-Reeves, Kari

    2014-11-01

    Very little research exists on ergonomic exposures when using portable computing devices. This study quantified muscle activity (forearm and neck), posture (wrist, forearm and neck), and performance (gross typing speed and error rates) differences across three portable computing devices (laptop, netbook, and slate computer) and two work settings (desk and sofa) during data entry tasks. Twelve participants completed test sessions on a single computer using a test-rest-test protocol (30 min of work at one work setting, 15 min of rest, 30 min of work at the other work setting). The slate computer resulted in significantly more non-neutral wrist, elbow and neck postures, particularly when working on the sofa. Performance on the slate computer was four times lower than that of the other computers, though lower muscle activity levels were also found. The potential for injury or illness may be elevated when working on smaller, portable computers in non-traditional work settings. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  3. Obtaining Reliable Predictions of Terrestrial Energy Coupling From Real-Time Solar Wind Measurement

    NASA Technical Reports Server (NTRS)

    Weimer, Daniel R.

    2001-01-01

    The first draft of a manuscript titled "Variable time delays in the propagation of the interplanetary magnetic field" has been completed, for submission to the Journal of Geophysical Research. In the preparation of this manuscript all data and analysis programs had been updated to the highest temporal resolution possible, at 16 seconds or better. The program which computes the "measured" IMF propagation time delays from these data has also undergone another improvement. In another significant development, a technique has been developed in order to predict IMF phase plane orientations, and the resulting time delays, using only measurements from a single satellite at L1. The "minimum variance" method is used for this computation. Further work will be done on optimizing the choice of several parameters for the minimum variance calculation.
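
    The minimum-variance step mentioned above has a compact linear-algebra form: the IMF phase-plane normal is taken as the eigenvector of the field's covariance matrix with the smallest eigenvalue, and the propagation delay to a downstream point follows from projecting the separation and the solar-wind velocity onto that normal. The sketch below illustrates that calculation with synthetic numbers; it is not the manuscript's implementation and omits the quality checks a real analysis needs.

        import numpy as np

        def phase_plane_normal(bfield):
            """Minimum-variance normal: eigenvector of the B-field covariance with the smallest eigenvalue."""
            b = np.asarray(bfield, dtype=float)
            cov = np.cov(b, rowvar=False)                 # 3x3 magnetic-field covariance matrix
            eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns eigenvalues in ascending order
            return eigvecs[:, 0]                          # column for the smallest eigenvalue

        def propagation_delay(normal, r_monitor, r_target, v_sw):
            """Time for a planar front with this normal to convect from the monitor to the target."""
            n = normal / np.linalg.norm(normal)
            return float(np.dot(r_target - r_monitor, n) / np.dot(v_sw, n))

        # Synthetic IMF samples that vary mostly within a plane whose normal is the x direction.
        rng = np.random.default_rng(1)
        t = np.linspace(0, 1, 500)
        b = np.column_stack([5 + 0.05 * rng.standard_normal(500),
                             3 * np.sin(8 * t), 3 * np.cos(8 * t)])
        n_hat = phase_plane_normal(b)
        delay = propagation_delay(n_hat, r_monitor=np.array([1.5e6, 0.0, 0.0]),   # km, near L1
                                  r_target=np.zeros(3), v_sw=np.array([-400.0, 0.0, 0.0]))
        print(n_hat, delay / 60.0, "minutes")             # roughly an hour, as expected for L1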

  4. Anytime Prediction: Efficient Ensemble Methods for Any Computational Budget

    DTIC Science & Technology

    2014-01-21

    ...difficult problem and is the focus of this work. 1.1 Motivation: The number of machine learning applications which involve real-time and latency-sensitive pre... significantly increasing latency, and the computational costs associated with hosting a service are often critical to its viability. For such... balancing training costs, concerns such as scalability and tractability are often more important, as opposed to factors such as latency, which are more...

  5. Image Processing Using a Parallel Architecture.

    DTIC Science & Technology

    1987-12-01

    ENG/87D-25 Abstract: This study developed a set of low-level image processing tools on a parallel computer that allows concurrent processing of images... environment, the set of tools offers a significant reduction in the time required to perform some commonly used image processing operations. ...step toward developing these systems, a structured set of image processing tools was implemented using a parallel computer. More important than...

  6. Heuristic Modeling for TRMM Lifetime Predictions

    NASA Technical Reports Server (NTRS)

    Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.

    1996-01-01

    Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial-off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful to such missions as the Tropical Rainfall Measurement Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
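
    The heuristic model amounts to a table look-up plus a simple propellant model: interpolate the expected maneuvers per month from a small table indexed by ballistic coefficient and solar-flux level, then charge each maneuver a fixed delta-v. The sketch below mirrors that structure in Python rather than a spreadsheet; every number in the table is a placeholder rather than a TRMM value, and the rocket-equation propellant model is an assumed stand-in for the paper's simple engine model.

        import math

        # Hypothetical maneuvers-per-month table: rows indexed by ballistic coefficient (kg/m^2),
        # columns by 10.7 cm solar flux index (sfu). All values are placeholders for illustration.
        BC_AXIS   = [50.0, 100.0, 150.0]
        FLUX_AXIS = [70.0, 150.0, 230.0]
        FREQ_TABLE = [[0.5, 1.5, 3.0],
                      [0.3, 1.0, 2.2],
                      [0.2, 0.7, 1.6]]

        def interp1(axis, values, x):
            """Piecewise-linear interpolation with clamping at the table edges."""
            if x <= axis[0]:
                return values[0]
            if x >= axis[-1]:
                return values[-1]
            for i in range(len(axis) - 1):
                if axis[i] <= x <= axis[i + 1]:
                    f = (x - axis[i]) / (axis[i + 1] - axis[i])
                    return values[i] + f * (values[i + 1] - values[i])

        def maneuvers_per_month(ballistic_coeff, flux):
            """Bilinear look-up: interpolate along flux for each BC row, then along BC."""
            per_row = [interp1(FLUX_AXIS, row, flux) for row in FREQ_TABLE]
            return interp1(BC_AXIS, per_row, ballistic_coeff)

        def fuel_per_month(ballistic_coeff, flux, mass_kg, dv_per_burn=0.5, isp=220.0):
            """Monthly propellant use from the look-up frequency and the rocket equation."""
            burns = maneuvers_per_month(ballistic_coeff, flux)
            per_burn = mass_kg * (1.0 - math.exp(-dv_per_burn / (isp * 9.80665)))
            return burns * per_burn

        print(fuel_per_month(ballistic_coeff=120.0, flux=180.0, mass_kg=3500.0))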

  7. Statistical aspects of solar flares

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    1987-01-01

    A survey of the statistical properties of 850 H alpha solar flares during 1975 is presented. Comparison of the results found here with those reported elsewhere for different epochs is accomplished. Distributions of rise time, decay time, and duration are given, as are the mean, mode, median, and 90th percentile values. Proportions by selected groupings are also determined. For flares in general, mean values for rise time, decay time, and duration are 5.2 ± 0.4 min, and 18.1 ± 1.1 min, respectively. Subflares, accounting for nearly 90 percent of the flares, had mean values lower than those found for flares of H alpha importance greater than 1, and the differences are statistically significant. Likewise, flares of bright and normal relative brightness have mean values of decay time and duration that are significantly longer than those computed for faint flares, and mass-motion related flares are significantly longer than non-mass-motion related flares. Seventy-three percent of the mass-motion related flares are categorized as being a two-ribbon flare and/or being accompanied by a high-speed dark filament. Slow rise time flares (rise time greater than 5 min) have a mean value for duration that is significantly longer than that computed for fast rise time flares, and long-lived duration flares (duration greater than 18 min) have a mean value for rise time that is significantly longer than that computed for short-lived duration flares, suggesting a positive linear relationship between rise time and duration for flares. Monthly occurrence rates for flares in general and by group are found to be linearly related in a positive sense to monthly sunspot number. Statistical testing reveals the association between sunspot number and numbers of flares to be significant at the 95 percent level of confidence, and the t statistic for slope is significant at greater than the 99 percent level of confidence. Dependent upon the specific fit, between 58 percent and 94 percent of the variation can be accounted for with the linear fits. A statistically significant Northern Hemisphere flare excess (P less than 1 percent) was found, as was a Western Hemisphere excess (P approx 3 percent). Subflares were more prolific within 45 deg of central meridian (P less than 1 percent), while flares of H alpha importance greater than or equal to 1 were more prolific near the limbs (greater than 45 deg from central meridian; P approx 2 percent). Two-ribbon flares were more frequent within 45 deg of central meridian (P less than 1 percent). Slow rise time flares occurred more frequently in the western hemisphere (P approx 2 percent), as did short-lived duration flares (P approx 9 percent), but fast rise time flares were not preferentially distributed (in terms of east-west or limb-disk). Long-lived duration flares occurred more often within 45 deg of central meridian (P approx 7 percent). Mean durations for subflares and flares of H alpha importance greater than or equal to 1, found within 45 deg of central meridian, are 14 percent and 70 percent, respectively, longer than those found for flares closer to the limb. As compared to flares occurring near cycle maximum, the flares of 1975 (near solar minimum) have mean values of rise time, decay time, and duration that are significantly shorter. A flare near solar maximum, on average, is about 1.6 times longer than one occurring near solar minimum.

  8. Factor Structure and Scale Reliabilities of the Adjective Check List Across Time

    ERIC Educational Resources Information Center

    Miller, Stephen H.; And Others

    1978-01-01

    Investigated factor structure and scale reliabilities of Gough's Adjective Check List (ACL) and their stability over time. Employees in a community mental health center completed the ACL twice, separated by a one-year interval. After each administration, separate factor analyses were computed. All scales had highly significant test-retest…

  9. Anticipation of the landing shock phenomenon in flight simulation

    NASA Technical Reports Server (NTRS)

    Mcfarland, Richard E.

    1987-01-01

    An aircraft landing may be described as a controlled crash because a runway surface is intercepted. In a simulation model, the transition from aerodynamic flight to weight on wheels involves a single computational cycle during which stiff differential equations are activated; with a significant probability these initial conditions are unrealistic. This occurs because of the finite cycle time, during which large restorative forces will accompany unrealistic initial oleo compressions. This problem was recognized a few years ago at Ames Research Center during simulation studies of a supersonic transport. The mathematical model of this vehicle severely taxed computational resources and required a large cycle time. The ground strike problem was solved by a technique called anticipation equations, described here. This extensively used technique has not been previously reported. The technique of anticipating a significant event is a useful tool in the general field of discrete flight simulation. For the differential equations representing a landing gear model, stiffness, rate of interception, and cycle time may combine to produce an unrealistic simulation of the continuum.
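    The paper's actual anticipation equations are not reproduced in the abstract; the sketch below only illustrates the general idea of testing, each frame, whether linear extrapolation predicts gear contact within the next cycle and refining the step if so (the names and the substep strategy are assumptions):

        def anticipate_ground_strike(height, sink_rate, dt, substeps=10):
            """Minimal sketch of an 'anticipation' test for a discrete simulation
            cycle: if linear extrapolation predicts gear contact inside the next
            frame, integrate that frame with a finer step so the stiff oleo
            equations start from a realistic (near-zero) compression.
            height: gear height above runway (m, positive up); sink_rate: m/s (negative down)."""
            predicted = height + sink_rate * dt
            if height > 0.0 and predicted <= 0.0:
                # time of interception within the frame, from the linear extrapolation
                t_contact = height / -sink_rate
                return True, min(dt, t_contact) / substeps
            return False, dt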

  10. Compressive Spectral Method for the Simulation of the Nonlinear Gravity Waves

    PubMed Central

    Bayındır, Cihan

    2016-01-01

    In this paper an approach for decreasing the computational effort required for spectral simulations of fully nonlinear ocean waves is introduced. The proposed approach utilizes the compressive sampling algorithm and depends on the idea of using a smaller number of spectral components compared to the classical spectral method. After performing the time integration with a smaller number of spectral components and using the compressive sampling technique, it is shown that the ocean wave field can be reconstructed with significantly better efficiency compared to the classical spectral method. For the sparse ocean wave model in the frequency domain, fully nonlinear ocean waves with a JONSWAP spectrum are considered. By implementation of a high-order spectral method it is shown that the proposed methodology can simulate the linear and the fully nonlinear ocean waves with negligible difference in accuracy and with great efficiency by reducing the computation time significantly, especially for long time evolutions. PMID:26911357

  11. The association between sedentary behaviors during weekdays and weekend with change in body composition in young adults

    PubMed Central

    Drenowatz, Clemens; DeMello, Madison M.; Shook, Robin P.; Hand, Gregory A.; Burgess, Stephanie; Blair, Steven N.

    2016-01-01

    Background High sedentary time has been considered an important chronic disease risk factor but there is only limited information on the association of specific sedentary behaviors on weekdays and weekend-days with body composition. The present study examines the prospective association of total sedentary time and specific sedentary behaviors during weekdays and the weekend with body composition in young adults. Methods A total of 332 adults (50% male; 27.7 ± 3.7 years) were followed over a period of 1 year. Time spent sedentary, excluding sleep (SED), and in physical activity (PA) during weekdays and weekend-days was objectively assessed every 3 months with a multi-sensor device over a period of at least 8 days. In addition, participants reported sitting time, TV time and non-work related time spent at the computer separately for weekdays and the weekend. Fat mass and fat free mass were assessed via dual x-ray absorptiometry and used to calculate percent body fat (%BF). Energy intake was estimated based on TDEE and change in body composition. Results Cross-sectional analyses showed a significant correlation between SED and body composition (0.18 ≤ r ≤ 0.34). Associations between body weight and specific sedentary behaviors were less pronounced and significant during weekdays only (r ≤ 0.16). Nevertheless, decrease in SED during weekends, rather than during weekdays, was significantly associated with subsequent decrease in %BF (β = 0.06, p <0.01). After adjusting for PA and energy intake, results for SED were no longer significant. Only the association between change in sitting time during weekends and subsequent %BF was independent from change in PA or energy intake (β%BF = 0.04, p = 0.01), while there was no significant association between TV or computer time and subsequent body composition. Conclusions The stronger prospective association between sedentary behavior during weekends with subsequent body composition emphasizes the importance of leisure time behavior in weight management. PMID:29546170

  12. The association between sedentary behaviors during weekdays and weekend with change in body composition in young adults.

    PubMed

    Drenowatz, Clemens; DeMello, Madison M; Shook, Robin P; Hand, Gregory A; Burgess, Stephanie; Blair, Steven N

    2016-01-01

    High sedentary time has been considered an important chronic disease risk factor but there is only limited information on the association of specific sedentary behaviors on weekdays and weekend-days with body composition. The present study examines the prospective association of total sedentary time and specific sedentary behaviors during weekdays and the weekend with body composition in young adults. A total of 332 adults (50% male; 27.7 ± 3.7 years) were followed over a period of 1 year. Time spent sedentary, excluding sleep (SED), and in physical activity (PA) during weekdays and weekend-days was objectively assessed every 3 months with a multi-sensor device over a period of at least 8 days. In addition, participants reported sitting time, TV time and non-work related time spent at the computer separately for weekdays and the weekend. Fat mass and fat free mass were assessed via dual x-ray absorptiometry and used to calculate percent body fat (%BF). Energy intake was estimated based on TDEE and change in body composition. Cross-sectional analyses showed a significant correlation between SED and body composition (0.18 ≤ r ≤ 0.34). Associations between body weight and specific sedentary behaviors were less pronounced and significant during weekdays only ( r ≤ 0.16). Nevertheless, decrease in SED during weekends, rather than during weekdays, was significantly associated with subsequent decrease in %BF ( β = 0.06, p <0.01). After adjusting for PA and energy intake, results for SED were no longer significant. Only the association between change in sitting time during weekends and subsequent %BF was independent from change in PA or energy intake (β %BF = 0.04, p = 0.01), while there was no significant association between TV or computer time and subsequent body composition. The stronger prospective association between sedentary behavior during weekends with subsequent body composition emphasizes the importance of leisure time behavior in weight management.

  13. Adaptive multi-time-domain subcycling for crystal plasticity FE modeling of discrete twin evolution

    NASA Astrophysics Data System (ADS)

    Ghosh, Somnath; Cheng, Jiahao

    2018-02-01

    Crystal plasticity finite element (CPFE) models that account for discrete micro-twin nucleation and propagation have been recently developed for studying the complex deformation behavior of hexagonal close-packed (HCP) materials (Cheng and Ghosh in Int J Plast 67:148-170, 2015, J Mech Phys Solids 99:512-538, 2016). A major difficulty with conducting high fidelity, image-based CPFE simulations of polycrystalline microstructures with explicit twin formation is the prohibitively high demand on computing time. High strain localization within fast propagating twin bands requires very fine simulation time steps and leads to enormous computational cost. To mitigate this shortcoming and improve simulation efficiency, this paper proposes a multi-time-domain subcycling algorithm. It is based on adaptive partitioning of the evolving computational domain into twinned and untwinned domains. Based on the local deformation rate, the algorithm accelerates simulations by adopting different time steps for each sub-domain. The sub-domains are coupled back after coarse time increments using a predictor-corrector algorithm at the interface. The subcycling-augmented CPFEM is validated with a comprehensive set of numerical tests. Significant speed-up is observed with this novel algorithm without any loss of accuracy, which is advantageous for predicting twinning in polycrystalline microstructures.
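    The abstract describes the subcycling idea only at a high level; the sketch below shows the general predictor-corrector subcycling pattern for a generic semi-discrete system, not the authors' implicit CPFE implementation (the explicit update, the linear interface interpolation, and all names are assumptions):

        def subcycle_step(u_slow, u_fast, f_slow, f_fast, iface_slow, iface_fast, dT, m):
            """One multi-time-domain subcycling increment (illustrative, explicit form).
            The slow (untwinned) subdomain takes one coarse step dT; the fast (twinned)
            subdomain takes m fine substeps with interface data interpolated in time,
            followed by a corrector re-evaluation of the coarse step."""
            # predictor: coarse step for the slow subdomain using current fast interface values
            u_slow_new = u_slow + dT * f_slow(u_slow, u_fast[iface_fast])

            # fast subdomain: m substeps, interface data interpolated between old and
            # predicted slow solutions
            dt = dT / m
            u = u_fast.copy()
            for k in range(m):
                a = (k + 1) / m
                g = (1 - a) * u_slow[iface_slow] + a * u_slow_new[iface_slow]
                u = u + dt * f_fast(u, g)

            # corrector: repeat the coarse step with the subcycled fast solution
            u_slow_new = u_slow + dT * f_slow(u_slow, u[iface_fast])
            return u_slow_new, u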

  14. Temporal response improvement for computed tomography fluoroscopy

    NASA Astrophysics Data System (ADS)

    Hsieh, Jiang

    1997-10-01

    Computed tomography fluoroscopy (CTF) has attracted significant attention recently. This is mainly due to the growing clinical application of CTF in interventional procedures, such as guided biopsy. Although many studies have been conducted on its clinical efficacy, little attention has been paid to the temporal response and the inherent limitations of the CTF system. For example, during a biopsy operation, when the needle is inserted at a relatively high speed, the true needle position will not be correctly depicted in the CTF image due to the time delay. This could result in an overshoot or misplacement of the biopsy needle by the operator. In this paper, we first perform a detailed analysis of the temporal response of CTF by deriving a set of equations to describe the average location of a moving object observed by the CTF system. The accuracy of the equations is verified by computer simulations and experiments. We show that the CT reconstruction process acts as a low pass filter on the motion function. As a result, there is an inherent time delay in the CTF process relative to the true biopsy needle motion and locations. Based on this study, we propose a generalized underscan weighting scheme which significantly improves the performance of CTF in terms of time lag and delay.

  15. Developing and Evaluating Computer-Based Teamwork Skills Training for Long-Duration Spaceflight Crews

    ERIC Educational Resources Information Center

    Hixson, Katharine

    2013-01-01

    Due to the long-duration and long distance nature of future exploration missions, coupled with significant communication delays from ground-based personnel, NASA astronauts will be living and working within confined, isolated environments for significant periods of time. This extreme environment poses concerns for the flight crews' ability to…

  16. Evaluation of the matrix exponential for use in ground-water-flow and solute-transport simulations; theoretical framework

    USGS Publications Warehouse

    Umari, A.M.; Gorelick, S.M.

    1986-01-01

    It is possible to obtain analytic solutions to the groundwater flow and solute transport equations if space variables are discretized but time is left continuous. From these solutions, hydraulic head and concentration fields for any future time can be obtained without 'marching' through intermediate time steps. This analytical approach involves matrix exponentiation and is referred to as the Matrix Exponential Time Advancement (META) method. Two algorithms are presented for the META method, one for symmetric and the other for non-symmetric exponent matrices. A numerical accuracy indicator, referred to as the matrix condition number, was defined and used to determine the maximum number of significant figures that may be lost in the META method computations. The relative computational and storage requirements of the META method with respect to the time marching method increase with the number of nodes in the discretized problem. The potential greater accuracy of the META method and the associated greater reliability through use of the matrix condition number have to be weighed against the increased relative computational and storage requirements of this approach as the number of nodes becomes large. For a particular number of nodes, the META method may be computationally more efficient than the time-marching method, depending on the size of time steps used in the latter. A numerical example illustrates application of the META method to a sample ground-water-flow problem. (Author's abstract)
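    For the homogeneous case dh/dt = A h, the META idea reduces to a single matrix exponential per output time; a minimal sketch (the 3-node matrix and initial condition are made-up numbers):

        import numpy as np
        from scipy.linalg import expm

        def meta_advance(A, h0, t):
            """Matrix Exponential Time Advancement: for the spatially discretized
            system dh/dt = A h, the head (or concentration) field at any future
            time t follows directly as h(t) = expm(A t) h0, with no intermediate
            time steps."""
            return expm(A * t) @ h0

        # illustrative 3-node example
        A = np.array([[-2.0, 1.0, 0.0],
                      [1.0, -2.0, 1.0],
                      [0.0, 1.0, -2.0]])
        h0 = np.array([1.0, 0.0, 0.0])
        h_at_10 = meta_advance(A, h0, 10.0)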

  17. SALSA3D: A Tomographic Model of Compressional Wave Slowness in the Earth’s Mantle for Improved Travel-Time Prediction and Travel-Time Prediction Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballard, Sanford; Hipp, James R.; Begnaud, Michael L.

    The task of monitoring the Earth for nuclear explosions relies heavily on seismic data to detect, locate, and characterize suspected nuclear tests. In this study, motivated by the need to locate suspected explosions as accurately and precisely as possible, we developed a tomographic model of the compressional wave slowness in the Earth’s mantle with primary focus on the accuracy and precision of travel-time predictions for P and Pn ray paths through the model. Path-dependent travel-time prediction uncertainties are obtained by computing the full 3D model covariance matrix and then integrating slowness variance and covariance along ray paths from source to receiver. Path-dependent travel-time prediction uncertainties reflect the amount of seismic data that was used in tomography with very low values for paths represented by abundant data in the tomographic data set and very high values for paths through portions of the model that were poorly sampled by the tomography data set. The pattern of travel-time prediction uncertainty is a direct result of the off-diagonal terms of the model covariance matrix and underscores the importance of incorporating the full model covariance matrix in the determination of travel-time prediction uncertainty. In addition, the computed pattern of uncertainty differs significantly from that of 1D distance-dependent travel-time uncertainties computed using traditional methods, which are only appropriate for use with travel times computed through 1D velocity models.
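    If the predicted travel time is treated as a linear functional of the cell slownesses, t = g·s with g the ray-path lengths per cell, the path-dependent uncertainty described above is the quadratic form below (a sketch of the standard propagation-of-covariance step, not code from SALSA3D):

        import numpy as np

        def travel_time_variance(path_lengths, C):
            """Path-dependent travel-time prediction variance: var(t) = g^T C g,
            where g holds the ray-path length in each model cell and C is the
            slowness covariance matrix.  The off-diagonal terms of C that the
            abstract emphasizes enter directly through this quadratic form."""
            g = np.asarray(path_lengths)
            return g @ C @ g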

  18. SALSA3D: A Tomographic Model of Compressional Wave Slowness in the Earth’s Mantle for Improved Travel-Time Prediction and Travel-Time Prediction Uncertainty

    DOE PAGES

    Ballard, Sanford; Hipp, James R.; Begnaud, Michael L.; ...

    2016-10-11

    The task of monitoring the Earth for nuclear explosions relies heavily on seismic data to detect, locate, and characterize suspected nuclear tests. In this study, motivated by the need to locate suspected explosions as accurately and precisely as possible, we developed a tomographic model of the compressional wave slowness in the Earth’s mantle with primary focus on the accuracy and precision of travel-time predictions for P and Pn ray paths through the model. Path-dependent travel-time prediction uncertainties are obtained by computing the full 3D model covariance matrix and then integrating slowness variance and covariance along ray paths from source to receiver. Path-dependent travel-time prediction uncertainties reflect the amount of seismic data that was used in tomography with very low values for paths represented by abundant data in the tomographic data set and very high values for paths through portions of the model that were poorly sampled by the tomography data set. The pattern of travel-time prediction uncertainty is a direct result of the off-diagonal terms of the model covariance matrix and underscores the importance of incorporating the full model covariance matrix in the determination of travel-time prediction uncertainty. In addition, the computed pattern of uncertainty differs significantly from that of 1D distance-dependent travel-time uncertainties computed using traditional methods, which are only appropriate for use with travel times computed through 1D velocity models.

  19. Cerebro-cerebellar interactions underlying temporal information processing.

    PubMed

    Aso, Kenji; Hanakawa, Takashi; Aso, Toshihiko; Fukuyama, Hidenao

    2010-12-01

    The neural basis of temporal information processing remains unclear, but it is proposed that the cerebellum plays an important role through its internal clock or feed-forward computation functions. In this study, fMRI was used to investigate the brain networks engaged in perceptual and motor aspects of subsecond temporal processing without accompanying coprocessing of spatial information. Direct comparison between perceptual and motor aspects of time processing was made with a categorical-design analysis. The right lateral cerebellum (lobule VI) was active during a time discrimination task, whereas the left cerebellar lobule VI was activated during a timed movement generation task. These findings were consistent with the idea that the cerebellum contributed to subsecond time processing in both perceptual and motor aspects. The feed-forward computational theory of the cerebellum predicted increased cerebro-cerebellar interactions during time information processing. In fact, a psychophysiological interaction analysis identified the supplementary motor and dorsal premotor areas, which had a significant functional connectivity with the right cerebellar region during a time discrimination task and with the left lateral cerebellum during a timed movement generation task. The involvement of cerebro-cerebellar interactions may provide supportive evidence that temporal information processing relies on the simulation of timing information through feed-forward computation in the cerebellum.

  20. Logistics Control Facility: A Normative Model for Total Asset Visibility in the Air Force Logistics System

    DTIC Science & Technology

    1994-09-01

    Issue: Computers, information systems, and communication systems are being increasingly used in transportation, warehousing, order processing, materials...inventory levels, reduced order processing times, reduced order processing costs, and increased customer satisfaction. While purchasing and transportation...process, the speed with which orders are processed would increase significantly. Lowering the order processing time in turn lowers the lead time, which in

  1. Computer-assisted dieting: effects of a randomised controlled intervention.

    PubMed

    Schroder, Kerstin E E

    2010-06-01

    In this pilot study, the effects of two computer-assisted dieting (CAD) interventions on weight loss and blood chemistry were examined among overweight and obese adults. Participants (91 community members, average age 42.6 years) were randomly assigned to CAD-only (a single-session introduction and provision of a dieting software, n = 30), CAD plus an additional four-session self-management group training (CAD+G, n = 31) and a waitlist control group whose members were randomised into the two interventions at the 3-month follow-up (n = 30). A three (group)-by-two (time) repeated measures ANOVA revealed no significant group by time interaction during the initial 3-month period. However, the two intervention groups combined showed a significant, though moderate weight loss relative to the control group. Further, although a general improvement was found with regard to the lipid panel results during the first 3 months of the trial, the treatment by time interaction was not significant. A comparison of the developments in the two intervention groups during the 3- to 6-month follow-up time period revealed a tendency towards greater weight regain in the CAD-only condition. The evidence suggests that CAD supports initial weight loss; however, additional self-management training might be necessary to support maintenance.

  2. Fast Dynamic Simulation-Based Small Signal Stability Assessment and Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acharya, Naresh; Baone, Chaitanya; Veda, Santosh

    2014-12-31

    Power grid planning and operation decisions are made based on simulation of the dynamic behavior of the system. Enabling substantial energy savings while increasing the reliability of the aging North American power grid through improved utilization of existing transmission assets hinges on the adoption of wide-area measurement systems (WAMS) for power system stabilization. However, adoption of WAMS alone will not suffice if the power system is to reach its full entitlement in stability and reliability. It is necessary to enhance predictability with "faster than real-time" dynamic simulations that will enable assessment of dynamic stability margins, proactive real-time control, and improved grid resiliency to fast time-scale phenomena such as cascading network failures. Present-day dynamic simulations are performed only during offline planning studies, considering only worst case conditions such as summer peak, winter peak days, etc. With widespread deployment of renewable generation, controllable loads, energy storage devices and plug-in hybrid electric vehicles expected in the near future and greater integration of cyber infrastructure (communications, computation and control), monitoring and controlling the dynamic performance of the grid in real-time would become increasingly important. The state-of-the-art dynamic simulation tools have limited computational speed and are not suitable for real-time applications, given the large set of contingency conditions to be evaluated. These tools are optimized for best performance of single-processor computers, but the simulation is still several times slower than real-time due to its computational complexity. With recent significant advances in numerical methods and computational hardware, the expectations have been rising towards more efficient and faster techniques to be implemented in power system simulators. This is a natural expectation, given that the core solution algorithms of most commercial simulators were developed decades ago, when High Performance Computing (HPC) resources were not commonly available.

  3. Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces

    NASA Technical Reports Server (NTRS)

    Ellman, Alvin; Carlton, Magdi

    1993-01-01

    The Network Operations Control Center (NOCC) of the DSN is responsible for scheduling the resources of the DSN and for monitoring all multi-mission spacecraft tracking activities in real-time. Operations performs this job with computer systems at JPL connected to over 100 computers at Goldstone, Australia and Spain. The old computer system became obsolete, and the first version of the new system was installed in 1991. Significant improvements to the computer-human interfaces became the dominant theme for the replacement project. Major issues required innovative problem solving. Among these issues were: How to present several thousand data elements on displays without overloading the operator? What is the best graphical representation of DSN end-to-end data flow? How to operate the system without memorizing mnemonics of hundreds of operator directives? Which computing environment will meet the competing performance requirements? This paper presents the technical challenges, engineering solutions, and results of the NOCC computer-human interface design.

  4. Wide-angle display developments by computer graphics

    NASA Technical Reports Server (NTRS)

    Fetter, William A.

    1989-01-01

    Computer graphics can now expand its new subset, wide-angle projection, to be as significant a generic capability as computer graphics itself. Some prior work in computer graphics is presented which leads to an attractive further subset of wide-angle projection, called hemispheric projection, which could become a major communication medium. Hemispheric film systems have long been present, and such computer graphics systems are in use in simulators. This is the leading edge of capabilities which should ultimately be as ubiquitous as CRTs (cathode-ray tubes). These assertions derive not from degrees in science, or only from a degree in graphic design, but from a history of computer graphics innovations, laying groundwork by demonstration. The author believes that it is timely to look at several development strategies, since hemispheric projection is now at a point comparable to the early stages of computer graphics, requiring similar patterns of development again.

  5. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    PubMed Central

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223
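    A toy, pure-Python stand-in for the multi-key idea (not the authors' Hadoop API): intermediate keys are tagged with an algorithm identifier so several related algorithms share one pass over the input and are reduced independently:

        from collections import defaultdict

        # toy algorithm pack: two related analyses share one pass over the input
        ALGOS = {
            'wordcount': {'map': lambda line: [(w, 1) for w in line.split()],
                          'reduce': lambda key, vals: sum(vals)},
            'linelen':   {'map': lambda line: [(len(line), 1)],
                          'reduce': lambda key, vals: sum(vals)},
        }

        def mrpack_map(record):
            # emit composite (algorithm_id, key) pairs so several algorithms
            # run inside a single map task
            for algo_id, algo in ALGOS.items():
                for key, value in algo['map'](record):
                    yield (algo_id, key), value

        def mrpack_reduce(grouped):
            # dispatch each composite key back to its own algorithm's reducer
            for (algo_id, key), values in grouped.items():
                yield algo_id, key, ALGOS[algo_id]['reduce'](key, values)

        grouped = defaultdict(list)
        for line in ["to be or not to be", "big data at high velocity"]:
            for k, v in mrpack_map(line):
                grouped[k].append(v)
        results = list(mrpack_reduce(grouped))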

  6. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.

    PubMed

    Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.

  7. Bayesian Model Selection under Time Constraints

    NASA Astrophysics Data System (ADS)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes less than a second for one run, or by a partial differential equation-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and represents a trade-off between the bias of a model and its complexity. However, in practice, the runtime of a model is another factor relevant to its weight in model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue from the fact that more expensive models can be sampled much less under time constraints than faster models (in direct proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
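    A minimal sketch of the bootstrapped evidence-error idea under a simple Monte-Carlo BME estimator (mean likelihood over prior samples); the estimator choice and all names are assumptions, not the authors' exact weighting scheme:

        import numpy as np

        def bme_with_bootstrap_error(likelihoods, n_boot=1000, rng=None):
            """Monte-Carlo estimate of Bayesian model evidence as the mean likelihood
            over prior samples, plus a cheap bootstrap estimate of its sampling error.
            An expensive model affords fewer samples under a fixed time budget, so its
            bootstrap error is larger and its evidence statistically less significant."""
            rng = np.random.default_rng(rng)
            likelihoods = np.asarray(likelihoods)
            bme = likelihoods.mean()
            boots = np.array([rng.choice(likelihoods, size=likelihoods.size,
                                         replace=True).mean() for _ in range(n_boot)])
            return bme, boots.std(ddof=1)

        # under a fixed budget: many runs of a fast model vs few of a slow one
        fast_like = np.random.default_rng(0).lognormal(-2.0, 1.0, size=10000)
        slow_like = np.random.default_rng(1).lognormal(-1.8, 1.0, size=50)
        print(bme_with_bootstrap_error(fast_like), bme_with_bootstrap_error(slow_like))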

  8. Non-conforming finite-element formulation for cardiac electrophysiology: an effective approach to reduce the computation time of heart simulations without compromising accuracy

    NASA Astrophysics Data System (ADS)

    Hurtado, Daniel E.; Rojas, Guillermo

    2018-04-01

    Computer simulations constitute a powerful tool for studying the electrical activity of the human heart, but the computational effort remains prohibitively high. In order to recover accurate conduction velocities and wavefront shapes, the mesh size in linear element (Q1) formulations cannot exceed 0.1 mm. Here we propose a novel non-conforming finite-element formulation for the non-linear cardiac electrophysiology problem that results in accurate wavefront shapes and lower mesh-dependence of the conduction velocity, while retaining the same number of global degrees of freedom as Q1 formulations. As a result, coarser discretizations of cardiac domains can be employed in simulations without significant loss of accuracy, thus reducing the overall computational effort. We demonstrate the applicability of our formulation in biventricular simulations using a coarse mesh size of ˜ 1 mm, and show that the activation wave pattern closely follows that obtained in fine-mesh simulations at a fraction of the computation time, thus improving the accuracy-efficiency trade-off of cardiac simulations.

  9. Multigrid optimal mass transport for image registration and morphing

    NASA Astrophysics Data System (ADS)

    Rehman, Tauseef ur; Tannenbaum, Allen

    2007-02-01

    In this paper we present a computationally efficient Optimal Mass Transport algorithm. This method is based on the Monge-Kantorovich theory and is used for computing elastic registration and warping maps in image registration and morphing applications. It is a parameter-free method which utilizes all of the grayscale data in an image pair in a symmetric fashion, and no landmarks need to be specified for correspondence. In our work, we demonstrate a significant improvement in computation time when our algorithm is applied as compared to the originally proposed method by Haker et al [1]. The original algorithm was based on a gradient descent method for removing the curl from an initial mass-preserving map regarded as a 2D vector field. This involves inverting the Laplacian in each iteration, which is now computed using a full multigrid technique, resulting in an improvement in computational time by a factor of two. Greater improvement is achieved by decimating the curl in a multi-resolutional framework. The algorithm was applied to 2D short-axis cardiac MRI images and brain MRI images for testing and comparison.

  10. Sample entropy applied to the analysis of synthetic time series and tachograms

    NASA Astrophysics Data System (ADS)

    Muñoz-Diosdado, A.; Gálvez-Coyt, G. G.; Solís-Montufar, E.

    2017-01-01

    Entropy is a method of non-linear analysis that allows an estimate of the irregularity of a system; however, there are different types of computational entropy that were considered and tested in order to obtain one that would give an index of signal complexity taking into account the number of data points of the analysed time series, the computational resources demanded by the method, and the accuracy of the calculation. An algorithm for the generation of fractal time series with a given value of β was used for the characterization of the different entropy algorithms. We obtained a significant variation for most of the algorithms in terms of the series size, which could be counterproductive for the study of real signals of different lengths. The chosen method was sample entropy, which shows great independence of the series size. With this method, time series of heart interbeat intervals, or tachograms, of healthy subjects and patients with congestive heart failure were analysed. The calculation of sample entropy was carried out for 24-hour tachograms and 6-hour time subseries for sleep and wakefulness. The comparison between the two populations shows a significant difference that is accentuated when the patient is sleeping.
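    A compact sample-entropy implementation in the standard Richman-Moorman form, SampEn(m, r) = -ln(A/B) with Chebyshev distance and self-matches excluded (the tolerance default r = 0.2·std and the synthetic tachogram are illustrative choices, not the study's settings):

        import numpy as np

        def sample_entropy(x, m=2, r=None):
            """Sample entropy SampEn(m, r) of a 1-D series: -ln(A/B), where B counts
            pairs of length-m templates within Chebyshev distance r and A the same
            for length m+1, excluding self-matches."""
            x = np.asarray(x, dtype=float)
            if r is None:
                r = 0.2 * x.std()
            N = x.size
            n_templates = N - m  # same number of templates for lengths m and m+1

            def count_matches(dim):
                templates = np.array([x[i:i + dim] for i in range(n_templates)])
                count = 0
                for i in range(n_templates - 1):
                    d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                    count += np.sum(d <= r)
                return count

            B = count_matches(m)
            A = count_matches(m + 1)
            return -np.log(A / B) if A > 0 and B > 0 else np.inf

        # e.g., an RR-interval tachogram (synthetic here)
        rr = np.random.default_rng(0).normal(0.8, 0.05, size=1000)
        print(sample_entropy(rr, m=2))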

  11. Visual cluster analysis and pattern recognition methods

    DOEpatents

    Osbourn, Gordon Cecil; Martinez, Rubel Francisco

    2001-01-01

    A method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques.

  12. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
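    The adjoint recipe being evaluated can be stated generically as follows (standard form for time-discrete measurements, written here for orientation rather than transcribed from the paper): for dx/dt = f(x, θ, t), x(0) = x_0(θ), and an objective J(θ) = Σ_k g_k(x(t_k), θ),

        \dot{\lambda}(t) = -\left(\frac{\partial f}{\partial x}\right)^{\!\top}\lambda(t),
        \qquad \lambda(T^{+}) = 0,
        \qquad \lambda(t_k^{-}) = \lambda(t_k^{+}) + \left(\frac{\partial g_k}{\partial x}\right)^{\!\top},

        \frac{dJ}{d\theta} = \sum_k \frac{\partial g_k}{\partial \theta}
            + \int_0^T \left(\frac{\partial f}{\partial \theta}\right)^{\!\top}\lambda \, dt
            + \left(\frac{\partial x_0}{\partial \theta}\right)^{\!\top}\lambda(0),

    so a single backward adjoint solve delivers the whole gradient, at a cost essentially independent of the number of parameters.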

  13. Acceleration of FDTD mode solver by high-performance computing techniques.

    PubMed

    Han, Lin; Xi, Yanping; Huang, Wei-Ping

    2010-06-21

    A two-dimensional (2D) compact finite-difference time-domain (FDTD) mode solver is developed based on wave equation formalism in combination with the matrix pencil method (MPM). The method is validated for calculation of both real guided and complex leaky modes of typical optical waveguides against the benchmark finite-difference (FD) eigenmode solver. By taking advantage of the inherent parallel nature of the FDTD algorithm, the mode solver is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). It is demonstrated that the high-performance computing technique leads to significant acceleration of the FDTD mode solver, with more than a 30-fold improvement in computational efficiency in comparison with the conventional FDTD mode solver running on the CPU of a standard desktop computer. The computational efficiency of the accelerated FDTD method is of the same order of magnitude as that of the standard finite-difference eigenmode solver and yet requires much less memory (e.g., less than 10%). Therefore, the new method may serve as an efficient, accurate and robust tool for mode calculation of optical waveguides, even when conventional eigenvalue mode solvers are no longer applicable due to memory limitations.

  14. Prosthetically guided maxillofacial surgery: evaluation of the accuracy of a surgical guide and custom-made bone plate in oncology patients after mandibular reconstruction.

    PubMed

    Mazzoni, Simona; Marchetti, Claudio; Sgarzani, Rossella; Cipriani, Riccardo; Scotti, Roberto; Ciocca, Leonardo

    2013-06-01

    The aim of the present study was to evaluate the accuracy of prosthetically guided maxillofacial surgery in reconstructing the mandible with a free vascularized flap using custom-made bone plates and a surgical guide to cut the mandible and fibula. The surgical protocol was applied in a study group of seven consecutive mandibular-reconstructed patients who were compared with a control group treated using the standard preplating technique on stereolithographic models (indirect computer-aided design/computer-aided manufacturing method). The precision of both surgical techniques (prosthetically guided maxillofacial surgery and indirect computer-aided design/computer-aided manufacturing procedure) was evaluated by comparing preoperative and postoperative computed tomographic data and assessment of specific landmarks. With regard to midline deviation, no significant difference was documented between the test and control groups. With regard to mandibular angle shift, only one left angle shift on the lateral plane showed a statistically significant difference between the groups. With regard to angular deviation of the body axis, the data showed a significant difference in the arch deviation. All patients in the control group registered greater than 8 degrees of deviation, determining a facial contracture of the external profile at the lower margin of the mandible. With regard to condylar position, the postoperative condylar position was better in the test group than in the control group, although no significant difference was detected. The new protocol for mandibular reconstruction using computer-aided design/computer-aided manufacturing prosthetically guided maxillofacial surgery to construct custom-made guides and plates may represent a viable method of reproducing the patient's anatomical contour, giving the surgeon better procedural control and reducing procedure time. Therapeutic, III.

  15. Association of Sedentary Behavior Time with Ideal Cardiovascular Health: The ORISCAV-LUX Study

    PubMed Central

    Crichton, Georgina E.; Alkerwi, Ala'a

    2014-01-01

    Background Recently attention has been drawn to the health impacts of time spent engaging in sedentary behaviors. No studies have examined sedentary behaviors in relation to the newly defined construct of ideal cardiovascular health, which incorporates three health factors (blood pressure, total cholesterol, fasting plasma glucose) and four behaviors (physical activity, smoking, body mass index, diet). The purpose of this study was to examine associations between sedentary behaviors, including sitting time, and time spent viewing television and in front of a computer, with cardiovascular health, in a representative sample of adults from Luxembourg. Methods A cross-sectional analysis of 1262 participants in the Observation of Cardiovascular Risk Factors in Luxembourg study was conducted, who underwent objective cardiovascular health assessments and completed the International Physical Activity Questionnaire. A Cardiovascular Health Score was calculated based on the number of health factors and behaviors at ideal levels. Sitting time on a weekday, television time, and computer time (both on a workday and a day off), were related to the Cardiovascular Health Score. Results Higher weekday sitting time was significantly associated with a poorer Cardiovascular Health Score (p = 0.002 for linear trend), after full adjustment for age, gender, education, income and occupation. Television time was inversely associated with the Cardiovascular Health Score, on both a workday and a day off (p = 0.002 for both). A similar inverse relationship was observed between the Cardiovascular Health Score and computer time, only on a day off (p = 0.04). Conclusion Higher time spent sitting, viewing television, and using a computer during a day off may be unfavorably associated with ideal cardiovascular health. PMID:24925084

  16. Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company

    NASA Technical Reports Server (NTRS)

    Lores, M. E.

    1978-01-01

    Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should be used to provide a tool for designing better aerospace vehicles while at the same time reducing development costs, by performing computations using Navier-Stokes equation solution algorithms and permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations, and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.

  17. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    PubMed

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
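    A CPU reference of the two stages that the study offloads to the GPU, written with NumPy/SciPy for illustration (channel counts, the identity filter matrix, and the AR order are arbitrary; the CUDA kernels themselves are not shown):

        import numpy as np
        from scipy.linalg import solve_toeplitz

        def spatial_filter(data, W):
            """data: samples x channels, W: channels x outputs (e.g., a common
            average reference or Laplacian written as a matrix).  This is the
            matrix-matrix multiplication stage of the BCI chain."""
            return data @ W

        def ar_psd(x, order=16, nfreq=128):
            """Autoregressive (Yule-Walker) power spectral density of one channel,
            the second stage of the chain."""
            x = x - x.mean()
            r = np.correlate(x, x, mode='full')[x.size - 1:x.size + order] / x.size
            a = solve_toeplitz(r[:order], -r[1:order + 1])   # AR coefficients
            sigma2 = r[0] + np.dot(a, r[1:order + 1])        # driving-noise variance
            freqs = np.linspace(0.0, 0.5, nfreq)             # cycles/sample
            e = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
            return freqs, sigma2 / np.abs(1.0 + e @ a) ** 2

        # e.g., 250 ms of 16-channel data at 1200 Hz with an identity spatial filter
        data = np.random.default_rng(0).standard_normal((300, 16))
        filtered = spatial_filter(data, np.eye(16))
        f, psd = ar_psd(filtered[:, 0])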

  18. Buffer management for sequential decoding. [block erasure probability reduction

    NASA Technical Reports Server (NTRS)

    Layland, J. W.

    1974-01-01

    Sequential decoding has been found to be an efficient means of communicating at low undetected error rates from deep space probes, but erasure or computational overflow remains a significant problem. Erasure of a block occurs when the decoder has not finished decoding that block at the time that it must be output. By drawing upon analogies in computer time sharing, this paper develops a buffer-management strategy which reduces the decoder idle time to a negligible level, and therefore improves the erasure probability of a sequential decoder. For a decoder with a speed advantage of ten and a buffer size of ten blocks, operating at an erasure rate of .01, use of this buffer-management strategy reduces the erasure rate to less than .0001.

  19. Computer use changes generalization of movement learning.

    PubMed

    Wei, Kunlin; Yan, Xiang; Kong, Gaiqing; Yin, Cong; Zhang, Fan; Wang, Qining; Kording, Konrad Paul

    2014-01-06

    Over the past few decades, one of the most salient lifestyle changes for us has been the use of computers. For many of us, manual interaction with a computer occupies a large portion of our working time. Through neural plasticity, this extensive movement training should change our representation of movements (e.g., [1-3]), just like search engines affect memory [4]. However, how computer use affects motor learning is largely understudied. Additionally, as virtually all participants in studies of perception and actions are computer users, a legitimate question is whether insights from these studies bear the signature of computer-use experience. We compared non-computer users with age- and education-matched computer users in standard motor learning experiments. We found that people learned equally fast but that non-computer users generalized significantly less across space, a difference negated by two weeks of intensive computer training. Our findings suggest that computer-use experience shaped our basic sensorimotor behaviors, and this influence should be considered whenever computer users are recruited as study participants. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elber, Ron

    Atomically detailed computer simulations of complex molecular events have attracted the imagination of many researchers in the field as providing comprehensive information on chemical, biological, and physical processes. However, one of the greatest limitations of these simulations is that of time scales. The physical time scales accessible to straightforward simulations are too short to address many interesting and important molecular events. In the last decade, significant advances were made in different directions (theory, software, and hardware) that significantly expand the capabilities and accuracies of these techniques. This perspective describes and critically examines some of these advances.

  1. Pen-based computers: Computers without keys

    NASA Technical Reports Server (NTRS)

    Conklin, Cheryl L.

    1994-01-01

    The National Space Transportation System (NSTS) is comprised of many diverse and highly complex systems incorporating the latest technologies. Data collection associated with ground processing of the various Space Shuttle system elements is extremely challenging due to the many separate processing locations where data is generated. This presents a significant problem when the timely collection, transfer, collation, and storage of data are required. This paper describes how new technology, referred to as pen-based computers, is being used to transform the data collection process at Kennedy Space Center (KSC). Pen-based computers have streamlined procedures, increased data accuracy, and now provide more complete information than previous methods. The end result is the elimination of Shuttle processing delays associated with data deficiencies.

  2. Extending Clause Learning of SAT Solvers with Boolean Gröbner Bases

    NASA Astrophysics Data System (ADS)

    Zengler, Christoph; Küchlin, Wolfgang

    We extend clause learning as performed by most modern SAT Solvers by integrating the computation of Boolean Gröbner bases into the conflict learning process. Instead of learning only one clause per conflict, we compute and learn additional binary clauses from a Gröbner basis of the current conflict. We used the Gröbner basis engine of the logic package Redlog contained in the computer algebra system Reduce to extend the SAT solver MiniSAT with Gröbner basis learning. Our approach shows a significant reduction of conflicts and a reduction of restarts and computation time on many hard problems from the SAT 2009 competition.

  3. GPU-accelerated computation of electron transfer.

    PubMed

    Höfinger, Siegfried; Acocella, Angela; Pop, Sergiu C; Narumi, Tetsu; Yasuoka, Kenji; Beu, Titus; Zerbetto, Francesco

    2012-11-05

    Electron transfer is a fundamental process that can be studied with the help of computer simulation. The underlying quantum mechanical description renders the problem a computationally intensive application. In this study, we probe the graphics processing unit (GPU) for suitability to this type of problem. Time-critical components are identified via profiling of an existing implementation and several different variants are tested involving the GPU at increasing levels of abstraction. A publicly available library supporting basic linear algebra operations on the GPU turns out to accelerate the computation approximately 50-fold with minor dependence on actual problem size. The performance gain does not compromise numerical accuracy and is of significant value for practical purposes. Copyright © 2012 Wiley Periodicals, Inc.

  4. Discretization of the induced-charge boundary integral equation.

    PubMed

    Bardhan, Jaydeep P; Eisenberg, Robert S; Gillespie, Dirk

    2009-07-01

    Boundary-element methods (BEMs) for solving integral equations numerically have been used in many fields to compute the induced charges at dielectric boundaries. In this paper, we consider a more accurate implementation of BEM in the context of ions in aqueous solution near proteins, but our results are applicable more generally. The ions that modulate protein function are often within a few angstroms of the protein, which leads to the significant accumulation of polarization charge at the protein-solvent interface. Computing the induced charge accurately and quickly poses a numerical challenge in solving a popular integral equation using BEM. In particular, the accuracy of simulations can depend strongly on seemingly minor details of how the entries of the BEM matrix are calculated. We demonstrate that when the dielectric interface is discretized into flat tiles, the qualocation method of Tausch [IEEE Trans. Comput.-Aided Des. 20, 1398 (2001)] to compute the BEM matrix elements is always more accurate than the traditional centroid-collocation method. Qualocation is not more expensive to implement than collocation and can save significant computational time by reducing the number of boundary elements needed to discretize the dielectric interfaces.

  5. Discretization of the induced-charge boundary integral equation

    NASA Astrophysics Data System (ADS)

    Bardhan, Jaydeep P.; Eisenberg, Robert S.; Gillespie, Dirk

    2009-07-01

    Boundary-element methods (BEMs) for solving integral equations numerically have been used in many fields to compute the induced charges at dielectric boundaries. In this paper, we consider a more accurate implementation of BEM in the context of ions in aqueous solution near proteins, but our results are applicable more generally. The ions that modulate protein function are often within a few angstroms of the protein, which leads to the significant accumulation of polarization charge at the protein-solvent interface. Computing the induced charge accurately and quickly poses a numerical challenge in solving a popular integral equation using BEM. In particular, the accuracy of simulations can depend strongly on seemingly minor details of how the entries of the BEM matrix are calculated. We demonstrate that when the dielectric interface is discretized into flat tiles, the qualocation method of Tausch [IEEE Trans. Comput.-Aided Des. 20, 1398 (2001)] to compute the BEM matrix elements is always more accurate than the traditional centroid-collocation method. Qualocation is not more expensive to implement than collocation and can save significant computational time by reducing the number of boundary elements needed to discretize the dielectric interfaces.

  6. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    NASA Astrophysics Data System (ADS)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. In order to address the high computational complexity and poor real-time performance of blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is also used to restore the image with hardly any recursion or iteration. Combining the algorithm with data intensiveness, data-parallel computing, and the GPU execution model of single instruction, multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed in this paper, which is suitable for stream computing on the GPU. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. Aiming at better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of data access and of the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data so that the transmission rate is not limited by memory bandwidth. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nitao, J J

    The goal of the Event Reconstruction Project is to find the location and strength of atmospheric release points, both stationary and moving. Source inversion relies on observational data as input. The methodology is sufficiently general to allow various forms of data. In this report, the authors focus primarily on concentration measurements obtained at point monitoring locations at various times. The algorithms being investigated in the Project are Markov Chain Monte Carlo (MCMC), Sequential Monte Carlo (SMC), classical inversion methods, and hybrids of these. The reader is referred to the report by Johannesson et al. (2004) for explanations of these methods. These methods require computing the concentrations at all monitoring locations for a given "proposed" source characteristic (location and strength history). It is anticipated that the largest portion of the CPU time will be spent performing this computation. MCMC and SMC require it to be done at least tens of thousands of times. Therefore, an efficient means of computing forward model predictions is important to making the inversion practical. In this report the authors show how Green's functions and reciprocal Green's functions can significantly accelerate forward model computations. First, instead of computing a plume for each possible source strength history, they compute plumes from unit impulse sources only. By linear superposition, the response for any strength history can then be obtained; this response is given by the forward Green's function. Second, they may use the law of reciprocity. Suppose that the concentration is required at a single monitoring point x_m due to a potential (unit impulse) source located at x_s. Instead of computing a plume with source location x_s, they compute a "reciprocal plume" whose (unit impulse) source is at the monitoring location x_m. The reciprocal plume is computed using a reversed-direction wind field; the wind field and transport coefficients must also be appropriately time-reversed. Reciprocity states that the concentration of the reciprocal plume at x_s is related to the desired concentration at x_m. Since there are many fewer monitoring points than potential source locations, the number of forward model computations is drastically reduced.
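
    The superposition step described above is a discrete convolution of the proposed source strength history with the unit-impulse (Green's function) response at a monitor. A minimal sketch follows; the function names, sampling, and the stand-in impulse response are illustrative assumptions, not the report's code.

```python
import numpy as np

def monitor_concentration(green, strength, dt):
    """Concentration time series at one monitor for a proposed source.

    green    : unit-impulse response (Green's function) at the monitor,
               sampled every dt seconds, for a source at the proposed location
    strength : proposed source strength history, same sampling (e.g. kg/s)
    dt       : time step in seconds

    Linear superposition: c(t) = sum_k s(t_k) * G(t - t_k) * dt
    """
    # a discrete convolution implements the superposition integral
    return np.convolve(strength, green)[: len(strength)] * dt

# hypothetical example: a 2-hour release starting 10 minutes into the record
dt = 60.0                                      # 1-minute sampling
t = np.arange(0, 6 * 3600, dt)                 # 6 hours of record
green = np.exp(-((t - 1800) / 900) ** 2)       # stand-in impulse response
strength = np.where((t >= 600) & (t < 7800), 5.0, 0.0)
c = monitor_concentration(green, strength, dt)
```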

  8. Evaluating Computer Screen Time and Its Possible Link to Psychopathology in the Context of Age: A Cross-Sectional Study of Parents and Children

    PubMed Central

    Ross, Sharon; Silman, Zmira; Maoz, Hagai; Bloch, Yuval

    2015-01-01

    Background Several studies have suggested that high levels of computer use are linked to psychopathology. However, there is ambiguity about what should be considered normal or over-use of computers. Furthermore, the nature of the link between computer usage and psychopathology is controversial. The current study utilized the context of age to address these questions. Our hypothesis was that the context of age will be paramount for differentiating normal from excessive use, and that this context will allow a better understanding of the link to psychopathology. Methods In a cross-sectional study, 185 parents and children aged 3–18 years were recruited in clinical and community settings. They were asked to fill out questionnaires regarding demographics, functional and academic variables, computer use as well as psychiatric screening questionnaires. Using a regression model, we identified 3 groups of normal-use, over-use and under-use and examined known factors as putative differentiators between the over-users and the other groups. Results After modeling computer screen time according to age, factors linked to over-use were: decreased socialization (OR 3.24, Confidence interval [CI] 1.23–8.55, p = 0.018), difficulty to disengage from the computer (OR 1.56, CI 1.07–2.28, p = 0.022) and age, though borderline-significant (OR 1.1 each year, CI 0.99–1.22, p = 0.058). While psychopathology was not linked to over-use, post-hoc analysis revealed that the link between increased computer screen time and psychopathology was age-dependent and solidified as age progressed (p = 0.007). Unlike computer usage, the use of small-screens and smartphones was not associated with psychopathology. Conclusions The results suggest that computer screen time follows an age-based course. We conclude that differentiating normal from over-use as well as defining over-use as a possible marker for psychiatric difficulties must be performed within the context of age. If verified by additional studies, future research should integrate those views in order to better understand the intricacies of computer over-use. PMID:26536037

  9. Evaluating Computer Screen Time and Its Possible Link to Psychopathology in the Context of Age: A Cross-Sectional Study of Parents and Children.

    PubMed

    Segev, Aviv; Mimouni-Bloch, Aviva; Ross, Sharon; Silman, Zmira; Maoz, Hagai; Bloch, Yuval

    2015-01-01

    Several studies have suggested that high levels of computer use are linked to psychopathology. However, there is ambiguity about what should be considered normal or over-use of computers. Furthermore, the nature of the link between computer usage and psychopathology is controversial. The current study utilized the context of age to address these questions. Our hypothesis was that the context of age will be paramount for differentiating normal from excessive use, and that this context will allow a better understanding of the link to psychopathology. In a cross-sectional study, 185 parents and children aged 3-18 years were recruited in clinical and community settings. They were asked to fill out questionnaires regarding demographics, functional and academic variables, computer use as well as psychiatric screening questionnaires. Using a regression model, we identified 3 groups of normal-use, over-use and under-use and examined known factors as putative differentiators between the over-users and the other groups. After modeling computer screen time according to age, factors linked to over-use were: decreased socialization (OR 3.24, Confidence interval [CI] 1.23-8.55, p = 0.018), difficulty to disengage from the computer (OR 1.56, CI 1.07-2.28, p = 0.022) and age, though borderline-significant (OR 1.1 each year, CI 0.99-1.22, p = 0.058). While psychopathology was not linked to over-use, post-hoc analysis revealed that the link between increased computer screen time and psychopathology was age-dependent and solidified as age progressed (p = 0.007). Unlike computer usage, the use of small-screens and smartphones was not associated with psychopathology. The results suggest that computer screen time follows an age-based course. We conclude that differentiating normal from over-use as well as defining over-use as a possible marker for psychiatric difficulties must be performed within the context of age. If verified by additional studies, future research should integrate those views in order to better understand the intricacies of computer over-use.

  10. Real-Time Non-Intrusive Assessment of Viewing Distance during Computer Use.

    PubMed

    Argilés, Marc; Cardona, Genís; Pérez-Cabré, Elisabet; Pérez-Magrané, Ramon; Morcego, Bernardo; Gispets, Joan

    2016-12-01

    To develop and test the sensitivity of an ultrasound-based sensor to assess the viewing distance of visual display terminals operators in real-time conditions. A modified ultrasound sensor was attached to a computer display to assess viewing distance in real time. Sensor functionality was tested on a sample of 20 healthy participants while they conducted four 10-minute randomly presented typical computer tasks (a match-three puzzle game, a video documentary, a task requiring participants to complete a series of sentences, and a predefined internet search). The ultrasound sensor offered good measurement repeatability. Game, text completion, and web search tasks were conducted at shorter viewing distances (54.4 cm [95% CI 51.3-57.5 cm], 54.5 cm [95% CI 51.1-58.0 cm], and 54.5 cm [95% CI 51.4-57.7 cm], respectively) than the video task (62.3 cm [95% CI 58.9-65.7 cm]). Statistically significant differences were found between the video task and the other three tasks (all p < 0.05). Range of viewing distances (from 22 to 27 cm) was similar for all tasks (F = 0.996; p = 0.413). Real-time assessment of the viewing distance of computer users with a non-intrusive ultrasonic device disclosed a task-dependent pattern.

  11. African-American males in computer science---Examining the pipeline for clogs

    NASA Astrophysics Data System (ADS)

    Stone, Daryl Bryant

    The literature on African-American males (AAM) begins with a statement to the effect that "Today young Black men are more likely to be killed or sent to prison than to graduate from college." Why are the numbers of African-American male college graduates decreasing? Why are those enrolled in college not majoring in the science, technology, engineering, and mathematics (STEM) disciplines? This research explored why African-American males are not filling the well-recognized industry need for computer scientists and technologists by choosing college tracks to these careers. The literature on STEM disciplines focuses largely on women in STEM, as opposed to minorities, and within minorities there is a noticeable research gap in addressing the needs and opportunities available to African-American males. The primary goal of this study was therefore to examine the computer science "pipeline" from the African-American male perspective. The method included distributing a "Computer Science Degree Self-Efficacy Scale" to five groups of African-American male students: (1) fourth graders, (2) eighth graders, (3) eleventh graders, (4) underclass undergraduate computer science majors, and (5) upperclass undergraduate computer science majors. In addition to the 30-question self-efficacy test, subjects from each group were asked to participate in a group discussion about "African-American males in computer science." The audio record of each group meeting provides qualitative data for the study. The hypotheses include the following: (1) There is no significant difference in "Computer Science Degree" self-efficacy between fourth and eighth graders. (2) There is no significant difference in "Computer Science Degree" self-efficacy between eighth and eleventh graders. (3) There is no significant difference in "Computer Science Degree" self-efficacy between eleventh graders and lower-level computer science majors. (4) There is no significant difference in "Computer Science Degree" self-efficacy between lower-level computer science majors and upper-level computer science majors. (5) There is no significant difference in "Computer Science Degree" self-efficacy between each of the five groups of students. Finally, the researcher selected African-American male students attending six schools, including the predominantly African-American elementary, middle, and high schools that the researcher attended during his own academic career, as well as a racially mixed elementary, middle, and high school from the same county in Maryland. Bowie State University provided both the underclass and upperclass computer science majors surveyed in this study. Of the five hypotheses, the sample provided enough evidence to support the claim that there are significant differences in "Computer Science Degree" self-efficacy between the five groups of students. ANOVA by question and by total self-efficacy score provided further statistically significant results, and factor analysis and review of the qualitative data yielded additional insight. Overall, the data suggest a 'clog' may exist at the middle-school level, and students attending racially mixed schools were more confident in their computer, math, and science skills. African-American males admit to spending considerable time on social networking websites and e-mail, but are largely unaware of the skills and knowledge needed to study in the computing disciplines. The majority of the subjects knew few, if any, AAMs in the 'computing discipline pipeline'. The collegiate African-American males in this study agree that computer programming is a difficult area and serves as a 'major clog in the pipeline'.

  12. Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS

    NASA Astrophysics Data System (ADS)

    Furuta, Takuya; Sato, Tatsuhiko; Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Brown, Justin L.; Bolch, Wesley E.

    2017-06-01

    A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed in the transport process, an original algorithm was introduced to prepare, in advance, decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by a tetrahedral mesh. The simulation was repeated with varying numbers of mesh elements, and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of a region mesh are essentially equivalent for both representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also improved computational efficiency for radiation transport. Owing to the adaptability of tetrahedrons in both size and shape, dosimetrically equivalent objects can be represented with far fewer mesh elements than in the voxelized representation. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, by about a factor of 4, was confirmed by the adoption of a tetrahedral mesh over the traditional voxel mesh geometry.

  13. Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS.

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko; Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Brown, Justin L; Bolch, Wesley E

    2017-06-21

    A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed in the transport process, an original algorithm was introduced to prepare, in advance, decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by a tetrahedral mesh. The simulation was repeated with varying numbers of mesh elements, and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of a region mesh are essentially equivalent for both representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also improved computational efficiency for radiation transport. Owing to the adaptability of tetrahedrons in both size and shape, dosimetrically equivalent objects can be represented with far fewer mesh elements than in the voxelized representation. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, by about a factor of 4, was confirmed by the adoption of a tetrahedral mesh over the traditional voxel mesh geometry.
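
    The "decomposition maps for the container box" mentioned above are, in general terms, a spatial index: tetrahedra are binned into a coarse grid so that locating the region containing a point only requires testing a handful of local candidates. The sketch below shows that generic idea (not the PHITS implementation) with a barycentric inside-test; all names and the cell size are illustrative.

```python
import numpy as np
from collections import defaultdict

def build_cell_map(vertices, tets, cell_size):
    """Map each coarse grid cell to the tetrahedra whose bounding box overlaps it."""
    cell_map = defaultdict(list)
    for t, tet in enumerate(tets):
        pts = vertices[tet]
        lo = np.floor(pts.min(axis=0) / cell_size).astype(int)
        hi = np.floor(pts.max(axis=0) / cell_size).astype(int)
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                for k in range(lo[2], hi[2] + 1):
                    cell_map[(i, j, k)].append(t)
    return cell_map

def inside(p, pts, tol=1e-12):
    """Barycentric inside-test for one tetrahedron with vertex array pts (4 x 3)."""
    T = np.column_stack([pts[1] - pts[0], pts[2] - pts[0], pts[3] - pts[0]])
    b = np.linalg.solve(T, p - pts[0])
    return b.min() >= -tol and b.sum() <= 1.0 + tol

def locate(p, vertices, tets, cell_map, cell_size):
    """Find the tetrahedron containing point p, testing only local candidates."""
    cell = tuple(int(v) for v in np.floor(p / cell_size))
    for t in cell_map.get(cell, []):
        if inside(p, vertices[tets[t]]):
            return t
    return -1   # point lies outside the mesh
```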

  14. Adaptive time steps in trajectory surface hopping simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spörkel, Lasse, E-mail: spoerkel@kofo.mpg.de; Thiel, Walter, E-mail: thiel@kofo.mpg.de

    2016-05-21

    Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.

  15. Adaptive time steps in trajectory surface hopping simulations

    NASA Astrophysics Data System (ADS)

    Spörkel, Lasse; Thiel, Walter

    2016-05-01

    Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.
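
    The core of the protocol described above is control flow, not new physics: refine the time step only while the critical region is being traversed, then restore the default step. A schematic sketch of that loop is given below; the mixing diagnostic, threshold, and `step` callable are hypothetical placeholders, not the authors' actual criterion.

```python
def propagate_adaptive(state, t_end, dt_default, dt_min, mixing_diagnostic, step):
    """Schematic adaptive-time-step loop for a surface-hopping trajectory.

    step(state, dt)          -> advances nuclei and electronic coefficients by dt
    mixing_diagnostic(state) -> scalar measuring active/inactive orbital mixing
                                (hypothetical stand-in for the actual criterion)
    """
    t, dt = 0.0, dt_default
    while t < t_end:
        mixing = mixing_diagnostic(state)
        if mixing > 0.5 and dt > dt_min:
            dt = max(dt / 2.0, dt_min)      # refine inside the critical region
        elif mixing <= 0.5:
            dt = dt_default                 # restore the normal step outside it
        state = step(state, dt)
        t += dt
    return state
```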

  16. Market-based control strategy for long-span structures considering the multi-time delay issue

    NASA Astrophysics Data System (ADS)

    Li, Hongnan; Song, Jianzhu; Li, Gang

    2017-01-01

    To address the different time delays that exist in the control devices installed on spatial structures, discrete analysis using the 2^N precise algorithm was selected in this study to solve the multi-time-delay issue for long-span structures based on the market-based control (MBC) method. The concept of interval mixed energy was introduced from the computational structural mechanics and optimal control research areas, and it translates the design of the MBC multi-time-delay controller into a solution for the segment matrix. This approach transforms a serial algorithm in time into parallel computing in space, greatly improving solution efficiency and numerical stability. The designed controller is able to handle the time delay with a linear combination of control forces and is especially effective for large time-delay conditions. A numerical example of a long-span structure was selected to demonstrate the effectiveness of the presented controller, and the time delay was found to have a significant impact on the results.

  17. Engineering incremental resistive switching in TaOx based memristors for brain-inspired computing

    NASA Astrophysics Data System (ADS)

    Wang, Zongwei; Yin, Minghui; Zhang, Teng; Cai, Yimao; Wang, Yangyuan; Yang, Yuchao; Huang, Ru

    2016-07-01

    Brain-inspired neuromorphic computing is expected to revolutionize the architecture of conventional digital computers and lead to a new generation of powerful computing paradigms, where memristors with analog resistive switching are considered to be potential solutions for synapses. Here we propose and demonstrate a novel approach to engineering the analog switching linearity in TaOx based memristors, that is, by homogenizing the filament growth/dissolution rate via the introduction of an ion diffusion limiting layer (DLL) at the TiN/TaOx interface. This has effectively mitigated the commonly observed two-regime conductance modulation behavior and led to more uniform filament growth (dissolution) dynamics with time, therefore significantly improving the conductance modulation linearity that is desirable in neuromorphic systems. In addition, the introduction of the DLL also served to reduce the power consumption of the memristor, and important synaptic learning rules in biological brains such as spike timing dependent plasticity were successfully implemented using these optimized devices. This study could provide general implications for continued optimizations of memristor performance for neuromorphic applications, by carefully tuning the dynamics involved in filament growth and dissolution.

  18. The influence of commenting validity, placement, and style on perceptions of computer code trustworthiness: A heuristic-systematic processing approach.

    PubMed

    Alarcon, Gene M; Gamble, Rose F; Ryan, Tyler J; Walter, Charles; Jessup, Sarah A; Wood, David W; Capiola, August

    2018-07-01

    Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Time-reversal transcranial ultrasound beam focusing using a k-space method

    PubMed Central

    Jing, Yun; Meral, F. Can; Clement, Greg. T.

    2012-01-01

    This paper proposes the use of a k-space method to obtain the correction for transcranial ultrasound beam focusing. Mirroring past approaches, a synthetic point source at the focal point is numerically excited and propagated through the skull, using acoustic properties acquired from registered computed tomography of the skull being studied. The received data outside the skull contain the correction information and can be phase conjugated (time reversed) and then physically generated to achieve tight focusing inside the skull, under the assumption of quasi-plane transmission where shear waves are not present or their contribution can be neglected. Compared with the conventional finite-difference time-domain method for wave propagation simulation, it is shown that the k-space method is significantly more accurate even for a relatively coarse spatial resolution, leading to a dramatically reduced computation time. Both numerical simulations and experiments conducted on an ex vivo human skull demonstrate that precise focusing can be realized using the k-space method with a spatial resolution as low as 2.56 grid points per wavelength, thus allowing treatment planning computation on the order of minutes. PMID:22290477
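
    The "phase conjugated (time reversed)" step above relies on a standard identity: conjugating the spectrum of a real recorded signal time-reverses it. A minimal numpy illustration of that equivalence for one receive channel is shown below (it does not reproduce the k-space propagation itself; the random test signal is a placeholder).

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(1024)      # stand-in for a signal recorded outside the skull

# time reversal in the time domain
reversed_td = signal[::-1]

# the same operation as phase conjugation in the frequency domain; the DFT
# reverses about sample 0, hence the one-sample roll in the comparison
reversed_fd = np.fft.ifft(np.conj(np.fft.fft(signal))).real
assert np.allclose(reversed_td, np.roll(reversed_fd, -1))
```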

  20. Re-Computation of Numerical Results Contained in NACA Report No. 496

    NASA Technical Reports Server (NTRS)

    Perry, Boyd, III

    2015-01-01

    An extensive examination of NACA Report No. 496 (NACA 496), "General Theory of Aerodynamic Instability and the Mechanism of Flutter," by Theodore Theodorsen, is described. The examination included checking equations and solution methods and re-computing interim quantities and all numerical examples in NACA 496. The checks revealed that NACA 496 contains computational shortcuts (time- and effort-saving devices for engineers of the time) and clever artifices (employed in its solution methods), but, unfortunately, also contains numerous tripping points (aspects of NACA 496 that have the potential to cause confusion) and some errors. The re-computations were performed employing the methods and procedures described in NACA 496, but using modern computational tools. With some exceptions, the magnitudes and trends of the original results were in fair-to-very-good agreement with the re-computed results. The exceptions included what are speculated to be computational errors in the original in some instances and transcription errors in the original in others. Independent flutter calculations were performed and, in all cases, including those where the original and re-computed results differed significantly, were in excellent agreement with the re-computed results. Appendix A contains NACA 496; Appendix B contains a Matlab (Registered) program that performs the re-computation of results; Appendix C presents three alternate solution methods, with examples, for the two-degree-of-freedom solution method of NACA 496; Appendix D contains the three-degree-of-freedom solution method (outlined in NACA 496 but never implemented), with examples.
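
    The central special function of Theodorsen's theory is the Theodorsen function C(k) = H1^(2)(k) / (H1^(2)(k) + i H0^(2)(k)), built from Hankel functions of the second kind. As a small example of re-computing a quantity from NACA 496 with modern tools (the report uses Matlab; Python with SciPy is shown here purely for illustration):

```python
import numpy as np
from scipy.special import hankel2

def theodorsen(k):
    """Theodorsen function C(k) = H1^(2)(k) / (H1^(2)(k) + i*H0^(2)(k))."""
    h0 = hankel2(0, k)
    h1 = hankel2(1, k)
    return h1 / (h1 + 1j * h0)

# classical behaviour: C(k) -> 1 as k -> 0 and -> 0.5 as k -> infinity
for k in (0.01, 0.1, 0.5, 1.0, 10.0):
    c = theodorsen(k)
    print(f"k={k:5.2f}  F={c.real:.4f}  G={c.imag:.4f}")
```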

  1. An on-line reactivity and power monitor for a TRIGA reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binney, Stephen E.; Bakir, Alia J.

    1988-07-01

    As the personal computer (PC) becomes more and more of a significant influence on modern technology, it is reasonable to expect that at some point PCs would be used to interface with TRIGA reactors. A personal computer with a special interface board has been used to monitor key parameters during operation of the Oregon State University TRIGA Reactor (OSTR). A description of the apparatus used and sample results are included.

  2. SPAN: Ocean science

    NASA Technical Reports Server (NTRS)

    Thomas, Valerie L.; Koblinsky, Chester J.; Webster, Ferris; Zlotnicki, Victor; Green, James L.

    1987-01-01

    The Space Physics Analysis Network (SPAN) is a multi-mission, correlative data comparison network which links space and Earth science research and data analysis computers. It provides a common working environment for sharing computer resources, sharing computer peripherals, solving proprietary problems, and providing the potential for significant time and cost savings for correlative data analysis. This is one of a series of discipline-specific SPAN documents which are intended to complement the SPAN primer and SPAN Management documents. Their purpose is to provide the discipline scientists with a comprehensive set of documents to assist in the use of SPAN for discipline specific scientific research.

  3. CFD Prediction for Spin Rate of Fixed Canards on a Spinning Projectile

    NASA Astrophysics Data System (ADS)

    Ji, X. L.; Jia, Ch. Y.; Jiang, T. Y.

    2011-09-01

    A computational study of the spin rate of fixed canards on a spinning projectile is presented in this paper. The canard configurations pose challenges in determining the aerodynamic forces and moments and the flow field changes, which can have a significant effect on stability, performance, and corrected-round accuracy. Advanced time-accurate Navier-Stokes computations have been performed to compute the spin rate associated with the spinning motion of the canard configurations at supersonic speed. The results show that the roll-damping moment of the canards varies linearly with the spin rate at supersonic velocity.

  4. Symplectic multi-particle tracking on GPUs

    NASA Astrophysics Data System (ADS)

    Liu, Zhicong; Qiang, Ji

    2018-05-01

    A symplectic multi-particle tracking model is implemented on the Graphic Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) language. The symplectic tracking model can preserve phase space structure and reduce non-physical effects in long term simulation, which is important for beam property evaluation in particle accelerators. Though this model is computationally expensive, it is very suitable for parallelization and can be accelerated significantly by using GPUs. In this paper, we optimized the implementation of the symplectic tracking model on both single GPU and multiple GPUs. Using a single GPU processor, the code achieves a factor of 2-10 speedup for a range of problem sizes compared with the time on a single state-of-the-art Central Processing Unit (CPU) node with similar power consumption and semiconductor technology. It also shows good scalability on a multi-GPU cluster at Oak Ridge Leadership Computing Facility. In an application to beam dynamics simulation, the GPU implementation helps save more than a factor of two total computing time in comparison to the CPU implementation.
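
    For readers unfamiliar with the term, a symplectic update is one that preserves phase-space structure, which is why long-term tracking stays free of artificial damping or growth. Below is a minimal kick-drift-kick (leapfrog) sketch of that class of integrator; it is a generic illustration, not the accelerator maps or CUDA kernels of the paper.

```python
import numpy as np

def leapfrog(x, p, force, dt, mass=1.0, steps=1000):
    """Second-order symplectic kick-drift-kick integrator.

    x, p  : arrays of particle coordinates and momenta
    force : callable returning the force at the given coordinates
    """
    for _ in range(steps):
        p = p + 0.5 * dt * force(x)     # half kick
        x = x + dt * p / mass           # drift
        p = p + 0.5 * dt * force(x)     # half kick
    return x, p

# example: harmonic oscillator; the energy stays bounded over very long runs
x, p = np.array([1.0]), np.array([0.0])
x, p = leapfrog(x, p, force=lambda q: -q, dt=0.1, steps=100000)
print(0.5 * (p[0] ** 2 + x[0] ** 2))    # remains close to the initial energy 0.5
```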

  5. CBESW: sequence alignment on the Playstation 3.

    PubMed

    Wirawan, Adrianto; Kwoh, Chee Keong; Hieu, Nim Tri; Schmidt, Bertil

    2008-09-17

    The exponential growth of available biological data has caused bioinformatics to be rapidly moving towards a data-intensive, computational science. As a result, the computational power needed by bioinformatics applications is growing exponentially as well. The recent emergence of accelerator technologies has made it possible to achieve an excellent improvement in execution time for many bioinformatics applications, compared to current general-purpose platforms. In this paper, we demonstrate how the PlayStation 3, powered by the Cell Broadband Engine, can be used as a computational platform to accelerate the Smith-Waterman algorithm. For large datasets, our implementation on the PlayStation 3 provides a significant improvement in running time compared to other implementations such as SSEARCH, Striped Smith-Waterman and CUDA. Our implementation achieves a peak performance of up to 3,646 MCUPS. The results from our experiments demonstrate that the PlayStation 3 console can be used as an efficient low cost computational platform for high performance sequence alignment applications.

  6. CBESW: Sequence Alignment on the Playstation 3

    PubMed Central

    Wirawan, Adrianto; Kwoh, Chee Keong; Hieu, Nim Tri; Schmidt, Bertil

    2008-01-01

    Background The exponential growth of available biological data has caused bioinformatics to be rapidly moving towards a data-intensive, computational science. As a result, the computational power needed by bioinformatics applications is growing exponentially as well. The recent emergence of accelerator technologies has made it possible to achieve an excellent improvement in execution time for many bioinformatics applications, compared to current general-purpose platforms. In this paper, we demonstrate how the PlayStation® 3, powered by the Cell Broadband Engine, can be used as a computational platform to accelerate the Smith-Waterman algorithm. Results For large datasets, our implementation on the PlayStation® 3 provides a significant improvement in running time compared to other implementations such as SSEARCH, Striped Smith-Waterman and CUDA. Our implementation achieves a peak performance of up to 3,646 MCUPS. Conclusion The results from our experiments demonstrate that the PlayStation® 3 console can be used as an efficient low cost computational platform for high performance sequence alignment applications. PMID:18798993
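
    For reference, the recurrence that the Cell/GPU implementations above parallelize is the Smith-Waterman local alignment score. A compact (and deliberately slow) CPU version with a linear gap penalty is sketched below; scoring parameters are illustrative defaults.

```python
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman local alignment score with a linear gap penalty."""
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0,
                          H[i - 1, j - 1] + s,   # match / mismatch
                          H[i - 1, j] + gap,     # gap in b
                          H[i, j - 1] + gap)     # gap in a
    return H.max()

print(smith_waterman("ACACACTA", "AGCACACA"))    # expect a positive local score
```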

  7. Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fijany, A.; Milman, M.; Redding, D.

    1994-12-31

    In this paper massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near-optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling rate requirement, the implementation of this control algorithm poses a computationally challenging problem, since it demands a sustained computational throughput of the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other fast Poisson solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
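
    As background for the comparison above, a standard "fast Poisson solver" diagonalizes the discrete Laplacian with a sine transform. The sketch below shows that conventional approach on a square Dirichlet domain; it is the baseline class of method the authors compare against, not the Fast Invariant Imbedding algorithm itself.

```python
import numpy as np
from scipy.fft import dstn, idstn

def poisson_dirichlet(f, h):
    """Solve -Laplace(u) = f on a square with zero Dirichlet boundary values.

    f : (n, n) right-hand side at interior grid points with spacing h.
    The 5-point Laplacian is diagonalized by a type-I discrete sine transform.
    """
    n = f.shape[0]
    k = np.arange(1, n + 1)
    lam = (2.0 - 2.0 * np.cos(np.pi * k / (n + 1))) / h**2   # 1-D eigenvalues
    f_hat = dstn(f, type=1)
    u_hat = f_hat / (lam[:, None] + lam[None, :])
    return idstn(u_hat, type=1)

# quick check against u(x, y) = sin(pi x) sin(pi y) on the unit square
n = 63
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = 2.0 * np.pi**2 * u_exact
u = poisson_dirichlet(f, h)
print(np.max(np.abs(u - u_exact)))    # small discretization error
```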

  8. Utilization of computer technology by science teachers in public high schools and the impact of standardized testing

    NASA Astrophysics Data System (ADS)

    Priest, Richard Harding

    A significant percentage of high school science teachers are not using computers to teach their students or prepare them for standardized testing. A survey of high school science teachers was conducted to determine how they are having students use computers in the classroom, why science teachers are not using computers in the classroom, which variables were relevant to their not using computers, and what the effects of standardized testing are on the use of technology in the high school science classroom. A self-administered questionnaire was developed to measure these aspects of computer integration and demographic information. A follow-up telephone interview survey of a portion of the original sample was conducted in order to clarify questions, correct misunderstandings, and draw out more holistic descriptions from the subjects. The primary method used to analyze the quantitative data was frequency distributions. Multiple regression analysis was used to investigate the relationships between the barriers and facilitators and the dimensions of instructional use, frequency, and importance of the use of computers. All high school science teachers in a large urban/suburban school district were sent surveys. A response rate of 58% resulted from two mailings of the survey. It was found that contributing factors to why science teachers do not use computers included too few up-to-date computers in their classrooms and other educational commitments and duties that do not leave them enough time to prepare lessons that include technology. While a high percentage of science teachers thought their school and district administrations were supportive of technology, they also believed that more in-service technology training and follow-up activities to support that training are needed and that more software needs to be created. The majority of the science teachers do not use the computer to help students prepare for standardized tests because they believe they can prepare students more efficiently without a computer. Nearly half of the teachers, however, gave lack of time to prepare instructional materials and lack of a means to project a computer image to the whole class as reasons they do not use computers. A significant percentage thought science standardized testing was having a negative effect on computer use.

  9. Speech recognition for embedded automatic positioner for laparoscope

    NASA Astrophysics Data System (ADS)

    Chen, Xiaodong; Yin, Qingyun; Wang, Yi; Yu, Daoyin

    2014-07-01

    In this paper a novel speech recognition methodology based on the Hidden Markov Model (HMM) is proposed for an embedded Automatic Positioner for Laparoscope (APL), which uses a fixed-point ARM processor as its core. The APL system is designed to assist the doctor in laparoscopic surgery by implementing the doctor's vocal control of the laparoscope. Real-time response to voice commands calls for an efficient speech recognition algorithm in the APL. In order to reduce the computation cost without significant loss in recognition accuracy, both arithmetic and algorithmic optimizations are applied in the presented method. First, relying mostly on arithmetic optimizations, a fixed-point front end for speech feature analysis is built to match the ARM processor's characteristics. Then a fast likelihood computation algorithm is used to reduce the computational complexity of the HMM-based recognition algorithm. The experimental results show that the method keeps the recognition time within 0.5 s while the accuracy remains higher than 99%, demonstrating its ability to achieve real-time vocal control of the APL.
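
    The abstract does not detail the fast likelihood computation, so the sketch below shows only the standard log-domain Viterbi decode used in HMM recognizers; working with log probabilities turns multiplications into additions, which is one reason fixed-point arithmetic on an ARM core is viable. All names and shapes are generic assumptions.

```python
import numpy as np

def viterbi_log(log_A, log_B, log_pi):
    """Log-domain Viterbi decoding for an HMM.

    log_A  : (S, S) log transition matrix
    log_B  : (T, S) log emission likelihoods of the observed frames
    log_pi : (S,)  log initial state probabilities
    Returns the most likely state path and its log score.
    """
    T, S = log_B.shape
    delta = log_pi + log_B[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A       # scores[prev, cur]
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):             # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1], float(delta.max())
```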

  10. Fast computation of quadrupole and hexadecapole approximations in microlensing with a single point-source evaluation

    NASA Astrophysics Data System (ADS)

    Cassan, Arnaud

    2017-07-01

    The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the satellites Spitzer and Kepler/K2. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method that computes the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source python codes.

  11. Vibration extraction based on fast NCC algorithm and high-speed camera.

    PubMed

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to complete vibration measurements in real time and to avoid the added mass introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera that can capture images at up to 1000 frames per second. In order to process the captured images in the computer, a normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
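
    The score the tracker maximizes is the normalized cross-correlation between the template and each candidate image patch. A minimal, exhaustive-search numpy sketch is given below for reference; the paper's contribution is a faster local search around the previous position, and the subpixel refinement is omitted here.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation of two equally sized grayscale patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

def match(image, template):
    """Exhaustive template matching: return the (row, col) with the best NCC score."""
    th, tw = template.shape
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            score = ncc(image[r:r + th, c:c + tw], template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```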

  12. A computational approach to estimate postmortem interval using opacity development of eye for human subjects.

    PubMed

    Cantürk, İsmail; Özyılmaz, Lale

    2018-07-01

    This paper presents an approach to postmortem interval (PMI) estimation, which is a very debated and complicated area of forensic science. Most of the reported methods to determine PMI in the literature are not practical because of the need for skilled persons and significant amounts of time, and give unsatisfactory results. Additionally, the error margin of PMI estimation increases proportionally with elapsed time after death. It is crucial to develop practical PMI estimation methods for forensic science. In this study, a computational system is developed to determine the PMI of human subjects by investigating postmortem opacity development of the eye. Relevant features from the eye images were extracted using image processing techniques to reflect gradual opacity development. The features were then investigated to predict the time after death using machine learning methods. The experimental results prove that the development of opacity can be utilized as a practical computational tool to determine PMI for human subjects. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Fluctuating ideal-gas lattice Boltzmann method with fluctuation dissipation theorem for nonvanishing velocities.

    PubMed

    Kaehler, G; Wagner, A J

    2013-06-01

    Current implementations of fluctuating ideal-gas descriptions with the lattice Boltzmann methods are based on a fluctuation dissipation theorem, which, while greatly simplifying the implementation, strictly holds only for zero mean velocity and small fluctuations. We show how to derive the fluctuation dissipation theorem for all k, which was done only for k=0 in previous derivations. The consistent derivation requires, in principle, locally velocity-dependent multirelaxation time transforms. Such an implementation is computationally prohibitively expensive but, with a small computational trick, it is feasible to reproduce the correct FDT without overhead in computation time. It is then shown that the previous standard implementations perform poorly for nonvanishing mean velocity, as indicated by violations of Galilean invariance of measured structure factors. Results obtained with the method introduced here show a significant reduction of the Galilean invariance violations.

  14. Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment

    NASA Astrophysics Data System (ADS)

    Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.

    2013-12-01

    Dust storms have serious negative impacts on the environment, human health, and assets. Continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The development and application of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process. Normally, a simulation of a single dust storm event may take hours or even days to run, which seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with other geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is the key factor determining the feasibility of the parallelization. The allocation algorithm needs to carefully balance the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with an evenly distributed allocation method. Specifically: 1) In order to obtain optimized solutions, a quadratic-programming-based modeling method is proposed. This algorithm performs well with a small number of computing tasks; however, its efficiency decreases significantly as the number of subdomains and computing nodes increases. 2) To compensate for the performance decrease on large-scale tasks, a K-means-clustering-based algorithm is introduced. Instead of seeking fully optimized solutions, this method obtains relatively good feasible solutions within acceptable time; however, it may introduce imbalanced communication between nodes or node-isolated subdomains. This research shows that both algorithms have their own strengths and weaknesses for task allocation. A combination of the two algorithms is under study to obtain better performance. Keywords: Scheduling; Parallel Computing; Load Balance; Optimization; Cost Model
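
    The K-means-based allocation idea above can be illustrated simply: cluster the subdomain centers into as many groups as there are computing nodes, so geographically adjacent subdomains tend to share a node and halo exchanges stay node-local. The sketch below uses scikit-learn's KMeans as a stand-in and omits the study's computing/communication cost model and any load-balancing constraint.

```python
import numpy as np
from sklearn.cluster import KMeans

def allocate_subdomains(centers, n_nodes, seed=0):
    """Assign each subdomain (given by its lon/lat center) to a computing node.

    Clustering geographically close subdomains onto the same node keeps most
    halo exchanges node-local; it does not by itself guarantee equal load.
    """
    km = KMeans(n_clusters=n_nodes, n_init=10, random_state=seed).fit(centers)
    return km.labels_                      # labels_[i] = node index for subdomain i

# hypothetical 16 x 16 grid of subdomains split across 8 nodes
lon, lat = np.meshgrid(np.arange(16), np.arange(16))
centers = np.column_stack([lon.ravel(), lat.ravel()]).astype(float)
assignment = allocate_subdomains(centers, n_nodes=8)
```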

  15. Multi-GPU Jacobian accelerated computing for soft-field tomography.

    PubMed

    Borsic, A; Attardo, E A; Halter, R J

    2012-10-01

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15-20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical application of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20 times on a single NVIDIA S1070 GPU, and of 50 times on four GPUs, bringing the Jacobian computing time for a fine 3D mesh from 12 min to 14 s. We regard this as an important step toward gaining interactive reconstruction times in 3D imaging, particularly when coupled in the future with acceleration of the forward problem. While we demonstrate results for EIT, these results apply to any soft-field imaging modality where the Jacobian matrix is computed with the adjoint method.

  16. Multi-GPU Jacobian Accelerated Computing for Soft Field Tomography

    PubMed Central

    Borsic, A.; Attardo, E. A.; Halter, R. J.

    2012-01-01

    Background Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use Finite Element Models (FEMs) to represent the volume of interest and to solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are three-dimensional. Though the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in Electrical Impedance Tomography applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15 to 20 minutes with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Further, providing high-speed reconstructions is essential for some promising clinical applications of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In the present work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths than CPUs and better parallelism. We are able to obtain acceleration factors of 20 times on a single NVIDIA S1070 GPU, and of 50 times on 4 GPUs, bringing the Jacobian computing time for a fine 3D mesh from 12 minutes to 14 seconds. We regard this as an important step towards gaining interactive reconstruction times in 3D imaging, particularly when coupled in the future with acceleration of the forward problem. While we demonstrate results for Electrical Impedance Tomography, these results apply to any soft-field imaging modality where the Jacobian matrix is computed with the Adjoint Method. PMID:23010857
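
    The adjoint method referred to above builds each Jacobian entry from element-wise products of the gradients of a "drive" field and a "measurement" field, which is exactly the kind of dense, regular arithmetic that maps well onto GPUs. A schematic CPU sketch of that structure is given below; the sign and scaling conventions depend on the particular formulation, and this is not the authors' GPU kernel.

```python
import numpy as np

def jacobian_adjoint(grad_drive, grad_meas, volumes):
    """Adjoint-method-style Jacobian for a soft-field (e.g. EIT) forward model.

    grad_drive : (n_drive, n_elem, 3) element-wise field gradients per drive pattern
    grad_meas  : (n_meas,  n_elem, 3) element-wise field gradients per measurement pattern
    volumes    : (n_elem,) element volumes

    J[d, m, e] ~ -volume_e * grad_drive[d, e] . grad_meas[m, e]
    """
    dots = np.einsum('dek,mek->dme', grad_drive, grad_meas)
    return -dots * volumes[None, None, :]
```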

  17. Computation of shock wave/target interaction

    NASA Technical Reports Server (NTRS)

    Mark, A.; Kutler, P.

    1983-01-01

    Computational results of shock waves impinging on targets and the ensuing diffraction flowfield are presented. A number of two-dimensional cases are computed with finite difference techniques. The classical case of a shock wave/cylinder interaction is compared with shock tube data and shows the quality of the computations on a pressure-time plot. Similar results are obtained for a shock wave/rectangular body interaction. Here resolution becomes important, and the use of grid clustering techniques tends to show good agreement with experimental data. Computational results are also compared with pressure data resulting from shock impingement experiments for a complicated truck-like geometry. Here of significance are the grid generation and clustering techniques used. For these very complicated bodies, grids are generated by numerically solving a set of elliptic partial differential equations.

  18. Fast Virtual Fractional Flow Reserve Based Upon Steady-State Computational Fluid Dynamics Analysis: Results From the VIRTU-Fast Study.

    PubMed

    Morris, Paul D; Silva Soto, Daniel Alejandro; Feher, Jeroen F A; Rafiroiu, Dan; Lungu, Angela; Varma, Susheel; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2017-08-01

    Fractional flow reserve (FFR)-guided percutaneous intervention is superior to standard assessment but remains underused. The authors have developed a novel "pseudotransient" analysis protocol for computing virtual fractional flow reserve (vFFR) based upon angiographic images and steady-state computational fluid dynamics. This protocol generates vFFR results in 189 s (cf >24 h for transient analysis) using a desktop PC, with <1% error relative to that of full-transient computational fluid dynamics analysis. Sensitivity analysis demonstrated that physiological lesion significance was influenced less by coronary or lesion anatomy (33%) and more by microvascular physiology (59%). If coronary microvascular resistance can be estimated, vFFR can be accurately computed in less time than it takes to make invasive measurements.

  19. Data Acquisition Systems

    NASA Technical Reports Server (NTRS)

    1994-01-01

    In the mid-1980s, Kinetic Systems and Langley Research Center determined that high speed CAMAC (Computer Automated Measurement and Control) data acquisition systems could significantly improve Langley's ARTS (Advanced Real Time Simulation) system. The ARTS system supports flight simulation R&D, and the CAMAC equipment allowed 32 high performance simulators to be controlled by centrally located host computers. This technology broadened Kinetic Systems' capabilities and led to several commercial applications. One of them is General Atomics' fusion research program. Kinetic Systems equipment allows tokamak data to be acquired four to 15 times more rapidly. Ford Motor Company uses the same technology to control and monitor transmission testing facilities.

  20. A new approach for the calculation of response spectral density of a linear stationary random multidegree of freedom system

    NASA Astrophysics Data System (ADS)

    Sharan, A. M.; Sankar, S.; Sankar, T. S.

    1982-08-01

    A new approach for the calculation of response spectral density for a linear stationary random multidegree of freedom system is presented. The method is based on modifying the stochastic dynamic equations of the system by using a set of auxiliary variables. The response spectral density matrix obtained by using this new approach contains the spectral densities and the cross-spectral densities of the system generalized displacements and velocities. The new method requires significantly less computation time as compared to the conventional method for calculating response spectral densities. Two numerical examples are presented to compare quantitatively the computation time.
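
    For comparison, the conventional frequency-domain route that the new approach is benchmarked against computes the response spectral density through the frequency-response (receptance) matrix of the system $M\ddot{x} + C\dot{x} + Kx = F(t)$:

    ```latex
    S_{xx}(\omega) = H(\omega)\, S_{FF}(\omega)\, H^{*T}(\omega),
    \qquad
    H(\omega) = \left(K - \omega^{2} M + i\omega C\right)^{-1},
    ```

    where $S_{FF}(\omega)$ is the spectral density matrix of the stationary random excitation; evaluating the matrix inverse at every frequency line is what makes this route computationally heavy for large systems.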

  1. New core-reflector boundary conditions for transient nodal reactor calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, E.K.; Kim, C.H.; Joo, H.K.

    1995-09-01

    New core-reflector boundary conditions designed for the exclusion of the reflector region in transient nodal reactor calculations are formulated. Spatially flat frequency approximations for the temporal neutron behavior and two types of transverse leakage approximations in the reflector region are introduced to solve the transverse-integrated time-dependent one-dimensional diffusion equation and then to obtain relationships between net current and flux at the core-reflector interfaces. To examine the effectiveness of new core-reflector boundary conditions in transient nodal reactor computations, nodal expansion method (NEM) computations with and without explicit representation of the reflector are performed for Laboratorium fuer Reaktorregelung und Anlagen (LRA) boiling water reactor (BWR) and Nuclear Energy Agency Committee on Reactor Physics (NEACRP) pressurized water reactor (PWR) rod ejection kinetics benchmark problems. Good agreement between two NEM computations is demonstrated in all the important transient parameters of two benchmark problems. A significant amount of CPU time saving is also demonstrated with the boundary condition model with transverse leakage (BCMTL) approximations in the reflector region. In the three-dimensional LRA BWR, the BCMTL and the explicit reflector model computations differ by approximately 4% in transient peak power density while the BCMTL results in >40% of CPU time saving by excluding both the axial and the radial reflector regions from explicit computational nodes. In the NEACRP PWR problem, which includes six different transient cases, the largest difference is 24.4% in the transient maximum power in the one-node-per-assembly B1 transient results. This difference in the transient maximum power of the B1 case is shown to reduce to 11.7% in the four-node-per-assembly computations. As for the computing time, BCMTL is shown to reduce the CPU time >20% in all six transient cases of the NEACRP PWR.

  2. Early Sexual Intercourse: Prospective Associations with Adolescents Physical Activity and Screen Time

    PubMed Central

    Wijtzes, Anne; van de Bongardt, Daphne; van de Looij-Jansen, Petra; Bannink, Rienke; Raat, Hein

    2016-01-01

    Objectives: To assess the prospective associations of physical activity behaviors and screen time with early sexual intercourse initiation (i.e., before 15 years) in a large sample of adolescents. Methods: We used two waves of data from the Rotterdam Youth Monitor, a longitudinal study conducted in the Netherlands. The analysis sample consisted of 2,141 adolescents aged 12 to 14 years (mean age at baseline = 12.2 years, SD = 0.43). Physical activity (e.g., sports outside school), screen time (e.g., computer use), and early sexual intercourse initiation were assessed by means of self-report questionnaires. Logistic regression models were tested to assess the associations of physical activity behaviors and screen time (separately and simultaneously) with early sexual intercourse initiation, controlling for confounders (i.e., socio-demographics and substance use). Interaction effects with gender were tested to assess whether these associations differed significantly between boys and girls. Results: The only physical activity behavior that was a significant predictor of early sexual intercourse initiation was sports club membership. Adolescent boys and girls who were members of a sports club were more likely to have had early sex (OR = 2.17; 95% CI = 1.33, 3.56). Significant gender interaction effects indicated that boys who watched TV ≥2 hours/day (OR = 2.00; 95% CI = 1.08, 3.68) and girls who used the computer ≥2 hours/day (OR = 3.92; 95% CI = 1.76, 8.69) were also significantly more likely to have engaged in early sex. Conclusion: These findings have implications for professionals in general pediatric healthcare, sexual health educators, policy makers, and parents, who should be aware of these possible prospective links between sports club membership, TV watching (for boys), and computer use (for girls), and early sexual intercourse initiation. However, continued research on determinants of adolescents’ early sexual initiation is needed to further contribute to the strategies for improving adolescents’ healthy sexual development and behaviors. PMID:27513323

  3. Regularized lattice Bhatnagar-Gross-Krook model for two- and three-dimensional cavity flow simulations.

    PubMed

    Montessori, A; Falcucci, G; Prestininzi, P; La Rocca, M; Succi, S

    2014-05-01

    We investigate the accuracy and performance of the regularized version of the single-relaxation-time lattice Boltzmann equation for the case of two- and three-dimensional lid-driven cavities. The regularized version is shown to provide a significant gain in stability over the standard single-relaxation time, at a moderate computational overhead.

  4. Early Morning Challenge: The Potential Effects of Chronobiology on Taking the Scholastic Aptitude Test.

    ERIC Educational Resources Information Center

    Callan, Roger John

    1995-01-01

    Cites research to support the notion that the time of day in which the SAT is administered has a significant adverse impact on many students taking the test. Suggests that changes in testing procedures (making tests available via computer at any time of the day or year) will serve students. (RS)

  5. 2-D Model for Normal and Sickle Cell Blood Microcirculation

    NASA Astrophysics Data System (ADS)

    Tekleab, Yonatan; Harris, Wesley

    2011-11-01

    Sickle cell disease (SCD) is a genetic disorder that alters the red blood cell (RBC) structure and function such that hemoglobin (Hb) cannot effectively bind and release oxygen. Previous computational models have been designed to study the microcirculation for insight into blood disorders such as SCD. Our novel 2-D computational model represents a fast, time-efficient method developed to analyze flow dynamics, O2 diffusion, and cell deformation in the microcirculation. The model uses a finite difference, Crank-Nicolson scheme to compute the flow and O2 concentration, and the level set computational method to advect the RBC membrane on a staggered grid. Several sets of initial and boundary conditions were tested. Simulation data indicate a few parameters to be significant in the perturbation of the blood flow and O2 concentration profiles. Specifically, the Hill coefficient, arterial O2 partial pressure, O2 partial pressure at 50% Hb saturation, and cell membrane stiffness are significant factors. Results were found to be consistent with those of Le Floch [2010] and Secomb [2006].
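
    For readers unfamiliar with the time stepping used above, the Crank-Nicolson scheme advances a diffusion-type equation $\partial_t u = \mathcal{L}u$ (here standing in for the flow/O2 transport steps) by averaging the spatial operator between the old and new time levels:

    ```latex
    \frac{u^{n+1} - u^{n}}{\Delta t}
      = \tfrac{1}{2}\left(\mathcal{L}\,u^{n+1} + \mathcal{L}\,u^{n}\right),
    ```

    which is second-order accurate in time and unconditionally stable for the linear diffusion operator, at the cost of solving a linear system at each step.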

  6. Fast H.264/AVC FRExt intra coding using belief propagation.

    PubMed

    Milani, Simone

    2011-01-01

    In the H.264/AVC FRExt coder, the coding performance of Intra coding significantly surpasses that of previous still-image coding standards, like JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. The paper presents a complexity reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated adopting a belief-propagation procedure. Experimental results show that the proposed method permits saving up to 60% of the coding time required by an exhaustive rate-distortion optimization method with a negligible loss in performance. Moreover, it permits accurate control of the computational complexity, unlike other methods where the complexity depends upon the coded sequence.

  7. Simple Kinematic Pathway Approach (KPA) to Catchment-scale Travel Time and Water Age Distributions

    NASA Astrophysics Data System (ADS)

    Soltani, S. S.; Cvetkovic, V.; Destouni, G.

    2017-12-01

    The distribution of catchment-scale water travel times is strongly influenced by morphological dispersion and is partitioned between hillslope and larger, regional scales. We explore whether hillslope travel times are predictable using a simple semi-analytical "kinematic pathway approach" (KPA) that accounts for dispersion on two levels of morphological and macro-dispersion. The study gives new insights into shallow (hillslope) and deep (regional) groundwater travel times by comparing numerical simulations of travel time distributions, referred to as "dynamic model", with corresponding KPA computations for three different real catchment case studies in Sweden. KPA uses basic structural and hydrological data to compute transient water travel time (forward mode) and age (backward mode) distributions at the catchment outlet. Longitudinal and morphological dispersion components are reflected in KPA computations by assuming an effective Peclet number and topographically driven pathway length distributions, respectively. Numerical simulations of advective travel times are obtained by means of particle tracking using the fully-integrated flow model MIKE SHE. The comparison of computed cumulative distribution functions of travel times shows significant influence of morphological dispersion and groundwater recharge rate on the compatibility of the "kinematic pathway" and "dynamic" models. Zones of high recharge rate in "dynamic" models are associated with topographically driven groundwater flow paths to adjacent discharge zones, e.g. rivers and lakes, through relatively shallow pathway compartments. These zones exhibit more compatible behavior between "dynamic" and "kinematic pathway" models than the zones of low recharge rate. Interestingly, the travel time distributions of hillslope compartments remain almost unchanged with increasing recharge rates in the "dynamic" models. This robust "dynamic" model behavior suggests that flow path lengths and travel times in shallow hillslope compartments are controlled by topography, and therefore application and further development of the simple "kinematic pathway" approach is promising for their modeling.

  8. On modelling three-dimensional piezoelectric smart structures with boundary spectral element method

    NASA Astrophysics Data System (ADS)

    Zou, Fangxin; Aliabadi, M. H.

    2017-05-01

    The computational efficiency of the boundary element method in elastodynamic analysis can be significantly improved by employing high-order spectral elements for boundary discretisation. In this work, for the first time, the so-called boundary spectral element method is utilised to formulate the piezoelectric smart structures that are widely used in structural health monitoring (SHM) applications. The resultant boundary spectral element formulation has been validated by the finite element method (FEM) and physical experiments. The new formulation has demonstrated a lower demand on computational resources and a higher numerical stability than commercial FEM packages. Compared to the conventional boundary element formulation, a significant reduction in computational expense has been achieved. In summary, the boundary spectral element formulation presented in this paper provides a highly efficient and stable mathematical tool for the development of SHM applications.

  9. Implementation and evaluation of an efficient secure computation system using ‘R’ for healthcare statistics

    PubMed Central

    Chida, Koji; Morohashi, Gembu; Fuji, Hitoshi; Magata, Fumihiko; Fujimura, Akiko; Hamada, Koki; Ikarashi, Dai; Yamamoto, Ryuichi

    2014-01-01

    Background and objective: While the secondary use of medical data has gained attention, its adoption has been constrained due to protection of patient privacy. Making medical data secure by de-identification can be problematic, especially when the data concerns rare diseases. We require rigorous security management measures. Materials and methods: Using secure computation, an approach from cryptography, our system can compute various statistics over encrypted medical records without decrypting them. An issue of secure computation is that the amount of processing time required is immense. We implemented a system that securely computes healthcare statistics from the statistical computing software ‘R’ by effectively combining secret-sharing-based secure computation with original computation. Results: Testing confirmed that our system could correctly complete computation of average and unbiased variance of approximately 50 000 records of dummy insurance claim data in a little over a second. Computation including conditional expressions and/or comparison of values, for example, t test and median, could also be correctly completed in several tens of seconds to a few minutes. Discussion: If medical records are simply encrypted, the risk of leaks exists because decryption is usually required during statistical analysis. Our system possesses high-level security because medical records remain in encrypted state even during statistical analysis. Also, our system can securely compute some basic statistics with conditional expressions using ‘R’ that works interactively while secure computation protocols generally require a significant amount of processing time. Conclusions: We propose a secure statistical analysis system using ‘R’ for medical data that effectively integrates secret-sharing-based secure computation and original computation. PMID:24763677
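
    As a minimal illustration of the secret-sharing idea underlying the system (not the authors' protocol, which is multi-party and integrated with ‘R’), the toy Python sketch below splits each record into additive shares so that a sum, and hence a mean, can be computed without any single party ever holding a plaintext record; all names and the modulus are illustrative:

    ```python
    import secrets

    P = 2**61 - 1  # public prime modulus (illustrative choice)

    def share(value, n_parties=3):
        """Split an integer into n additive shares that sum to value mod P;
        any subset of fewer than n shares reveals nothing about the value."""
        shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % P)
        return shares

    def secure_sum(share_vectors):
        """Each party sums the shares it holds locally; recombining the n
        partial sums yields the total without exposing individual records."""
        partials = [sum(col) % P for col in zip(*share_vectors)]
        return sum(partials) % P

    # Toy example: the total (and hence mean) of three dummy claims.
    records = [1200, 850, 430]
    shared = [share(r) for r in records]
    assert secure_sum(shared) == sum(records)
    ```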

  10. Implementation and evaluation of an efficient secure computation system using 'R' for healthcare statistics.

    PubMed

    Chida, Koji; Morohashi, Gembu; Fuji, Hitoshi; Magata, Fumihiko; Fujimura, Akiko; Hamada, Koki; Ikarashi, Dai; Yamamoto, Ryuichi

    2014-10-01

    While the secondary use of medical data has gained attention, its adoption has been constrained due to protection of patient privacy. Making medical data secure by de-identification can be problematic, especially when the data concerns rare diseases. We require rigorous security management measures. Using secure computation, an approach from cryptography, our system can compute various statistics over encrypted medical records without decrypting them. An issue of secure computation is that the amount of processing time required is immense. We implemented a system that securely computes healthcare statistics from the statistical computing software 'R' by effectively combining secret-sharing-based secure computation with original computation. Testing confirmed that our system could correctly complete computation of average and unbiased variance of approximately 50,000 records of dummy insurance claim data in a little over a second. Computation including conditional expressions and/or comparison of values, for example, t test and median, could also be correctly completed in several tens of seconds to a few minutes. If medical records are simply encrypted, the risk of leaks exists because decryption is usually required during statistical analysis. Our system possesses high-level security because medical records remain in encrypted state even during statistical analysis. Also, our system can securely compute some basic statistics with conditional expressions using 'R' that works interactively while secure computation protocols generally require a significant amount of processing time. We propose a secure statistical analysis system using 'R' for medical data that effectively integrates secret-sharing-based secure computation and original computation. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  11. Visual cluster analysis and pattern recognition template and methods

    DOEpatents

    Osbourn, Gordon Cecil; Martinez, Rubel Francisco

    1999-01-01

    A method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques.

  12. Multimodal airway evaluation in growing patients after rapid maxillary expansion.

    PubMed

    Fastuca, R; Meneghel, M; Zecca, P A; Mangano, F; Antonello, M; Nucera, R; Caprioglio, A

    2015-06-01

    The objective of this study was to evaluate the airway volume of growing patients, combining a morphological approach using cone beam computed tomography with functional data obtained by polysomnography examination after rapid maxillary expansion (RME) treatment. Twenty-two Caucasian patients (mean age 8.3±0.9 years) undergoing rapid maxillary expansion with a Haas-type expander banded on the second deciduous upper molars were enrolled in this prospective study. Cone beam computed tomography scans and polysomnography exams were collected before placing the appliance (T0) and after 12 months (T1). Image processing with airway volume computation and analyses of oxygen saturation and the apnoea/hypopnoea index were performed. Airway volume, oxygen saturation and the apnoea/hypopnoea index underwent significant increases over time. However, no significant correlation was seen between their increases. The rapid maxillary expansion treatment induced significant increases in the total airway volume and respiratory performance. Functional respiratory parameters should be included in studies evaluating the effects of RME treatment on respiratory performance.

  13. A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models

    NASA Astrophysics Data System (ADS)

    Li, Qia; Micchelli, Charles A.; Shen, Lixin; Xu, Yuesheng

    2012-09-01

    Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss-Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm yields restored images of high quality, as the other three known algorithms do, it performs significantly better in terms of computational efficiency as measured by the CPU time consumed.
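
    For reference, the L1/TV model referred to above combines an L1 data-fidelity term, which is robust to impulsive (salt-and-pepper) noise, with a total-variation penalty:

    ```latex
    u^{\star} \in \arg\min_{u}\; \lVert u - f \rVert_{1} \;+\; \lambda\,\mathrm{TV}(u),
    ```

    where $f$ is the noisy image and $\lambda > 0$ balances fidelity against smoothness; the fixed-point equations in terms of proximity operators characterize exactly the minimizers of this objective.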

  14. Distributive, Non-destructive Real-time System and Method for Snowpack Monitoring

    NASA Technical Reports Server (NTRS)

    Frolik, Jeff (Inventor); Skalka, Christian (Inventor)

    2013-01-01

    A ground-based system that provides quasi real-time measurement and collection of snow-water equivalent (SWE) data in remote settings is provided. The disclosed invention is significantly less expensive and easier to deploy than current methods and less susceptible to terrain and snow bridging effects. Embodiments of the invention include remote data recovery solutions. Compared to current infrastructure using existing SWE technology, the disclosed invention allows more SWE sites to be installed for similar cost and effort, in a greater variety of terrain; thus, enabling data collection at improved spatial resolutions. The invention integrates a novel computational architecture with new sensor technologies. The invention's computational architecture is based on wireless sensor networks, comprised of programmable, low-cost, low-powered nodes capable of sophisticated sensor control and remote data communication. The invention also includes measuring attenuation of electromagnetic radiation, an approach that is immune to snow bridging and significantly reduces sensor footprints.

  15. Assessing the effects of manual dexterity and playing computer games on catheter-wire manipulation for inexperienced operators.

    PubMed

    Alsafi, Z; Hameed, Y; Amin, P; Shamsad, S; Raja, U; Alsafi, A; Hamady, M S

    2017-09-01

    To investigate the effect of playing computer games and manual dexterity on catheter-wire manipulation in a mechanical aortic model. Medical student volunteers filled in a preprocedure questionnaire assessing their exposure to computer games. Their manual dexterity was measured using a smartphone game. They were then shown a video clip demonstrating renal artery cannulation and were asked to reproduce this. All attempts were timed. Two-tailed Student's t-test was used to compare continuous data, while Fisher's exact test was used for categorical data. Fifty students aged 18-22 years took part in the study. Forty-six completed the task at an average of 168 seconds (range 103-301 seconds). There was no significant difference in the dexterity score or time to cannulate the renal artery between male and female students. Students who played computer games for >10 hours per week had better dexterity scores than those who did not play computer games: 9.1 versus 10.2 seconds (p=0.0237). Four of 19 students who did not play computer games failed to complete the task, while all of those who played computer games regularly completed the task (p=0.0168). Playing computer games is associated with better manual dexterity and ability to complete a basic interventional radiology task for novices. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  16. SU-D-206-02: Evaluation of Partial Storage of the System Matrix for Cone Beam Computed Tomography Using a GPU Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matenine, D; Cote, G; Mascolo-Fortin, J

    2016-06-15

    Purpose: Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersections between the photons’ trajectories and the object, also called ray-tracing or system matrix computation. This work evaluates different ways to store the system matrix, aiming to reconstruct dense image grids in reasonable time. Methods: We propose an optimized implementation of Siddon’s algorithm using graphics processing units (GPUs) with a novel data storage scheme. The algorithm computes a part of the system matrix on demand, typically for one projection angle. The proposed method was enhanced with accelerating options: storage of larger subsets of the system matrix, systematic reuse of data via geometric symmetries, an arithmetic-rich parallel code and code configuration via machine learning. It was tested on geometries mimicking a cone beam CT acquisition of a human head. To realistically assess the execution time, the ray-tracing routines were integrated into a regularized Poisson-based reconstruction algorithm. The proposed scheme was also compared to a different approach, where the system matrix is fully pre-computed and loaded at reconstruction time. Results: Fast ray-tracing of realistic acquisition geometries, which often lack spatial symmetry properties, was enabled via the proposed method. Ray-tracing interleaved with projection and backprojection operations required significant additional time. In most cases, ray-tracing was shown to use about 66% of the total reconstruction time. In absolute terms, tracing times varied from 3.6 s to 7.5 min, depending on the problem size. The presence of geometrical symmetries allowed for non-negligible ray-tracing and reconstruction time reduction. Arithmetic-rich parallel code and machine learning permitted a modest reconstruction time reduction, on the order of 1%. Conclusion: Partial system matrix storage permitted the reconstruction of larger 3D image grid sizes and larger projection datasets at the cost of additional time, when compared to the fully pre-computed approach. This work was supported in part by the Fonds de recherche du Quebec - Nature et technologies (FRQ-NT). The authors acknowledge partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council of Canada (Grant No. 432290).
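
    To make the ray-tracing step concrete, the sketch below is a minimal 2D Siddon-style routine in Python (the paper's implementation is 3D, GPU-based, and adds symmetry reuse and partial storage, none of which is shown here); it returns the pixels crossed by one ray together with the intersection lengths, i.e. one row of the system matrix computed on demand:

    ```python
    import numpy as np

    def siddon_2d(src, dst, nx, ny, dx=1.0, dy=1.0, x0=0.0, y0=0.0):
        """Pixels and chord lengths for the ray src->dst through an nx-by-ny
        grid with origin (x0, y0) and spacing (dx, dy). Illustrative only."""
        sx, sy = src
        ex, ey = dst
        rx, ry = ex - sx, ey - sy
        length = np.hypot(rx, ry)

        # Parametric values where the ray crosses every x- and y-grid line.
        ax = ((x0 + np.arange(nx + 1) * dx) - sx) / rx if rx != 0 else np.array([])
        ay = ((y0 + np.arange(ny + 1) * dy) - sy) / ry if ry != 0 else np.array([])

        # Keep crossings inside the segment, merge and sort them.
        alphas = np.concatenate(([0.0, 1.0], ax, ay))
        alphas = np.unique(alphas[(alphas >= 0.0) & (alphas <= 1.0)])

        row = []
        for a0, a1 in zip(alphas[:-1], alphas[1:]):
            amid = 0.5 * (a0 + a1)                   # midpoint identifies the pixel
            i = int((sx + amid * rx - x0) // dx)     # pixel column
            j = int((sy + amid * ry - y0) // dy)     # pixel row
            if 0 <= i < nx and 0 <= j < ny:
                row.append((j * nx + i, (a1 - a0) * length))
        return row
    ```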

  17. BigDebug: Debugging Primitives for Interactive Big Data Processing in Spark

    PubMed Central

    Gulzar, Muhammad Ali; Interlandi, Matteo; Yoo, Seunghyun; Tetali, Sai Deep; Condie, Tyson; Millstein, Todd; Kim, Miryung

    2016-01-01

    Developers use cloud computing platforms to process a large quantity of data in parallel when developing big data analytics. Debugging the massive parallel computations that run in today’s data-centers is time consuming and error-prone. To address this challenge, we design a set of interactive, real-time debugging primitives for big data processing in Apache Spark, the next generation data-intensive scalable cloud computing platform. This requires re-thinking the notion of step-through debugging in a traditional debugger such as gdb, because pausing the entire computation across distributed worker nodes causes significant delay and naively inspecting millions of records using a watchpoint is too time consuming for an end user. First, BIGDEBUG’s simulated breakpoints and on-demand watchpoints allow users to selectively examine distributed, intermediate data on the cloud with little overhead. Second, a user can also pinpoint a crash-inducing record and selectively resume relevant sub-computations after a quick fix. Third, a user can determine the root causes of errors (or delays) at the level of individual records through a fine-grained data provenance capability. Our evaluation shows that BIGDEBUG scales to terabytes and its record-level tracing incurs less than 25% overhead on average. It determines crash culprits orders of magnitude more accurately and provides up to 100% time saving compared to the baseline replay debugger. The results show that BIGDEBUG supports debugging at interactive speeds with minimal performance impact. PMID:27390389

  18. Markov reward processes

    NASA Technical Reports Server (NTRS)

    Smith, R. M.

    1991-01-01

    Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, up states may have reward rate 1 and down states may have reward rate zero associated with them. In a queueing model, the number of jobs of a certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions, e.g., distributions). The design process in the development of a computer system is an expensive and long-term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well defined real time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault tolerant computer systems.
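
    The two basic measures named above have compact expressions for a continuous-time Markov chain with generator matrix $Q$ and reward rate $r_i$ in state $i$:

    ```latex
    E[R_{\infty}] = \sum_{i} r_i\,\pi_i
      \quad\text{with}\quad \pi Q = 0,\ \ \textstyle\sum_i \pi_i = 1
      \qquad\text{(expected steady-state reward rate)},
    \qquad
    E[R(t)] = \sum_{i} r_i\,p_i(t)
      \quad\text{with}\quad \frac{dp}{dt} = p\,Q
      \qquad\text{(expected instantaneous reward rate)},
    ```

    while the accumulated reward over $[0,t]$ is $Y(t) = \int_0^t r_{X(s)}\,ds$, whose full distribution is the harder quantity discussed in the text.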

  19. Does Video Laryngoscopy Offer Advantages over Direct Laryngoscopy during Cardiopulmonary Resuscitation?

    PubMed

    Saraçoğlu, Ayten; Bezen, Olgaç; Şengül, Türker; Uğur, Egin Hüsnü; Şener, Sibel; Yüzer, Fisun

    2015-08-01

    Interruption of chest compressions should be minimized because of its negative effects on survival. This randomized, controlled, cross-over study aimed to analyze the effectiveness of Macintosh, Miller, McCoy and McGrath laryngoscopes with or without chest compressions in the scope of a simulated cardiopulmonary resuscitation scenario. The time required for successful tracheal intubation, number of attempts, dental trauma severity and the need for optimization manoeuvres were recorded during cardiopulmonary resuscitation with and without chest compressions. Participants were also asked about their experience with computer games during the last 10 years, and this was recorded. The McCoy laryngoscope yielded the shortest time for successful tracheal intubation both with and without chest compressions. During the use of the McCoy laryngoscope, fewer tracheal intubation attempts, a lower incidence of dental trauma and lower visual analogue scale scores on the ease of intubation were recorded. Participants who were experienced computer game players achieved successful tracheal intubation with the Macintosh, McCoy and McGrath laryngoscopes in a significantly shorter time during resuscitation without chest compressions. Dental trauma incidence and the number of tracheal intubation attempts did not differ significantly between the four laryngoscopes in relation to the rate of playing computer games. McGrath video laryngoscopes do not appear to have advantages over direct laryngoscopes for securing a smooth and successful tracheal intubation during rhythmic chest compressions. We believe that as the McCoy laryngoscope provided tracheal intubation in a shorter time and with fewer attempts, this laryngoscope may increase the success rate of resuscitation.

  20. Seasonal Variations of Stratospheric Age Spectra in GEOSCCM

    NASA Technical Reports Server (NTRS)

    Li, Feng; Waugh, Darryn; Douglass, Anne R.; Newman, Paul A.; Pawson, Steven; Stolarski, Richard S.; Strahan, Susan E.; Nielsen, J. Eric

    2011-01-01

    There are many pathways for an air parcel to travel from the troposphere to the stratosphere, each of which takes a different time. The distribution of all the possible transit times, i.e. the stratospheric age spectrum, contains important information on transport characteristics. However, it is computationally very expensive to compute seasonally varying age spectra, and previous studies have focused mainly on the annual mean properties of the age spectra. To date our knowledge of the seasonality of the stratospheric age spectra is very limited. In this study we investigate the seasonal variations of the stratospheric age spectra in the Goddard Earth Observing System Chemistry Climate Model (GEOSCCM). We introduce a method to significantly reduce the computational cost for calculating seasonally dependent age spectra. Our simulations show that stratospheric age spectra in GEOSCCM have strong seasonal cycles and the seasonal cycles change with latitude and height. In the lower stratosphere extratropics, the average transit times and the most probable transit times in the winter/early spring spectra are more than twice as old as those in the summer/early fall spectra. But the seasonal cycle in the subtropical lower stratosphere is nearly out of phase with that in the extratropics. In the middle and upper stratosphere, significant seasonal variations occur in the subtropics. The spectral shapes also show dramatic seasonal change, especially at high latitudes. These seasonal variations reflect the seasonal evolution of the slow Brewer-Dobson circulation (with timescale of years) and the fast isentropic mixing (with timescale of days to months).

  1. On a fast calculation of structure factors at a subatomic resolution.

    PubMed

    Afonine, P V; Urzhumtsev, A

    2004-01-01

    In the last decade, the progress of protein crystallography allowed several protein structures to be solved at a resolution higher than 0.9 Å. Such studies provide researchers with important new information reflecting very fine structural details. The signal from these details is very weak with respect to that corresponding to the whole structure. Its analysis requires high-quality data, which previously were available only for crystals of small molecules, and a high accuracy of calculations. The calculation of structure factors using direct formulae, traditional for 'small-molecule' crystallography, allows a relatively simple accuracy control. For macromolecular crystals, diffraction data sets at a subatomic resolution contain hundreds of thousands of reflections, and the number of parameters used to describe the corresponding models may reach the same order. Therefore, the direct way of calculating structure factors becomes very time consuming when applied to large molecules. These problems of high accuracy and computational efficiency require a re-examination of computer tools and algorithms. The calculation of model structure factors through an intermediate generation of an electron density [Sayre (1951). Acta Cryst. 4, 362-367; Ten Eyck (1977). Acta Cryst. A33, 486-492] may be much more computationally efficient, but contains some parameters (grid step, 'effective' atom radii etc.) whose influence on the accuracy of the calculation is not straightforward. At the same time, the choice of parameters within safety margins that largely ensure a sufficient accuracy may result in a significant loss of CPU time, making it close to the time for the direct-formulae calculations. The impact of the different parameters on the computational efficiency of structure-factor calculation is studied. It is shown that an appropriate choice of these parameters allows the structure factors to be obtained with a high accuracy and in a significantly shorter time than that required when using the direct formulae. Practical algorithms for the optimal choice of the parameters are suggested.
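
    The two routes compared in the paper can be summarized as follows (standard crystallographic relations, not equations quoted from the paper):

    ```latex
    % Direct summation over the N atoms of the model (accurate but slow):
    F(\mathbf{h}) = \sum_{j=1}^{N} f_j(\mathbf{h})\,
                    \exp\!\left(2\pi i\,\mathbf{h}\cdot\mathbf{x}_j\right),
    % versus the density route: sample the model electron density on a grid
    % and obtain all structure factors at once by FFT,
    F(\mathbf{h}) \approx \mathcal{F}\!\left[\rho_{\mathrm{grid}}\right](\mathbf{h}),
    ```

    where $f_j$ folds in the atomic scattering factor and displacement parameters; the grid step and 'effective' atom radii mentioned above govern how closely the FFT route reproduces the direct sum.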

  2. Time-frequency analysis of band-limited EEG with BMFLC and Kalman filter for BCI applications

    PubMed Central

    2013-01-01

    Background: Time-frequency analysis of the electroencephalogram (EEG) during different mental tasks has received significant attention. As EEG is non-stationary, time-frequency analysis is essential to analyze brain states during different mental tasks. Further, the time-frequency information of the EEG signal can be used as a feature for classification in brain-computer interface (BCI) applications. Methods: To accurately model the EEG, the band-limited multiple Fourier linear combiner (BMFLC), a linear combination of truncated multiple Fourier series models, is employed. A state-space model for BMFLC in combination with a Kalman filter/smoother is developed to obtain accurate adaptive estimation. By virtue of construction, BMFLC with Kalman filter/smoother provides accurate time-frequency decomposition of the band-limited signal. Results: The proposed method is computationally fast and is suitable for real-time BCI applications. To evaluate the proposed algorithm, a comparison with the short-time Fourier transform (STFT) and continuous wavelet transform (CWT) for both synthesized and real EEG data is performed in this paper. The proposed method is applied to BCI Competition data IV for ERD detection in comparison with existing methods. Conclusions: Results show that the proposed algorithm can provide optimal time-frequency resolution as compared to STFT and CWT. For ERD detection, BMFLC-KF outperforms STFT and BMFLC-KS in real-time applicability with low computational requirements. PMID:24274109
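
    As a rough sketch of the model class (my paraphrase of the BMFLC construction, not the paper's exact notation), the band-limited signal is written as a fixed bank of sinusoids spanning the band of interest with slowly varying weights, which turns time-frequency estimation into linear state estimation:

    ```latex
    y_k = \sum_{r=1}^{n}\left[a_{r,k}\sin(\omega_r t_k) + b_{r,k}\cos(\omega_r t_k)\right],
    \qquad
    x_{k+1} = x_k + w_k,\quad y_k = h_k^{T} x_k + v_k,
    ```

    where the state $x_k$ stacks the weights $\{a_{r,k}, b_{r,k}\}$, $h_k$ collects the sines and cosines evaluated at $t_k$, and the Kalman filter/smoother estimates $x_k$; the per-frequency weight magnitudes then give the time-frequency decomposition.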

  3. Molecular Sticker Model Stimulation on Silicon for a Maximum Clique Problem

    PubMed Central

    Ning, Jianguo; Li, Yanmei; Yu, Wen

    2015-01-01

    Molecular computers (also called DNA computers), as an alternative to traditional electronic computers, are smaller in size but more energy efficient, and have massive parallel processing capacity. However, DNA computers may not outperform electronic computers owing to their higher error rates and some limitations of the biological laboratory. The stickers model, as a typical DNA-based computer, is computationally complete and universal, and can be viewed as a bit-vertically operating machine. This makes it attractive for silicon implementation. Inspired by the information processing method on the stickers computer, we propose a novel parallel computing model called DEM (DNA Electronic Computing Model) on System-on-a-Programmable-Chip (SOPC) architecture. Except for the significant difference in the computing medium—transistor chips rather than bio-molecules—the DEM works similarly to DNA computers in immense parallel information processing. Additionally, a plasma display panel (PDP) is used to show the change of solutions, and helps us directly see the distribution of assignments. The feasibility of the DEM is tested by applying it to compute a maximum clique problem (MCP) with eight vertices. Owing to the limited computing sources on SOPC architecture, the DEM could solve moderate-size problems in polynomial time. PMID:26075867

  4. Real-time multiple human perception with color-depth cameras on a mobile robot.

    PubMed

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce the novel information concept, depth of interest, which we use to identify candidates for detection, and that avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), human-object and human-human interaction. We conclude that, by incorporating depth information and using modern techniques in new ways, we are able to create an accurate system for real-time 3-D perception of humans by a mobile robot.

  5. Distributed Computing Architecture for Image-Based Wavefront Sensing and 2 D FFTs

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan

    2006-01-01

    Image-based wavefront sensing (WFS) provides significant advantages over interferometer-based wavefront sensors, such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore specialized high-performance computing architectures are required in applications utilizing the image-based approach. The development and testing of these high-performance computing architectures are essential to such missions as the James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and Spherical Primary Optical Telescope (SPOT). The development of these specialized computing architectures requires numerous two-dimensional Fourier transforms, which necessitate an all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented, with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis will be presented. The solutions offered could be applied to other all-to-all communication and scientifically computationally complex problems.
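
    The all-to-all communication arises because a 2-D FFT is two passes of 1-D FFTs with a transpose in between; when the rows are distributed across nodes, that transpose is a full data exchange. The toy NumPy sketch below shows the row-column structure only (the transpose here is just an in-memory axis swap, not a network operation):

    ```python
    import numpy as np

    def fft2_row_column(a):
        """2-D FFT as two passes of 1-D FFTs. On a distributed machine each
        node holds a block of rows; the step between the two passes is the
        all-to-all transpose discussed above."""
        rows = np.fft.fft(a, axis=1)      # pass 1: transform the local rows
        return np.fft.fft(rows, axis=0)   # pass 2: transform the columns

    a = np.random.rand(64, 64)
    assert np.allclose(fft2_row_column(a), np.fft.fft2(a))
    ```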

  6. Computing quantum discord is NP-complete

    NASA Astrophysics Data System (ADS)

    Huang, Yichen

    2014-03-01

    We study the computational complexity of quantum discord (a measure of quantum correlation beyond entanglement), and prove that computing quantum discord is NP-complete. Therefore, quantum discord is computationally intractable: the running time of any algorithm for computing quantum discord is believed to grow exponentially with the dimension of the Hilbert space so that computing quantum discord in a quantum system of moderate size is not possible in practice. As by-products, some entanglement measures (namely entanglement cost, entanglement of formation, relative entropy of entanglement, squashed entanglement, classical squashed entanglement, conditional entanglement of mutual information, and broadcast regularization of mutual information) and constrained Holevo capacity are NP-hard/NP-complete to compute. These complexity-theoretic results are directly applicable in common randomness distillation, quantum state merging, entanglement distillation, superdense coding, and quantum teleportation; they may offer significant insights into quantum information processing. Moreover, we prove the NP-completeness of two typical problems: linear optimization over classical states and detecting classical states in a convex set, providing evidence that working with classical states is generically computationally intractable.

  7. Precision of computer-assisted core decompression drilling of the femoral head.

    PubMed

    Beckmann, J; Goetz, J; Baethis, H; Kalteis, T; Grifka, J; Perlick, L

    2006-08-01

    Osteonecrosis of the femoral head is a local destructive disease with progression into devastating stages. Left untreated, it mostly leads to severe secondary osteoarthrosis and early endoprosthetic joint replacement. Core decompression by exact drilling into the ischemic areas can be performed in early stages according to Ficat or ARCO. Computer-aided surgery might enhance the precision of the drilling and lower the radiation exposure time of both staff and patients. The aim of this study was to evaluate the precision of the fluoroscopically based VectorVision navigation system in an in vitro model. Thirty sawbones were prepared with a defect filled with a radiopaque gypsum sphere mimicking the osteonecrosis. Twenty sawbones were drilled under the guidance of the intraoperative navigation system VectorVision (BrainLAB, Munich, Germany) and 10 sawbones by fluoroscopic control only. No gypsum sphere was missed. There was a statistically significant difference regarding the three-dimensional deviation (Euclidean norm) as well as the maximum deviation in the x-, y- or z-direction (maximum norm) from the desired mid-point of the lesion, with a mean of 0.51 and 0.4 mm in the navigated group and 1.1 and 0.88 mm in the control group, respectively. Furthermore, a significant difference was found in the number of drilling corrections as well as the radiation time needed: no second drilling or correction of drilling direction was necessary in the navigated group, compared to 1.4 in the control group. The radiation time needed was less than 1 s, compared to 3.1 s, respectively. The fluoroscopy-based VectorVision navigation system demonstrates that computer-guided drilling is highly feasible, with a clear reduction of radiation exposure time, and can therefore be integrated into clinical routine. The additional time needed is acceptable given the simultaneous reduction of radiation time.

  8. Improving the Computational Effort of Set-Inversion-Based Prandial Insulin Delivery for Its Integration in Insulin Pumps

    PubMed Central

    León-Vargas, Fabian; Calm, Remei; Bondia, Jorge; Vehí, Josep

    2012-01-01

    Objective: Set-inversion-based prandial insulin delivery is a new model-based bolus advisor for postprandial glucose control in type 1 diabetes mellitus (T1DM). It automatically coordinates the values of basal–bolus insulin to be infused during the postprandial period so as to achieve some predefined control objectives. However, the method requires an excessive computation time to compute the solution set of feasible insulin profiles, which impedes its integration into an insulin pump. In this work, a new algorithm is presented, which reduces computation time significantly and enables the integration of this new bolus advisor into current processing features of smart insulin pumps. Methods: A new strategy was implemented that focused on finding the combined basal–bolus solution of interest rather than an extensive search of the feasible set of solutions. Analysis of interval simulations, inclusion of physiological assumptions, and search domain contractions were used. Data from six real patients with T1DM were used to compare the performance between the optimized and the conventional computations. Results: In all cases, the optimized version yielded the basal–bolus combination recommended by the conventional method and in only 0.032% of the computation time. Simulations show that the mean number of iterations for the optimized computation requires approximately 3.59 s at 20 MHz processing power, in line with current features of smart pumps. Conclusions: A computationally efficient method for basal–bolus coordination in postprandial glucose control has been presented and tested. The results indicate that an embedded algorithm within smart insulin pumps is now feasible. Nonetheless, we acknowledge that a clinical trial will be needed in order to justify this claim. PMID:23294789

  9. Application of the MacCormack scheme to overland flow routing for high-spatial resolution distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia

    2018-03-01

    Although process-based distributed hydrological models (PDHMs) have been evolving rapidly over the last few decades, their extensive application is still challenged by the computational expense. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial-resolution PDHM, i.e., the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full version outperforms the semi version in terms of simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented into DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model were assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack can be reduced to several dozen minutes for a simulation period of three months (in contrast with one and a half days by the original DHSVM model) without noticeable sacrifice of accuracy. The MacCormack scheme proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs for watershed routing to either significantly improve their computational efficiency or to make kinematic wave routing for high-resolution modeling computationally feasible.
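
    For reference, the kinematic wave equation being solved for overland flow and one standard form of the two-step MacCormack scheme (the 'semi' variant implemented in DHSVM-MacCormack differs in detail) are:

    ```latex
    % Kinematic wave: \partial_t h + \partial_x q = r_e, with a depth-discharge
    % relation such as q = \alpha h^m.
    \text{predictor:}\quad
    h_i^{*} = h_i^{n} - \frac{\Delta t}{\Delta x}\left(q_{i+1}^{n} - q_{i}^{n}\right)
              + \Delta t\, r_{e,i}^{n},
    \qquad
    \text{corrector:}\quad
    h_i^{n+1} = \tfrac{1}{2}\left[h_i^{n} + h_i^{*}
              - \frac{\Delta t}{\Delta x}\left(q_{i}^{*} - q_{i-1}^{*}\right)
              + \Delta t\, r_{e,i}^{*}\right],
    ```

    alternating forward and backward differences between the two steps, which yields second-order accuracy while remaining explicit.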

  10. Real-time computer treatment of THz passive device images with the high image quality

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate a real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not limited to passive THz devices: it can be applied to any such device, as well as to active THz imaging systems. We applied our code to the computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that computer processing of images produced by different companies usually requires different spatial filters. The performance of the current version of the computer code is greater than one image per second for a THz image having more than 5000 pixels and 24-bit number representation. Processing of a single THz image produces about 20 images simultaneously, corresponding to various spatial filters. The computer code allows increasing the number of pixels of processed images without noticeable reduction of image quality. The performance of the computer code can be increased many times using parallel algorithms for processing the image. We developed original spatial filters which allow one to see objects with sizes less than 2 cm. The imagery is produced by passive THz imaging devices which capture images of objects hidden under opaque clothes. For images with high noise we developed an approach which suppresses the noise during computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of liquid explosives, ordinary explosives, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects, and it is a very promising solution for the security problem.

  11. [Sedentary behaviour of 13-year-olds and its association with selected health behaviours, parenting practices and body mass].

    PubMed

    Jodkowska, Maria; Tabak, Izabela; Oblacińska, Anna; Stalmach, Magdalena

    2013-01-01

    The aims were: 1. to estimate the time spent in sedentary behaviour (watching TV, using the computer, doing homework); 2. to assess the link between the total time spent watching TV, using the computer and doing homework, and dietary habits, physical activity, parental practices and body mass. A cross-sectional study was conducted in Poland in 2008 among 13-year-olds (n=600). They self-reported their time spent on TV viewing, computer use and homework. Their dietary behaviours, physical activity (MVPA) and parenting practices were also self-reported. Height and weight were measured by school nurses. Descriptive statistics and correlation were used in this analysis. The mean time spent watching television on school days was 2.3 hours for girls and 2.2 hours for boys. Boys spent significantly more time using the computer than girls (1.8 vs. 1.5 hours), while girls spent longer doing homework (1.7 vs. 1.3 hours). Mean screen time was about 4 hours on school days and about 6 hours during the weekend, and was significantly longer for boys on weekdays. Screen time was positively associated with intake of sweets, chips, soft drinks and fast food and with eating meals while watching TV, and negatively with regularity of meals and parental supervision. There was no correlation between screen time and physical activity or body mass. Sedentary behaviours and physical activity are not competing behaviours in Polish teenagers, but their relationship with unhealthy dietary patterns may lead to the development of obesity. Good parental practices, including both mother's and father's supervision, seem to be crucial for limiting screen time in children. Parents should become aware that monitoring their children's lifestyle is a crucial element of health education for the prevention of lifestyle diseases. This is a task for both healthcare workers and educational staff.

  12. Computer use at work is associated with self-reported depressive and anxiety disorder.

    PubMed

    Kim, Taeshik; Kang, Mo-Yeol; Yoo, Min-Sang; Lee, Dongwook; Hong, Yun-Chul

    2016-01-01

    With the development of technology, extensive use of computers in the workplace is prevalent and increases efficiency. However, computer users are facing new harmful working conditions with high workloads and longer hours. This study aimed to investigate the association between computer use at work and self-reported depressive and anxiety disorder (DAD) in a nationally representative sample of South Korean workers. This cross-sectional study was based on the third Korean Working Conditions Survey (2011), and 48,850 workers were analyzed. Information about computer use and DAD was obtained from a self-administered questionnaire. We investigated the relation between computer use at work and DAD using logistic regression. The 12-month prevalence of DAD in computer-using workers was 1.46 %. After adjustment for socio-demographic factors, the odds ratio for DAD was higher in workers using computers more than 75 % of their workday (OR 1.69, 95 % CI 1.30-2.20) than in workers using computers less than 50 % of their shift. After stratifying by working hours, computer use for over 75 % of the work time was significantly associated with increased odds of DAD in 20-39, 41-50, 51-60, and over 60 working hours per week. After stratifying by occupation, education, and job status, computer use for more than 75 % of the work time was related with higher odds of DAD in sales and service workers, those with high school and college education, and those who were self-employed and employers. A high proportion of computer use at work may be associated with depressive and anxiety disorder. This finding suggests the necessity of a work guideline to help the workers suffering from high computer use at work.

  13. Mathematical Description of Complex Chemical Kinetics and Application to CFD Modeling Codes

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1993-01-01

    A major effort in combustion research at the present time is devoted to the theoretical modeling of practical combustion systems. These include turbojet and ramjet air-breathing engines as well as ground-based gas-turbine power generating systems. The ability to use computational modeling extensively in designing these products not only saves time and money, but also helps designers meet the quite rigorous environmental standards that have been imposed on all combustion devices. The goal is to combine the very complex solution of the Navier-Stokes flow equations with realistic turbulence and heat-release models into a single computer code. Such a computational fluid-dynamic (CFD) code simulates the coupling of fluid mechanics with the chemistry of combustion to describe the practical devices. This paper will focus on the task of developing a simplified chemical model which can predict realistic heat-release rates as well as species composition profiles, and is also computationally rapid. We first discuss the mathematical techniques used to describe a complex, multistep fuel oxidation chemical reaction and develop a detailed mechanism for the process. We then show how this mechanism may be reduced and simplified to give an approximate model which adequately predicts heat release rates and a limited number of species composition profiles, but is computationally much faster than the original one. Only such a model can be incorporated into a CFD code without adding significantly to long computation times. Finally, we present some of the recent advances in the development of these simplified chemical mechanisms.

  15. Detection and characterization of Budd-Chiari syndrome with inferior vena cava obstruction: Comparison of fixed and flexible delayed scan time of computed tomography venography.

    PubMed

    Zhou, Peng-Li; Wu, Gang; Han, Xin-Wei; Bi, Yong-Hua; Zhang, Wen-Guang; Wu, Zheng-Yang

    2017-06-01

    To compare the results of computed tomography venography (CTV) with a fixed and a flexible delayed scan time for Budd-Chiari syndrome (BCS) with inferior vena cava (IVC) obstruction. A total of 209 consecutive BCS patients with IVC obstruction underwent either a CTV with a fixed delayed scan time of 180 s (n=87) or a flexible delayed scan time for good image quality according to IVC blood flow in color Doppler ultrasonography (n=122). The IVC blood flow velocity was measured using a color Doppler ultrasound prior to CT scan. Image quality was classified as either good, moderate, or poor. Image quality, surrounding structures and the morphology of the IVC obstruction were compared between the two groups using a χ²-test or paired or unpaired t-tests as appropriate. Inter-observer agreement was assessed using Kappa statistics. There was no significant difference in IVC blood flow velocity between the two groups. Overall image quality, surrounding structures and IVC obstruction morphology delineation on the flexible delayed scan time of CTV images were rated better relative to those obtained by fixed delayed scan time of CTV images (p<0.001). Evaluation of CTV data sets was significantly facilitated with flexible delayed scan time of CTV. There were no significant differences in Kappa statistics between Group A and Group B. The flexible delayed scan time of CTV was associated with better detection and more reliable characterization of BCS with IVC obstruction compared to a fixed delayed scan time. Copyright © 2017 Elsevier B.V. All rights reserved.
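
    For readers less familiar with the kappa statistic used above for inter-observer agreement, the following minimal sketch shows one common way to compute Cohen's kappa in Python; the ratings are made up for illustration and are not the study's data.

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical image-quality ratings (good/moderate/poor) from two observers.
        observer_1 = ["good", "good", "moderate", "poor", "good", "moderate", "good", "poor"]
        observer_2 = ["good", "moderate", "moderate", "poor", "good", "good", "good", "poor"]

        # Cohen's kappa corrects raw percent agreement for agreement expected by chance.
        kappa = cohen_kappa_score(observer_1, observer_2)
        print(f"Cohen's kappa: {kappa:.2f}")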

  16. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    PubMed

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5 %. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences, by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine was a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
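
    The analytical performance model itself is not given in the abstract. As a hedged illustration of the general idea, that acceleration approaches the GPU count only when computation dominates data transfer, here is a toy Amdahl-style estimate with invented fractions; it is not the paper's model.

        def estimated_speedup(n_gpus, compute_fraction, transfer_fraction):
            """Toy model: the compute portion scales with the number of GPUs, while
            data transfers over the server interconnect do not. Both fractions are
            shares of the single-GPU runtime and should sum to 1."""
            return 1.0 / (transfer_fraction + compute_fraction / n_gpus)

        # Illustrative numbers only (not the paper's measurements): with 14 GPUs and a
        # workload that is 95% computation, the speedup approaches but does not reach 14.
        for gpus in (2, 4, 8, 14):
            print(gpus, round(estimated_speedup(gpus, compute_fraction=0.95,
                                                transfer_fraction=0.05), 1))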

  17. Impact of IQ, computer-gaming skills, general dexterity, and laparoscopic experience on performance with the da Vinci surgical system.

    PubMed

    Hagen, Monika E; Wagner, Oliver J; Inan, Ihsan; Morel, Philippe

    2009-09-01

    Due to improved ergonomics and dexterity, robotic surgery is promoted as being easily performed by surgeons with no special skills necessary. We tested this hypothesis by measuring IQ elements, computer gaming skills, general dexterity with chopsticks, and evaluating laparoscopic experience in correlation to performance ability with the da Vinci robot. Thirty-four individuals were tested for robotic dexterity, IQ elements, computer-gaming skills and general dexterity. Eighteen surgically inexperienced and 16 laparoscopically trained surgeons were included. Each individual performed three different tasks with the da Vinci surgical system and their times were recorded. An IQ test (elements: logical thinking, 3D imagination and technical understanding) was completed by each participant. Computer skills were tested with a simple computer game (hand-eye coordination) and general dexterity was evaluated by the ability to use chopsticks. We found no correlation between logical thinking, 3D imagination and robotic skills. Both computer gaming and general dexterity showed a slight but non-significant improvement in performance with the da Vinci robot (p > 0.05). A significant correlation between robotic skills, technical understanding and laparoscopic experience was observed (p < 0.05). The data support the conclusion that there are no significant correlations between robotic performance and logical thinking, 3D understanding, computer gaming skills and general dexterity. A correlation between robotic skills and technical understanding may exist. Laparoscopic experience seems to be the strongest predictor of performance with the da Vinci surgical system. Generally, it appears difficult to determine non-surgical predictors for robotic surgery.

  18. Impact of isotropic constitutive descriptions on the predicted peak wall stress in abdominal aortic aneurysms.

    PubMed

    Man, V; Polzer, S; Gasser, T C; Novotny, T; Bursa, J

    2018-03-01

    Biomechanics-based assessment of Abdominal Aortic Aneurysm (AAA) rupture risk has gained considerable scientific and clinical momentum. However, computation of peak wall stress (PWS) using state-of-the-art finite element models is time demanding. This study investigates which features of the constitutive description of the AAA wall are decisive for achieving acceptable stress predictions. The influence of five different isotropic constitutive descriptions of the AAA wall is tested; the models reflect realistic non-linear, artificially stiff non-linear, or artificially stiff pseudo-linear behaviour. The influence of the AAA wall model is tested on idealized (n=4) and patient-specific (n=16) AAA geometries. Wall stress computations consider a (hypothetical) load-free configuration and include residual stresses homogenizing the stresses across the wall. Wall stress differences amongst the different descriptions were statistically analyzed. When the qualitatively similar non-linear response of the AAA wall, with low initial stiffness and subsequent strain stiffening, was taken into consideration, wall stress (and PWS) predictions did not change significantly. Keeping this non-linear feature when using an artificially stiff wall can save up to 30% of the computational time without significant change in PWS. In contrast, a stiff pseudo-linear elastic model may underestimate the PWS and is not reliable for AAA wall stress computations. Copyright © 2018 IPEM. Published by Elsevier Ltd. All rights reserved.

  19. A transfer function type of simplified electrochemical model with modified boundary conditions and Padé approximation for Li-ion battery: Part 1. lithium concentration estimation

    NASA Astrophysics Data System (ADS)

    Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi

    2017-06-01

    To guarantee the safety, high efficiency and long lifetime of a lithium-ion battery, an advanced battery management system requires a physically meaningful yet computationally efficient battery model. The pseudo-two dimensional (P2D) electrochemical model can provide physical information about the lithium concentration and potential distributions across the cell dimension. However, the extensive computation burden caused by the temporal and spatial discretization limits its real-time application. In this research, we propose a new simplified electrochemical model (SEM) by modifying the boundary conditions for the electrolyte diffusion equations, which significantly facilitates the analytical solving process. Then, to obtain a reduced order transfer function, the Padé approximation method is adopted to simplify the derived transcendental impedance solution. The proposed model with the reduced order transfer function is computationally light yet preserves physical meaning through the presence of parameters such as the solid/electrolyte diffusion coefficients (Ds & De) and the particle radius. The simulation illustrates that the proposed simplified model maintains high accuracy for electrolyte phase concentration (Ce) predictions, with 0.8% and 0.24% modeling error respectively, when compared to the rigorous model under 1C-rate pulse charge/discharge and urban dynamometer driving schedule (UDDS) profiles. Meanwhile, this simplified model yields a significantly reduced computational burden, which benefits its real-time application.
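
    As a small, generic illustration of the Padé step described above (not the paper's actual transfer function), SciPy can turn a truncated Taylor series into a reduced-order rational approximation:

        import numpy as np
        from scipy.interpolate import pade

        # Taylor coefficients of exp(x) up to order 5, standing in for the series
        # expansion of a transcendental impedance solution (illustrative only).
        taylor = [1.0, 1.0, 1 / 2, 1 / 6, 1 / 24, 1 / 120]

        # Padé approximant with a denominator of degree 2 (numerator of degree 3).
        p, q = pade(taylor, 2)            # returns two numpy.poly1d objects

        x = 1.5
        print(np.exp(x), p(x) / q(x))     # the rational approximation tracks exp(x) closely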

  20. Computers in medical education 2. Use of a computer package to supplement the clinical experience in a surgical clerkship: an objective evaluation.

    PubMed

    Devitt, P; Cehic, D; Palmer, E

    1998-06-01

    Student teaching of surgery has been devolved from the university in an effort to increase and broaden undergraduate clinical experience. In order to ensure uniformity of learning we have defined learning objectives and provided a computer-based package to supplement clinical teaching. A study was undertaken to evaluate the place of computer-based learning in a clinical environment. Twelve modules were provided for study during a 6-week attachment. These covered clinical problems related to cardiology, neurosurgery and gastrointestinal haemorrhage. Eighty-four fourth-year students undertook a pre- and post-test assessment on these three topics as well as acute abdominal pain. No extra learning material on the latter topic was provided during the attachment. While all students showed significant improvement in performance in the post-test assessment, those who had access to the computer material performed significantly better than did the controls. Within the topics, students in both groups performed equally well on the post-test assessment of acute abdominal pain, but the control group's performance was significantly weaker on the topic of gastrointestinal haemorrhage, suggesting that the bulk of learning on this subject came from the computer material and little from the clinical attachment. This type of learning resource can be used to supplement students' clinical experience and at the same time monitor what they learn during clinical clerkships and identify areas of weakness.

  1. Manual vs. computer-assisted sperm analysis: can CASA replace manual assessment of human semen in clinical practice?

    PubMed

    Talarczyk-Desole, Joanna; Berger, Anna; Taszarek-Hauke, Grażyna; Hauke, Jan; Pawelczyk, Leszek; Jedrzejczak, Piotr

    2017-01-01

    The aim of the study was to check the quality of a computer-assisted sperm analysis (CASA) system in comparison to the reference manual method, as well as the standardization of computer-assisted semen assessment. The study was conducted between January and June 2015 at the Andrology Laboratory of the Division of Infertility and Reproductive Endocrinology, Poznań University of Medical Sciences, Poland. The study group consisted of 230 men who gave sperm samples for the first time in our center as part of an infertility investigation. The samples underwent manual and computer-assisted assessment of concentration, motility and morphology. A total of 184 samples were examined twice: manually, according to the 2010 WHO recommendations, and with CASA, using the program settings provided by the manufacturer. Additionally, 46 samples underwent two manual analyses and two computer-assisted analyses. A p-value of p < 0.05 was considered statistically significant. Statistically significant differences were found between all of the investigated sperm parameters, except for non-progressive motility, measured with CASA and manually. In the group of patients where all analyses with each method were performed twice on the same sample, we found no significant differences between the two assessments, neither in the samples analyzed manually nor in those analyzed with CASA, although the standard deviation was higher in the CASA group. Our results suggest that computer-assisted sperm analysis requires further improvement for a wider application in clinical practice.

  2. Comfort and experience with online learning: trends over nine years and associations with knowledge.

    PubMed

    Cook, David A; Thompson, Warren G

    2014-07-01

    Some evidence suggests that attitude toward computer-based instruction is an important determinant of success in online learning. We sought to determine how comfort using computers and perceptions of prior online learning experiences have changed over the past decade, and how these associate with learning outcomes. Each year from 2003-2011 we conducted a prospective trial of online learning. As part of each year's study, we asked medicine residents about their comfort using computers and if their previous experiences with online learning were favorable. We assessed knowledge using a multiple-choice test. We used regression to analyze associations and changes over time. 371 internal medicine and family medicine residents participated. Neither comfort with computers nor perceptions of prior online learning experiences showed a significant change across years (p > 0.61), with mean comfort rating 3.96 (maximum 5 = very comfortable) and mean experience rating 4.42 (maximum 6 = strongly agree [favorable]). Comfort showed no significant association with knowledge scores (p = 0.39) but perceptions of prior experiences did, with a 1.56% rise in knowledge score for a 1-point rise in experience score (p = 0.02). Correlations among comfort, perceptions of prior experiences, and number of prior experiences were all small and not statistically significant. Comfort with computers and perceptions of prior experience with online learning remained stable over nine years. Prior good experiences (but not comfort with computers) demonstrated a modest association with knowledge outcomes, suggesting that prior course satisfaction may influence subsequent learning.

  3. Socio-Demographic, Social-Cognitive, Health-Related and Physical Environmental Variables Associated with Context-Specific Sitting Time in Belgian Adolescents: A One-Year Follow-Up Study.

    PubMed

    Busschaert, Cedric; Ridgers, Nicola D; De Bourdeaudhuij, Ilse; Cardon, Greet; Van Cauwenberg, Jelle; De Cocker, Katrien

    2016-01-01

    More knowledge is warranted about multilevel ecological variables associated with context-specific sitting time among adolescents. The present study explored cross-sectional and longitudinal associations of ecological domains of sedentary behaviour, including socio-demographic, social-cognitive, health-related and physical-environmental variables with sitting during TV viewing, computer use, electronic gaming and motorized transport among adolescents. For this longitudinal study, a sample of Belgian adolescents completed questionnaires at school on context-specific sitting time and associated ecological variables. At baseline, complete data were gathered from 513 adolescents (15.0±1.7 years). At one-year follow-up, complete data of 340 participants were available (retention rate: 66.3%). Multilevel linear regression analyses were conducted to explore cross-sectional correlates (baseline variables) and longitudinal predictors (change scores variables) of context-specific sitting time. Social-cognitive correlates/predictors were most frequently associated with context-specific sitting time. Longitudinal analyses revealed that increases over time in considering it pleasant to watch TV (p < .001), in perceiving TV watching as a way to relax (p < .05), in TV time of parents/care givers (p < .01) and in TV time of siblings (p < .001) were associated with more sitting during TV viewing at follow-up. Increases over time in considering it pleasant to use a computer in leisure time (p < .01) and in the computer time of siblings (p < .001) were associated with more sitting during computer use at follow-up. None of the changes in potential predictors were significantly related to changes in sitting during motorized transport or during electronic gaming. Future intervention studies aiming to decrease TV viewing and computer use should acknowledge the importance of the behaviour of siblings and the pleasure adolescents experience during these screen-related behaviours. In addition, more time parents or care givers spent sitting may lead to more sitting during TV viewing of the adolescents, so that a family-based approach may be preferable for interventions. Experimental study designs are warranted to confirm the present findings.
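
    For context, the multilevel (mixed-effects) linear regression used in analyses of this kind can be sketched in Python as follows; the variable names and data are hypothetical, not the study's.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical adolescents clustered within schools (illustrative only).
        rng = np.random.default_rng(1)
        n = 400
        df = pd.DataFrame({
            "tv_sitting_min": rng.normal(120, 40, n),    # minutes/day sitting while viewing TV
            "pleasure_tv": rng.normal(3.5, 1.0, n),      # social-cognitive correlate
            "sibling_tv_min": rng.normal(100, 35, n),    # sibling TV time
            "school": rng.integers(0, 20, n),            # clustering variable
        })

        # A random intercept per school accounts for pupils being nested within schools.
        model = smf.mixedlm("tv_sitting_min ~ pleasure_tv + sibling_tv_min",
                            data=df, groups=df["school"]).fit()
        print(model.summary())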

  4. Socio-Demographic, Social-Cognitive, Health-Related and Physical Environmental Variables Associated with Context-Specific Sitting Time in Belgian Adolescents: A One-Year Follow-Up Study

    PubMed Central

    Busschaert, Cedric; Ridgers, Nicola D.; De Bourdeaudhuij, Ilse; Cardon, Greet; Van Cauwenberg, Jelle; De Cocker, Katrien

    2016-01-01

    Introduction More knowledge is warranted about multilevel ecological variables associated with context-specific sitting time among adolescents. The present study explored cross-sectional and longitudinal associations of ecological domains of sedentary behaviour, including socio-demographic, social-cognitive, health-related and physical-environmental variables with sitting during TV viewing, computer use, electronic gaming and motorized transport among adolescents. Methods For this longitudinal study, a sample of Belgian adolescents completed questionnaires at school on context-specific sitting time and associated ecological variables. At baseline, complete data were gathered from 513 adolescents (15.0±1.7 years). At one-year follow-up, complete data of 340 participants were available (retention rate: 66.3%). Multilevel linear regression analyses were conducted to explore cross-sectional correlates (baseline variables) and longitudinal predictors (change scores variables) of context-specific sitting time. Results Social-cognitive correlates/predictors were most frequently associated with context-specific sitting time. Longitudinal analyses revealed that increases over time in considering it pleasant to watch TV (p < .001), in perceiving TV watching as a way to relax (p < .05), in TV time of parents/care givers (p < .01) and in TV time of siblings (p < .001) were associated with more sitting during TV viewing at follow-up. Increases over time in considering it pleasant to use a computer in leisure time (p < .01) and in the computer time of siblings (p < .001) were associated with more sitting during computer use at follow-up. None of the changes in potential predictors were significantly related to changes in sitting during motorized transport or during electronic gaming. Conclusions Future intervention studies aiming to decrease TV viewing and computer use should acknowledge the importance of the behaviour of siblings and the pleasure adolescents experience during these screen-related behaviours. In addition, more time parents or care givers spent sitting may lead to more sitting during TV viewing of the adolescents, so that a family-based approach may be preferable for interventions. Experimental study designs are warranted to confirm the present findings. PMID:27936073

  5. [Efficacy of Sacroiliac Joint Anterior Approach with Double Reconstruction Plate and Computer Assisted Navigation Percutaneous Sacroiliac Screw for Treating Tile C1 Pelvic Fractures].

    PubMed

    Tan, Zhen; Fang, Yue; Zhang, Hui; Liu, Lei; Xiang, Zhou; Zhong, Gang; Huang, Fu-Guo; Wang, Guang-Lin

    2017-09-01

    To compare the efficacy of the sacroiliac joint anterior approach with double reconstruction plate and computer assisted navigation percutaneous sacroiliac screw fixation for treating Tile C1 pelvic fractures. Fifty patients with pelvic Tile C1 fractures were randomly divided into two groups (n=25 for each) in the orthopedic department of West China Hospital of Sichuan University from December 2012 to November 2014. Patients in group A had the sacroiliac joint dislocation treated with anterior plate fixation. Patients in group B were treated with computer assisted navigation percutaneous sacroiliac screw fixation. The operation duration, intraoperative blood loss, incision length, and postoperative complications (nausea, vomiting, pulmonary infection, wound complications, etc.) were compared between the two groups. The postoperative fracture healing time, postoperative patient satisfaction, postoperative MATTA scores (to evaluate fracture reduction), postoperative MAJEED function scores, and SF36 scores of the patients were also recorded and compared. No significant differences in baseline characteristics were found between the two groups of patients. All of the patients in both groups had their operations successfully completed. Patients in group B had significantly shorter operations, lower intraoperative blood loss, shorter incision length and fewer postoperative complications than those in group A (P<0.05). Patients in group B also had higher levels of satisfaction than those in group A (P<0.05). No significant differences were found between the two groups in postoperative follow-up time, fracture healing time, postoperative MATTA scores, postoperative MAJEED function scores and SF36 scores (P>0.05). The sacroiliac joint anterior approach with double reconstruction plate and computer assisted navigation percutaneous sacroiliac screw fixation are both effective for treating Tile C1 type pelvic fractures, with similar long-term efficacies. However, computer assisted navigation percutaneous sacroiliac screw fixation has the advantages of less trauma, less bleeding and a shorter operation time.

  6. Computing Real-time Streamflow Using Emerging Technologies: Non-contact Radars and the Probability Concept

    NASA Astrophysics Data System (ADS)

    Fulton, J. W.; Bjerklie, D. M.; Jones, J. W.; Minear, J. T.

    2015-12-01

    Measuring streamflow and developing and maintaining rating curves at new streamgaging stations are time-consuming and problematic. Hydro 21 was an initiative by the U.S. Geological Survey to provide vision and leadership to identify and evaluate new technologies and methods that had the potential to change the way in which streamgaging is conducted. Since 2014, additional trials have been conducted to evaluate some of the methods promoted by the Hydro 21 Committee. Emerging technologies such as continuous-wave radars and computationally efficient methods such as the Probability Concept require significantly less field time, promote real-time velocity and streamflow measurements, and apply to unsteady flow conditions such as looped ratings and unsteady flood flows. Portable and fixed-mount radars have advanced beyond the development phase, are cost effective, and are readily available in the marketplace. The Probability Concept is based on an alternative velocity-distribution equation developed by C.-L. Chiu, who pioneered the concept. By measuring the surface-water velocity and correcting for environmental influences such as wind drift, radars offer a reliable alternative for measuring and computing real-time streamflow for a variety of hydraulic conditions. If successful, these tools may allow us to establish ratings more efficiently, assess unsteady flow conditions, and report real-time streamflow at new streamgaging stations.
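
    The probability-concept relation between the radar-measured surface velocity and the mean channel velocity is not spelled out in the abstract. A commonly cited form of Chiu's relation is sketched below; the numbers and the entropy parameter M are assumptions for illustration, since in practice M is calibrated for each station.

        import numpy as np

        def mean_to_max_ratio(M):
            """Chiu's probability-concept ratio phi(M) = u_mean / u_max."""
            return np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M

        # Illustrative values only: a radar-measured (maximum) surface velocity,
        # already corrected for wind drift, and a surveyed cross-sectional area.
        u_max = 2.1      # m/s
        area = 35.0      # m^2
        M = 2.0          # site-specific entropy parameter (assumed here)

        u_mean = mean_to_max_ratio(M) * u_max
        discharge = u_mean * area
        print(f"u_mean = {u_mean:.2f} m/s, Q = {discharge:.1f} m^3/s")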

  7. Time accurate application of the MacCormack 2-4 scheme on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Hudson, Dale A.; Long, Lyle N.

    1995-01-01

    Many recent computational efforts in turbulence and acoustics research have used higher order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second order accurate in time and fourth order accurate in space, and is stable for CFL numbers below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson type second, third, and fourth order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research has found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second and third order terms in the damping. Use of second and fourth order terms reduced the magnitude of dispersion errors but not as effectively as the second and third order combination. The program was coded using Thinking Machines' CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.
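
    To make the scheme concrete, here is a minimal sketch of one common form of the MacCormack 2-4 scheme (forward-biased one-sided predictor, backward-biased corrector) applied to linear advection on a periodic grid. It is an illustration of the method class under assumed settings, not the code used in the study, and it omits the Jameson-type artificial viscosity discussed above.

        import numpy as np

        # Linear advection u_t + a u_x = 0 on a periodic domain.
        a, L, nx = 1.0, 1.0, 200
        dx = L / nx
        dt = 0.25 * dx / a                      # CFL = 0.25, within the 2/3 stability limit
        x = np.arange(nx) * dx
        u = np.exp(-200.0 * (x - 0.5) ** 2)     # smooth initial pulse (illustrative)

        def dplus(f):   # forward-biased difference, (-f[j+2] + 8 f[j+1] - 7 f[j]) / (6 dx)
            return (-np.roll(f, -2) + 8.0 * np.roll(f, -1) - 7.0 * f) / (6.0 * dx)

        def dminus(f):  # backward-biased difference, (7 f[j] - 8 f[j-1] + f[j-2]) / (6 dx)
            return (7.0 * f - 8.0 * np.roll(f, 1) + np.roll(f, 2)) / (6.0 * dx)

        for n in range(int(1.0 / dt)):          # advect the pulse once around the domain
            u_pred = u - dt * dplus(a * u)                        # predictor
            u = 0.5 * (u + u_pred - dt * dminus(a * u_pred))      # corrector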

  8. Predictability Experiments With the Navy Operational Global Atmospheric Prediction System

    NASA Astrophysics Data System (ADS)

    Reynolds, C. A.; Gelaro, R.; Rosmond, T. E.

    2003-12-01

    There are several areas of research in numerical weather prediction and atmospheric predictability, such as targeted observations and ensemble perturbation generation, where it is desirable to combine information about the uncertainty of the initial state with information about potential rapid perturbation growth. Singular vectors (SVs) provide a framework to accomplish this task in a mathematically rigorous and computationally feasible manner. In this study, SVs are calculated using the tangent and adjoint models of the Navy Operational Global Atmospheric Prediction System (NOGAPS). The analysis error variance information produced by the NRL Atmospheric Variational Data Assimilation System is used as the initial-time SV norm. These VAR SVs are compared to SVs for which total energy is both the initial and final time norms (TE SVs). The incorporation of analysis error variance information has a significant impact on the structure and location of the SVs. This in turn has a significant impact on targeted observing applications. The utility and implications of such experiments in assessing the analysis error variance estimates will be explored. Computing support has been provided by the Department of Defense High Performance Computing Center at the Naval Oceanographic Office Major Shared Resource Center at Stennis, Mississippi.

  9. Concurrent processing simulation of the space station

    NASA Technical Reports Server (NTRS)

    Gluck, R.; Hale, A. L.; Sunkel, John W.

    1989-01-01

    The development of a new capability for the time-domain simulation of multibody dynamic systems and its application to the study of large-angle rotational maneuvers of the Space Station is described. The effort was divided into three sequential tasks, which required significant advancements of the state of the art to accomplish. These were: (1) the development of an explicit mathematical model via symbol manipulation of a flexible, multibody dynamic system; (2) the development of a methodology for balancing the computational load of an explicit mathematical model for concurrent processing; and (3) the implementation and successful simulation of the above on a prototype Custom Architectured Parallel Processing System (CAPPS) containing eight processors. The throughput rate achieved by the CAPPS, operating at only 70 percent efficiency, was 3.9 times greater than that obtained sequentially by the IBM 3090 supercomputer simulating the same problem. More significantly, analysis of the results leads to the conclusion that the relative cost effectiveness of concurrent vs. sequential digital computation will grow substantially as the computational load is increased. This is a welcome development in an era when very complex and cumbersome mathematical models of large space vehicles must be used as substitutes for full scale testing which has become impractical.

  10. Extreme learning machine for reduced order modeling of turbulent geophysical flows.

    PubMed

    San, Omer; Maulik, Romit

    2018-04-01

    We investigate the application of artificial neural networks to stabilize proper orthogonal decomposition-based reduced order models for quasistationary geophysical turbulent flows. An extreme learning machine concept is introduced for computing an eddy-viscosity closure dynamically to incorporate the effects of the truncated modes. We consider a four-gyre wind-driven ocean circulation problem as our prototype setting to assess the performance of the proposed data-driven approach. Our framework provides a significant reduction in computational time and effectively retains the dynamics of the full-order model during the forward simulation period beyond the training data set. Furthermore, we show that the method is robust for larger choices of time steps and can be used as an efficient and reliable tool for long time integration of general circulation models.
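
    The extreme learning machine at the heart of the closure (a random, fixed hidden layer with output weights fitted by linear least squares) can be sketched generically as below; the toy regression data stand in for the actual mapping from resolved POD modes to an eddy viscosity, which requires the full ROM machinery and is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy regression data (illustrative stand-in for features -> eddy viscosity).
        X = rng.uniform(-1.0, 1.0, size=(500, 4))
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]

        # Extreme learning machine: random, fixed input weights and biases...
        n_hidden = 50
        W = rng.normal(size=(X.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)                       # hidden-layer activations

        # ...and output weights obtained in one shot by linear least squares.
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)

        # Prediction on new inputs reuses the same random projection.
        X_new = rng.uniform(-1.0, 1.0, size=(5, 4))
        print(np.tanh(X_new @ W + b) @ beta)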

  11. Extreme learning machine for reduced order modeling of turbulent geophysical flows

    NASA Astrophysics Data System (ADS)

    San, Omer; Maulik, Romit

    2018-04-01

    We investigate the application of artificial neural networks to stabilize proper orthogonal decomposition-based reduced order models for quasistationary geophysical turbulent flows. An extreme learning machine concept is introduced for computing an eddy-viscosity closure dynamically to incorporate the effects of the truncated modes. We consider a four-gyre wind-driven ocean circulation problem as our prototype setting to assess the performance of the proposed data-driven approach. Our framework provides a significant reduction in computational time and effectively retains the dynamics of the full-order model during the forward simulation period beyond the training data set. Furthermore, we show that the method is robust for larger choices of time steps and can be used as an efficient and reliable tool for long time integration of general circulation models.

  12. Computer vision and augmented reality in gastrointestinal endoscopy

    PubMed Central

    Mahmud, Nadim; Cohen, Jonah; Tsourides, Kleovoulos; Berzin, Tyler M.

    2015-01-01

    Augmented reality (AR) is an environment-enhancing technology, widely applied in the computer sciences, which has only recently begun to permeate the medical field. Gastrointestinal endoscopy—which relies on the integration of high-definition video data with pathologic correlates—requires endoscopists to assimilate and process a tremendous amount of data in real time. We believe that AR is well positioned to provide computer-guided assistance with a wide variety of endoscopic applications, beginning with polyp detection. In this article, we review the principles of AR, describe its potential integration into an endoscopy set-up, and envisage a series of novel uses. With close collaboration between physicians and computer scientists, AR promises to contribute significant improvements to the field of endoscopy. PMID:26133175

  13. Selection of a model of Earth's albedo radiation, practical calculation of its effect on the trajectory of a satellite

    NASA Technical Reports Server (NTRS)

    Walch, J. J.

    1985-01-01

    Theoretical models of Earth's albedo radiation are proposed. By comparing the disturbing accelerations computed from a model with those measured in flight by the CACTUS accelerometer, the model is modified according to the results. Computing the satellite orbit perturbations from a model is very time-consuming, because for each position of the satellite the fluxes coming from every elementary surface of the portion of the Earth visible from the satellite must be summed. The speed of computation is increased tenfold, without significant loss of accuracy, by storing some intermediate results. It is now possible to compare the orbit perturbations computed from the selected model with measurements of these perturbations obtained with satellites such as LAGEOS.
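
    The storing of intermediate results that yields the tenfold speedup is essentially caching (memoization). A minimal generic illustration in Python is given below; the function and its arguments are invented placeholders, not the actual albedo flux computation.

        from functools import lru_cache
        import math

        # Cache an expensive per-element quantity so repeated satellite positions
        # reuse it instead of recomputing it from scratch.
        @lru_cache(maxsize=None)
        def element_view_factor(lat_index: int, lon_index: int, sat_cell: int) -> float:
            # Placeholder for an expensive geometric computation per surface element.
            return math.cos(lat_index * 0.01) * math.cos(lon_index * 0.01) / (1 + sat_cell)

        total_flux = sum(element_view_factor(i, j, 3)
                         for i in range(100) for j in range(100))
        print(total_flux)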

  14. Acoustic environmental accuracy requirements for response determination

    NASA Technical Reports Server (NTRS)

    Pettitt, M. R.

    1983-01-01

    A general purpose computer program was developed for the prediction of vehicle interior noise. This program, named VIN, has both modal and statistical energy analysis capabilities for structural/acoustic interaction analysis. The analytic models and their computer implementation were verified through simple test cases with well-defined experimental results. The model was also applied in a space shuttle payload bay launch acoustics prediction study. The computer program processes large and small problems with equal efficiency because all arrays are dynamically sized by program input variables at run time. A data base is built and easily accessed for design studies. The data base significantly reduces the computational costs of such studies by allowing the reuse of the still-valid calculated parameters of previous iterations.

  15. GPU-accelerated Modeling and Element-free Reverse-time Migration with Gauss Points Partition

    NASA Astrophysics Data System (ADS)

    Zhen, Z.; Jia, X.

    2014-12-01

    The element-free method (EFM) has been applied to seismic modeling and migration. Compared with the finite element method (FEM) and the finite difference method (FDM), it is much cheaper and more flexible because only the information of the nodes and the boundary of the study area is required in computation. In the EFM, the number of Gauss points should be consistent with the number of model nodes; otherwise the accuracy of the intermediate coefficient matrices would be harmed. Thus, when we increase the number of nodes of the velocity model in order to obtain higher resolution, the size of the computer's memory becomes a bottleneck. The original EFM can deal with at most 81×81 nodes in the case of 2G memory, as tested by Jia and Hu (2006). In order to solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition (GPP) and utilize GPUs to improve the computation efficiency. Considering the characteristics of the Gauss points, the GPP method does not influence the propagation of seismic waves in the velocity model. To overcome the time-consuming computation of the stiffness matrix (K) and the mass matrix (M), we also use GPUs in our computation program. We employ the compressed sparse row (CSR) format to compress the intermediate sparse matrices and simplify the operations by solving the linear equations with the CULA Sparse Conjugate Gradient (CG) solver instead of the linear sparse solver 'PARDISO'. It is observed that our strategy can significantly reduce the computational time of K and M compared with the CPU-based algorithm. The model tested is the Marmousi model. The length of the model is 7425 m and the depth is 2990 m. We discretize the model with 595×298 nodes, 300×300 Gauss cells and 3×3 Gauss points in each cell. In contrast to the computational time of the conventional EFM, the GPUs-GPP approach can substantially improve the efficiency. The speedup ratio for computing K and M is 120, and the speedup ratio for RTM is 11.5. At the same time, the accuracy of imaging is not harmed. Another advantage of the GPUs-GPP method is its easy application in other numerical methods such as the FEM. Finally, in the GPUs-GPP method, the arrays require quite limited memory storage, which makes the method promising for dealing with large-scale 3D problems.
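
    The CSR storage and conjugate-gradient solve mentioned above are standard sparse-linear-algebra building blocks. A minimal SciPy sketch is shown below with a toy symmetric positive-definite matrix, not the EFM stiffness matrix.

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import cg

        # Toy sparse SPD system standing in for K x = b.
        n = 1000
        K = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)

        # Conjugate-gradient solve; the CSR format keeps only nonzero entries in memory.
        x, info = cg(K, b)
        print("converged" if info == 0 else f"cg returned info={info}", x[:3])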

  16. Using SSTAs (Significant Sequences of TIR Anomalies) to trigger Natural Time Analysis: a Long Term Study on Earthquakes (M>4) occurred over Greece in 2004-2013

    NASA Astrophysics Data System (ADS)

    Tramutoli, V.; Eleftheriou, A.; Filizzola, C.; Genzano, N.; Lacava, T.; Lisi, M.; Paciello, R.; Pergola, N.; Vallianatos, F.

    2016-12-01

    From an appropriate identification and real-time integration of independent observations we expect to significantly improve our present capability of dynamically assessing seismic hazard. Sometimes one specific observation (e.g. an anomaly in one parameter) can be used as a trigger or as a reference point (in the space and/or time domain) for activating/improving analysis of other independent parameters (e.g. b-value computation and/or Natural Time Analysis of seismic data) whose systematic computation could otherwise be very computationally expensive or impossible. In this paper one of these parameters (the Earth's emitted radiation in the Thermal Infra-Red spectral region) will be used to drive the application of Natural Time Analysis of seismic data, in order to verify possible improvements in the forecast of earthquakes (with M≥4) that occurred in Greece in the 10-year period 2004-2013. The RST (Robust Satellite Technique) data analysis approach and the RETIRA (Robust Estimator of TIR Anomalies) index were used to preliminarily define, and then to identify, Significant Sequences of TIR Anomalies (SSTAs) in 10 years (2004-2013) of daily TIR images acquired by the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board the Meteosat Second Generation (MSG) satellite. A previous paper already demonstrated that more than 93% of all identified SSTAs occurred in a pre-fixed space-time window around earthquake times (from 30 days before up to 15 days after) and locations (within 150 km, or the Dobrovolsky distance), with a false-positive rate smaller than 7%. In this paper just the barycenter of the SSTAs (and not the whole alerted area) is used to define the center of the circular area from which the seismic data required for Natural Time Analysis are collected. Changes in the quality of earthquake forecasts achieved by using each individual parameter in different configurations, as well as the improvement arising from their joint use, will be presented with reference to the 10-year period considered and to several recent events that occurred in Greece.

  17. Computing exponentially faster: implementing a non-deterministic universal Turing machine using DNA

    PubMed Central

    Currin, Andrew; Korovin, Konstantin; Ababi, Maria; Roper, Katherine; Kell, Douglas B.; Day, Philip J.

    2017-01-01

    The theory of computer science is based around universal Turing machines (UTMs): abstract machines able to execute all possible algorithms. Modern digital computers are physical embodiments of classical UTMs. For the most important class of problem in computer science, non-deterministic polynomial complete problems, non-deterministic UTMs (NUTMs) are theoretically exponentially faster than both classical UTMs and quantum mechanical UTMs (QUTMs). However, no attempt has previously been made to build an NUTM, and their construction has been regarded as impossible. Here, we demonstrate the first physical design of an NUTM. This design is based on Thue string rewriting systems, and thereby avoids the limitations of most previous DNA computing schemes: all the computation is local (simple edits to strings) so there is no need for communication, and there is no need to order operations. The design exploits DNA's ability to replicate to execute an exponential number of computational paths in P time. Each Thue rewriting step is embodied in a DNA edit implemented using a novel combination of polymerase chain reactions and site-directed mutagenesis. We demonstrate that the design works using both computational modelling and in vitro molecular biology experimentation: the design is thermodynamically favourable, microprogramming can be used to encode arbitrary Thue rules, all classes of Thue rule can be implemented, and rules can be implemented non-deterministically. In an NUTM, the resource limitation is space, which contrasts with classical UTMs and QUTMs where it is time. This fundamental difference enables an NUTM to trade space for time, which is significant for both theoretical computer science and physics. It is also of practical importance, for, to quote Richard Feynman, ‘there's plenty of room at the bottom’. This means that a desktop DNA NUTM could potentially utilize more processors than all the electronic computers in the world combined, and thereby outperform the world's current fastest supercomputer, while consuming a tiny fraction of its energy. PMID:28250099

  18. Formulation of a dynamic analysis method for a generic family of hoop-mast antenna systems

    NASA Technical Reports Server (NTRS)

    Gabriele, A.; Loewy, R.

    1981-01-01

    Analytical studies of mast-cable-hoop-membrane type antennas were conducted using a transfer matrix numerical analysis approach. This method, by virtue of its specialization and the inherently easy compartmentalization of the formulation and numerical procedures, can be significantly more efficient in computer time required and in the time needed to review and interpret the results.

  19. SENSITIVITY OF HELIOSEISMIC TRAVEL TIMES TO THE IMPOSITION OF A LORENTZ FORCE LIMITER IN COMPUTATIONAL HELIOSEISMOLOGY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moradi, Hamed; Cally, Paul S., E-mail: hamed.moradi@monash.edu

    The rapid exponential increase in the Alfvén wave speed with height above the solar surface presents a serious challenge to physical modeling of the effects of magnetic fields on solar oscillations, as it introduces a significant Courant-Friedrichs-Lewy time-step constraint for explicit numerical codes. A common approach adopted in computational helioseismology, where long simulations in excess of 10 hr (hundreds of wave periods) are often required, is to cap the Alfvén wave speed by artificially modifying the momentum equation when the ratio between the Lorentz and hydrodynamic forces becomes too large. However, recent studies have demonstrated that the Alfvén wave speed plays a critical role in the MHD mode conversion process, particularly in determining the reflection height of the upwardly propagating helioseismic fast wave. Using numerical simulations of helioseismic wave propagation in constant inclined (relative to the vertical) magnetic fields we demonstrate that the imposition of such artificial limiters significantly affects time-distance travel times unless the Alfvén wave-speed cap is chosen comfortably in excess of the horizontal phase speeds under investigation.

  20. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block of data to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve the computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. Through experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data-parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications, spectral unmixing and classification, for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the requirements of on-board real-time feature extraction.
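
    The maximum noise fraction transform underlying G-OMNF amounts to a generalized eigenvalue problem between a noise covariance and the data covariance. A compact CPU-only sketch is given below with synthetic data and a crude shift-difference noise estimate; it illustrates the transform itself, not the paper's GPU pipeline.

        import numpy as np
        from scipy.linalg import eigh

        # Synthetic "hyperspectral" cube: rows x cols x bands (illustrative only).
        rng = np.random.default_rng(0)
        rows, cols, bands = 64, 64, 20
        signal = rng.normal(size=(rows, cols, 3)) @ rng.normal(size=(3, bands))
        cube = signal + 0.1 * rng.normal(size=(rows, cols, bands))

        X = cube.reshape(-1, bands)
        X = X - X.mean(axis=0)

        # Crude noise estimate from differences of horizontally adjacent pixels.
        noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands) / np.sqrt(2.0)

        cov_data = np.cov(X, rowvar=False)
        cov_noise = np.cov(noise - noise.mean(axis=0), rowvar=False)

        # MNF directions solve cov_noise v = w cov_data v; the smallest eigenvalues
        # correspond to components with the highest signal-to-noise ratio.
        eigvals, eigvecs = eigh(cov_noise, cov_data)
        mnf_components = X @ eigvecs
        print(eigvals[:5])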

  1. An efficient hybrid pseudospectral/finite-difference scheme for solving the TTI pure P-wave equation

    NASA Astrophysics Data System (ADS)

    Zhan, Ge; Pestana, Reynam C.; Stoffa, Paul L.

    2013-04-01

    The pure P-wave equation for modelling and migration in tilted transversely isotropic (TTI) media has attracted more and more attention in imaging seismic data with anisotropy. Its desirable feature is that it is absolutely free of shear-wave artefacts, with the consequent alleviation of the numerical instabilities generally suffered by some systems of coupled equations. However, due to several forward-backward Fourier transforms in wavefield updating at each time step, the computational cost is significant, which hampers its widespread use. We propose to use a hybrid pseudospectral (PS) and finite-difference (FD) scheme to solve the pure P-wave equation. In the hybrid solution, most of the cost-consuming wavenumber terms in the equation are replaced by inexpensive FD operators, which in turn accelerates the computation and reduces the computational cost. To demonstrate the cost savings of the new scheme, 2D and 3D reverse-time migration (RTM) examples using the hybrid solution to the pure P-wave equation are carried out, and respective runtimes are listed and compared. Numerical results show that the hybrid strategy demands less computation time and is faster than using the PS method alone. Furthermore, this new TTI RTM algorithm with the hybrid method is computationally less expensive than that with the FD solution to conventional TTI coupled equations.
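
    The trade-off exploited by the hybrid scheme, spectral accuracy via FFTs versus cheaper local finite-difference stencils, can be seen in miniature by comparing two ways of taking a spatial derivative. This generic sketch is only an analogy; it is not the TTI pure P-wave propagator.

        import numpy as np

        # Periodic test function and its exact derivative.
        n, L = 256, 2.0 * np.pi
        dx = L / n
        x = np.arange(n) * dx
        f = np.sin(3 * x)
        df_exact = 3 * np.cos(3 * x)

        # Pseudospectral derivative: multiply by i*k in the wavenumber domain (two FFTs).
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
        df_ps = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

        # Second-order central finite difference: one cheap local stencil, no transforms.
        df_fd = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

        print(np.max(np.abs(df_ps - df_exact)), np.max(np.abs(df_fd - df_exact)))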

  2. Mechanistic experimental pain assessment in computer users with and without chronic musculoskeletal pain.

    PubMed

    Ge, Hong-You; Vangsgaard, Steffen; Omland, Øyvind; Madeleine, Pascal; Arendt-Nielsen, Lars

    2014-12-06

    Musculoskeletal pain from the upper extremity and shoulder region is commonly reported by computer users. However, the functional status of central pain mechanisms, i.e., central sensitization and conditioned pain modulation (CPM), has not been investigated in this population. The aim was to evaluate sensitization and CPM in computer users with and without chronic musculoskeletal pain. Pressure pain threshold (PPT) mapping in the neck-shoulder (15 points) and the elbow (12 points) was assessed together with PPT measurement at mid-point in the tibialis anterior (TA) muscle among 47 computer users with chronic pain in the upper extremity and/or neck-shoulder pain (pain group) and 17 pain-free computer users (control group). Induced pain intensities and profiles over time were recorded using a 0-10 cm electronic visual analogue scale (VAS) in response to different levels of pressure stimuli on the forearm with a new technique of dynamic pressure algometry. The efficiency of CPM was assessed using cuff-induced pain as conditioning pain stimulus and PPT at TA as test stimulus. The demographics, job seniority and number of working hours/week using a computer were similar between groups. The PPTs measured at all 15 points in the neck-shoulder region were not significantly different between groups. There were no significant differences between groups neither in PPTs nor pain intensity induced by dynamic pressure algometry. No significant difference in PPT was observed in TA between groups. During CPM, a significant increase in PPT at TA was observed in both groups (P < 0.05) without significant differences between groups. For the chronic pain group, higher clinical pain intensity, lower PPT values from the neck-shoulder and higher pain intensity evoked by the roller were all correlated with less efficient descending pain modulation (P < 0.05). This suggests that the excitability of the central pain system is normal in a large group of computer users with low pain intensity chronic upper extremity and/or neck-shoulder pain and that increased excitability of the pain system cannot explain the reported pain. However, computer users with higher pain intensity and lower PPTs were found to have decreased efficiency in descending pain modulation.

  3. Prognosis of canine patients with nasal tumors according to modified clinical stages based on computed tomography: a retrospective study.

    PubMed

    Kondo, Yumi; Matsunaga, Satoru; Mochizuki, Manabu; Kadosawa, Tsuyoshi; Nakagawa, Takayuki; Nishimura, Ryohei; Sasaki, Nobuo

    2008-03-01

    To evaluate the efficacy of clinical staging based on computed tomography (CT) imaging over the World Health Organization (WHO) staging system based on radiography for nasal tumors in dogs, a retrospective study was conducted. This study used 112 dogs that had nasal tumors; they had undergone radiography and CT and had been histologically confirmed as having nasal tumors. Among 112 dogs, 85 (75.9%) were diagnosed as adenocarcinoma. Then they were analyzed for survival time according to each staging system. More than 70% of the patients with adenocarcinoma were classified as having WHO stage III. The patients classified under WHO stage II tended to survive longer than those classified under WHO stage III. Dogs classified under WHO stage III were further grouped into CT stages III and IV, and CT stage III patients had a significantly longer survival time than CT stage IV patients. In addition, patients treated with a combination of surgery and radiation had a significantly longer survival time than the patients who did not receive any treatment in CT stage III. On the other hand, different treatment modalities did not show a significant difference in the survival time of CT stage IV dogs. The results suggest that WHO stage III dogs may have various levels of tumor progression, indicating that the CT staging system may be more accurate than the WHO staging system.

  4. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time T_par of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
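
    The dependence of T_par on the sequential fraction is Amdahl's law. A short sketch with illustrative numbers (not measurements from CSTEM or METCAN) makes the point:

        def amdahl_speedup(serial_fraction, n_processors):
            """Speedup limit when a fixed fraction of the work cannot be parallelized."""
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

        # Illustrative only: a 20% sequential segment caps the speedup near 5x,
        # no matter how many processors the hypercube provides.
        for p in (2, 8, 32, 1024):
            print(p, round(amdahl_speedup(0.2, p), 2))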

  5. Building a symbolic computer algebra toolbox to compute 2D Fourier transforms in polar coordinates.

    PubMed

    Dovlo, Edem; Baddour, Natalie

    2015-01-01

    The development of a symbolic computer algebra toolbox for the computation of two dimensional (2D) Fourier transforms in polar coordinates is presented. Multidimensional Fourier transforms are widely used in image processing, tomographic reconstructions and in fact any application that requires a multidimensional convolution. By examining a function in the frequency domain, additional information and insights may be obtained. The advantages of our method include:
    • The implementation of the 2D Fourier transform in polar coordinates within the toolbox via the combination of two significantly simpler transforms.
    • The modular approach, along with the lookup tables implemented, helps avoid the issue of indeterminate results which may occur when attempting to directly evaluate the transform.
    • The concept also helps prevent unnecessary computation of already known transforms, thereby saving memory and processing time.

  6. Facial Animations: Future Research Directions & Challenges

    NASA Astrophysics Data System (ADS)

    Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul

    2014-06-01

    Computer facial animation is now used in a significant multitude of fields, driven by the growth of computer games, films and interactive multimedia and by human and social studies. Authoring complex and subtle facial expressions remains challenging and fraught with problems; as a result, the general-purpose computer animation techniques most commonly used today often limit the production quality and quantity of facial animation. Although computing power, understanding of the face, software sophistication and new face-centric methods continue to advance, many of the emerging techniques are still immature. This paper therefore surveys facial animation experts and categorizes their input in order to define the recent state of the field, the observed bottlenecks and the developing techniques. The paper further presents a real-time simulation model of human worry and howling, with a detailed discussion of the perception of astonishment, sorrow, annoyance and panic.

  7. Modeling a Wireless Network for International Space Station

    NASA Technical Reports Server (NTRS)

    Alena, Richard; Yaprak, Ece; Lamouri, Saad

    2000-01-01

    This paper describes the application of wireless local area network (LAN) simulation modeling methods to the hybrid LAN architecture designed for supporting crew-computing tools aboard the International Space Station (ISS). These crew-computing tools, such as wearable computers and portable advisory systems, will provide crew members with real-time vehicle and payload status information and access to digital technical and scientific libraries, significantly enhancing human capabilities in space. A wireless network, therefore, will provide wearable computer and remote instruments with the high performance computational power needed by next-generation 'intelligent' software applications. Wireless network performance in such simulated environments is characterized by the sustainable throughput of data under different traffic conditions. This data will be used to help plan the addition of more access points supporting new modules and more nodes for increased network capacity as the ISS grows.

  8. System reliability of randomly vibrating structures: Computational modeling and laboratory testing

    NASA Astrophysics Data System (ADS)

    Sundar, V. S.; Ammanagi, S.; Manohar, C. S.

    2015-09-01

    The problem of determination of system reliability of randomly vibrating structures arises in many application areas of engineering. We discuss in this paper approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time variant system reliability estimation. The strategy we adopt is based on the application of Girsanov's transformation to the governing stochastic differential equations which enables estimation of probability of failure with significantly reduced number of samples than what is needed in a direct simulation study. Notably, we show that the ideas from Girsanov's transformation based Monte Carlo simulations can be extended to conduct laboratory testing to assess system reliability of engineering structures with reduced number of samples and hence with reduced testing times. Illustrative examples include computational studies on a 10-degree of freedom nonlinear system model and laboratory/computational investigations on road load response of an automotive system tested on a four-post test rig.

  9. Lattice Boltzmann and Navier-Stokes Cartesian CFD Approaches for Airframe Noise Predictions

    NASA Technical Reports Server (NTRS)

    Barad, Michael F.; Kocheemoolayil, Joseph G.; Kiris, Cetin C.

    2017-01-01

    Lattice Boltzmann (LB) and compressible Navier-Stokes (NS) equations based computational fluid dynamics (CFD) approaches are compared for simulating airframe noise. Both LB and NS CFD approaches are implemented within the Launch Ascent and Vehicle Aerodynamics (LAVA) framework. Both schemes utilize the same underlying Cartesian structured mesh paradigm with provision for local adaptive grid refinement and sub-cycling in time. We choose a prototypical massively separated, wake-dominated flow ideally suited for Cartesian-grid based approaches in this study: the partially-dressed, cavity-closed nose landing gear (PDCC-NLG) noise problem from AIAA's Benchmark problems for Airframe Noise Computations (BANC) series of workshops. The relative accuracy and computational efficiency of the two approaches are systematically compared. Detailed comments are made on the potential held by LB to significantly reduce time-to-solution for a desired level of accuracy within the context of modeling airframe noise from first principles.

  10. Balancing Particle and Mesh Computation in a Particle-In-Cell Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Worley, Patrick H; D'Azevedo, Eduardo; Hager, Robert

    2016-01-01

    The XGC1 plasma microturbulence particle-in-cell simulation code has both particle-based and mesh-based computational kernels that dominate performance. Both of these are subject to load imbalances that can degrade performance and that evolve during a simulation. Each separately can be addressed adequately, but optimizing just for one can introduce significant load imbalances in the other, degrading overall performance. A technique has been developed based on Golden Section Search that minimizes wallclock time given prior information on wallclock time, and on current particle distribution and mesh cost per cell, and also adapts to evolution in load imbalance in both particle and mesh work. In problems of interest this doubled the performance on full system runs on the XK7 at the Oak Ridge Leadership Computing Facility compared to load balancing only one of the kernels.
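
    The abstract does not give implementation details, so the sketch below is a hypothetical illustration of a golden-section search over a single load-balancing parameter, assuming the measured wallclock time is a unimodal function of that parameter; the cost function here is a stand-in, not XGC1 data.

```python
import math

def golden_section_minimize(f, lo, hi, tol=1e-3):
    """Minimize a unimodal function f on [lo, hi] by golden-section search."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0   # about 0.618
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:                  # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                        # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2.0

# Hypothetical stand-in for measured wallclock time as a function of a balance
# parameter x (0 = weight partitioning toward particle work, 1 = toward mesh work).
def wallclock(x):
    return 3.0 * (x - 0.37) ** 2 + 1.2

best_x = golden_section_minimize(wallclock, 0.0, 1.0)
print(f"estimated best balance parameter: {best_x:.3f}")
```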

  11. Semiempirical Quantum Chemical Calculations Accelerated on a Hybrid Multicore CPU-GPU Computing Platform.

    PubMed

    Wu, Xin; Koslowski, Axel; Thiel, Walter

    2012-07-10

    In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.

  12. More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server

    PubMed Central

    Ho, Qirong; Cipar, James; Cui, Henggang; Kim, Jin Kyu; Lee, Seunghak; Gibbons, Phillip B.; Gibson, Garth A.; Ganger, Gregory R.; Xing, Eric P.

    2014-01-01

    We propose a parameter server system for distributed ML, which follows a Stale Synchronous Parallel (SSP) model of computation that maximizes the time computational workers spend doing useful work on ML algorithms, while still providing correctness guarantees. The parameter server provides an easy-to-use shared interface for read/write access to an ML model’s values (parameters and variables), and the SSP model allows distributed workers to read older, stale versions of these values from a local cache, instead of waiting to get them from a central storage. This significantly increases the proportion of time workers spend computing, as opposed to waiting. Furthermore, the SSP model ensures ML algorithm correctness by limiting the maximum age of the stale values. We provide a proof of correctness under SSP, as well as empirical results demonstrating that the SSP model achieves faster algorithm convergence on several different ML problems, compared to fully-synchronous and asynchronous schemes. PMID:25400488
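
    A minimal sketch of the bounded-staleness rule that SSP enforces is given below; the clock values and the single-process check are illustrative, whereas the actual system is a distributed parameter server.

```python
# Stale Synchronous Parallel (SSP) clock rule, sketched for illustration: a
# worker at iteration ("clock") c may keep computing with cached parameter
# values as long as the slowest worker is within `staleness` clocks of it;
# otherwise it waits, which bounds the age of the stale values it reads.
def may_proceed(worker_clock: int, all_clocks: list[int], staleness: int) -> bool:
    """Return True if this worker may keep computing with cached values."""
    return worker_clock - min(all_clocks) <= staleness

clocks = [12, 10, 9, 12]     # illustrative per-worker iteration counts
s = 2                        # maximum allowed staleness
for w, c in enumerate(clocks):
    status = "compute" if may_proceed(c, clocks, s) else "wait for stragglers"
    print(f"worker {w} at clock {c}: {status}")
```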

  13. Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU

    NASA Astrophysics Data System (ADS)

    Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang

    2017-10-01

    An imaging spectrometer can acquire a two-dimensional spatial image and a one-dimensional spectrum at the same time, which makes it highly useful for color and spectral measurements, true-color image synthesis, military reconnaissance and similar applications. In order to realize fast reconstruction processing of Fourier transform imaging spectrometer data, this paper designed an optimized reconstruction algorithm based on OpenMP parallel computing technology, which was further used to optimize the processing for the HyperSpectral Imager of the Chinese `HJ-1' satellite. The results show that the method based on multi-core parallel computing technology can competently manage the multi-core CPU hardware resources and significantly enhance the efficiency of the spectrum reconstruction processing. If the technology is applied to workstations with more cores for parallel computing, it will be possible to complete real-time Fourier transform imaging spectrometer data processing with a single computer.
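
    The paper parallelizes the reconstruction with OpenMP on a multi-core CPU; the sketch below illustrates the same row-parallel idea in Python with a process pool, using a toy per-row transform rather than the full reconstruction chain (the array sizes and preprocessing step are assumptions for illustration).

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def reconstruct_row(interferogram_row: np.ndarray) -> np.ndarray:
    """Toy spectrum reconstruction: remove the mean and take the FFT magnitude."""
    centered = interferogram_row - interferogram_row.mean()
    return np.abs(np.fft.rfft(centered))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cube = rng.normal(size=(512, 1024))   # hypothetical rows x OPD samples
    # Each row is independent, so the rows can be reconstructed in parallel,
    # mirroring the loop-level OpenMP parallelism described in the paper.
    with ProcessPoolExecutor() as pool:
        spectra = np.array(list(pool.map(reconstruct_row, cube, chunksize=32)))
    print(spectra.shape)                  # (512, 513)
```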

  14. Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.

    PubMed

    Junker, André; Brenner, Karl-Heinz

    2018-03-01

    The application of rigorous optical simulation algorithms, both in the modal as well as in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach, which, under certain conditions, also allows large-size problems to be solved approximation-free. We achieve this in the case of a modal representation by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N^3) to O(N log N), enabling a simulation of structures like certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.

  15. A High-Performance Genetic Algorithm: Using Traveling Salesman Problem as a Case

    PubMed Central

    Tsai, Chun-Wei; Tseng, Shih-Pang; Yang, Chu-Sing

    2014-01-01

    This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA. PMID:24892038

  16. A high-performance genetic algorithm: using traveling salesman problem as a case.

    PubMed

    Tsai, Chun-Wei; Tseng, Shih-Pang; Chiang, Ming-Chao; Yang, Chu-Sing; Hong, Tzung-Pei

    2014-01-01

    This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA.
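
    A minimal sketch of the underlying idea follows: detect gene positions shared by every individual so that they can be saved away and excluded from later recomputation. The binary encoding below is purely illustrative; the paper applies the idea to the traveling salesman problem, whose bookkeeping is not reproduced here.

```python
import numpy as np

def common_gene_mask(population: np.ndarray) -> np.ndarray:
    """Boolean mask of gene positions whose value is identical in every individual."""
    return np.all(population == population[0], axis=0)

# Hypothetical binary-encoded population (rows = individuals, columns = genes).
pop = np.array([[1, 0, 1, 1, 0],
                [1, 1, 1, 0, 0],
                [1, 0, 1, 1, 0]])
mask = common_gene_mask(pop)
print(mask)   # positions 0, 2 and 4 are shared by all individuals
# In the spirit of the paper, such positions could be frozen and excluded from
# crossover, mutation and repeated evaluation in later generations, shrinking
# the effective search space and the redundant computation.
```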

  17. Unobtrusive measurement of daily computer use to detect mild cognitive impairment

    PubMed Central

    Kaye, Jeffrey; Mattek, Nora; Dodge, Hiroko H; Campbell, Ian; Hayes, Tamara; Austin, Daniel; Hatt, William; Wild, Katherine; Jimison, Holly; Pavel, Michael

    2013-01-01

    Background Mild disturbances of higher order activities of daily living are present in people diagnosed with mild cognitive impairment (MCI). These deficits may be difficult to detect among those still living independently. Unobtrusive continuous assessment of a complex activity such as home computer use may detect mild functional changes and identify MCI. We sought to determine whether long-term changes in remotely monitored computer use differ in persons with MCI in comparison to cognitively intact volunteers. Methods Participants enrolled in a longitudinal cohort study of unobtrusive in-home technologies to detect cognitive and motor decline in independently living seniors were assessed for computer usage (number of days with use, mean daily usage and coefficient of variation of use) measured by remotely monitoring computer session start and end times. Results Over 230,000 computer sessions from 113 computer users (mean age, 85; 38 with MCI) were acquired during a mean of 36 months. In mixed effects models there was no difference in computer usage at baseline between MCI and intact participants controlling for age, sex, education, race and computer experience. However, over time, between MCI and intact participants, there was a significant decrease in number of days with use (p=0.01), mean daily usage (~1% greater decrease/month; p=0.009) and an increase in day-to-day use variability (p=0.002). Conclusions Computer use change can be unobtrusively monitored and indicate individuals with MCI. With 79% of those 55–64 years old now online, this may be an ecologically valid and efficient approach to track subtle clinically meaningful change with aging. PMID:23688576

  18. Aggregated channels network for real-time pedestrian detection

    NASA Astrophysics Data System (ADS)

    Ghorban, Farzin; Marín, Javier; Su, Yu; Colombo, Alessandro; Kummert, Anton

    2018-04-01

    Convolutional neural networks (CNNs) have demonstrated their superiority in numerous computer vision tasks, yet their computational cost remains prohibitive for many real-time applications such as pedestrian detection, which is usually performed on low-consumption hardware. In order to alleviate this drawback, most strategies focus on using a two-stage cascade approach. Essentially, in the first stage a fast method generates a reduced but significant set of high-quality proposals that are later evaluated by the CNN in the second stage. In this work, we propose a novel detection pipeline that further benefits from the two-stage cascade strategy. More concretely, the enriched and subsequently compressed features used in the first stage are reused as the CNN input. As a consequence, a simpler network architecture, adapted for such small input sizes, makes it possible to achieve real-time performance and obtain results close to the state-of-the-art while running significantly faster without the use of a GPU. In particular, considering that the proposed pipeline runs at frame rate, the achieved performance is highly competitive. We furthermore demonstrate that the proposed pipeline can by itself serve as an effective proposal generator.

  19. Patients who reattend after head injury: a high risk group.

    PubMed Central

    Voss, M.; Knottenbelt, J. D.; Peden, M. M.

    1995-01-01

    OBJECTIVE--To assess risk factors for important neurosurgical effects in patients who reattend after head injury. DESIGN--Retrospective study. SUBJECTS--606 patients who reattended a trauma unit after minor head injury. MAIN OUTCOME MEASURES--Intracranial abnormality detected on computed tomography or the need for neurosurgical intervention. RESULTS--Five patients died: two from unrelated causes and three from raised intracranial pressure. On multiple regression analysis the only significant predictor for both abnormality on computed tomography (14.4% of reattenders) and the need for operation (5% of reattenders) was vault fracture seen on the skull radiograph (P < 10^-6); predictors for abnormal computed tomogram were a Glasgow coma scale score < 15 at either first or second attendance (P < 0.0001) and convulsion at second attendance (P < 0.05); predictive for operation only was penetrating injury of the skull (P < 10^-6). On contingency table analysis these associations were confirmed. In addition significant associations with both abnormality on computed tomography and operation were focal neurological abnormality, weakness, or speech disturbance. Amnesia or loss of consciousness at the time of initial injury, personality change, and seizures were significantly associated only with abnormality on computed tomography. Headache, dizziness, nausea, and vomiting were common in reattenders but were found to have no independent significance. CONCLUSIONS--All patients who reattend after head injury should undergo computed tomography as at least 14% of scans can be expected to yield positive results. Where this facility is not available patients with predictors for operation should be urgently referred for neurosurgical opinion. Other patients can be readmitted and need referral only if symptoms persist despite symptomatic treatment or there is neurological deterioration while under observation. These patients are a high risk group and should be treated seriously. PMID:8520273

  20. Parallel solution of the symmetric tridiagonal eigenproblem. Research report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-10-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speed up, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.

  1. Parallel solution of the symmetric tridiagonal eigenproblem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-01-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed memory MIMD multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hyper-cube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effects of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
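
    The accuracy comparison in both versions of this record rests on two measures, the residual error and the orthogonality of the computed eigenvectors. A minimal sketch of those two checks is given below; numpy's dense symmetric eigensolver stands in for the tridiagonal-specific methods, purely for illustration.

```python
import numpy as np

# Random symmetric tridiagonal matrix (illustrative size).
n = 200
rng = np.random.default_rng(1)
d, e = rng.normal(size=n), rng.normal(size=n - 1)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

# Any eigensolver could be plugged in here; a dense symmetric solver stands in
# for bisection/inverse iteration or divide and conquer.
w, V = np.linalg.eigh(T)

# The two accuracy measures discussed in the thesis:
residual = np.linalg.norm(T @ V - V * w) / np.linalg.norm(T)   # ||T V - V W|| / ||T||
orthogonality = np.linalg.norm(V.T @ V - np.eye(n))            # ||V^T V - I||
print(f"residual = {residual:.2e}, orthogonality = {orthogonality:.2e}")
```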

  2. Massive parallelization of serial inference algorithms for a complex generalized linear model

    PubMed Central

    Suchard, Marc A.; Simpson, Shawn E.; Zorych, Ivan; Ryan, Patrick; Madigan, David

    2014-01-01

    Following a series of high-profile drug safety disasters in recent years, many countries are redoubling their efforts to ensure the safety of licensed medical products. Large-scale observational databases such as claims databases or electronic health record systems are attracting particular attention in this regard, but present significant methodological and computational concerns. In this paper we show how high-performance statistical computation, including graphics processing units, relatively inexpensive highly parallel computing devices, can enable complex methods in large databases. We focus on optimization and massive parallelization of cyclic coordinate descent approaches to fit a conditioned generalized linear model involving tens of millions of observations and thousands of predictors in a Bayesian context. We find orders-of-magnitude improvement in overall run-time. Coordinate descent approaches are ubiquitous in high-dimensional statistics and the algorithms we propose open up exciting new methodological possibilities with the potential to significantly improve drug safety. PMID:25328363
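
    The model in the paper is a Bayesian, conditioned generalized linear model fitted by cyclic coordinate descent on GPUs; the sketch below only illustrates the cyclic coordinate descent structure itself, with ridge-penalized linear regression as a hypothetical stand-in so that each coordinate update has a closed form.

```python
import numpy as np

def ridge_coordinate_descent(X, y, lam=1.0, n_sweeps=50):
    """Cyclic coordinate descent for ridge regression (illustrative stand-in,
    not the authors' Bayesian conditioned GLM)."""
    n, p = X.shape
    beta = np.zeros(p)
    residual = y - X @ beta
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            r_j = residual + X[:, j] * beta[j]          # partial residual excluding j
            new_bj = X[:, j] @ r_j / (col_sq[j] + lam)  # closed-form 1-D minimizer
            residual = r_j - X[:, j] * new_bj
            beta[j] = new_bj
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
true_beta = rng.normal(size=20)
y = X @ true_beta + 0.1 * rng.normal(size=1000)
beta_hat = ridge_coordinate_descent(X, y, lam=0.1)
print(f"max coefficient error: {np.max(np.abs(beta_hat - true_beta)):.3f}")
# The paper's contribution lies in massively parallelizing the work inside such
# sweeps on GPUs to handle tens of millions of observations.
```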

  3. Statistical significance of task related deep brain EEG dynamic changes in the time-frequency domain.

    PubMed

    Chládek, J; Brázdil, M; Halámek, J; Plešinger, F; Jurák, P

    2013-01-01

    We present an off-line analysis procedure for exploring brain activity recorded from intra-cerebral electroencephalographic data (SEEG). The objective is to determine the statistical differences between different types of stimulations in the time-frequency domain. The procedure is based on computing relative signal power change and subsequent statistical analysis. An example of characteristic statistically significant event-related de/synchronization (ERD/ERS) detected across different frequency bands following different oddball stimuli is presented. The method is used for off-line functional classification of different brain areas.
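
    A minimal sketch of the relative signal power change on which the procedure is based is given below; the baseline window and array shapes are illustrative assumptions.

```python
import numpy as np

def relative_power_change(power_tf: np.ndarray, baseline: slice) -> np.ndarray:
    """ERD/ERS as percentage power change against a pre-stimulus baseline.

    power_tf has shape (n_freqs, n_times); negative values indicate
    desynchronization (ERD), positive values synchronization (ERS).
    """
    base = power_tf[:, baseline].mean(axis=1, keepdims=True)
    return 100.0 * (power_tf - base) / base

# Hypothetical time-frequency power map: 30 frequency bins x 500 time samples.
rng = np.random.default_rng(0)
power = rng.random((30, 500)) + 1.0
erd_ers = relative_power_change(power, slice(0, 100))   # first 100 samples as baseline
print(erd_ers.shape)
```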

  4. Local sharpening and subspace wavefront correction with predictive dynamic digital holography

    NASA Astrophysics Data System (ADS)

    Sulaiman, Sennan; Gibson, Steve

    2017-09-01

    Digital holography holds several advantages over conventional imaging and wavefront sensing, chief among these being significantly fewer and simpler optical components and the retrieval of the complex field. Consequently, many imaging and sensing applications including microscopy and optical tweezing have turned to using digital holography. A significant obstacle for digital holography in real-time applications, such as wavefront sensing for high energy laser systems and high-speed imaging for target tracking, is the fact that digital holography is computationally intensive; it requires iterative virtual wavefront propagation and hill-climbing to optimize some sharpness criteria. It has been shown recently that minimum-variance wavefront prediction can be integrated with digital holography and image sharpening to significantly reduce the large number of costly sharpening iterations required to achieve near-optimal wavefront correction. This paper demonstrates further gains in computational efficiency with localized sharpening in conjunction with predictive dynamic digital holography for real-time applications. The method optimizes sharpness of local regions in a detector plane by parallel independent wavefront correction on reduced-dimension subspaces of the complex field in a spectral plane.

  5. EnergyPlus Run Time Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most amount of run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.

  6. Predicting Intracerebral Hemorrhage Growth With the Spot Sign: The Effect of Onset-to-Scan Time.

    PubMed

    Dowlatshahi, Dar; Brouwers, H Bart; Demchuk, Andrew M; Hill, Michael D; Aviv, Richard I; Ufholz, Lee-Anne; Reaume, Michael; Wintermark, Max; Hemphill, J Claude; Murai, Yasuo; Wang, Yongjun; Zhao, Xingquan; Wang, Yilong; Li, Na; Sorimachi, Takatoshi; Matsumae, Mitsunori; Steiner, Thorsten; Rizos, Timolaos; Greenberg, Steven M; Romero, Javier M; Rosand, Jonathan; Goldstein, Joshua N; Sharma, Mukul

    2016-03-01

    Hematoma expansion after acute intracerebral hemorrhage is common and is associated with early deterioration and poor clinical outcome. The computed tomographic angiography (CTA) spot sign is a promising predictor of expansion; however, frequency and predictive values are variable across studies, possibly because of differences in onset-to-CTA time. We performed a patient-level meta-analysis to define the relationship between onset-to-CTA time and frequency and predictive ability of the spot sign. We completed a systematic review for studies of CTA spot sign and hematoma expansion. We subsequently pooled patient-level data on the frequency and predictive values for significant hematoma expansion according to 5 predefined categorized onset-to-CTA times. We calculated spot-sign frequency both as raw and frequency-adjusted rates. Among 2051 studies identified, 12 met our inclusion criteria. Baseline hematoma volume, spot-sign status, and time-to-CTA were available for 1176 patients, and 1039 patients had follow-up computed tomographies for hematoma expansion analysis. The overall spot sign frequency was 26%, decreasing from 39% within 2 hours of onset to 13% beyond 8 hours (P<0.001). There was a significant decrease in hematoma expansion in spot-positive patients as onset-to-CTA time increased (P=0.004), with positive predictive values decreasing from 53% to 33%. The frequency of the CTA spot sign is inversely related to intracerebral hemorrhage onset-to-CTA time. Furthermore, the positive predictive value of the spot sign for significant hematoma expansion decreases as time-to-CTA increases. Our results offer more precise risk stratification for patients with acute intracerebral hemorrhage and will help refine clinical prediction rules for intracerebral hemorrhage expansion. © 2016 American Heart Association, Inc.

  7. Visual cluster analysis and pattern recognition template and methods

    DOEpatents

    Osbourn, G.C.; Martinez, R.F.

    1999-05-04

    A method of clustering using a novel template to define a region of influence is disclosed. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques. 30 figs.

  8. Impact of computer-assisted data collection, evaluation and management on the cancer genetic counselor's time providing patient care.

    PubMed

    Cohen, Stephanie A; McIlvried, Dawn E

    2011-06-01

    Cancer genetic counseling sessions traditionally encompass collecting medical and family history information, evaluating that information for the likelihood of a genetic predisposition for a hereditary cancer syndrome, conveying that information to the patient, offering genetic testing when appropriate, obtaining consent and subsequently documenting the encounter with a clinic note and pedigree. Software programs exist to collect family and medical history information electronically, intending to improve efficiency and simplicity of collecting, managing and storing this data. This study compares the genetic counselor's time spent in cancer genetic counseling tasks in a traditional model and one using computer-assisted data collection, which is then used to generate a pedigree, risk assessment and consult note. Genetic counselor time spent collecting family and medical history and providing face-to-face counseling for a new patient session decreased from an average of 85 minutes to 69 minutes when using the computer-assisted data collection. However, there was no statistically significant change in overall genetic counselor time on all aspects of the genetic counseling process, due to an increased amount of time spent generating an electronic pedigree and consult note. Improvements in the computer program's technical design would potentially minimize data manipulation. Certain aspects of this program, such as electronic collection of family history and risk assessment, appear effective in improving cancer genetic counseling efficiency while others, such as generating an electronic pedigree and consult note, do not.

  9. Using ordinal partition transition networks to analyze ECG data

    NASA Astrophysics Data System (ADS)

    Kulp, Christopher W.; Chobot, Jeremy M.; Freitas, Helena R.; Sprechini, Gene D.

    2016-07-01

    Electrocardiogram (ECG) data from patients with a variety of heart conditions are studied using ordinal pattern partition networks. The ordinal pattern partition networks are formed from the ECG time series by symbolizing the data into ordinal patterns. The ordinal patterns form the nodes of the network and edges are defined through the time ordering of the ordinal patterns in the symbolized time series. A network measure, called the mean degree, is computed from each time series-generated network. In addition, the entropy and the number of non-occurring ordinal patterns (NFP) are computed for each series. The distributions of mean degrees, entropies, and NFPs for each heart condition studied are compared. A statistically significant difference between healthy patients and several groups of unhealthy patients with varying heart conditions is found for the distributions of the mean degrees, unlike for any of the distributions of the entropies or NFPs.
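
    A minimal sketch of the symbolization and mean-degree computation described above is given below, under assumptions of our own (undirected edges, self-loops excluded); the authors' exact network construction may differ.

```python
import numpy as np

def ordinal_patterns(x: np.ndarray, m: int = 4) -> list[tuple]:
    """Symbolize a time series into ordinal patterns of embedding dimension m."""
    return [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]

def mean_degree(patterns: list[tuple]) -> float:
    """Mean degree of the ordinal partition transition network (undirected, no self-loops)."""
    nodes = set(patterns)
    edges = {frozenset(p) for p in zip(patterns, patterns[1:]) if p[0] != p[1]}
    return 2.0 * len(edges) / len(nodes) if nodes else 0.0

# Hypothetical surrogate for an R-R interval series, not real ECG data.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0.0, 20.0 * np.pi, 1000)) + 0.1 * rng.normal(size=1000)
pats = ordinal_patterns(x, m=4)
print(f"distinct patterns observed: {len(set(pats))} of 24 possible")  # 4! = 24
print(f"mean degree: {mean_degree(pats):.2f}")
```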

  10. On the study of control effectiveness and computational efficiency of reduced Saint-Venant model in model predictive control of open channel flow

    NASA Astrophysics Data System (ADS)

    Xu, M.; van Overloop, P. J.; van de Giesen, N. C.

    2011-02-01

    Model predictive control (MPC) of open channel flow is becoming an important tool in water management. The complexity of the prediction model has a large influence on the MPC application in terms of control effectiveness and computational efficiency. The Saint-Venant equations, called SV model in this paper, and the Integrator Delay (ID) model are either accurate but computationally costly, or simple but restricted to allowed flow changes. In this paper, a reduced Saint-Venant (RSV) model is developed through a model reduction technique, Proper Orthogonal Decomposition (POD), on the SV equations. The RSV model keeps the main flow dynamics and functions over a large flow range but is easier to implement in MPC. In the test case of a modeled canal reach, the number of states and disturbances in the RSV model is about 45 and 16 times less than the SV model, respectively. The computational time of MPC with the RSV model is significantly reduced, while the controller remains effective. Thus, the RSV model is a promising means to balance the control effectiveness and computational efficiency.
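
    POD itself is a generic model-reduction step: collect snapshots of the full state, take an SVD, and keep only the leading modes. A minimal sketch is given below; the snapshot data and the retained-energy threshold are illustrative assumptions, not values from the canal test case.

```python
import numpy as np

def pod_basis(snapshots: np.ndarray, energy: float = 0.999):
    """Proper Orthogonal Decomposition of a snapshot matrix (states x times).

    Returns the reduced basis Phi and the number of modes needed to capture
    the requested fraction of the (mean-removed) snapshot energy.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r], r

# Hypothetical snapshot matrix: 450 canal states sampled at 600 time steps.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 600)
snapshots = (np.outer(rng.normal(size=450), np.sin(2.0 * np.pi * t))
             + 0.01 * rng.normal(size=(450, 600)))
Phi, r = pod_basis(snapshots)
print(f"retained {r} POD modes out of {snapshots.shape[0]} states")
# The reduced state Phi.T @ (x - x_mean) then evolves under a projected model,
# which is what makes the RSV model cheap enough to embed in MPC.
```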

  11. Flame-Vortex Studies to Quantify Markstein Numbers Needed to Model Flame Extinction Limits

    NASA Technical Reports Server (NTRS)

    Driscoll, James F.; Feikema, Douglas A.

    2003-01-01

    This project has quantified a database of Markstein numbers for unsteady flames; future work will quantify a database of flame extinction limits for unsteady conditions. Unsteady extinction limits have not been documented previously; both a stretch rate and a residence time must be measured, since extinction requires that the stretch rate be sufficiently large for a sufficiently long residence time. Ma was measured for an inwardly-propagating flame (IPF) that is negatively-stretched under microgravity conditions. Computations also were performed using RUN-1DL to explain the measurements. The Markstein number of an inwardly-propagating flame, for both the microgravity experiment and the computations, is significantly larger than that of an outwardly-propagating flame. The computed profiles of the various species within the flame suggest reasons for this difference. Computed hydrogen concentrations build up ahead of the IPF but not the OPF. Understanding was gained by running the computations for both simplified and full-chemistry conditions. Numerical Simulations: To explain the experimental findings, numerical simulations of both inwardly and outwardly propagating spherical flames (with complex chemistry) were generated using the RUN-1DL code, which includes 16 species and 46 reactions.

  12. Fast neuromimetic object recognition using FPGA outperforms GPU implementations.

    PubMed

    Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph

    2013-08-01

    Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array (FPGA), specifically the Xilinx Virtex 6 ML605 evaluation board with XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.

  13. Cellular automata-based modelling and simulation of biofilm structure on multi-core computers.

    PubMed

    Skoneczny, Szymon

    2015-01-01

    The article presents a mathematical model of biofilm growth for aerobic biodegradation of a toxic carbonaceous substrate. Modelling of biofilm growth has fundamental significance in numerous processes of biotechnology and mathematical modelling of bioreactors. The process following double-substrate kinetics with substrate inhibition proceeding in a biofilm has not been modelled so far by means of cellular automata. Each process in the model proposed, i.e. diffusion of substrates, uptake of substrates, growth and decay of microorganisms and biofilm detachment, is simulated in a discrete manner. It was shown that for flat biofilm of constant thickness, the results of the presented model agree with those of a continuous model. The primary outcome of the study was to propose a mathematical model of biofilm growth; however a considerable amount of focus was also placed on the development of efficient algorithms for its solution. Two parallel algorithms were created, differing in the way computations are distributed. Computer programs were created using OpenMP Application Programming Interface for C++ programming language. Simulations of biofilm growth were performed on three high-performance computers. Speed-up coefficients of computer programs were compared. Both algorithms enabled a significant reduction of computation time. It is important, inter alia, in modelling and simulation of bioreactor dynamics.

  14. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    PubMed

    Yazar, Seyhan; Gooden, George E C; Mackey, David A; Hewitt, Alex W

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  15. Benchmarking Undedicated Cloud Computing Providers for Analysis of Genomic Datasets

    PubMed Central

    Yazar, Seyhan; Gooden, George E. C.; Mackey, David A.; Hewitt, Alex W.

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5–78.2) for E.coli and 53.5% (95% CI: 34.4–72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5–303.1) and 173.9% (95% CI: 134.6–213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE. PMID:25247298

  16. Using the cloud to speed-up calibration of watershed-scale hydrologic models (Invited)

    NASA Astrophysics Data System (ADS)

    Goodall, J. L.; Ercan, M. B.; Castronova, A. M.; Humphrey, M.; Beekwilder, N.; Steele, J.; Kim, I.

    2013-12-01

    This research focuses on using the cloud to address computational challenges associated with hydrologic modeling. One example is calibration of a watershed-scale hydrologic model, which can take days of execution time on typical computers. While parallel algorithms for model calibration exist and some researchers have used multi-core computers or clusters to run these algorithms, these solutions do not fully address the challenge because (i) calibration can still be too time consuming even on multicore personal computers and (ii) few in the community have the time and expertise needed to manage a compute cluster. Given this, another option for addressing this challenge that we are exploring through this work is the use of the cloud for speeding-up calibration of watershed-scale hydrologic models. The cloud used in this capacity provides a means for renting a specific number and type of machines for only the time needed to perform a calibration model run. The cloud allows one to precisely balance the duration of the calibration with the financial costs so that, if the budget allows, the calibration can be performed more quickly by renting more machines. Focusing specifically on the SWAT hydrologic model and a parallel version of the DDS calibration algorithm, we show significant speed-up time across a range of watershed sizes using up to 256 cores to perform a model calibration. The tool provides a simple web-based user interface and the ability to monitor the calibration job submission process during the calibration process. Finally this talk concludes with initial work to leverage the cloud for other tasks associated with hydrologic modeling including tasks related to preparing inputs for constructing place-based hydrologic models.

  17. The performance of low-cost commercial cloud computing as an alternative in computational chemistry.

    PubMed

    Thackston, Russell; Fortenberry, Ryan C

    2015-05-05

    The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon World Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. © 2015 Wiley Periodicals, Inc.

  18. Improving the numerical integration solution of satellite orbits in the presence of solar radiation pressure using modified back differences

    NASA Technical Reports Server (NTRS)

    Lundberg, J. B.; Feulner, M. R.; Abusali, P. A. M.; Ho, C. S.

    1991-01-01

    The method of modified back differences, a technique that significantly reduces the numerical integration errors associated with crossing shadow boundaries using a fixed-mesh multistep integrator without a significant increase in computer run time, is presented. While Hubbard's integral approach can produce significant improvements to the trajectory solution, the interpolation method provides the best overall results. It is demonstrated that iterating on the point mass term correction is also important for achieving the best overall results. It is also shown that the method of modified back differences can be implemented with only a small increase in execution time.

  19. Experimental validation of pulsed column inventory estimators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beyerlein, A.L.; Geldard, J.F.; Weh, R.

    Near-real-time accounting (NRTA) for reprocessing plants relies on the timely measurement of all transfers through the process area and all inventory in the process. It is difficult to measure the inventory of the solvent contactors; therefore, estimation techniques are considered. We have used experimental data obtained at the TEKO facility in Karlsruhe and have applied computer codes developed at Clemson University to analyze this data. For uranium extraction, the computer predictions agree to within 15% of the measured inventories. We believe this study is significant in demonstrating that using theoretical models with a minimum amount of process data may be an acceptable approach to column inventory estimation for NRTA. 15 refs., 7 figs.

  20. The effects of syntactic complexity on the human-computer interaction

    NASA Technical Reports Server (NTRS)

    Chechile, R. A.; Fleischman, R. N.; Sadoski, D. M.

    1986-01-01

    Three divided-attention experiments were performed to evaluate the effectiveness of a syntactic analysis of the primary task of editing flight route-way-point information. For all editing conditions, a formal syntactic expression was developed for the operator's interaction with the computer. In terms of the syntactic expression, four measures of syntactic complexity were examined. Increased syntactic complexity did increase the time to train operators, but once the operators were trained, syntactic complexity did not influence the divided-attention performance. However, the number of memory retrievals required of the operator significantly accounted for the variation in the accuracy, workload, and task completion time found on the different editing tasks under attention-sharing conditions.

  1. Phantom bite: a real or a phantom diagnosis? A case report.

    PubMed

    Sutter, Ben A

    2017-01-01

    This case report describes computer-guided occlusal therapy in a patient who met the unified diagnostic criteria for phantom bite. After a review of the patient's medical history, along with a diagnostic work-up that included cone beam computed tomography, temporomandibular joint vibration analysis, and digital occlusal analysis, problematic dental components were discovered (including prolonged disclusion time and imbalanced bite force). A digital occlusal analyzer evaluated the patient's occlusion and systematically guided the necessary changes. After reduction of the disclusion time and correction of the occlusal force imbalance, the patient reported significant improvement in comfort. The results suggest that phantom bite could be an abnormal occlusal condition and not a psychological or neurologic somatoform disorder.

  2. Association of unipedal standing time and bone mineral density in community-dwelling Japanese women.

    PubMed

    Sakai, A; Toba, N; Takeda, M; Suzuki, M; Abe, Y; Aoyagi, K; Nakamura, T

    2009-05-01

    Bone mineral density (BMD) and physical performance of the lower extremities decrease with age. In community-dwelling Japanese women, unipedal standing time, timed up and go test, and age are associated with BMD while in women aged 70 years and over, unipedal standing time is associated with BMD. The aim of this study was to clarify whether unipedal standing time is significantly associated with BMD in community-dwelling women. The subjects were 90 community-dwelling Japanese women with a mean age of 54.7 years. BMD of the second metacarpal bone was measured by computed X-ray densitometry. We measured unipedal standing time as well as timed up and go test to assess physical performance of the lower extremities. Unipedal standing time decreased with increased age. Timed up and go test significantly correlated with age. Low BMD was significantly associated with old age, short unipedal standing time, and long timed up and go test. Stepwise regression analysis revealed that age, unipedal standing time, and timed up and go test were significant factors associated with BMD. In 21 participants aged 70 years and over, body weight and unipedal standing time, but not age, were significantly associated with BMD. BMD and physical performance of the lower extremities decrease with older age. Unipedal standing time, timed up and go test, and age are associated with BMD in community-dwelling Japanese women. In women aged 70 years and over, unipedal standing time is significantly associated with BMD.

  3. Software for simulation of a computed tomography imaging spectrometer using optical design software

    NASA Astrophysics Data System (ADS)

    Spuhler, Peter T.; Willer, Mark R.; Volin, Curtis E.; Descour, Michael R.; Dereniak, Eustace L.

    2000-11-01

    Our Imaging Spectrometer Simulation Software known under the name Eikon should improve and speed up the design of a Computed Tomography Imaging Spectrometer (CTIS). Eikon uses existing raytracing software to simulate a virtual instrument. Eikon enables designers to virtually run through the design, calibration and data acquisition, saving significant cost and time when designing an instrument. We anticipate that Eikon simulations will improve future designs of CTIS by allowing engineers to explore more instrument options.

  4. Computer aided design environment for the analysis and design of multi-body flexible structures

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Jayant V.; Singh, Ramen P.

    1989-01-01

    A computer aided design environment consisting of the programs NASTRAN, TREETOPS and MATLAB is presented in this paper. With links for data transfer between these programs, the integrated design of multi-body flexible structures is significantly enhanced. The CAD environment is used to model the Space Shuttle/Pinhole Occulater Facility. Then a controller is designed and evaluated in the nonlinear time history sense. Recent enhancements and ongoing research to add more capabilities are also described.

  5. Cumulative IT Use Is Associated with Psychosocial Stress Factors and Musculoskeletal Symptoms.

    PubMed

    So, Billy C L; Cheng, Andy S K; Szeto, Grace P Y

    2017-12-08

    This study aimed to examine the relationship between cumulative use of electronic devices and musculoskeletal symptoms. Smartphones and tablet computers are very popular and people may own or operate several devices at the same time. High prevalence rates of musculoskeletal symptoms associated with intensive computer use have been reported. However, research focusing on mobile devices is only just emerging in recent years. In this study, 285 persons participated including 140 males and 145 females (age range 18-50). The survey consisted of self-reported estimation of daily information technology (IT) exposure hours, tasks performed, psychosocial stress factors and relationship to musculoskeletal discomfort in the past 12 months. Total IT exposure time was an average of 7.38 h (±5.2) per day. The psychosocial factor of "working through pain" showed the most significant association with odds ratio (OR) ranging from 1.078 (95% CI = 1.021-1.138) for elbow discomfort, to 1.111 (95% CI = 1.046-1.180) for shoulder discomfort. Desktop time was also significantly associated with wrist/hand discomfort (OR = 1.103). These findings indicate only a modest relationship but one that is statistically significant with accounting for confounders. It is anticipated that prevalence rates of musculoskeletal disorders would rise in the future with increasing contribution due to psychosocial stress factors.

  6. FPGA-Based High-Performance Embedded Systems for Adaptive Edge Computing in Cyber-Physical Systems: The ARTICo³ Framework.

    PubMed

    Rodríguez, Alfonso; Valverde, Juan; Portilla, Jorge; Otero, Andrés; Riesgo, Teresa; de la Torre, Eduardo

    2018-06-08

    Cyber-Physical Systems are experiencing a paradigm shift in which processing has been relocated to the distributed sensing layer and is no longer performed in a centralized manner. This approach, usually referred to as Edge Computing, demands the use of hardware platforms that are able to manage the steadily increasing requirements in computing performance, while keeping energy efficiency and the adaptability imposed by the interaction with the physical world. In this context, SRAM-based FPGAs and their inherent run-time reconfigurability, when coupled with smart power management strategies, are a suitable solution. However, they usually fail in user accessibility and ease of development. In this paper, an integrated framework to develop FPGA-based high-performance embedded systems for Edge Computing in Cyber-Physical Systems is presented. This framework provides a hardware-based processing architecture, an automated toolchain, and a runtime to transparently generate and manage reconfigurable systems from high-level system descriptions without additional user intervention. Moreover, it provides users with support for dynamically adapting the available computing resources to switch the working point of the architecture in a solution space defined by computing performance, energy consumption and fault tolerance. Results show that it is indeed possible to explore this solution space at run time and prove that the proposed framework is a competitive alternative to software-based edge computing platforms, being able to provide not only faster solutions, but also higher energy efficiency for computing-intensive algorithms with significant levels of data-level parallelism.

  7. A coarse-grid projection method for accelerating incompressible flow computations

    NASA Astrophysics Data System (ADS)

    San, Omer; Staples, Anne E.

    2013-01-01

    We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.
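
    The essential CGP cycle is: restrict the Poisson right-hand side to a coarse grid, solve there, and interpolate the result back for time stepping on the full grid. A minimal one-dimensional sketch of that cycle is given below; it is a toy Poisson problem with assumed grid sizes, not the incompressible flow solver of the paper.

```python
import numpy as np

def solve_poisson_1d(rhs: np.ndarray, h: float) -> np.ndarray:
    """Direct solve of -u'' = rhs with zero Dirichlet boundaries (dense, for clarity)."""
    n = len(rhs)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2
    return np.linalg.solve(A, rhs)

def restrict(fine: np.ndarray) -> np.ndarray:
    """Keep every other interior value as the coarse-grid field."""
    return fine[1::2]

def prolong(coarse: np.ndarray, n_fine: int) -> np.ndarray:
    """Linear interpolation of the coarse solution (plus zero boundaries) onto the fine grid."""
    xp = np.concatenate(([-1.0], np.arange(1, 2 * len(coarse) + 1, 2), [float(n_fine)]))
    fp = np.concatenate(([0.0], coarse, [0.0]))
    return np.interp(np.arange(n_fine), xp, fp)

n_fine = 255
h = 1.0 / (n_fine + 1)
x = np.linspace(h, 1.0 - h, n_fine)
rhs = np.sin(np.pi * x)

coarse_sol = solve_poisson_1d(restrict(rhs), 2.0 * h)   # coarse solve on half the unknowns
fine_sol = prolong(coarse_sol, n_fine)                  # interpolate back for time stepping
exact = np.sin(np.pi * x) / np.pi ** 2
print(f"max error vs exact solution: {np.max(np.abs(fine_sol - exact)):.2e}")
```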

  8. Volunteered Cloud Computing for Disaster Management

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Hao, W.; Chettri, S. R.

    2014-12-01

    Disaster management relies increasingly on interpreting earth observations and running numerical models; which require significant computing capacity - usually on short notice and at irregular intervals. Peak computing demand during event detection, hazard assessment, or incident response may exceed agency budgets; however some of it can be met through volunteered computing, which distributes subtasks to participating computers via the Internet. This approach has enabled large projects in mathematics, basic science, and climate research to harness the slack computing capacity of thousands of desktop computers. This capacity is likely to diminish as desktops give way to battery-powered mobile devices (laptops, smartphones, tablets) in the consumer market; but as cloud computing becomes commonplace, it may offer significant slack capacity -- if its users are given an easy, trustworthy mechanism for participating. Such a "volunteered cloud computing" mechanism would also offer several advantages over traditional volunteered computing: tasks distributed within a cloud have fewer bandwidth limitations; granular billing mechanisms allow small slices of "interstitial" computing at no marginal cost; and virtual storage volumes allow in-depth, reversible machine reconfiguration. Volunteered cloud computing is especially suitable for "embarrassingly parallel" tasks, including ones requiring large data volumes: examples in disaster management include near-real-time image interpretation, pattern / trend detection, or large model ensembles. In the context of a major disaster, we estimate that cloud users (if suitably informed) might volunteer hundreds to thousands of CPU cores across a large provider such as Amazon Web Services. To explore this potential, we are building a volunteered cloud computing platform and targeting it to a disaster management context. Using a lightweight, fault-tolerant network protocol, this platform helps cloud users join parallel computing projects; automates reconfiguration of their virtual machines; ensures accountability for donated computing; and optimizes the use of "interstitial" computing. Initial applications include fire detection from multispectral satellite imagery and flood risk mapping through hydrological simulations.

  9. Television screen time, but not computer use and reading time, is associated with cardio-metabolic biomarkers in a multiethnic Asian population: a cross-sectional study.

    PubMed

    Nang, Ei Ei Khaing; Salim, Agus; Wu, Yi; Tai, E Shyong; Lee, Jeannette; Van Dam, Rob M

    2013-05-30

    Recent evidence shows that sedentary behaviour may be an independent risk factor for cardiovascular diseases, diabetes, cancers and all-cause mortality. However, results are not consistent and different types of sedentary behaviour might have different effects on health. Thus, the aim of this study was to evaluate the association between television screen time, computer/reading time and cardio-metabolic biomarkers in a multiethnic urban Asian population. We also sought to understand the potential mediators of this association. The Singapore Prospective Study Program (2004-2007) was a cross-sectional population-based study in a multiethnic population in Singapore. We studied 3305 Singaporean adults of Chinese, Malay and Indian ethnicity who did not have pre-existing diseases and conditions that could affect their physical activity. Multiple linear regression analysis was used to assess the association of television screen time and computer/reading time with cardio-metabolic biomarkers [blood pressure, lipids, glucose, adiponectin, C reactive protein and homeostasis model assessment of insulin resistance (HOMA-IR)]. Path analysis was used to examine the role of mediators of the observed association. Longer television screen time was significantly associated with higher systolic blood pressure, total cholesterol, triglycerides, C reactive protein, HOMA-IR, and lower adiponectin after adjustment for potential socio-demographic and lifestyle confounders. Dietary factors and body mass index, but not physical activity, were potential mediators that explained most of these associations between television screen time and cardio-metabolic biomarkers. The associations of television screen time with triglycerides and HOMA-IR were only partly explained by dietary factors and body mass index. No association was observed between computer/reading time and worse levels of cardio-metabolic biomarkers. In this urban Asian population, television screen time was associated with worse levels of various cardio-metabolic risk factors. This may reflect detrimental effects of television screen time on dietary habits rather than replacement of physical activity.

  10. Pediatric minor head trauma: do cranial CT scans change the therapeutic approach?

    PubMed

    Andrade, Felipe P; Montoro, Roberto; Oliveira, Renan; Loures, Gabriela; Flessak, Luana; Gross, Roberta; Donnabella, Camille; Puchnick, Andrea; Suzuki, Lisa; Regacini, Rodrigo

    2016-10-01

    1) To verify clinical signs correlated with appropriate cranial computed tomography scan indications and changes in the therapeutic approach in pediatric minor head trauma scenarios. 2) To estimate the radiation exposure of computed tomography scans with low dose protocols in the context of trauma and the additional associated risk. Investigators reviewed the medical records of all children with minor head trauma, which was defined as a Glasgow coma scale ≥13 at the time of admission to the emergency room, who underwent computed tomography scans during the years of 2013 and 2014. A change in the therapeutic approach was defined as a neurosurgical intervention performed within 30 days, hospitalization, >12 hours of observation, or neuro-specialist evaluation. Of the 1006 children evaluated, 101 showed some abnormality on head computed tomography scans, including 49 who were hospitalized, 16 who remained under observation and 36 who were dismissed. No patient underwent neurosurgery. No statistically significant relationship was observed between patient age, time between trauma and admission, or signs/symptoms related to trauma and abnormal imaging results. A statistically significant relationship between abnormal image results and a fall higher than 1.0 meter was observed (p=0.044). The mean effective dose was 2.0 mSv (0.1 to 6.8 mSv), corresponding to an estimated additional cancer risk of 0.05%. A computed tomography scan after minor head injury in pediatric patients did not show clinically relevant abnormalities that could lead to neurosurgical indications. Patients who fell more than 1.0 m were more likely to have changes in imaging tests, although these changes did not require neurosurgical intervention; therefore, the use of computed tomography scans may be questioned in this group. The results support the trend of more careful indications for cranial computed tomography scans for children with minor head trauma.

  11. Development and Application of a Numerical Framework for Improving Building Foundation Heat Transfer Calculations

    NASA Astrophysics Data System (ADS)

    Kruis, Nathanael J. F.

    Heat transfer from building foundations varies significantly in all three spatial dimensions and has important dynamic effects at all timescales, from one hour to several years. With the additional consideration of moisture transport, ground freezing, evapotranspiration, and other physical phenomena, the estimation of foundation heat transfer becomes increasingly sophisticated and computationally intensive to the point where accuracy must be compromised for reasonable computation time. The tools currently available to calculate foundation heat transfer are often either too limited in their capabilities to draw meaningful conclusions or too sophisticated to use in common practice. This work presents Kiva, a new foundation heat transfer computational framework. Kiva provides a flexible environment for testing different numerical schemes, initialization methods, spatial and temporal discretizations, and geometric approximations. Comparisons within this framework provide insight into the balance of computation speed and accuracy relative to highly detailed reference solutions. The accuracy and computational performance of six finite difference numerical schemes are verified against established IEA BESTEST test cases for slab-on-grade heat conduction. Of the schemes tested, the Alternating Direction Implicit (ADI) scheme demonstrates the best balance between accuracy, performance, and numerical stability. Kiva features four approaches for initializing soil temperatures for an annual simulation. A new accelerated initialization approach is shown to significantly reduce the required years of presimulation. Methods of approximating three-dimensional heat transfer within a representative two-dimensional context further improve computational performance. A new approximation called the boundary layer adjustment method is shown to improve accuracy over other established methods with a negligible increase in computation time. This method accounts for the reduced heat transfer from concave foundation shapes, which has not been adequately addressed to date. Within the Kiva framework, three-dimensional heat transfer that can require several days to simulate is approximated in two dimensions in a matter of seconds while maintaining a mean absolute deviation within 3%.
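
    The ADI scheme singled out above can be illustrated with a minimal Peaceman-Rachford step for two-dimensional heat conduction; the grid, boundary treatment, and material properties below are generic assumptions for the sketch, not Kiva's implementation.

```python
import numpy as np
from scipy.linalg import solve_banded

def adi_step(T, alpha, dt, dx):
    """One Peaceman-Rachford ADI step for dT/dt = alpha * (Txx + Tyy)
    on a square grid with fixed (Dirichlet) boundary values."""
    r = alpha * dt / (2.0 * dx**2)
    n = T.shape[0]
    m = n - 2                                   # number of interior nodes per line
    # tridiagonal system: (1 + 2r) on the diagonal, -r off the diagonal
    ab = np.zeros((3, m))
    ab[0, 1:] = -r
    ab[1, :] = 1.0 + 2.0 * r
    ab[2, :-1] = -r

    # half-step 1: implicit in x, explicit in y
    Thalf = T.copy()
    for j in range(1, n - 1):
        rhs = T[1:-1, j] + r * (T[1:-1, j + 1] - 2 * T[1:-1, j] + T[1:-1, j - 1])
        rhs[0] += r * T[0, j]                   # fold in the fixed boundary values
        rhs[-1] += r * T[-1, j]
        Thalf[1:-1, j] = solve_banded((1, 1), ab, rhs)
    # half-step 2: implicit in y, explicit in x
    Tnew = Thalf.copy()
    for i in range(1, n - 1):
        rhs = Thalf[i, 1:-1] + r * (Thalf[i + 1, 1:-1] - 2 * Thalf[i, 1:-1] + Thalf[i - 1, 1:-1])
        rhs[0] += r * Thalf[i, 0]
        rhs[-1] += r * Thalf[i, -1]
        Tnew[i, 1:-1] = solve_banded((1, 1), ab, rhs)
    return Tnew

# usage: a soil slab at 10 C with a 20 C surface, advanced by one hour
T = np.full((51, 51), 10.0)
T[0, :] = 20.0
T = adi_step(T, alpha=1e-6, dt=3600.0, dx=0.1)
```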

  12. Computational time reduction for sequential batch solutions in GNSS precise point positioning technique

    NASA Astrophysics Data System (ADS)

    Martín Furones, Angel; Anquela Julián, Ana Belén; Dimas-Pages, Alejandro; Cos-Gayón, Fernando

    2017-08-01

    Precise point positioning (PPP) is a well-established Global Navigation Satellite System (GNSS) technique that only requires information from the receiver (or rover) to obtain high-precision position coordinates. This is a very interesting and promising technique because it eliminates the need for a reference station near the rover receiver or a network of reference stations, thus reducing the cost of a GNSS survey. From a computational perspective, there are two ways to solve the system of observation equations produced by static PPP: in a single step (so-called batch adjustment) or with a sequential adjustment/filter. The results of each should be the same if both are well implemented. However, if a sequential solution is needed (that is, not only the final coordinates but also those at previous GNSS epochs), as in convergence studies, obtaining it by batch adjustment becomes very time consuming owing to the matrix inversions that accumulate with each consecutive epoch. This is not a problem for the filter solution, which uses information computed in the previous epoch to solve for the current epoch. Filter implementations, however, require extra consideration of user dynamics and parameter state variations between observation epochs, with appropriate stochastic updates of parameter variances from epoch to epoch. These filtering considerations are not needed in batch adjustment, which makes it attractive. The main objective of this research is to significantly reduce the computation time required to obtain sequential results using batch adjustment. The new method we implemented in the adjustment process led to a mean reduction in computational time of 45%.
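
    To make the cost contrast concrete, the hedged sketch below re-solves the accumulated normal equations at every epoch (the sequential-batch approach whose per-epoch cost grows) and compares it with a standard recursive least-squares update. This is generic textbook material, not the authors' new reduction method, and the 4-parameter epoch model is synthetic.

```python
import numpy as np

def batch_sequence(A_epochs, y_epochs):
    """Re-solve the accumulated normal equations after every epoch
    (one full solve per epoch over all data collected so far)."""
    sols, N, b = [], 0.0, 0.0
    for A, y in zip(A_epochs, y_epochs):
        N = N + A.T @ A
        b = b + A.T @ y
        sols.append(np.linalg.solve(N, b))      # repeated O(n^3) work
    return sols

def rls_sequence(A_epochs, y_epochs, n):
    """Recursive least squares: propagate the covariance with the matrix
    inversion lemma instead of re-solving the accumulated system."""
    x, P, sols = np.zeros(n), np.eye(n) * 1e6, []
    for A, y in zip(A_epochs, y_epochs):
        S = A @ P @ A.T + np.eye(len(y))        # innovation covariance
        K = P @ A.T @ np.linalg.inv(S)          # gain
        x = x + K @ (y - A @ x)
        P = (np.eye(n) - K @ A) @ P
        sols.append(x.copy())
    return sols

# usage with synthetic epochs of a 4-parameter model (e.g., 3 coordinates + clock)
rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0, 0.5, 3.0])
A_epochs = [rng.normal(size=(8, 4)) for _ in range(50)]
y_epochs = [A @ x_true + 0.01 * rng.normal(size=8) for A in A_epochs]
print(np.allclose(batch_sequence(A_epochs, y_epochs)[-1],
                  rls_sequence(A_epochs, y_epochs, 4)[-1], atol=1e-3))
```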

  13. Desktop-based computer-assisted orthopedic training system for spinal surgery.

    PubMed

    Rambani, Rohit; Ward, James; Viant, Warren

    2014-01-01

    Simulation and surgical training have moved on since their inception at the end of the last century. Trainees are increasingly exposed to computers and laboratory training in different subspecialties. More needs to be done in orthopedic simulation in spinal surgery. To develop a training system for pedicle screw fixation and validate its effectiveness in a cohort of junior orthopedic trainees. A fully simulated, computer-navigated training system is used to train junior orthopedic trainees to perform pedicle screw insertion in the lumbar spine. Real patient computed tomography scans are used to produce the real-time fluoroscopic images of the lumbar spine. The training system was developed to simulate pedicle screw insertion in the lumbar spine. A total of 12 orthopedic senior house officers performed pedicle screw insertion in the lumbar spine before and after training on the system. The results were assessed based on a scoring system, which included the amount of time taken, accuracy of pedicle screw insertion, and the number of exposures requested to complete the procedure. The results show a significant improvement in the amount of time taken, accuracy of fixation, and the number of exposures after training on the simulator system. This was statistically significant using the paired Student t test (p < 0.05). The fully simulated, computer-navigated training system is an efficient training tool for young orthopedic trainees. This system can be used to augment training in the operating room, allowing trainees to acquire their skills in the comfort of their study room or in the hospital training room. The system has the potential to be used in various other orthopedic procedures for learning technical skills in a manner aimed at ensuring a smooth escalation in task complexity, leading to better performance of procedures in the operating theater. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  14. On the timing problem in optical PPM communications.

    NASA Technical Reports Server (NTRS)

    Gagliardi, R. M.

    1971-01-01

    Investigation of the effects of imperfect timing in a direct-detection (noncoherent) optical system using pulse-position-modulation bits. Special emphasis is placed on the specification of timing accuracy and on an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors, from which average error probabilities can be computed for specific synchronization methods. Of significant importance is the presence of a residual, or irreducible, error probability due entirely to the timing system, which cannot be overcome by the data channel.

  15. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    PubMed

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
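
    The master/worker pattern described above can be mimicked locally with Python's multiprocessing as a stand-in for the MPI-based cloud deployment; the toy photon-history kernel and worker count below are purely illustrative and are not EGS5.

```python
import numpy as np
from multiprocessing import Pool

def simulate_histories(args):
    """Toy Monte Carlo kernel: sample exponential path lengths for a batch of
    particle histories and return their summed depth (illustrative only)."""
    n_histories, seed = args
    rng = np.random.default_rng(seed)
    depths = rng.exponential(scale=1.0, size=n_histories)   # stand-in for transport physics
    return depths.sum(), n_histories

if __name__ == "__main__":
    total, workers = 1_000_000, 8
    chunks = [(total // workers, seed) for seed in range(workers)]
    with Pool(workers) as pool:                  # "master" distributes, "workers" compute
        results = pool.map(simulate_histories, chunks)
    depth_sum, n = map(sum, zip(*results))
    print(f"mean depth over {n} histories: {depth_sum / n:.4f}")
```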

  16. New “Tau-Leap” Strategy for Accelerated Stochastic Simulation

    PubMed Central

    2015-01-01

    The “Tau-Leap” strategy for stochastic simulations of chemical reaction systems due to Gillespie and co-workers has had considerable impact on various applications. This strategy is reexamined with Chebyshev’s inequality for random variables as it provides a rigorous probabilistic basis for a measured τ-leap thus adding significantly to simulation efficiency. It is also shown that existing strategies for simulation times have no probabilistic assurance that they satisfy the τ-leap criterion while the use of Chebyshev’s inequality leads to a specified degree of certainty with which the τ-leap criterion is satisfied. This reduces the loss of sample paths which do not comply with the τ-leap criterion. The performance of the present algorithm is assessed, with respect to one discussed by Cao et al. (J. Chem. Phys. 2006, 124, 044109), a second pertaining to binomial leap (Tian and Burrage J. Chem. Phys. 2004, 121, 10356; Chatterjee et al. J. Chem. Phys. 2005, 122, 024112; Peng et al. J. Chem. Phys. 2007, 126, 224109), and a third regarding the midpoint Poisson leap (Peng et al., 2007; Gillespie J. Chem. Phys. 2001, 115, 1716). The performance assessment is made by estimating the error in the histogram measured against that obtained with the so-called stochastic simulation algorithm. It is shown that the current algorithm displays notably less histogram error than its predecessor for a fixed computation time and, conversely, less computation time for a fixed accuracy. This computational advantage is an asset in repetitive calculations essential for modeling stochastic systems. The importance of stochastic simulations is derived from diverse areas of application in physical and biological sciences, process systems, and economics, etc. Computational improvements such as those reported herein are therefore of considerable significance. PMID:25620846

  17. New "Tau-Leap" Strategy for Accelerated Stochastic Simulation.

    PubMed

    Ramkrishna, Doraiswami; Shu, Che-Chi; Tran, Vu

    2014-12-10

    The "Tau-Leap" strategy for stochastic simulations of chemical reaction systems due to Gillespie and co-workers has had considerable impact on various applications. This strategy is reexamined with Chebyshev's inequality for random variables as it provides a rigorous probabilistic basis for a measured τ-leap thus adding significantly to simulation efficiency. It is also shown that existing strategies for simulation times have no probabilistic assurance that they satisfy the τ-leap criterion while the use of Chebyshev's inequality leads to a specified degree of certainty with which the τ-leap criterion is satisfied. This reduces the loss of sample paths which do not comply with the τ-leap criterion. The performance of the present algorithm is assessed, with respect to one discussed by Cao et al. ( J. Chem. Phys. 2006 , 124 , 044109), a second pertaining to binomial leap (Tian and Burrage J. Chem. Phys. 2004 , 121 , 10356; Chatterjee et al. J. Chem. Phys. 2005 , 122 , 024112; Peng et al. J. Chem. Phys. 2007 , 126 , 224109), and a third regarding the midpoint Poisson leap (Peng et al., 2007; Gillespie J. Chem. Phys. 2001 , 115 , 1716). The performance assessment is made by estimating the error in the histogram measured against that obtained with the so-called stochastic simulation algorithm. It is shown that the current algorithm displays notably less histogram error than its predecessor for a fixed computation time and, conversely, less computation time for a fixed accuracy. This computational advantage is an asset in repetitive calculations essential for modeling stochastic systems. The importance of stochastic simulations is derived from diverse areas of application in physical and biological sciences, process systems, and economics, etc. Computational improvements such as those reported herein are therefore of considerable significance.

  18. Problematic computer game use as expression of Internet addiction and its association with self-rated health in the Lithuanian adolescent population.

    PubMed

    Ustinavičienė, Ruta; Škėmienė, Lina; Lukšienė, Dalia; Radišauskas, Ričardas; Kalinienė, Gintarė; Vasilavičius, Paulius

    2016-01-01

    Computers and the Internet have become an integral part of today's life. Problematic gaming is related to adolescents' health. The aim of our study was to evaluate the prevalence of Internet addiction among 13-18-year-old schoolchildren and its relation to sex, age, time spent playing computer games, game type, and subjective health evaluation. A total of 1806 schoolchildren aged 13-18 years were interviewed. The evaluation of Internet addiction was conducted by the Diagnostic Questionnaire according to Young's methodology. The relations between computer game type, time spent playing computer games, and respondents' Internet addiction were assessed using multivariate logistic regression analysis. One-tenth (10.6%) of the boys and 7.7% of the girls aged 13-18 years were Internet addicted. Internet addiction was associated with the type of computer game (action or combat vs. logic) among boys (OR=2.42; 95% CI, 1.03-5.67) and with the amount of time spent playing computer games per day during the last month (≥5 vs. <5h) among girls (OR=2.10; 95% CI, 1.19-3.70). The boys who were addicted to the Internet were more likely to rate their health poorer in comparison to their peers who were not addicted to the Internet (OR=2.48; 95% CI, 1.33-4.62). Internet addiction was significantly associated with poorer self-rated health among boys. Copyright © 2016 The Lithuanian University of Health Sciences. Production and hosting by Elsevier Urban & Partner Sp. z o.o. All rights reserved.

  19. Molecular dynamics studies of transport properties and equation of state of supercritical fluids

    NASA Astrophysics Data System (ADS)

    Nwobi, Obika C.

    Many chemical propulsion systems operate with one or more of the reactants above the critical point in order to enhance their performance. Most of the computational fluid dynamics (CFD) methods used to predict these flows require accurate information on the transport properties and equation of state at these supercritical conditions. This work involves the determination of transport coefficients and equation of state of supercritical fluids by equilibrium molecular dynamics (MD) simulations on parallel computers using the Green-Kubo formulae and the virial equation of state, respectively. MD involves the solution of the equations of motion of a system of molecules that interact with each other through an intermolecular potential. Provided that an accurate potential can be found for the system of interest, MD can be used regardless of the phase and thermodynamic conditions of the substances involved. The MD program uses the effective Lennard-Jones potential, with system sizes of 1000-1200 molecules and simulations of 2,000,000 time-steps for computing transport coefficients and 200,000 time-steps for pressures. The computer code also uses linked cell lists for efficient sorting of molecules, periodic boundary conditions, and a modified velocity Verlet algorithm for particle displacement. Particle decomposition is used for distributing the molecules to different processors of a parallel computer. Simulations have been carried out on pure argon, nitrogen, oxygen and ethylene at various supercritical conditions, with self-diffusion coefficients, shear viscosity coefficients, thermal conductivity coefficients and pressures computed for most of the conditions. Results compare well with experimental values and National Institute of Standards and Technology (NIST) data. The results show that the number of molecules and the potential cut-off radius have no significant effect on the computed coefficients, while long-time integration is necessary for accurate determination of the coefficients.
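
    As an illustration of the Green-Kubo route mentioned above, the sketch below estimates a self-diffusion coefficient from a velocity autocorrelation function. The synthetic autoregressive velocity data and reduced units are assumptions for the example, not output of the thesis code.

```python
import numpy as np

def self_diffusion_green_kubo(vel, dt):
    """Green-Kubo self-diffusion coefficient D = (1/3) * integral of the
    velocity autocorrelation function (VACF), averaged over time origins
    and molecules. vel has shape (n_steps, n_molecules, 3)."""
    n_steps = vel.shape[0]
    max_lag = n_steps // 2
    vacf = np.empty(max_lag)
    for lag in range(max_lag):
        # dot product of v(t) with v(t + lag), averaged over origins and molecules
        vacf[lag] = np.mean(np.sum(vel[: n_steps - lag] * vel[lag:], axis=2))
    return vacf.sum() * dt / 3.0, vacf           # rectangle-rule time integral

# usage with synthetic, exponentially decorrelating velocities (illustrative)
rng = np.random.default_rng(1)
n_steps, n_mol = 1000, 100
vel = np.empty((n_steps, n_mol, 3))
vel[0] = rng.normal(size=(n_mol, 3))
for i in range(1, n_steps):                      # simple autoregressive velocity process
    vel[i] = 0.95 * vel[i - 1] + np.sqrt(1 - 0.95**2) * rng.normal(size=(n_mol, 3))
D, vacf = self_diffusion_green_kubo(vel, dt=0.002)
print(f"D is approximately {D:.4f} (reduced units)")
```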

  20. High Fidelity Virtual Environments: Does Shader Quality or Higher Polygon Count Models Increase Presence and Learning

    NASA Astrophysics Data System (ADS)

    Horton, Scott

    This research study investigated the effects of high fidelity graphics on both learning and presence, or the "sense of being there," inside a Virtual Learning Environment (VLE). Four versions of a VLE on the subject of the element mercury were created, each with a different combination of high and low fidelity polygon models and high and low fidelity shaders. A total of 76 college-age (18+ years) participants were randomly assigned to one of the four conditions. The participants interacted with the VLE and then completed several posttest measures on learning, presence, and attitudes towards the VLE experience. Demographic information was also collected, including age, computer gameplay experience, number of virtual environments interacted with, gender and time spent in this virtual environment. The data were analyzed with a 2 x 2 between-subjects ANOVA. The main effects of shader fidelity and polygon fidelity were both non-significant for both learning and all presence subscales inside the VLE. In addition, there was no significant interaction between shader fidelity and model fidelity. However, there were two significant results on the supplementary variables. First, gender was found to have a significant main effect on all the presence subscales. Females reported higher average levels of presence than their male counterparts. Second, gameplay hours, or the number of hours a participant played computer games per week, also had a significant main effect on participant score on the learning measure. The participants who reported playing 15+ hours of computer games per week, the highest amount of time in the variable, had the highest score as a group on the mercury learning measure, while those participants who played 1-5 hours per week had the lowest scores.

  1. Use of handheld computers in clinical practice: a systematic review.

    PubMed

    Mickan, Sharon; Atherton, Helen; Roberts, Nia Wyn; Heneghan, Carl; Tilson, Julie K

    2014-07-06

    Many healthcare professionals use smartphones and tablets to inform patient care. Contemporary research suggests that handheld computers may support aspects of clinical diagnosis and management. This systematic review was designed to synthesise high quality evidence to answer the question: Does healthcare professionals' use of handheld computers improve their access to information and support clinical decision making at the point of care? A detailed search was conducted using Cochrane, MEDLINE, EMBASE, PsycINFO, Science and Social Science Citation Indices since 2001. Interventions promoting healthcare professionals seeking information or making clinical decisions using handheld computers were included. Classroom learning and the use of laptop computers were excluded. Two authors independently selected studies, assessed quality using the Cochrane Risk of Bias tool and extracted data. High levels of data heterogeneity negated statistical synthesis. Instead, evidence for effectiveness was summarised narratively, according to each study's aim for assessing the impact of handheld computer use. We included seven randomised trials investigating medical or nursing staffs' use of Personal Digital Assistants. Effectiveness was demonstrated across three distinct functions that emerged from the data: accessing information for clinical knowledge, adherence to guidelines and diagnostic decision making. When healthcare professionals used handheld computers to access clinical information, their knowledge improved significantly more than peers who used paper resources. When clinical guideline recommendations were presented on handheld computers, clinicians made significantly safer prescribing decisions and adhered more closely to recommendations than peers using paper resources. Finally, healthcare professionals made significantly more appropriate diagnostic decisions using clinical decision making tools on handheld computers compared to colleagues who did not have access to these tools. For these clinical decisions, the numbers needed to test/screen were all less than 11. Healthcare professionals' use of handheld computers may improve their information seeking, adherence to guidelines and clinical decision making. Handheld computers can provide real time access to and analysis of clinical information. The integration of clinical decision support systems within handheld computers offers clinicians the highest level of synthesised evidence at the point of care. Future research is needed to replicate these early results and to identify beneficial clinical outcomes.

  2. Use of handheld computers in clinical practice: a systematic review

    PubMed Central

    2014-01-01

    Background Many healthcare professionals use smartphones and tablets to inform patient care. Contemporary research suggests that handheld computers may support aspects of clinical diagnosis and management. This systematic review was designed to synthesise high quality evidence to answer the question: Does healthcare professionals’ use of handheld computers improve their access to information and support clinical decision making at the point of care? Methods A detailed search was conducted using Cochrane, MEDLINE, EMBASE, PsycINFO, Science and Social Science Citation Indices since 2001. Interventions promoting healthcare professionals seeking information or making clinical decisions using handheld computers were included. Classroom learning and the use of laptop computers were excluded. Two authors independently selected studies, assessed quality using the Cochrane Risk of Bias tool and extracted data. High levels of data heterogeneity negated statistical synthesis. Instead, evidence for effectiveness was summarised narratively, according to each study’s aim for assessing the impact of handheld computer use. Results We included seven randomised trials investigating medical or nursing staffs’ use of Personal Digital Assistants. Effectiveness was demonstrated across three distinct functions that emerged from the data: accessing information for clinical knowledge, adherence to guidelines and diagnostic decision making. When healthcare professionals used handheld computers to access clinical information, their knowledge improved significantly more than peers who used paper resources. When clinical guideline recommendations were presented on handheld computers, clinicians made significantly safer prescribing decisions and adhered more closely to recommendations than peers using paper resources. Finally, healthcare professionals made significantly more appropriate diagnostic decisions using clinical decision making tools on handheld computers compared to colleagues who did not have access to these tools. For these clinical decisions, the numbers needed to test/screen were all less than 11. Conclusion Healthcare professionals’ use of handheld computers may improve their information seeking, adherence to guidelines and clinical decision making. Handheld computers can provide real time access to and analysis of clinical information. The integration of clinical decision support systems within handheld computers offers clinicians the highest level of synthesised evidence at the point of care. Future research is needed to replicate these early results and to identify beneficial clinical outcomes. PMID:24998515

  3. Using Wavelet Analysis To Assist in Identification of Significant Events in Molecular Dynamics Simulations.

    PubMed

    Heidari, Zahra; Roe, Daniel R; Galindo-Murillo, Rodrigo; Ghasemi, Jahan B; Cheatham, Thomas E

    2016-07-25

    Long time scale molecular dynamics (MD) simulations of biological systems are becoming increasingly commonplace due to the availability of both large-scale computational resources and significant advances in the underlying simulation methodologies. Therefore, it is useful to investigate and develop data mining and analysis techniques to quickly and efficiently extract the biologically relevant information from the incredible amount of generated data. Wavelet analysis (WA) is a technique that can quickly reveal significant motions during an MD simulation. Here, the application of WA to well-converged, long time scale (tens of μs) simulations of a DNA helix is described. We show how WA combined with a simple clustering method can be used to identify both the physical and temporal locations of events with significant motion in MD trajectories. We also show that WA can not only distinguish and quantify the locations and time scales of significant motions, but also that, by changing the maximum time scale of the WA, a more complete characterization of these motions can be obtained. This allows motions of different time scales to be identified or ignored as desired.
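
    A hedged sketch of the basic idea: a continuous wavelet transform of a one-dimensional, trajectory-derived trace localizes a transient motion in both time and scale. It uses PyWavelets (pywt.cwt) with a Morlet wavelet and synthetic data as stand-ins for the trajectory analysis described in the paper.

```python
import numpy as np
import pywt  # PyWavelets

# Synthetic per-frame "distance from average structure" trace: mostly noise,
# with a transient conformational event between frames 400 and 600.
rng = np.random.default_rng(2)
n = 1024
signal = 0.2 * rng.normal(size=n)
signal[400:600] += np.sin(np.linspace(0, 6 * np.pi, 200))   # the "event"

scales = np.arange(2, 128)                     # time scales to probe, in frames
coefs, freqs = pywt.cwt(signal, scales, "morl")
power = np.abs(coefs) ** 2                     # wavelet power, shape (scales, time)

# Locate the time and scale of the strongest motion
i_scale, i_time = np.unravel_index(np.argmax(power), power.shape)
print(f"strongest motion near frame {i_time} at a scale of about {scales[i_scale]} frames")
```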

  4. Use of a Tracing Task to Assess Visuomotor Performance: Effects of Age, Sex, and Handedness

    PubMed Central

    2013-01-01

    Background. Visuomotor abnormalities are common in aging and age-related disease, yet difficult to quantify. This study investigated the effects of healthy aging, sex, and handedness on the performance of a tracing task. Participants (n = 150, aged 21–95 years, 75 females) used a stylus to follow a moving target around a circle on a tablet computer with their dominant and nondominant hands. Participants also performed the Trail Making Test (a measure of executive function). Methods. Deviations from the circular path were computed to derive an “error” time series. For each time series, absolute mean, variance, and complexity index (a proposed measure of system functionality and adaptability) were calculated. Using the moving target and stylus coordinates, the percentage of task time within the target region and the cumulative micropause duration (a measure of motion continuity) were computed. Results. All measures showed significant effects of aging (p < .0005). Post hoc age group comparisons showed that with increasing age, the absolute mean and variance of the error increased, complexity index decreased, percentage of time within the target region decreased, and cumulative micropause duration increased. Only complexity index showed a significant difference between dominant versus nondominant hands within each age group (p < .0005). All measures showed relationships to the Trail Making Test (p < .05). Conclusions. Measures derived from a tracing task identified performance differences in healthy individuals as a function of age, sex, and handedness. Studies in populations with specific neuromotor syndromes are warranted to test the utility of measures based on the dynamics of tracking a target as a clinical assessment tool. PMID:23388876
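
    The error time series and two of the summary measures can be sketched directly from stylus and target coordinates, as below; the synthetic trajectory, target tolerance, and function names are illustrative assumptions, and the complexity index and micropause computations are omitted.

```python
import numpy as np

def tracing_measures(stylus_xy, target_xy, center, radius, target_tol):
    """Derive the radial "error" time series (deviation from the circular path)
    and summary measures from stylus and moving-target coordinates."""
    r_stylus = np.linalg.norm(stylus_xy - center, axis=1)
    error = r_stylus - radius                       # signed deviation from the circle
    dist_to_target = np.linalg.norm(stylus_xy - target_xy, axis=1)
    return {
        "abs_mean_error": np.mean(np.abs(error)),
        "error_variance": np.var(error),
        "pct_time_in_target": 100.0 * np.mean(dist_to_target < target_tol),
    }

# usage with synthetic data: the target moves on the circle, the stylus lags with noise
t = np.linspace(0, 2 * np.pi, 500)
center, radius = np.array([0.0, 0.0]), 5.0
target = center + radius * np.column_stack([np.cos(t), np.sin(t)])
stylus = center + (radius + 0.1 * np.random.randn(500))[:, None] * \
         np.column_stack([np.cos(t - 0.05), np.sin(t - 0.05)])
print(tracing_measures(stylus, target, center, radius, target_tol=0.5))
```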

  5. Method and computer program product for maintenance and modernization backlogging

    DOEpatents

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
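
    A one-line sketch of the stated relation, with hypothetical figures; the argument names are placeholders rather than the patent's data structures.

```python
def future_facility_conditions(maintenance_cost, modernization_factor, backlog_factor):
    """Future facility conditions for a given time period, per the embodiment
    described above: the sum of the three time-period-specific terms."""
    return maintenance_cost + modernization_factor + backlog_factor

# usage (hypothetical figures for one fiscal year)
print(future_facility_conditions(1.2e6, 0.4e6, 0.9e6))   # 2500000.0
```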

  6. Development of a Distributed Parallel Computing Framework to Facilitate Regional/Global Gridded Crop Modeling with Various Scenarios

    NASA Astrophysics Data System (ADS)

    Jang, W.; Engda, T. A.; Neff, J. C.; Herrick, J.

    2017-12-01

    Many crop models are increasingly used to evaluate crop yields at regional and global scales. However, implementation of these models across large areas using fine-scale grids is limited by computational time requirements. In order to facilitate global gridded crop modeling with various scenarios (i.e., different crops, management schedules, fertilizer, and irrigation) using the Environmental Policy Integrated Climate (EPIC) model, we developed a distributed parallel computing framework in Python. Our local desktop with 14 cores (28 threads) was used to test the distributed parallel computing framework in Iringa, Tanzania, which has 406,839 grid cells. High-resolution soil data, SoilGrids (250 x 250 m), and climate data, AgMERRA (0.25 x 0.25 deg), were also used as input data for the gridded EPIC model. The framework includes a master file for parallel computing, an input database, input data formatters, EPIC model execution, and output analyzers. Through the master file for parallel computing, the EPIC simulation is divided into jobs across a user-defined number of CPU threads. Then, using the EPIC input data formatters, the raw database is formatted into EPIC input data, and the formatted data moves into the EPIC simulation jobs. Then, 28 EPIC jobs run simultaneously, and only the result files of interest are parsed and moved into the output analyzers. We applied various scenarios with seven different slopes and twenty-four fertilizer ranges. Parallelized input generators create the different scenarios as a list for distributed parallel computing. After all simulations are completed, parallelized output analyzers are used to analyze all outputs according to the different scenarios. This saves significant computing time and resources, making it possible to conduct gridded modeling at regional to global scales with high-resolution data. For example, serial processing for the Iringa test case would require 113 hours, while using the framework developed in this study requires only approximately 6 hours, a nearly 95% reduction in computing time.
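
    The job-splitting pattern described above can be sketched with Python's multiprocessing; the scenario lists, placeholder EPIC call, and chunk size below are assumptions for illustration, not the study's actual framework code.

```python
from itertools import product
from multiprocessing import Pool

SLOPES = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0]          # seven slope classes (illustrative)
FERT_RATES = list(range(0, 240, 10))                   # twenty-four fertilizer ranges (illustrative)

def run_epic_job(job):
    """Placeholder for one EPIC run: format the inputs for a grid cell/scenario,
    execute the model, and parse only the outputs of interest."""
    cell_id, slope, fert = job
    # In the real framework this would write EPIC input files, call the EPIC
    # binary (e.g., via subprocess), and parse its output; here we just echo.
    return cell_id, slope, fert, "ok"

if __name__ == "__main__":
    grid_cells = range(1000)                           # stand-in for the 406,839 Iringa cells
    jobs = list(product(grid_cells, SLOPES, FERT_RATES))
    with Pool(28) as pool:                             # 28 threads, as in the study
        for result in pool.imap_unordered(run_epic_job, jobs, chunksize=500):
            pass                                       # hand results to the output analyzers
```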

  7. Computerized Fleet Maintenance.

    ERIC Educational Resources Information Center

    Cataldo, John J.

    The computerization of school bus maintenance records by the Niskayuna (New York) Central School District enabled the district's transportation department to engage in management practices resulting in significant savings. The district obtains computer analyses of the work performed on all vehicles, including time spent, parts, labor, costs,…

  8. Mesoscale Models of Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Boghosian, Bruce M.; Hadjiconstantinou, Nicolas G.

    During the last half century, enormous progress has been made in the field of computational materials modeling, to the extent that in many cases computational approaches are used in a predictive fashion. Despite this progress, modeling of general hydrodynamic behavior remains a challenging task. One of the main challenges stems from the fact that hydrodynamics manifests itself over a very wide range of length and time scales. On one end of the spectrum, one finds the fluid's "internal" scale characteristic of its molecular structure (in the absence of quantum effects, which we omit in this chapter). On the other end, the "outer" scale is set by the characteristic sizes of the problem's domain. The resulting scale separation, or lack thereof, as well as the existence of intermediate scales, are key to determining the optimal approach. Successful treatments require a judicious choice of the level of description, which is a delicate balancing act between the conflicting requirements of fidelity and manageable computational cost: a coarse description typically requires models for underlying processes occurring at smaller length and time scales; on the other hand, a fine-scale model will incur a significantly larger computational cost.

  9. Parallelization strategies for continuum-generalized method of moments on the multi-thread systems

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Handhika, T.; Ernastuti, Kerami, D.

    2017-07-01

    The Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the Maximum Likelihood estimator, by using a continuum of moment conditions in the GMM framework. However, this computation takes a very long time because the regularization parameter must be optimized. Unfortunately, these calculations are usually processed sequentially, even though all modern computers are supported by hierarchical memory systems and hyperthreading technology, which allow for parallel computing. This paper aims to speed up the C-GMM calculation by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are identified in the original C-GMM algorithm. Two of these regions contribute significantly to the reduction of computational time: the outer loop and the inner loop. This parallel algorithm is implemented with the standard shared-memory application programming interface, Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
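
    The paper's outer-loop strategy is an OpenMP shared-memory construct; as a loose Python analogue (a named substitution, not the authors' code), the sketch below farms the outer loop over candidate regularization parameters out to a process pool, with a placeholder objective function.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def cgmm_objective(reg_param):
    """Placeholder for the C-GMM criterion evaluated at one candidate
    regularization parameter (the expensive inner computation)."""
    rng = np.random.default_rng(int(reg_param * 1e6) % 2**32)
    work = rng.normal(size=(200, 200))
    return reg_param, np.linalg.norm(work @ work.T)   # stand-in for the real criterion

if __name__ == "__main__":
    candidates = np.logspace(-6, 0, 48)               # outer loop over regularization parameters
    with ProcessPoolExecutor() as pool:               # analogue of the OpenMP outer-loop strategy
        results = list(pool.map(cgmm_objective, candidates))
    best = min(results, key=lambda r: r[1])
    print(f"selected regularization parameter: {best[0]:.2e}")
```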

  10. Simple and practical approach for computing the ray Hessian matrix in geometrical optics.

    PubMed

    Lin, Psang Dain

    2018-02-01

    A method is proposed for simplifying the computation of the ray Hessian matrix in geometrical optics by replacing the angular variables in the system variable vector with their equivalent cosine and sine functions. The variable vector of a boundary surface is similarly defined in such a way as to exclude any angular variables. It is shown that the proposed formulations reduce the computation time of the Hessian matrix by around 10 times compared to the previous method reported by the current group in Advanced Geometrical Optics (2016). Notably, the method proposed in this study involves only polynomial differentiation, i.e., trigonometric function calls are not required. As a consequence, the computation complexity is significantly reduced. Five illustrative examples are given. The first three examples show that the proposed method is applicable to the determination of the Hessian matrix for any pose matrix, irrespective of the order in which the rotation and translation motions are specified. The last two examples demonstrate the use of the proposed Hessian matrix in determining the axial and lateral chromatic aberrations of a typical optical system.

  11. An FPGA computing demo core for space charge simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Jinyuan; Huang, Yifei; /Fermilab

    2009-01-01

    In accelerator physics, space charge simulation requires a large amount of computing power. In a particle system, each calculation requires time- and resource-consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated non-calculating operations that are indispensable in regular micro-processors (e.g. instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing Coulomb's force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with the nine to ten most significant non-zero bits. At 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz micro-processor by about a factor of 10. Temperature and power consumption of FPGAs were also lower than those of micro-processors. Fast and convenient, FPGAs can serve as alternatives to time-consuming micro-processors for space charge simulation.
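
    The table-lookup idea can be sketched in software: normalize r² to a mantissa and exponent, index a small table by the leading mantissa bits, and rescale. The 9-bit table below is an illustrative Python model of the approach, not the FPGA core itself.

```python
import math
import numpy as np

BITS = 9
TABLE = np.array([(1.0 + i / 2**BITS) ** -1.5 for i in range(2**BITS)])

def inv_sqrt_cube(r2):
    """Approximate (r^2)^(-3/2) using a look-up table addressed by the BITS most
    significant bits below the leading one of r^2 (a software sketch of the
    memory look-up described above)."""
    mant, exp = math.frexp(r2)          # r2 = mant * 2**exp with mant in [0.5, 1)
    mant, exp = mant * 2.0, exp - 1     # renormalize so mant is in [1, 2)
    index = int((mant - 1.0) * 2**BITS) # top BITS bits of the fractional mantissa
    return TABLE[index] * 2.0 ** (-1.5 * exp)

# usage: compare against the exact value for a few squared separations
for r2 in (0.37, 1.0, 6.5, 1234.0):
    approx, exact = inv_sqrt_cube(r2), r2 ** -1.5
    print(f"r2 = {r2:8.2f}  approx = {approx:.6e}  rel. err = {abs(approx - exact) / exact:.2e}")
```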

  12. Quantum computation with realistic magic-state factories

    NASA Astrophysics Data System (ADS)

    O'Gorman, Joe; Campbell, Earl T.

    2017-03-01

    Leading approaches to fault-tolerant quantum computation dedicate a significant portion of the hardware to computational factories that churn out high-fidelity ancillas called magic states. Consequently, efficient and realistic factory design is of paramount importance. Here we present the most detailed resource assessment to date of magic-state factories within a surface code quantum computer, along the way introducing a number of techniques. We show that the block codes of Bravyi and Haah [Phys. Rev. A 86, 052329 (2012), 10.1103/PhysRevA.86.052329] have been systematically undervalued; we track correlated errors both numerically and analytically, providing fidelity estimates without appeal to the union bound. We also introduce a subsystem code realization of these protocols with constant time and low ancilla cost. Additionally, we confirm that magic-state factories have space-time costs that scale as a constant factor of surface code costs. We find that the magic-state factory required for postclassical factoring can be as small as 6.3 million data qubits, ignoring ancilla qubits, assuming 10^-4 error gates and the availability of long-range interactions.

  13. Performance evaluation of GPU parallelization, space-time adaptive algorithms, and their combination for simulating cardiac electrophysiology.

    PubMed

    Sachetto Oliveira, Rafael; Martins Rocha, Bernardo; Burgarelli, Denise; Meira, Wagner; Constantinides, Christakis; Weber Dos Santos, Rodrigo

    2018-02-01

    The use of computer models as a tool for the study and understanding of the complex phenomena of cardiac electrophysiology has attained increased importance nowadays. At the same time, the increased complexity of the biophysical processes translates into complex computational and mathematical models. To speed up cardiac simulations and to allow more precise and realistic uses, 2 different techniques have been traditionally exploited: parallel computing and sophisticated numerical methods. In this work, we combine a modern parallel computing technique based on multicore and graphics processing units (GPUs) and a sophisticated numerical method based on a new space-time adaptive algorithm. We evaluate each technique alone and in different combinations: multicore and GPU, multicore and GPU and space adaptivity, multicore and GPU and space adaptivity and time adaptivity. All the techniques and combinations were evaluated under different scenarios: 3D simulations on slabs, 3D simulations on a ventricular mouse mesh, i.e., complex geometry, sinus rhythm, and arrhythmic conditions. Our results suggest that multicore and GPU accelerate the simulations by an approximate factor of 33×, whereas the speedups attained by the space-time adaptive algorithms were approximately 48×. Nevertheless, by combining all the techniques, we obtained speedups that ranged between 165× and 498×. The tested methods were able to reduce the execution time of a simulation by more than 498× for a complex cellular model in a slab geometry and by 165× in a realistic heart geometry simulating spiral waves. The proposed methods will allow faster and more realistic simulations in a feasible time with no significant loss of accuracy. Copyright © 2017 John Wiley & Sons, Ltd.

  14. 24 CFR 180.405 - Time computations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Time computations. 180.405 Section... Hearing § 180.405 Time computations. (a) In computing time under this part, the time period begins the day... Saturday, Sunday, or legal holiday observed by the Federal Government, in which case the time period...

  15. 24 CFR 180.405 - Time computations.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 1 2012-04-01 2012-04-01 false Time computations. 180.405 Section... Hearing § 180.405 Time computations. (a) In computing time under this part, the time period begins the day... Saturday, Sunday, or legal holiday observed by the Federal Government, in which case the time period...

  16. Temporal acceleration of spatially distributed kinetic Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Abhijit; Vlachos, Dionisios G.

    The computational intensity of kinetic Monte Carlo (KMC) simulation is a major impediment in simulating large length and time scales. In recent work, an approximate method for KMC simulation of spatially uniform systems, termed the binomial τ-leap method, was introduced [A. Chatterjee, D.G. Vlachos, M.A. Katsoulakis, Binomial distribution based τ-leap accelerated stochastic simulation, J. Chem. Phys. 122 (2005) 024112], where molecular bundles instead of individual processes are executed over coarse-grained time increments. This temporal coarse-graining can lead to significant computational savings, but its generalization to spatial lattice KMC simulation has not been realized yet. Here we extend the binomial τ-leap method to lattice KMC simulations by combining it with spatially adaptive coarse-graining. Absolute stability and computational speed-up analyses for spatial systems, along with simulations, provide insights into the conditions where accuracy and substantial acceleration of the new spatio-temporal coarse-graining method are ensured. Model systems demonstrate that the r-time increment criterion of Chatterjee et al. obeys the absolute stability limit for values of r up to near 1.

  17. Seismpol, a Visual-Basic computer program for interactive and automatic earthquake waveform analysis

    NASA Astrophysics Data System (ADS)

    Patanè, Domenico; Ferrari, Ferruccio

    1997-11-01

    A Microsoft Visual-Basic computer program for waveform analysis of seismic signals is presented. The program combines interactive and automatic processing of digital signals using data recorded by three-component seismic stations. The analysis procedure can be used in either an interactive earthquake analysis or an automatic on-line processing of seismic recordings. The algorithm works in the time domain using the Covariance Matrix Decomposition method (CMD), so that polarization characteristics may be computed continuously in real time and seismic phases can be identified and discriminated. Visual inspection of the particle motion in orthogonal planes of projection (hodograms) reduces the danger of misinterpretation derived from the application of the polarization filter. The choice of time window and frequency intervals improves the quality of the extracted polarization information. In fact, the program uses a band-pass Butterworth filter to process the signals in the frequency domain by decomposing a selected signal window into a series of narrow frequency bands. Significant results supported by well defined polarizations and source azimuth estimates for P and S phases are also obtained for short-period seismic events (local microearthquakes).
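
    A minimal sketch of time-domain covariance matrix decomposition on one three-component window: the eigen-decomposition yields a rectilinearity measure and the azimuth of the dominant particle motion. The synthetic seismogram and the particular rectilinearity formula are assumptions for the example, not the program's exact definitions.

```python
import numpy as np

def polarization_cmd(z, n, e):
    """Covariance matrix decomposition of a three-component window (Z, N, E):
    returns rectilinearity and the azimuth (degrees from north) of the
    principal eigenvector of the particle motion (sign ambiguity of +/-180 deg)."""
    X = np.vstack([z, n, e])
    C = np.cov(X)                                   # 3x3 covariance matrix
    w, V = np.linalg.eigh(C)                        # eigenvalues in ascending order
    rectilinearity = 1.0 - (w[1] + w[0]) / (2.0 * w[2])
    v = V[:, 2]                                     # principal eigenvector (Z, N, E)
    azimuth = np.degrees(np.arctan2(v[2], v[1])) % 360.0
    return rectilinearity, azimuth

# usage: a synthetic P-type arrival polarized at 30 degrees azimuth, plus noise
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
pulse = np.exp(-((t - 0.5) / 0.05) ** 2) * np.sin(2 * np.pi * 10 * t)
az = np.radians(30.0)
z = 0.8 * pulse + 0.05 * rng.normal(size=t.size)
n = np.cos(az) * pulse + 0.05 * rng.normal(size=t.size)
e = np.sin(az) * pulse + 0.05 * rng.normal(size=t.size)
print(polarization_cmd(z, n, e))    # rectilinearity near 1, azimuth near 30 (or 210)
```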

  18. Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes.

    PubMed

    Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S

    2015-02-09

    A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.

  19. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we proposed a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPU). Then, we designed an imaging point parallel strategy to achieve an optimal parallel computing performance. Afterward, we adopted an asynchronous double buffering scheme for multi-stream to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.

  20. NASA's Participation in the National Computational Grid

    NASA Technical Reports Server (NTRS)

    Feiereisen, William J.; Zornetzer, Steve F. (Technical Monitor)

    1998-01-01

    Over the last several years it has become evident that the character of NASA's supercomputing needs has changed. One of the major missions of the agency is to support the design and manufacture of aero- and space-vehicles with technologies that will significantly reduce their cost. It is becoming clear that improvements in the process of aerospace design and manufacturing will require a high performance information infrastructure that allows geographically dispersed teams to draw upon resources that are broader than traditional supercomputing. A computational grid draws together our information resources into one system. We can foresee the time when a Grid will allow engineers and scientists to use the tools of supercomputers, databases and on-line experimental devices in a virtual environment to collaborate with distant colleagues. The concept of a computational grid has been spoken of for many years, but several events in recent times are conspiring to allow us to actually build one. In late 1997 the National Science Foundation initiated the Partnerships for Advanced Computational Infrastructure (PACI), which is built around the idea of distributed high performance computing. The Alliance, led by the National Computational Science Alliance (NCSA), and the National Partnership for Advanced Computational Infrastructure (NPACI), led by the San Diego Supercomputing Center, have been instrumental in drawing together the "Grid Community" to identify the technology bottlenecks and propose a research agenda to address them. During the same period NASA has begun to reformulate parts of two major high performance computing research programs to concentrate on distributed high performance computing and has banded together with the PACI centers to address the research agenda in common.
