Sample records for total computation time

  1. Cross-sectional associations of total sitting and leisure screen time with cardiometabolic risk in adults. Results from the HUNT Study, Norway.

    PubMed

    Chau, Josephine Y; Grunseit, Anne; Midthjell, Kristian; Holmen, Jostein; Holmen, Turid L; Bauman, Adrian E; van der Ploeg, Hidde P

    2014-01-01

    To examine associations of total sitting time, TV-viewing and leisure-time computer use with cardiometabolic risk biomarkers in adults. Population based cross-sectional study. Waist circumference, BMI, total cholesterol, HDL cholesterol, blood pressure, non-fasting glucose, gamma glutamyltransferase (GGT) and triglycerides were measured in 48,882 adults aged 20 years or older from the Nord-Trøndelag Health Study 2006-2008 (HUNT3). Adjusted multiple regression models were used to test for associations between these biomarkers and self-reported total sitting time, TV-viewing and leisure-time computer use in the whole sample and by cardiometabolic disease status sub-groups. In the whole sample, reporting total sitting time ≥10 h/day was associated with poorer BMI, waist circumference, total cholesterol, HDL cholesterol, diastolic blood pressure, systolic blood pressure, non-fasting glucose, GGT and triglyceride levels compared to those reporting total sitting time <4h/day (all p<0.05). TV-viewing ≥4 h/day was associated with poorer BMI, waist circumference, total cholesterol, HDL cholesterol, systolic blood pressure, GGT and triglycerides compared to TV-viewing <1h/day (all p<0.05). Leisure-time computer use ≥1 h/day was associated with poorer BMI, total cholesterol, diastolic blood pressure, GGT and triglycerides compared with those reporting no leisure-time computing. Sub-group analyses by cardiometabolic disease status showed similar patterns in participants free of cardiometabolic disease, while similar albeit non-significant patterns were observed in those with cardiometabolic disease. Total sitting time, TV-viewing and leisure-time computer use are associated with poorer cardiometabolic risk profiles in adults. Reducing sedentary behaviour throughout the day and limiting TV-viewing and leisure-time computer use may have health benefits. Copyright © 2013 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  2. Job-shop scheduling applied to computer vision

    NASA Astrophysics Data System (ADS)

    Sebastian y Zuniga, Jose M.; Torres-Medina, Fernando; Aracil, Rafael; Reinoso, Oscar; Jimenez, Luis M.; Garcia, David

    1997-09-01

This paper presents a method for minimizing the total elapsed time spent by n tasks running on m different processors working in parallel. The developed algorithm not only minimizes the total elapsed time but also reduces the idle time and the waiting time of in-process tasks. This is very important in applications of computer vision in which the time to finish the total process is particularly critical: quality control in industrial inspection, real-time computer vision, guided robots. The scheduling algorithm is based on two matrices obtained from the precedence relationships between tasks, and on the data derived from those matrices. The developed scheduling algorithm has been tested in an application of quality control using computer vision. The results obtained have been satisfactory in the application of different image processing algorithms.
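    The matrix-based algorithm itself is not reproduced in the abstract; as a rough illustration of the underlying problem (assigning precedence-constrained tasks to m parallel processors so as to minimize the total elapsed time), here is a minimal greedy list-scheduling sketch. The task durations, precedence edges and tie-breaking rule are invented for the example and are not the authors' method.

```python
from collections import defaultdict

def list_schedule(durations, precedences, m):
    """Greedy list scheduling of precedence-constrained tasks on m identical
    processors: whenever a processor is free, start the ready task that became
    available earliest (a simple heuristic, not the paper's matrix algorithm)."""
    n = len(durations)
    succ = defaultdict(list)
    indegree = [0] * n
    for a, b in precedences:          # task a must finish before task b
        succ[a].append(b)
        indegree[b] += 1

    finish = [0.0] * n                # finish time of each task
    ready_at = [0.0] * n              # earliest time a task can start
    ready = [t for t in range(n) if indegree[t] == 0]
    proc_free = [0.0] * m             # time each processor becomes idle
    scheduled = 0

    while scheduled < n:
        # pick the ready task with the earliest release time
        task = min(ready, key=lambda t: ready_at[t])
        ready.remove(task)
        p = min(range(m), key=lambda i: proc_free[i])
        start = max(proc_free[p], ready_at[task])
        finish[task] = start + durations[task]
        proc_free[p] = finish[task]
        scheduled += 1
        for nxt in succ[task]:
            indegree[nxt] -= 1
            ready_at[nxt] = max(ready_at[nxt], finish[task])
            if indegree[nxt] == 0:
                ready.append(nxt)

    return max(finish)                # makespan (total elapsed time)

# Example: 5 image-processing tasks on 2 processors
durations = [3, 2, 4, 1, 2]
precedences = [(0, 2), (1, 2), (2, 4), (3, 4)]
print(list_schedule(durations, precedences, 2))
```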

  3. A new system of computer-assisted navigation leading to reduction in operating time in uncemented total hip replacement in a matched population.

    PubMed

    Chaudhry, Fouad A; Ismail, Sanaa Z; Davis, Edward T

    2018-05-01

Computer-assisted navigation techniques are used to optimise component placement and alignment in total hip replacement. The technology has developed over the last 10 years, but despite its advantages only 0.3% of all total hip replacements in England and Wales are done using computer navigation. One of the reasons for this is that computer-assisted technology increases operative time. A new method of pelvic registration has been developed without the need to register the anterior pelvic plane (BrainLab hip 6.0), which has been shown to improve the accuracy of THR. The purpose of this study was to find out whether the new method reduces the operating time. This was a retrospective analysis comparing operating time in computer-navigated primary uncemented total hip replacement using two methods of registration. Group 1 included 128 cases that were performed using BrainLab versions 2.1-5.1. This version relied on the acquisition of the anterior pelvic plane for registration. Group 2 included 128 cases that were performed using the newest navigation software, BrainLab hip 6.0 (registration possible with the patient in the lateral decubitus position). The operating time was 65.79 (40-98) minutes using the old method of registration and 50.87 (33-74) minutes using the new method of registration. This difference was statistically significant. The body mass index (BMI) was comparable in both groups. The study supports the use of the new method of registration to improve operating time in computer-navigated primary uncemented total hip replacements.

  4. Television viewing, computer use and total screen time in Canadian youth.

    PubMed

    Mark, Amy E; Boyce, William F; Janssen, Ian

    2006-11-01

    Research has linked excessive television viewing and computer use in children and adolescents to a variety of health and social problems. Current recommendations are that screen time in children and adolescents should be limited to no more than 2 h per day. To determine the percentage of Canadian youth meeting the screen time guideline recommendations. The representative study sample consisted of 6942 Canadian youth in grades 6 to 10 who participated in the 2001/2002 World Health Organization Health Behaviour in School-Aged Children survey. Only 41% of girls and 34% of boys in grades 6 to 10 watched 2 h or less of television per day. Once the time of leisure computer use was included and total daily screen time was examined, only 18% of girls and 14% of boys met the guidelines. The prevalence of those meeting the screen time guidelines was higher in girls than boys. Fewer than 20% of Canadian youth in grades 6 to 10 met the total screen time guidelines, suggesting that increased public health interventions are needed to reduce the number of leisure time hours that Canadian youth spend watching television and using the computer.

  5. DISE Summary Report (1992)

    DTIC Science & Technology

    1994-03-01

Specification and Implementation. RFC-1119, Network Working Group, September... Network Time Protocol (NTP) over the OSI Remote Operations Service. RFC-... ISIS implements a powerful model of distributed computation known as... approximately 94% of the total computation time. Timing results are...

  6. Optimisation of multiplet identifier processing on a PLAYSTATION® 3

    NASA Astrophysics Data System (ADS)

    Hattori, Masami; Mizuno, Takashi

    2010-02-01

To enable high-performance computing (HPC) for applications with large datasets using a Sony® PLAYSTATION® 3 (PS3™) video game console, we configured a hybrid system consisting of a Windows® PC and a PS3™. To validate this system, we implemented the real-time multiplet identifier (RTMI) application, which identifies multiplets of microearthquakes in terms of the similarity of their waveforms. The cross-correlation computation, which is a core algorithm of the RTMI application, was optimised for the PS3™ platform, while the rest of the computation, including data input and output, remained on the PC. With this configuration, the core part of the algorithm ran 69 times faster than the original program, accelerating total computation speed by more than five times. As a result, the system processed up to 2100 total microseismic events, whereas the original implementation had a limit of 400 events. These results indicate that this system enables high-performance computing for large datasets using the PS3™, as long as data transfer time is negligible compared with computation time.
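    As an illustration of the core kernel described above, the sketch below computes a maximum normalized cross-correlation between event waveforms and groups events whose similarity exceeds a threshold. It is a plain NumPy reference on the CPU, not the Cell-optimised RTMI code; the threshold, the greedy clustering rule and the synthetic waveforms are assumptions for the demo.

```python
import numpy as np

def max_normalized_xcorr(a, b):
    """Maximum normalized cross-correlation between two equal-length waveforms,
    searched over all relative lags."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.max(np.correlate(a, b, mode="full"))

def find_multiplets(waveforms, threshold=0.9):
    """Group events whose waveforms correlate above `threshold` with an
    already-grouped event (greedy single-linkage clustering)."""
    groups = []
    for i, w in enumerate(waveforms):
        for g in groups:
            if any(max_normalized_xcorr(w, waveforms[j]) >= threshold for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Synthetic demo: two similar events plus one unrelated waveform
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
template = np.sin(40 * t) * np.exp(-3 * t)
events = [template + 0.05 * rng.standard_normal(t.size),
          np.roll(template, 20) + 0.05 * rng.standard_normal(t.size),
          rng.standard_normal(t.size)]
print(find_multiplets(events))   # expected grouping: [[0, 1], [2]]
```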

  7. Upper extremities and spinal musculoskeletal disorders and risk factors in students using computers

    PubMed Central

    Calik, Bilge Basakci; Yagci, Nesrin; Gursoy, Suleyman; Zencir, Mehmet

    2014-01-01

Objective: To examine the effects of computer usage on musculoskeletal system discomforts (MSD) in Turkish university students, the possible risk factors, and the resulting study interference (SI). Methods: The study comprised a total of 871 students. Demographic information was recorded and the Student Specific Cornell Musculoskeletal Discomfort Questionnaire (SsCMDQ) was used to evaluate musculoskeletal system discomforts. Results: The neck, lower back and upper back were the most affected areas, with SI percentages of 21.6%, 19.3% and 16.3%, respectively. Daily computer usage time was significantly associated with lower back discomfort and total usage time with neck discomfort, while being female and being below 21 years of age carried an increased risk (p<0.05). Conclusions: The neck, lower back and upper back were the most affected areas due to computer usage in university students. Risk factors for MSD were daily and total computer usage time, female gender and age below 21 years, and these were deemed to cause study interference. PMID:25674139

  8. Impact of increasing social media use on sitting time and body mass index.

    PubMed

    Alley, Stephanie; Wellens, Pauline; Schoeppe, Stephanie; de Vries, Hein; Rebar, Amanda L; Short, Camille E; Duncan, Mitch J; Vandelanotte, Corneel

    2017-08-01

Issue addressed Sedentary behaviours, in particular sitting, increase the risk of cardiovascular disease, type 2 diabetes, obesity and poorer mental health status. In Australia, 70% of adults sit for more than 8 h per day. The use of social media applications (e.g. Facebook, Twitter, and Instagram) is on the rise; however, no studies have explored the association of social media use with sitting time and body mass index (BMI). Methods Cross-sectional self-report data on demographics, BMI and sitting time were collected from 1140 participants in the 2013 Queensland Social Survey. Generalised linear models were used to estimate associations of a social media score (calculated from social media use, perceived importance of social media, and number of social media contacts) with sitting time and BMI. Results Participants with a high social media score had significantly greater sitting times while using a computer in leisure time and significantly greater total sitting time on non-workdays. However, no associations were found between social media score and sitting to view TV, use motorised transport, work or participate in other leisure activities; or with total workday sitting time, total sitting time, or BMI. Conclusions These results indicate that social media use is associated with increased sitting time while using a computer, and total sitting time on non-workdays. So what? The rise in social media use may have a negative impact on health by contributing to computer sitting and total sitting time on non-workdays. Future longitudinal research with a representative sample and objective sitting measures is needed to confirm findings.

  9. Computer/gaming station use in youth: Correlations among use, addiction and functional impairment

    PubMed Central

    Baer, Susan; Saran, Kelly; Green, David A

    2012-01-01

OBJECTIVE: Computer/gaming station use is ubiquitous in the lives of youth today. Overuse is a concern, but it remains unclear whether problems arise from addictive patterns of use or simply excessive time spent on use. The goal of the present study was to evaluate computer/gaming station use in youth and to examine the relationship between amounts of use, addictive features of use and functional impairment. METHOD: A total of 110 subjects (11 to 17 years of age) from local schools participated. Time spent on television, video gaming and non-gaming recreational computer activities was measured. Addictive features of computer/gaming station use were ascertained, along with emotional/behavioural functioning. Multiple linear regressions were used to understand how youth functioning varied with time of use and addictive features of use. RESULTS: Mean (± SD) total screen time was 4.5±2.4 h/day. Addictive features of use were consistently correlated with functional impairment across multiple measures and informants, whereas time of use, after controlling for addiction, was not. CONCLUSIONS: Youth are spending many hours each day in front of screens. In the absence of addictive features of computer/gaming station use, time spent is not correlated with problems; however, youth with addictive features of use show evidence of poor emotional/behavioural functioning. PMID:24082802

  10. [Electronic medical records: Evolution of physician-patient relationship in the Primary Care clinic].

    PubMed

    Pérez-Santonja, T; Gómez-Paredes, L; Álvarez-Montero, S; Cabello-Ballesteros, L; Mombiela-Muruzabal, M T

    2017-04-01

The introduction of electronic medical records and computer media in clinics has influenced the physician-patient relationship. These changes have many advantages, but there is concern that the computer has become too important, going from a working tool to the centre of attention during the clinical interview and decreasing the doctor's interaction with the patient. The objective of the study was to estimate the percentage of time that family physicians spend on computer media compared with interpersonal communication with the patient, and whether this time varies with variables such as the doctor's age or the reason for the consultation. An observational, descriptive study was conducted over 10 weeks in 2 healthcare centres. The researchers attended all doctor-patient interviews, recording the time each patient entered and left the consultation; each time the doctor fixed his or her gaze on the computer, the time was clocked. A total of 436 consultations were collected. The doctors looked at the computer for a median of 38.33% of the total duration of an interview. Doctors aged 45 years and older spent more time with their eyes on computer media (P<.05). Family physicians spent almost 40% of the consultation time looking at computer media, and this proportion depended on the physician's age, the number of queries, and the number of medical appointments. Copyright © 2016 Sociedad Española de Médicos de Atención Primaria (SEMERGEN). Publicado por Elsevier España, S.L.U. All rights reserved.

  11. Some optimal considerations in attitude control systems. [evaluation of value of relative weighting between time and fuel for relay control law

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1973-01-01

The conventional six-engine reaction control jet relay attitude control law with deadband is shown to be a good linear approximation to a weighted time-fuel optimal control law. Techniques for evaluating the value of the relative weighting between time and fuel for a particular relay control law are studied, along with techniques to interrelate other parameters for the two control laws. Vehicle attitude control laws employing control moment gyros are then investigated. Steering laws obtained from the expression for the reaction torque of the gyro configuration are compared to a total optimal attitude control law that is derived from optimal linear regulator theory. This total optimal attitude control law has computational disadvantages in solving the matrix Riccati equation. Several computational algorithms for solving the matrix Riccati equation are investigated with respect to accuracy, computational storage requirements, and computational speed.
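    The Riccati-equation step mentioned above can be illustrated with modern tools: the sketch below solves the steady-state (algebraic) matrix Riccati equation for a small single-axis attitude model and forms the optimal regulator gain. SciPy's solver stands in for the algorithms compared in the paper; the model and weights are invented.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Single-axis rigid-body attitude model: states [angle, rate], torque input.
I_axis = 10.0                                  # moment of inertia, kg*m^2 (example)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0 / I_axis]])
Q = np.diag([100.0, 1.0])                      # state weights (invented)
R = np.array([[0.1]])                          # control weight (invented)

# Steady-state solution of the matrix Riccati equation, then the LQR gain
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("Riccati solution P:\n", P)
print("optimal gain K:", K)
```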

  12. Project Energise: Using participatory approaches and real time computer prompts to reduce occupational sitting and increase work time physical activity in office workers.

    PubMed

    Gilson, Nicholas D; Ng, Norman; Pavey, Toby G; Ryde, Gemma C; Straker, Leon; Brown, Wendy J

    2016-11-01

This efficacy study assessed the added impact real time computer prompts had on a participatory approach to reduce occupational sedentary exposure and increase physical activity. Quasi-experimental. 57 Australian office workers (mean [SD]; age=47 [11] years; BMI=28 [5] kg/m²; 46 men) generated a menu of 20 occupational 'sit less and move more' strategies through participatory workshops, and were then tasked with implementing strategies for five months (July-November 2014). During implementation, a sub-sample of workers (n=24) used a chair sensor/software package (Sitting Pad) that gave real time prompts to interrupt desk sitting. Baseline and intervention sedentary behaviour and physical activity (GENEActiv accelerometer; mean work time percentages), and minutes spent sitting at desks (Sitting Pad; mean total time and longest bout) were compared between non-prompt and prompt workers using a two-way ANOVA. Workers spent close to three quarters of their work time sedentary, mostly sitting at desks (mean [SD]; total desk sitting time=371 [71] min/day; longest bout spent desk sitting=104 [43] min/day). Intervention effects were four times greater in workers who used real time computer prompts (8% decrease in work time sedentary behaviour and increase in light intensity physical activity; p<0.01). Respective mean differences between baseline and intervention total time spent sitting at desks, and the longest bout spent desk sitting, were 23 and 32 min/day lower in prompt than in non-prompt workers (p<0.01). In this sample of office workers, real time computer prompts facilitated the impact of a participatory approach on reductions in occupational sedentary exposure, and increases in physical activity. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  13. Advantages of Parallel Processing and the Effects of Communications Time

    NASA Technical Reports Server (NTRS)

    Eddy, Wesley M.; Allman, Mark

    2000-01-01

    Many computing tasks involve heavy mathematical calculations, or analyzing large amounts of data. These operations can take a long time to complete using only one computer. Networks such as the Internet provide many computers with the ability to communicate with each other. Parallel or distributed computing takes advantage of these networked computers by arranging them to work together on a problem, thereby reducing the time needed to obtain the solution. The drawback to using a network of computers to solve a problem is the time wasted in communicating between the various hosts. The application of distributed computing techniques to a space environment or to use over a satellite network would therefore be limited by the amount of time needed to send data across the network, which would typically take much longer than on a terrestrial network. This experiment shows how much faster a large job can be performed by adding more computers to the task, what role communications time plays in the total execution time, and the impact a long-delay network has on a distributed computing system.
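    A toy model of the trade-off described above: total execution time is the computation divided across hosts plus a per-host communication cost. The linear communication term and the numbers are assumptions chosen only to show how a long-delay link shifts the optimal number of hosts.

```python
def distributed_time(work_seconds, n_hosts, per_host_comm_seconds):
    """Simple model of a distributed job: computation is split evenly across
    n hosts, but each host adds a fixed communication cost. Illustrative only;
    the experiment in the paper measured these quantities rather than assuming
    this model."""
    compute = work_seconds / n_hosts
    communicate = n_hosts * per_host_comm_seconds
    return compute + communicate

work = 3600.0               # one hour of serial computation
for comm in (0.5, 30.0):    # fast LAN vs. long-delay (e.g. satellite) link
    times = {n: distributed_time(work, n, comm) for n in (1, 2, 4, 8, 16, 32)}
    best = min(times, key=times.get)
    print(f"comm={comm:>5}s  " +
          "  ".join(f"n={n}: {t:7.1f}s" for n, t in times.items()) +
          f"  -> best n={best}")
```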

  14. Comparing errors in ED computer-assisted vs conventional pediatric drug dosing and administration.

    PubMed

    Yamamoto, Loren; Kanemori, Joan

    2010-06-01

    Compared to fixed-dose single-vial drug administration in adults, pediatric drug dosing and administration requires a series of calculations, all of which are potentially error prone. The purpose of this study is to compare error rates and task completion times for common pediatric medication scenarios using computer program assistance vs conventional methods. Two versions of a 4-part paper-based test were developed. Each part consisted of a set of medication administration and/or dosing tasks. Emergency department and pediatric intensive care unit nurse volunteers completed these tasks using both methods (sequence assigned to start with a conventional or a computer-assisted approach). Completion times, errors, and the reason for the error were recorded. Thirty-eight nurses completed the study. Summing the completion of all 4 parts, the mean conventional total time was 1243 seconds vs the mean computer program total time of 879 seconds (P < .001). The conventional manual method had a mean of 1.8 errors vs the computer program with a mean of 0.7 errors (P < .001). Of the 97 total errors, 36 were due to misreading the drug concentration on the label, 34 were due to calculation errors, and 8 were due to misplaced decimals. Of the 36 label interpretation errors, 18 (50%) occurred with digoxin or insulin. Computerized assistance reduced errors and the time required for drug administration calculations. A pattern of errors emerged, noting that reading/interpreting certain drug labels were more error prone. Optimizing the layout of drug labels could reduce the error rate for error-prone labels. Copyright (c) 2010 Elsevier Inc. All rights reserved.
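    For context, the "series of calculations" in pediatric dosing typically runs weight times dose-per-kilogram, then converts to a draw volume from the vial concentration, as in the sketch below. The drug values are purely hypothetical placeholders, not the study's scenarios, and this is not dosing guidance.

```python
def weight_based_dose(weight_kg, dose_per_kg_mg, vial_concentration_mg_per_ml):
    """Return the weight-based dose (mg) and the syringe volume (mL) to draw.
    All numbers used with this function are illustrative only."""
    dose_mg = weight_kg * dose_per_kg_mg
    return dose_mg, dose_mg / vial_concentration_mg_per_ml

# Hypothetical example: 0.1 mg/kg of a drug supplied as 1 mg/mL, 14 kg child
dose, volume = weight_based_dose(14, 0.1, 1.0)
print(f"dose = {dose:.2f} mg, draw {volume:.2f} mL")
```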

  15. Partitioning medical image databases for content-based queries on a Grid.

    PubMed

    Montagnat, J; Breton, V; E Magnin, I

    2005-01-01

    In this paper we study the impact of executing a medical image database query application on the grid. For lowering the total computation time, the image database is partitioned into subsets to be processed on different grid nodes. A theoretical model of the application complexity and estimates of the grid execution overhead are used to efficiently partition the database. We show results demonstrating that smart partitioning of the database can lead to significant improvements in terms of total computation time. Grids are promising for content-based image retrieval in medical databases.
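    A minimal sketch of the partitioning idea, assuming each grid node is characterised by a throughput and a fixed scheduling overhead (both invented here): the database is split in proportion to throughput and the total computation time is the latest node finish time. The paper's actual complexity model is more detailed.

```python
import math

def partition_database(n_images, nodes):
    """Split an image database across grid nodes in proportion to their
    throughput, then estimate the makespan (slowest node finishes last).
    `nodes` maps node name -> (images_per_second, fixed_overhead_seconds)."""
    total_rate = sum(rate for rate, _ in nodes.values())
    sizes, makespan = {}, 0.0
    for name, (rate, overhead) in nodes.items():
        share = math.floor(n_images * rate / total_rate)
        sizes[name] = share
        makespan = max(makespan, overhead + share / rate)
    # hand any leftover images to the fastest node
    leftover = n_images - sum(sizes.values())
    fastest = max(nodes, key=lambda k: nodes[k][0])
    sizes[fastest] += leftover
    makespan = max(makespan, nodes[fastest][1] + sizes[fastest] / nodes[fastest][0])
    return sizes, makespan

# Invented node characteristics: (images/s, overhead in s)
nodes = {"node-a": (4.0, 20.0), "node-b": (2.0, 35.0), "node-c": (1.0, 15.0)}
print(partition_database(10_000, nodes))
```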

  16. 77 FR 5017 - Agency Information Collection Activities; Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-01

    ... 30 percent for computer programming, 20 percent for attorney services, 30 percent for skilled... workers, 25 percent for computer programming, and 20 percent for management time. (d) Dispute resolution...), or a total cost of $421,200. \\12\\ The blended rate is 40 percent for computer programming, 10 percent...

  17. Blessing and curse of chaos in numerical turbulence simulations

    NASA Astrophysics Data System (ADS)

    Lee, Jon

    1994-03-01

Because of the trajectory instability, time reversal is not possible beyond a certain evolution time and hence the time irreversibility prevails under the finite-accuracy trajectory computation. This therefore provides a practical reconciliation of the dynamic reversibility and macroscopic irreversibility (blessing of chaos). On the other hand, the trajectory instability is also responsible for a limited evolution time, so that finite-accuracy computation would yield a pseudo-orbit which is totally unrelated to the true trajectory (curse of chaos). For the inviscid 2D flow, however, we can accurately compute the long-time average of flow quantities with a pseudo-orbit by invoking the ergodic theorem.

  18. Computer-assisted versus conventional free fibula flap technique for craniofacial reconstruction: an outcomes comparison.

    PubMed

    Seruya, Mitchel; Fisher, Mark; Rodriguez, Eduardo D

    2013-11-01

    There has been rising interest in computer-aided design/computer-aided manufacturing for preoperative planning and execution of osseous free flap reconstruction. The purpose of this study was to compare outcomes between computer-assisted and conventional fibula free flap techniques for craniofacial reconstruction. A two-center, retrospective review was carried out on patients who underwent fibula free flap surgery for craniofacial reconstruction from 2003 to 2012. Patients were categorized by the type of reconstructive technique: conventional (between 2003 and 2009) or computer-aided design/computer-aided manufacturing (from 2010 to 2012). Demographics, surgical factors, and perioperative and long-term outcomes were compared. A total of 68 patients underwent microsurgical craniofacial reconstruction: 58 conventional and 10 computer-aided design and manufacturing fibula free flaps. By demographics, patients undergoing the computer-aided design/computer-aided manufacturing method were significantly older and had a higher rate of radiotherapy exposure compared with conventional patients. Intraoperatively, the median number of osteotomies was significantly higher (2.0 versus 1.0, p=0.002) and the median ischemia time was significantly shorter (120 minutes versus 170 minutes, p=0.004) for the computer-aided design/computer-aided manufacturing technique compared with conventional techniques; operative times were shorter for patients undergoing the computer-aided design/computer-aided manufacturing technique, although this did not reach statistical significance. Perioperative and long-term outcomes were equivalent for the two groups, notably, hospital length of stay, recipient-site infection, partial and total flap loss, and rate of soft-tissue and bony tissue revisions. Microsurgical craniofacial reconstruction using a computer-assisted fibula flap technique yielded significantly shorter ischemia times amidst a higher number of osteotomies compared with conventional techniques. Therapeutic, III.

  19. Radiotherapy Monte Carlo simulation using cloud computing technology.

    PubMed

    Poole, C M; Cornelius, I; Trapp, J V; Langton, C M

    2012-12-01

Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows for Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware, as a proof of principle.
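    The cost behaviour described above can be reproduced with a two-line model, assuming per-machine, per-hour billing (an assumption of this sketch, not a detail taken from the paper): wall time scales as 1/n, but each machine is billed for a whole number of hours, so cost dips exactly when n divides the total CPU-hours.

```python
import math

def cloud_cost_and_time(total_cpu_hours, n_machines, rate_per_machine_hour=1.0):
    """Completion time scales as 1/n; with per-hour billing, each machine is
    charged for ceil(wall time) hours, so cost is lowest when n divides the
    total simulation time evenly."""
    wall_hours = total_cpu_hours / n_machines
    cost = n_machines * math.ceil(wall_hours) * rate_per_machine_hour
    return wall_hours, cost

total = 24.0   # e.g. a simulation worth 24 CPU-hours
for n in (5, 6, 7, 8, 9, 12):
    t, c = cloud_cost_and_time(total, n)
    print(f"n={n:2d}  wall time={t:5.2f} h  billed cost={c:5.1f}")
```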

  20. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms

    PubMed Central

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several magnitudes larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442
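    For readers unfamiliar with the spectral bound mentioned above, the sketch below computes it for a small reversible chain and compares it with the mixing time obtained by iterating the transition matrix. The bound used, t_mix(eps) <= ln(1/(eps*pi_min)) / (1 - lambda*), is the standard relaxation-time bound; marathon's own (C++) API is not shown here.

```python
import numpy as np

def stationary_distribution(P):
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    return pi / pi.sum()

def spectral_upper_bound(P, eps=0.25):
    """Upper bound on the mixing time of a reversible chain with transition
    matrix P, from the second-largest eigenvalue modulus lambda*."""
    pi = stationary_distribution(P)
    lam = sorted(np.abs(np.linalg.eigvals(P)), reverse=True)[1]
    return np.log(1.0 / (eps * pi.min())) / (1.0 - lam)

def empirical_mixing_time(P, eps=0.25):
    """Smallest t with max_x TV-distance(P^t(x, .), pi) <= eps."""
    pi = stationary_distribution(P)
    Pt = np.eye(P.shape[0])
    for t in range(1, 10_000):
        Pt = Pt @ P
        if 0.5 * np.abs(Pt - pi).sum(axis=1).max() <= eps:
            return t
    return None

# Lazy random walk on a cycle of 12 states (reversible, aperiodic)
n = 12
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] += 0.25
    P[i, (i + 1) % n] += 0.25

print("spectral bound:", spectral_upper_bound(P))
print("actual mixing time:", empirical_mixing_time(P))
```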

  1. Total reduction of distorted echelle spectrograms - An automatic procedure. [for computer controlled microdensitometer

    NASA Technical Reports Server (NTRS)

    Peterson, R. C.; Title, A. M.

    1975-01-01

A total reduction procedure, notable for its use of a computer-controlled microdensitometer for semi-automatically tracing curved spectra, is applied to distorted high-dispersion echelle spectra recorded by an image tube. Microdensitometer specifications are presented and the FORTRAN programs TRACEN and SPOTS are outlined. The intensity spectrum of the photographic or electrographic plate is plotted on a graphic display. The time requirements are discussed in detail.

  2. Computer Games and Instruction

    ERIC Educational Resources Information Center

    Tobias, Sigmund, Ed.; Fletcher, J. D., Ed.

    2011-01-01

    There is intense interest in computer games. A total of 65 percent of all American households play computer games, and sales of such games increased 22.9 percent last year. The average amount of game playing time was found to be 13.2 hours per week. The popularity and market success of games is evident from both the increased earnings from games,…

  3. The Effectiveness of a Web-Based Computer-Tailored Intervention on Workplace Sitting: A Randomized Controlled Trial.

    PubMed

    De Cocker, Katrien; De Bourdeaudhuij, Ilse; Cardon, Greet; Vandelanotte, Corneel

    2016-05-31

    Effective interventions to influence workplace sitting are needed, as office-based workers demonstrate high levels of continued sitting, and sitting too much is associated with adverse health effects. Therefore, we developed a theory-driven, Web-based, interactive, computer-tailored intervention aimed at reducing and interrupting sitting at work. The objective of our study was to investigate the effects of this intervention on objectively measured sitting time, standing time, and breaks from sitting, as well as self-reported context-specific sitting among Flemish employees in a field-based approach. Employees (n=213) participated in a 3-group randomized controlled trial that assessed outcomes at baseline, 1-month follow-up, and 3-month follow-up through self-reports. A subsample (n=122) were willing to wear an activity monitor (activPAL) from Monday to Friday. The tailored group received an automated Web-based, computer-tailored intervention including personalized feedback and tips on how to reduce or interrupt workplace sitting. The generic group received an automated Web-based generic advice with tips. The control group was a wait-list control condition, initially receiving no intervention. Intervention effects were tested with repeated-measures multivariate analysis of variance. The tailored intervention was successful in decreasing self-reported total workday sitting (time × group: P<.001), sitting at work (time × group: P<.001), and leisure time sitting (time × group: P=.03), and in increasing objectively measured breaks at work (time × group: P=.07); this was not the case in the other conditions. The changes in self-reported total nonworkday sitting, sitting during transport, television viewing, and personal computer use, objectively measured total sitting time, and sitting and standing time at work did not differ between conditions. Our results point out the significance of computer tailoring for sedentary behavior and its potential use in public health promotion, as the effects of the tailored condition were superior to the generic and control conditions. Clinicaltrials.gov NCT02672215; http://clinicaltrials.gov/ct2/show/NCT02672215 (Archived by WebCite at http://www.webcitation.org/6glPFBLWv).

  4. The Effectiveness of a Web-Based Computer-Tailored Intervention on Workplace Sitting: A Randomized Controlled Trial

    PubMed Central

    De Bourdeaudhuij, Ilse; Cardon, Greet; Vandelanotte, Corneel

    2016-01-01

    Background Effective interventions to influence workplace sitting are needed, as office-based workers demonstrate high levels of continued sitting, and sitting too much is associated with adverse health effects. Therefore, we developed a theory-driven, Web-based, interactive, computer-tailored intervention aimed at reducing and interrupting sitting at work. Objective The objective of our study was to investigate the effects of this intervention on objectively measured sitting time, standing time, and breaks from sitting, as well as self-reported context-specific sitting among Flemish employees in a field-based approach. Methods Employees (n=213) participated in a 3-group randomized controlled trial that assessed outcomes at baseline, 1-month follow-up, and 3-month follow-up through self-reports. A subsample (n=122) were willing to wear an activity monitor (activPAL) from Monday to Friday. The tailored group received an automated Web-based, computer-tailored intervention including personalized feedback and tips on how to reduce or interrupt workplace sitting. The generic group received an automated Web-based generic advice with tips. The control group was a wait-list control condition, initially receiving no intervention. Intervention effects were tested with repeated-measures multivariate analysis of variance. Results The tailored intervention was successful in decreasing self-reported total workday sitting (time × group: P<.001), sitting at work (time × group: P<.001), and leisure time sitting (time × group: P=.03), and in increasing objectively measured breaks at work (time × group: P=.07); this was not the case in the other conditions. The changes in self-reported total nonworkday sitting, sitting during transport, television viewing, and personal computer use, objectively measured total sitting time, and sitting and standing time at work did not differ between conditions. Conclusions Our results point out the significance of computer tailoring for sedentary behavior and its potential use in public health promotion, as the effects of the tailored condition were superior to the generic and control conditions. Trial Registration Clinicaltrials.gov NCT02672215; http://clinicaltrials.gov/ct2/show/NCT02672215 (Archived by WebCite at http://www.webcitation.org/6glPFBLWv) PMID:27245789

  5. Time diary and questionnaire assessment of factors associated with academic and personal success among university undergraduates.

    PubMed

    George, Darren; Dixon, Sinikka; Stansal, Emory; Gelb, Shannon Lund; Pheri, Tabitha

    2008-01-01

    A sample of 231 students attending a private liberal arts university in central Alberta, Canada, completed a 5-day time diary and a 71-item questionnaire assessing the influence of personal, cognitive, and attitudinal factors on success. The authors used 3 success measures: cumulative grade point average (GPA), Personal Success--each participant's rating of congruence between stated goals and progress toward those goals--and Total Success--a measure that weighted GPA and Personal Success equally. The greatest predictors of GPA were time-management skills, intelligence, time spent studying, computer ownership, less time spent in passive leisure, and a healthy diet. Predictors of Personal Success scores were clearly defined goals, overall health, personal spirituality, and time-management skills. Predictors of Total Success scores were clearly defined goals, time-management skills, less time spent in passive leisure, healthy diet, waking up early, computer ownership, and less time spent sleeping. Results suggest alternatives to traditional predictors of academic success.

  6. Automated data acquisition and processing for a Hohlraum reflectometer

    NASA Technical Reports Server (NTRS)

    Difilippo, Frank; Mirtich, Michael J.

    1988-01-01

    A computer and data acquisition board were used to automate a Perkin-Elmer Model 13 spectrophotometer with a Hohlraum reflectivity attachment. Additional electronic circuitry was necessary for amplification, filtering, and debouncing. The computer was programmed to calculate spectral emittance from 1.7 to 14.7 micrometers and also total emittance versus temperature. Automation of the Hohlraum reflectometer reduced the time required to determine total emittance versus temperature from about three hours to about 40 minutes.
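    The post-processing step hinted at above (spectral emittance over 1.7 to 14.7 micrometers, then total emittance versus temperature) can be sketched as a Planck-weighted average. The emittance curve below is invented and the integral covers only the instrument's band, so this is an in-band illustration rather than the program's exact calculation.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann (SI)

def planck(wavelength_m, T):
    """Blackbody spectral radiance up to a constant factor that cancels."""
    x = H * C / (wavelength_m * KB * T)
    return 1.0 / (wavelength_m**5 * (np.exp(x) - 1.0))

def trapezoid(y, x):
    """Trapezoid-rule integral of y over x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def total_emittance(wavelength_um, spectral_emittance, T):
    """Planck-weighted average of spectral emittance over the measured band."""
    lam = wavelength_um * 1e-6
    w = planck(lam, T)
    return trapezoid(spectral_emittance * w, lam) / trapezoid(w, lam)

lam_um = np.linspace(1.7, 14.7, 200)            # instrument's spectral range
eps_lam = 0.85 - 0.02 * (lam_um - 1.7)          # made-up spectral emittance curve
for T in (400.0, 800.0, 1200.0):                # temperatures in kelvin
    print(T, round(total_emittance(lam_um, eps_lam, T), 3))
```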

  7. Controlling total spot power from holographic laser by superimposing a binary phase grating.

    PubMed

    Liu, Xiang; Zhang, Jian; Gan, Yu; Wu, Liying

    2011-04-25

By superimposing a tunable binary phase grating on a conventional computer-generated hologram, the total power of multiple holographic 3D spots can be easily controlled, by changing the phase depth of the grating, with high accuracy to any desired power value for real-time optical manipulation without extra power loss. Simulation and experimental results indicate that a resolution of 0.002 in normalized total spot power can be achieved at a low time cost.

  8. Computer program for post-flight evaluation of a launch vehicle upper-stage on-off reaction control system

    NASA Technical Reports Server (NTRS)

    Knauber, R. N.

    1982-01-01

This report describes a FORTRAN IV coded computer program for post-flight evaluation of a launch vehicle upper-stage on-off reaction control system. Aerodynamic and thrust misalignment disturbances are computed, as well as the total disturbing moments in pitch, yaw, and roll. Effective thrust misalignment angle time histories of the rocket booster motor are calculated. Disturbing moments are integrated and used to estimate the required control system total impulse. Effective control system specific impulse is computed for the boost and coast phases using measured control fuel usage. This method has been used for more than fifteen years for analyzing the NASA Scout launch vehicle second- and third-stage reaction control system performance. The computer program is set up in FORTRAN IV for a CDC CYBER 175 system. With slight modification it can be used on other machines having a FORTRAN compiler. The program has optional CALCOMP plotting output. With this option the program requires 19K words of memory and has 786 cards. Running time on a CDC CYBER 175 system is less than three (3) seconds for a typical problem.
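    As a rough illustration of the impulse-estimation step, the sketch below integrates the magnitude of a synthetic disturbing-moment history and converts the angular impulse to thruster impulse through a single assumed moment arm. This is a simplification for illustration only; the actual program works per axis with the vehicle's real geometry.

```python
import numpy as np

def required_total_impulse(t, moment_nm, thruster_arm_m):
    """Integrate |disturbing moment| over time (rectangle rule) and convert the
    angular impulse (N*m*s) to linear thruster impulse (N*s) via one assumed
    moment arm."""
    dt = t[1] - t[0]
    angular_impulse = np.sum(np.abs(moment_nm)) * dt
    return angular_impulse / thruster_arm_m

# Synthetic pitch disturbing-moment history (aero + thrust misalignment), N*m
t = np.linspace(0.0, 120.0, 1201)
moment = 4.0 * np.exp(-t / 40.0) + 0.6 * np.sin(0.2 * t)
print(f"required impulse ~ {required_total_impulse(t, moment, 1.5):.1f} N*s")
```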

  9. Computer work and musculoskeletal disorders of the neck and upper extremity: A systematic review

    PubMed Central

    2010-01-01

    Background This review examines the evidence for an association between computer work and neck and upper extremity disorders (except carpal tunnel syndrome). Methods A systematic critical review of studies of computer work and musculoskeletal disorders verified by a physical examination was performed. Results A total of 22 studies (26 articles) fulfilled the inclusion criteria. Results show limited evidence for a causal relationship between computer work per se, computer mouse and keyboard time related to a diagnosis of wrist tendonitis, and for an association between computer mouse time and forearm disorders. Limited evidence was also found for a causal relationship between computer work per se and computer mouse time related to tension neck syndrome, but the evidence for keyboard time was insufficient. Insufficient evidence was found for an association between other musculoskeletal diagnoses of the neck and upper extremities, including shoulder tendonitis and epicondylitis, and any aspect of computer work. Conclusions There is limited epidemiological evidence for an association between aspects of computer work and some of the clinical diagnoses studied. None of the evidence was considered as moderate or strong and there is a need for more and better documentation. PMID:20429925

  10. Fast Computation of Ground Motion Shaking Map base on the Modified Stochastic Finite Fault Modeling

    NASA Astrophysics Data System (ADS)

    Shen, W.; Zhong, Q.; Shi, B.

    2012-12-01

Rapid regional MMI mapping soon after a moderate to large earthquake is crucial to loss estimation, emergency services and the planning of emergency action by the government. Many countries pay different degrees of attention to the technology of rapid MMI estimation, and this technology has made significant progress in earthquake-prone countries. In recent years, numerical modeling of strong ground motion has developed considerably with advances in computation technology and earthquake science. Computational simulation of the strong ground motion caused by earthquake faulting has become an efficient way to estimate the regional MMI distribution soon after an earthquake. In China, because strong motion observations are lacking in areas where the network is sparse or even completely missing, the development of strong ground motion simulation methods has become an important means of quantitatively estimating strong motion intensity. Among the many simulation models, the stochastic finite fault model is preferred for rapid MMI estimation because of its time-effectiveness and accuracy. In the finite fault model, a large fault is divided into N subfaults, and each subfault is considered a small point source. The ground motions contributed by each subfault are calculated by the stochastic point-source method developed by Boore and then summed at the observation point, with a proper time delay, to obtain the ground motion from the entire fault. Further, Motazedian and Atkinson proposed the concept of dynamic corner frequency; with this approach, the total radiated energy from the fault and the total seismic moment are conserved independent of subfault size over a wide range of subfault sizes. In the current study, the program EXSIM developed by Motazedian and Atkinson has been modified for local or regional computation of strong motion parameters such as PGA, PGV and PGD, which are essential for MMI estimation. To make the results more reasonable, we consider the impact of V30 on ground shaking intensity, and the simulated and observed MMI compare fairly well for the 2004 Mw 6.0 Parkfield earthquake, the 2008 Mw 7.9 Wenchuan earthquake and the 1976 Mw 7.6 Tangshan earthquake. Taking the Parkfield earthquake as an example, the simulated results reflect the directivity effect and the influence of the shallow velocity structure well; the simulated data are also in good agreement with the network data and NGA (Next Generation Attenuation) models. The time consumed depends on the number of subfaults and the number of grid points. For the 2004 Mw 6.0 Parkfield earthquake, the grid size calculated is 2.5° × 2.5°, the grid spacing is 0.025°, and the total time consumed is about 1.3 hours. For the 2008 Mw 7.9 Wenchuan earthquake, the grid size is 10° × 10°, the grid spacing is 0.05°, the total number of grid points is more than 40,000, and the total time consumed is about 7.5 hours. For the 1976 Mw 7.6 Tangshan earthquake, the grid size is 4° × 6°, the grid spacing is 0.05°, and the total time consumed is about 2.1 hours. The CPU used is 3.40 GHz, and this computational time could be further reduced by using GPU computing and other parallel computing techniques, which is our next focus.
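    The summation step of the finite-fault approach (delay each subfault's contribution by its rupture-propagation and travel time, then add at the site) can be sketched as below. The stochastic point-source generation, dynamic corner frequency and site terms are omitted, and the signals and delays are synthetic placeholders, not EXSIM output.

```python
import numpy as np

def sum_subfaults(subfault_signals, delays_s, dt):
    """Sum the ground-motion contributions of all subfaults at one site,
    applying each subfault's delay (rupture propagation + wave travel time)."""
    n_extra = int(max(delays_s) / dt) + 1
    total = np.zeros(len(subfault_signals[0]) + n_extra)
    for sig, delay in zip(subfault_signals, delays_s):
        k = int(round(delay / dt))
        total[k:k + len(sig)] += sig
    return total

# Toy demo: 4 subfaults with random, decaying signals and staggered delays
rng = np.random.default_rng(1)
dt, npts = 0.01, 2000
signals = [rng.standard_normal(npts) * np.exp(-np.arange(npts) * dt / 5.0)
           for _ in range(4)]
delays = [0.0, 1.2, 2.5, 3.9]          # seconds, invented
acc = sum_subfaults(signals, delays, dt)
print("PGA of the summed record:", np.abs(acc).max())
```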

  11. (abstract) A Comparison Between Measurements of the F-layer Critical Frequency and Values Derived from the PRISM Adjustment Algorithm Applied to Total Electron Content Data in the Equatorial Region

    NASA Technical Reports Server (NTRS)

    Mannucci, A. J.; Anderson, D. N.; Abdu, A. M.

    1994-01-01

    The Parametrized Real-Time Ionosphere Specification Model (PRISM) is a global ionospheric specification model that can incorporate real-time data to compute accurate electron density profiles. Time series of computed and measured data are compared in this paper. This comparison can be used to suggest methods of optimizing the PRISM adjustment algorithm for TEC data obtained at low altitudes.

  12. Cost optimization for buildings with hybrid ventilation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Kun; Lu, Yan

    A method including: computing a total cost for a first zone in a building, wherein the total cost is equal to an actual energy cost of the first zone plus a thermal discomfort cost of the first zone; and heuristically optimizing the total cost to identify temperature setpoints for a mechanical heating/cooling system and a start time and an end time of the mechanical heating/cooling system, based on external weather data and occupancy data of the first zone.
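    A minimal sketch of the cost structure described in this claim, with invented placeholder models for energy use and thermal discomfort and a brute-force grid search standing in for the heuristic optimization of setpoint, start time and end time:

```python
import itertools

def total_cost(setpoint_c, start_h, end_h, weather, occupancy,
               energy_price=0.15, discomfort_weight=0.5):
    """Total cost for one zone = actual energy cost + thermal discomfort cost.
    Both sub-models are invented placeholders for the building's real behaviour,
    used only to show the structure of the objective."""
    def zone_temp(h):
        # conditioned to the setpoint while the system runs, else drifts outdoors
        return setpoint_c if start_h <= h < end_h else weather[h]
    energy_kwh = sum(abs(setpoint_c - weather[h]) * 0.4
                     for h in range(start_h, end_h))
    discomfort = sum(occupancy[h] * max(0.0, abs(zone_temp(h) - 22.0) - 1.0)
                     for h in range(24))
    return energy_price * energy_kwh + discomfort_weight * discomfort

def heuristic_search(weather, occupancy):
    """Grid search over setpoint and system start/end times."""
    candidates = itertools.product(range(19, 26),   # setpoints, deg C
                                   range(0, 12),    # start hour
                                   range(12, 24))   # end hour
    return min(candidates,
               key=lambda c: total_cost(*c, weather, occupancy))

weather = [12 + 8 * (1 - abs(h - 14) / 14) for h in range(24)]   # deg C, toy day
occupancy = [1 if 8 <= h < 18 else 0 for h in range(24)]         # occupied 8-18
print(heuristic_search(weather, occupancy))
```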

  13. Mitigating Spam Using Spatio-Temporal Reputation

    DTIC Science & Technology

    2010-01-01

scalable; computation can occur in near real-time and over 500,000 emails can be scored an hour. Roughly 90% of the total volume of email on...

  14. Computer program determines thermal environment and temperature history of lunar orbiting space vehicles

    NASA Technical Reports Server (NTRS)

    Head, D. E.; Mitchell, K. L.

    1967-01-01

    Program computes the thermal environment of a spacecraft in a lunar orbit. The quantities determined include the incident flux /solar and lunar emitted radiation/, total radiation absorbed by a surface, and the resulting surface temperature as a function of time and orbital position.
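    A steady-state sketch of the kind of balance such a program evaluates at each orbital position: absorbed solar, albedo and lunar IR fluxes set an equilibrium surface temperature for a one-sided radiator. The optical properties and flux values are illustrative examples, not the program's inputs or method of time integration.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temperature(q_solar, q_albedo, q_lunar_ir, alpha_s=0.3, emissivity=0.85):
    """Equilibrium temperature of a flat plate that absorbs solar and albedo flux
    with solar absorptivity alpha_s, absorbs lunar IR with its emissivity, and
    radiates from one side. Property values and fluxes are examples only."""
    absorbed = alpha_s * (q_solar + q_albedo) + emissivity * q_lunar_ir
    return (absorbed / (emissivity * SIGMA)) ** 0.25

# Nadir-facing plate over the sunlit lunar dayside vs. in eclipse (fluxes in W/m^2)
print("dayside:", round(surface_temperature(1360.0, 90.0, 1100.0), 1), "K")
print("eclipse:", round(surface_temperature(0.0, 0.0, 5.0), 1), "K")
```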

  15. Energy-optimal path planning in the coastal ocean

    NASA Astrophysics Data System (ADS)

    Subramani, Deepak N.; Haley, Patrick J.; Lermusiaux, Pierre F. J.

    2017-05-01

    We integrate data-driven ocean modeling with the stochastic Dynamically Orthogonal (DO) level-set optimization methodology to compute and study energy-optimal paths, speeds, and headings for ocean vehicles in the Middle-Atlantic Bight (MAB) region. We hindcast the energy-optimal paths from among exact time-optimal paths for the period 28 August 2006 to 9 September 2006. To do so, we first obtain a data-assimilative multiscale reanalysis, combining ocean observations with implicit two-way nested multiresolution primitive-equation simulations of the tidal-to-mesoscale dynamics in the region. Second, we solve the reduced-order stochastic DO level-set partial differential equations (PDEs) to compute the joint probability of minimum arrival time, vehicle-speed time series, and total energy utilized. Third, for each arrival time, we select the vehicle-speed time series that minimize the total energy utilization from the marginal probability of vehicle-speed and total energy. The corresponding energy-optimal path and headings are obtained through the exact particle-backtracking equation. Theoretically, the present methodology is PDE-based and provides fundamental energy-optimal predictions without heuristics. Computationally, it is 3-4 orders of magnitude faster than direct Monte Carlo methods. For the missions considered, we analyze the effects of the regional tidal currents, strong wind events, coastal jets, shelfbreak front, and other local circulations on the energy-optimal paths. Results showcase the opportunities for vehicles that intelligently utilize the ocean environment to minimize energy usage, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.

  16. Visual Fatigue Induced by Viewing a Tablet Computer with a High-resolution Display.

    PubMed

    Kim, Dong Ju; Lim, Chi Yeon; Gu, Namyi; Park, Choul Yong

    2017-10-01

    In the present study, the visual discomfort induced by smart mobile devices was assessed in normal and healthy adults. Fifty-nine volunteers (age, 38.16 ± 10.23 years; male : female = 19 : 40) were exposed to tablet computer screen stimuli (iPad Air, Apple Inc.) for 1 hour. Participants watched a movie or played a computer game on the tablet computer. Visual fatigue and discomfort were assessed using an asthenopia questionnaire, tear film break-up time, and total ocular wavefront aberration before and after viewing smart mobile devices. Based on the questionnaire, viewing smart mobile devices for 1 hour significantly increased mean total asthenopia score from 19.59 ± 8.58 to 22.68 ± 9.39 (p < 0.001). Specifically, the scores for five items (tired eyes, sore/aching eyes, irritated eyes, watery eyes, and hot/burning eye) were significantly increased by viewing smart mobile devices. Tear film break-up time significantly decreased from 5.09 ± 1.52 seconds to 4.63 ± 1.34 seconds (p = 0.003). However, total ocular wavefront aberration was unchanged. Visual fatigue and discomfort were significantly induced by viewing smart mobile devices, even though the devices were equipped with state-of-the-art display technology. © 2017 The Korean Ophthalmological Society

  17. Visual Fatigue Induced by Viewing a Tablet Computer with a High-resolution Display

    PubMed Central

    Kim, Dong Ju; Lim, Chi-Yeon; Gu, Namyi

    2017-01-01

    Purpose In the present study, the visual discomfort induced by smart mobile devices was assessed in normal and healthy adults. Methods Fifty-nine volunteers (age, 38.16 ± 10.23 years; male : female = 19 : 40) were exposed to tablet computer screen stimuli (iPad Air, Apple Inc.) for 1 hour. Participants watched a movie or played a computer game on the tablet computer. Visual fatigue and discomfort were assessed using an asthenopia questionnaire, tear film break-up time, and total ocular wavefront aberration before and after viewing smart mobile devices. Results Based on the questionnaire, viewing smart mobile devices for 1 hour significantly increased mean total asthenopia score from 19.59 ± 8.58 to 22.68 ± 9.39 (p < 0.001). Specifically, the scores for five items (tired eyes, sore/aching eyes, irritated eyes, watery eyes, and hot/burning eye) were significantly increased by viewing smart mobile devices. Tear film break-up time significantly decreased from 5.09 ± 1.52 seconds to 4.63 ± 1.34 seconds (p = 0.003). However, total ocular wavefront aberration was unchanged. Conclusions Visual fatigue and discomfort were significantly induced by viewing smart mobile devices, even though the devices were equipped with state-of-the-art display technology. PMID:28914003

  18. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs, ranging from 80 to 10,240 units, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement through parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm to further reduce computational time through algorithm modifications, and integrates it with FACET to facilitate the use of the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations compare computational efficiency and are based on the potential application of optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.

  19. Teaching Electronic Health Record Communication Skills.

    PubMed

    Palumbo, Mary Val; Sandoval, Marie; Hart, Vicki; Drill, Clarissa

    2016-06-01

    This pilot study investigated nurse practitioner students' communication skills when utilizing the electronic health record during history taking. The nurse practitioner students (n = 16) were videotaped utilizing the electronic health record while taking health histories with standardized patients. The students were videotaped during two separate sessions during one semester. Two observers recorded the time spent (1) typing and talking, (2) typing only, and (3) looking at the computer without talking. Total history taking time, computer placement, and communication skills were also recorded. During the formative session, mean history taking time was 11.4 minutes, with 3.5 minutes engaged with the computer (30.6% of visit). During the evaluative session, mean history taking time was 12.4 minutes, with 2.95 minutes engaged with the computer (24% of visit). The percentage of time individuals spent changed over the two visits: typing and talking, -3.1% (P = .3); typing only, +12.8% (P = .038); and looking at the computer, -9.6% (P = .039). This study demonstrated that time spent engaged with the computer during a patient encounter does decrease with student practice and education. Therefore, students benefit from instruction on electronic health record-specific communication skills, and use of a simple mnemonic to reinforce this is suggested.

  20. Relationship between leisure time screen activity and aggressive and violent behaviour in Iranian children and adolescents: the CASPIAN-IV Study.

    PubMed

    Kelishadi, Roya; Qorbani, Mostafa; Motlagh, Mohammad Esmaeil; Heshmat, Ramin; Ardalan, Gelayol; Jari, Mohsen

    2014-08-21

    Background: This study aimed to assess the relationship between leisure time spent watching television (TV) and at a computer and aggressive and violent behaviour in children and adolescents. Methods: In this nationwide study, 14,880 school students, aged 6-18 years, were selected by cluster and stratified multi-stage sampling method from 30 provinces in Iran. The World Health Organization Global School-based Health Survey questionnaire (WHO-GSHS) was used. Results: Overall, 13,486 children and adolescents (50·8% boys, 75·6% urban residents) completed the study (participation rate 90·6%). The risk of physical fighting and quarrels increased by 29% (OR 1·29, 95% CI 1·19-1·40) with watching TV for >2 hr/day, by 38% (OR 1·38, 95% CI 1·21-1·57) with leisure time computer work of >2 hr/day, and by 42% (OR 1·42, 95% CI 1·28-1·58) with the total screen time of >2 hr/day. Watching TV or leisure time spent on a computer or total screen time of >2 hr/day increased the risk of bullying by 30% (OR 1·30, 95% CI 1·18-1·43), 57% (1·57, 95% CI 1·34-1·85) and 62% (OR 1·62, 95% CI 1·43-1·83). Spending >2 hr/day watching TV and total screen time increased the risk of being bullied by 12% (OR 1·12, 95% CI 1·02-1·22) and 15% (OR 1·15, 95% CI 1·02-1·28), respectively. This relationship was not statistically significant for leisure time spent on a computer (OR 1·10, 95% CI 0·9-1·27). Conclusions: Prolonged leisure time spent on screen activities is associated with violent and aggressive behaviour in children and adolescents. In addition to the duration of screen time, the association is likely to be explained also by the media content.

  1. Relationship between leisure time screen activity and aggressive and violent behaviour in Iranian children and adolescents: the CASPIAN-IV Study.

    PubMed

    Kelishadi, Roya; Qorbani, Mostafa; Motlagh, Mohammad Esmaeil; Heshmat, Ramin; Ardalan, Gelayol; Jari, Mohsen

    2015-01-01

    This study aimed to assess the relationship between leisure time spent watching television (TV) and at a computer and aggressive and violent behaviour in children and adolescents. In this nationwide study, 14,880 school students, aged 6-18 years, were selected by cluster and stratified multi-stage sampling method from 30 provinces in Iran. The World Health Organization Global School-based Health Survey questionnaire (WHO-GSHS) was used. Overall, 13,486 children and adolescents (50·8% boys, 75·6% urban residents) completed the study (participation rate 90·6%). The risk of physical fighting and quarrels increased by 29% (OR 1·29, 95% CI 1·19-1·40) with watching TV for >2 hr/day, by 38% (OR 1·38, 95% CI 1·21-1·57) with leisure time computer work of >2 hr/day, and by 42% (OR 1·42, 95% CI 1·28-1·58) with the total screen time of >2 hr/day. Watching TV or leisure time spent on a computer or total screen time of >2 hr/day increased the risk of bullying by 30% (OR 1·30, 95% CI 1·18-1·43), 57% (1·57, 95% CI 1·34-1·85) and 62% (OR 1·62, 95% CI 1·43-1·83). Spending >2 hr/day watching TV and total screen time increased the risk of being bullied by 12% (OR 1·12, 95% CI 1·02-1·22) and 15% (OR 1·15, 95% CI 1·02-1·28), respectively. This relationship was not statistically significant for leisure time spent on a computer (OR 1·10, 95% CI 0·9-1·27). Prolonged leisure time spent on screen activities is associated with violent and aggressive behaviour in children and adolescents. In addition to the duration of screen time, the association is likely to be explained also by the media content.

  2. Leisure time computer use and adolescent bone health--findings from the Tromsø Study, Fit Futures: a cross-sectional study.

    PubMed

    Winther, Anne; Ahmed, Luai Awad; Furberg, Anne-Sofie; Grimnes, Guri; Jorde, Rolf; Nilsen, Ole Andreas; Dennison, Elaine; Emaus, Nina

    2015-04-22

    Low levels of physical activity may have considerable negative effects on bone health in adolescence, and increasing screen time in place of sporting activity during growth is worrying. This study explored the associations between self-reported screen time at weekends and bone mineral density (BMD). In 2010/2011, 1038 (93%) of the region's first-year upper-secondary school students (15-18 years) attended the Tromsø Study, Fit Futures 1 (FF1). A follow-up survey (FF2) took place in 2012/2013. BMD at total hip, femoral neck and total body was measured in g/cm² by dual X-ray absorptiometry (GE Lunar Prodigy). Lifestyle variables were self-reported, including questions on hours per day spent in front of television/computer during weekends and hours spent on leisure time physical activities. Complete data sets for 388/312 girls and 359/231 boys at FF1/FF2, respectively, were used in analyses. Sex stratified multiple regression analyses were performed. Many adolescents balanced 2-4 h screen time with moderate or high physical activity levels. Screen time was positively related to body mass index (BMI) in boys (p=0.002), who spent more time in front of the computer than girls did (p<0.001). In boys, screen time was adversely associated with BMD at FF1 at all sites, and these associations remained robust to adjustments for age, puberty, height, BMI, physical activity, vitamin D levels, smoking, alcohol, calcium and carbonated drink consumption (p<0.05). Screen time was also negatively associated with total hip BMD at FF2 (p=0.031). In contrast, girls who spent 4-6 h in front of the computer had higher BMD than the reference (<2 h). In Norwegian boys, time spent on screen-based sedentary activity was negatively associated with BMD levels; this relationship persisted 2 years later. Such negative associations were not present among girls. Whether this surprising result is explained by biological differences remains unclear. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  3. Atmospheric absorption of sound - Update

    NASA Technical Reports Server (NTRS)

    Bass, H. E.; Sutherland, L. C.; Zuckerwar, A. J.

    1990-01-01

    Best current expressions for the vibrational relaxation times of oxygen and nitrogen in the atmosphere are used to compute total absorption. The resulting graphs of total absorption as a function of frequency for different humidities should be used in lieu of the graph published earlier by Evans et al (1972).

  4. Quantum gates by periodic driving

    PubMed Central

    Shi, Z. C.; Wang, W.; Yi, X. X.

    2016-01-01

    Topological quantum computation has been extensively studied in the past decades due to its robustness against decoherence. One way to realize topological quantum computation is by adiabatic evolutions; these require a relatively long time to complete a gate, so the speed of quantum computation slows down. In this work, we present a method to realize single-qubit quantum gates by periodic driving. Compared to adiabatic evolution, the single-qubit gates can be realized at a fixed time much shorter than that required by adiabatic evolution. The driving fields can be sinusoidal or square-well fields. With the sinusoidal driving field, we derive an expression for the total operation time in the high-frequency limit, and an exact analytical expression for the evolution operator, without any approximations, is given for the square-well driving. This study suggests that periodic driving could provide a new direction in regulating the operation time in topological quantum computation. PMID:26911900
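    As a minimal illustration of the kind of calculation involved, the sketch below propagates a two-level system under a sinusoidal driving field by composing short-time propagators. The Hamiltonian, parameters, and the X-gate target are illustrative assumptions only, not the model analysed in the record above.

    ```python
    # Sketch: total evolution operator of a sinusoidally driven two-level system,
    # U ≈ prod_k exp(-i H(t_k) dt), hbar = 1.  All parameters are illustrative.
    import numpy as np
    from scipy.linalg import expm

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def evolution_operator(delta, amp, omega, t_total, steps=20000):
        """Propagator for H(t) = delta/2 * sz + amp*cos(omega*t) * sx."""
        dt = t_total / steps
        U = np.eye(2, dtype=complex)
        for k in range(steps):
            t = (k + 0.5) * dt                      # midpoint of each time slice
            H = 0.5 * delta * sz + amp * np.cos(omega * t) * sx
            U = expm(-1j * H * dt) @ U
        return U

    U = evolution_operator(delta=1.0, amp=0.5, omega=5.0, t_total=10.0)
    print(np.allclose(U.conj().T @ U, np.eye(2)))   # unitarity check
    print(abs(np.trace(sx @ U)) / 2)                # overlap with an X gate (toy target)
    ```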

  5. Quantum gates by periodic driving.

    PubMed

    Shi, Z C; Wang, W; Yi, X X

    2016-02-25

    Topological quantum computation has been extensively studied in the past decades due to its robustness against decoherence. One way to realize topological quantum computation is by adiabatic evolutions; these require a relatively long time to complete a gate, so the speed of quantum computation slows down. In this work, we present a method to realize single-qubit quantum gates by periodic driving. Compared to adiabatic evolution, the single-qubit gates can be realized at a fixed time much shorter than that required by adiabatic evolution. The driving fields can be sinusoidal or square-well fields. With the sinusoidal driving field, we derive an expression for the total operation time in the high-frequency limit, and an exact analytical expression for the evolution operator, without any approximations, is given for the square-well driving. This study suggests that periodic driving could provide a new direction in regulating the operation time in topological quantum computation.

  6. Hybrid Computational Architecture for Multi-Scale Modeling of Materials and Devices

    DTIC Science & Technology

    2016-01-03

    Single node performance (20-core node, 40 threads with hyper-threading (HT)):

    Node      # of cores      Total CPU time   User CPU time   System CPU time   Elapsed time
    INTEL20   40 (with HT)    534.785          529.984         4.800             541.179
              20              468.873          466.119         2.754             476.878
              10              671.798          669.653         2.145             680.510
              8               772.269          770.256         2.013

  7. Fluvial sediment in Double Creek subwatershed No. 5, Washington County, Oklahoma

    USGS Publications Warehouse

    Bednar, Gene A.; Waldrep, Thomas E.

    1973-01-01

    A total of 21,370 tons of fluvial sediment was transported into reservoir No. 5 and a total of 19,930 tons was deposited. Seventy-eight percent of the total fluvial sediment was deposited during the first 9.2 years, or 63 percent of the time of reservoir operation. The computed trap efficiency of reservoir No. 5 was 93 percent.
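    A quick back-of-the-envelope check of the figures quoted above (the record length is inferred from the stated percentages, not given directly):

    ```python
    # Values taken from the record text.
    transported = 21_370   # tons of fluvial sediment delivered to reservoir No. 5
    deposited   = 19_930   # tons retained in the reservoir

    trap_efficiency = deposited / transported
    print(f"Trap efficiency ≈ {trap_efficiency:.0%}")             # ≈ 93%

    # 78% of the sediment arrived in the first 9.2 years, stated to be 63% of the
    # operating period, implying a total record length of roughly:
    print(f"Approximate record length ≈ {9.2 / 0.63:.1f} years")  # ≈ 14.6 years
    ```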

  8. A digital computer program for the dynamic interaction simulation of controls and structure (DISCOS), volume 1

    NASA Technical Reports Server (NTRS)

    Bodley, C. S.; Devers, A. D.; Park, A. C.; Frisch, H. P.

    1978-01-01

    A theoretical development and associated digital computer program system for the dynamic simulation and stability analysis of passive and actively controlled spacecraft are presented. The dynamic system (spacecraft) is modeled as an assembly of rigid and/or flexible bodies not necessarily in a topological tree configuration. The computer program system is used to investigate total system dynamic characteristics, including interaction effects between rigid and/or flexible bodies, control systems, and a wide range of environmental loadings. In addition, the program system is used for designing attitude control systems and for evaluating total dynamic system performance, including time domain response and frequency domain stability analyses.

  9. TV Time but Not Computer Time Is Associated with Cardiometabolic Risk in Dutch Young Adults

    PubMed Central

    Altenburg, Teatske M.; de Kroon, Marlou L. A.; Renders, Carry M.; HiraSing, Remy; Chinapaw, Mai J. M.

    2013-01-01

    Background TV time and total sedentary time have been positively related to biomarkers of cardiometabolic risk in adults. We aim to examine the association of TV time and computer time separately with cardiometabolic biomarkers in young adults. Additionally, the mediating role of waist circumference (WC) is studied. Methods and Findings Data of 634 Dutch young adults (18–28 years; 39% male) were used. Cardiometabolic biomarkers included indicators of overweight, blood pressure, blood levels of fasting plasma insulin, cholesterol, glucose, triglycerides and a clustered cardiometabolic risk score. Linear regression analyses were used to assess the cross-sectional association of self-reported TV and computer time with cardiometabolic biomarkers, adjusting for demographic and lifestyle factors. Mediation by WC was checked using the product-of-coefficient method. TV time was significantly associated with triglycerides (B = 0.004; CI = [0.001;0.05]) and insulin (B = 0.10; CI = [0.01;0.20]). Computer time was not significantly associated with any of the cardiometabolic biomarkers. We found no evidence for WC to mediate the association of TV time or computer time with cardiometabolic biomarkers. Conclusions We found a significantly positive association of TV time with cardiometabolic biomarkers. In addition, we found no evidence for WC as a mediator of this association. Our findings suggest a need to distinguish between TV time and computer time within future guidelines for screen time. PMID:23460900
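    The mediation check mentioned above uses the product-of-coefficients method. The sketch below shows one common variant of that calculation (indirect effect a*b with a Sobel standard error) on simulated data; the variable names, simulated values, and the Sobel formulation are assumptions for illustration, not the study's data or exact procedure.

    ```python
    # Product-of-coefficients mediation sketch on simulated data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 634
    tv = rng.gamma(2.0, 1.0, n)                  # exposure: TV time (h/day), simulated
    wc = 80 + 1.5 * tv + rng.normal(0, 8, n)     # candidate mediator: waist circumference (cm)
    trig = 1.0 + 0.004 * tv + 0.01 * wc + rng.normal(0, 0.4, n)   # outcome: triglycerides

    # Path a: exposure -> mediator
    m_a = sm.OLS(wc, sm.add_constant(tv)).fit()
    # Path b: mediator -> outcome, adjusted for exposure
    X = sm.add_constant(np.column_stack([tv, wc]))
    m_b = sm.OLS(trig, X).fit()

    a, se_a = m_a.params[1], m_a.bse[1]
    b, se_b = m_b.params[2], m_b.bse[2]
    indirect = a * b
    sobel_se = np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    print(f"indirect effect = {indirect:.4f}, Sobel z = {indirect / sobel_se:.2f}")
    ```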

  10. Systolic time interval data acquisition system. Specialized cardiovascular studies

    NASA Technical Reports Server (NTRS)

    Baker, J. T.

    1976-01-01

    The development of a data acquisition system for noninvasive measurement of systolic time intervals is described. R-R interval from the ECG determines instantaneous heart rate prior to the beat to be measured. Total electromechanical systole (Q-S2) is measured from the onset of the ECG Q-wave to the onset of the second heart sound (S2). Ejection time (ET or LVET) is measured from the onset of carotid upstroke to the incisure. Pre-ejection period (PEP) is computed by subtracting ET from Q-S2. PEP/ET ratio is computed directly.
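    The pre-ejection period described above is obtained by simple subtraction. A tiny numerical illustration with assumed (not measured) values:

    ```python
    # Systolic time intervals: PEP = (Q-S2) - ET.  Example values in milliseconds.
    q_s2 = 400.0   # total electromechanical systole, Q-wave onset to S2
    lvet = 290.0   # ejection time, carotid upstroke onset to incisure

    pep = q_s2 - lvet
    print(pep, round(pep / lvet, 2))   # 110.0 ms and PEP/ET ratio ≈ 0.38
    ```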

  11. Experimental and theoretical studies on solar energy for energy conversion

    NASA Technical Reports Server (NTRS)

    Thomas, A. P.; Thekaekara, M. P.

    1976-01-01

    This paper presents the results of investigations made experimentally and theoretically to evaluate the various parameters that affect the amount of solar energy received on a collector surface. Measurements were made over a long period of time using both a pyranometer and a pyrheliometer. Computations of spectral and total irradiance at ground level have been made for a large variety of combinations of atmospheric parameters for ozone density, precipitable water vapor, turbidity coefficients and air mass. A study of irradiance measured at GSFC as a function of air mass, and a comparison of the data with the computed values of total direct solar irradiance for various parameters, indicate that turbidity changes with time of day; atmospheric opacity is less in the afternoon than in the morning.

  12. 28 CFR 523.1 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INMATE ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Good Time § 523.1 Definitions. (a) Statutory good time means a credit to a sentence as authorized by 18 U.S.C. 4161. The total amount of statutory good time which an inmate is entitled to have...

  13. 28 CFR 523.1 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INMATE ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Good Time § 523.1 Definitions. (a) Statutory good time means a credit to a sentence as authorized by 18 U.S.C. 4161. The total amount of statutory good time which an inmate is entitled to have...

  14. 28 CFR 523.1 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INMATE ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Good Time § 523.1 Definitions. (a) Statutory good time means a credit to a sentence as authorized by 18 U.S.C. 4161. The total amount of statutory good time which an inmate is entitled to have...

  15. 28 CFR 523.1 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INMATE ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Good Time § 523.1 Definitions. (a) Statutory good time means a credit to a sentence as authorized by 18 U.S.C. 4161. The total amount of statutory good time which an inmate is entitled to have...

  16. 28 CFR 523.1 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INMATE ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Good Time § 523.1 Definitions. (a) Statutory good time means a credit to a sentence as authorized by 18 U.S.C. 4161. The total amount of statutory good time which an inmate is entitled to have...

  17. Dynamic load balance scheme for the DSMC algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jin; Geng, Xiangren; Jiang, Dingwu

    The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been used over a wide range of rarefied flow problems in the past 40 years. While the DSMC is suitable for parallel implementation on powerful multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined to a large extent by the total number of simulator particles upon it. Since most flows are impulsively started with an initial distribution of particles that is quite different from the steady state, the total number of simulator particles will change dramatically. A load balance based upon an initial distribution of particles will break down as the steady state of the flow is reached. The load imbalance and huge computational cost of DSMC have limited its application to rarefied or simple transitional flows. In this paper, by taking advantage of METIS, a software package for partitioning unstructured graphs, and taking the total number of simulator particles in each cell as weight information, a repartitioning based upon the principle that each processor handles approximately an equal number of simulator particles has been achieved. The computation must pause several times to renew the total number of simulator particles in each processor and repartition the whole domain again. Thus the load balance across the processor array holds for the duration of the computation, and the parallel efficiency can be improved effectively. The benchmark solution of a cylinder submerged in hypersonic flow has been simulated numerically. In addition, hypersonic flow around a complex wing-body configuration has also been simulated. The results show that, for both cases, the computational time can be reduced by about 50%.
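    The core idea, balancing processors by simulator-particle counts, can be illustrated with a much simpler greedy assignment than the METIS-based graph partitioning used in the paper; the sketch below ignores cell adjacency and communication costs entirely, so it is only an illustration of the load criterion, not of the actual method.

    ```python
    # Greedy repartition of cells across processors by particle count.
    import heapq

    def repartition(particles_per_cell, n_procs):
        """Return a cell -> processor map that balances total particle counts."""
        # Heaviest cell first, always assigned to the least-loaded processor.
        loads = [(0, p) for p in range(n_procs)]
        heapq.heapify(loads)
        assignment = {}
        for cell, count in sorted(particles_per_cell.items(),
                                  key=lambda kv: kv[1], reverse=True):
            load, proc = heapq.heappop(loads)
            assignment[cell] = proc
            heapq.heappush(loads, (load + count, proc))
        return assignment

    counts = {0: 1200, 1: 300, 2: 950, 3: 4000, 4: 700, 5: 150, 6: 2100, 7: 800}
    print(repartition(counts, n_procs=3))
    ```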

  18. A Numerical Method for Calculating the Wave Drag of a Configuration from the Second Derivative of the Area Distribution of a Series of Equivalent Bodies of Revolution

    NASA Technical Reports Server (NTRS)

    Levy, Lionel L., Jr.; Yoshikawa, Kenneth K.

    1959-01-01

    A method based on linearized and slender-body theories, which is easily adapted to electronic-machine computing equipment, is developed for calculating the zero-lift wave drag of single- and multiple-component configurations from a knowledge of the second derivative of the area distribution of a series of equivalent bodies of revolution. The accuracy and computational time required of the method to calculate zero-lift wave drag is evaluated relative to another numerical method which employs the Tchebichef form of harmonic analysis of the area distribution of a series of equivalent bodies of revolution. The results of the evaluation indicate that the total zero-lift wave drag of a multiple-component configuration can generally be calculated most accurately as the sum of the zero-lift wave drag of each component alone plus the zero-lift interference wave drag between all pairs of components. The accuracy and computational time required of both methods to calculate total zero-lift wave drag at supersonic Mach numbers is comparable for airplane-type configurations. For systems of bodies of revolution both methods yield similar results with comparable accuracy; however, the present method only requires up to 60 percent of the computing time required of the harmonic-analysis method for two bodies of revolution and less time for a larger number of bodies.

  19. Watching TV has a distinct sociodemographic and lifestyle profile compared with other sedentary behaviors: A nationwide population-based study

    PubMed Central

    2017-01-01

    Watching TV has been consistently associated with higher risk of adverse health outcomes, but the effect of other sedentary behaviors (SB) is uncertain. Potential explanations are that watching TV is not a marker of a broader sedentary pattern and that each SB reflects different sociodemographic and health characteristics. Data were taken from a survey of 10,199 individuals, representative of the Spanish population aged ≥18 years. SB and other health behaviors were ascertained using validated questionnaires. Watching TV was the predominant SB (45.4% of the total sitting time), followed by sitting at the computer (22.7%). TV watching time showed no correlation with total time on other SB (r: -0.02, p = 0.07). By contrast, time spent at the computer was directly correlated with time spent on commuting (r: 0.07, p<0.01), listening to music (r: 0.10, p<0.01) and reading (r: 0.08, p<0.01). TV watching time was greater in those with older age, lower education, unhealthier lifestyle, and with diabetes or osteomuscular disease. More time spent at the computer or in commuting was linked to younger age, male gender, higher education and having a sedentary job. In conclusion, watching TV is not correlated with other SB and shows a distinct demographic and lifestyle profile. PMID:29206883

  20. Watching TV has a distinct sociodemographic and lifestyle profile compared with other sedentary behaviors: A nationwide population-based study.

    PubMed

    Andrade-Gómez, Elena; García-Esquinas, Esther; Ortolá, Rosario; Martínez-Gómez, David; Rodríguez-Artalejo, Fernando

    2017-01-01

    Watching TV has been consistently associated with higher risk of adverse health outcomes, but the effect of other sedentary behaviors (SB) is uncertain. Potential explanations are that watching TV is not a marker of a broader sedentary pattern and that each SB reflects different sociodemographic and health characteristics. Data were taken from a survey of 10,199 individuals, representative of the Spanish population aged ≥18 years. SB and other health behaviors were ascertained using validated questionnaires. Watching TV was the predominant SB (45.4% of the total sitting time), followed by sitting at the computer (22.7%). TV watching time showed no correlation with total time on other SB (r: -0.02, p = 0.07). By contrast, time spent at the computer was directly correlated with time spent on commuting (r: 0.07, p<0.01), listening to music (r: 0.10, p<0.01) and reading (r: 0.08, p<0.01). TV watching time was greater in those with older age, lower education, unhealthier lifestyle, and with diabetes or osteomuscular disease. More time spent at the computer or in commuting was linked to younger age, male gender, higher education and having a sedentary job. In conclusion, watching TV is not correlated with other SB and shows a distinct demographic and lifestyle profile.

  1. A users manual for a computer program which calculates time optimal geocentric transfers using solar or nuclear electric and high thrust propulsion

    NASA Technical Reports Server (NTRS)

    Sackett, L. L.; Edelbaum, T. N.; Malchow, H. L.

    1974-01-01

    This manual is a guide for using a computer program which calculates time optimal trajectories for high- and low-thrust geocentric transfers. Either SEP or NEP may be assumed, and a one- or two-impulse, fixed total delta V, initial high thrust phase may be included. Also, a single impulse of specified delta V may be included after the low thrust phase. The low thrust phase utilizes equinoctial orbital elements to avoid the classical singularities and Kryloff-Boguliuboff averaging to help ensure more rapid computation. The program is written in FORTRAN 4 in double precision for use on an IBM 360 computer. The manual includes a description of the problem treated, input/output information, examples of runs, and source code listings.
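    The equinoctial elements mentioned above avoid the classical-element singularities at zero eccentricity and zero inclination. The sketch below uses one common convention for the conversion from classical elements; it is a generic illustration, not the program's own routine.

    ```python
    # Classical orbital elements -> equinoctial elements (p, f, g, h, k, L).
    import math

    def classical_to_equinoctial(a, e, i, raan, argp, nu):
        """Angles in radians; a in any consistent length unit."""
        p = a * (1.0 - e**2)                   # semi-latus rectum
        f = e * math.cos(argp + raan)
        g = e * math.sin(argp + raan)
        h = math.tan(i / 2.0) * math.cos(raan)
        k = math.tan(i / 2.0) * math.sin(raan)
        L = raan + argp + nu                   # true longitude
        return p, f, g, h, k, L

    print(classical_to_equinoctial(a=7000.0, e=0.01, i=math.radians(28.5),
                                   raan=0.5, argp=1.0, nu=0.2))
    ```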

  2. Measuring older adults' sedentary time: reliability, validity, and responsiveness.

    PubMed

    Gardiner, Paul A; Clark, Bronwyn K; Healy, Genevieve N; Eakin, Elizabeth G; Winkler, Elisabeth A H; Owen, Neville

    2011-11-01

    With evidence that prolonged sitting has deleterious health consequences, decreasing sedentary time is a potentially important preventive health target. High-quality measures, particularly for use with older adults, who are the most sedentary population group, are needed to evaluate the effect of sedentary behavior interventions. We examined the reliability, validity, and responsiveness to change of a self-report sedentary behavior questionnaire that assessed time spent in behaviors common among older adults: watching television, computer use, reading, socializing, transport and hobbies, and a summary measure (total sedentary time). In the context of a sedentary behavior intervention, nonworking older adults (n = 48, age = 73 ± 8 yr (mean ± SD)) completed the questionnaire on three occasions during a 2-wk period (7 d between administrations) and wore an accelerometer (ActiGraph model GT1M) for two periods of 6 d. Test-retest reliability (for the individual items and the summary measure) and validity (self-reported total sedentary time compared with accelerometer-derived sedentary time) were assessed during the 1-wk preintervention period, using Spearman (ρ) correlations and 95% confidence intervals (CI). Responsiveness to change after the intervention was assessed using the responsiveness statistic (RS). Test-retest reliability was excellent for television viewing time (ρ (95% CI) = 0.78 (0.63-0.89)), computer use (ρ (95% CI) = 0.90 (0.83-0.94)), and reading (ρ (95% CI) = 0.77 (0.62-0.86)); acceptable for hobbies (ρ (95% CI) = 0.61 (0.39-0.76)); and poor for socializing and transport (ρ < 0.45). Total sedentary time had acceptable test-retest reliability (ρ (95% CI) = 0.52 (0.27-0.70)) and validity (ρ (95% CI) = 0.30 (0.02-0.54)). Self-report total sedentary time was similarly responsive to change (RS = 0.47) as accelerometer-derived sedentary time (RS = 0.39). The summary measure of total sedentary time has good repeatability and modest validity and is sufficiently responsive to change suggesting that it is suitable for use in interventions with older adults.

  3. A modified ATI technique for nowcasting convective rain volumes over areas. [area-time integrals

    NASA Technical Reports Server (NTRS)

    Makarau, Amos; Johnson, L. Ronald; Doneaud, Andre A.

    1988-01-01

    This paper explores the applicability of the area-time-integral (ATI) technique for the estimation of the growth portion only of a convective storm (while the rain volume is computed using the entire life history of the event) and for nowcasting the total rain volume of a convective system at the stage of its maximum development. For these purposes, the ATIs were computed from the digital radar data (for 1981-1982) from the North Dakota Cloud Modification Project, using the maximum echo area (ATIA) no less than 25 dBz, the maximum reflectivity, and the maximum echo height as the end of the growth portion of the convective event. Linear regression analysis demonstrated that correlations between total rain volume or the maximum rain volume versus ATIA were the strongest. The uncertainties obtained were comparable to the uncertainties which typically occur in rain volume estimates obtained from radar data employing Z-R conversion followed by space and time integration. This demonstrates that the total rain volume of a storm can be nowcasted at its maximum stage of development.
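    For orientation, an area-time integral of the kind described above can be formed by summing the echo area exceeding the reflectivity threshold over successive radar scans, weighted by the scan interval, and then applying a fitted linear factor to estimate rain volume. In the sketch below the threshold, scan interval, and conversion factor are illustrative assumptions, not the paper's fitted values.

    ```python
    # Minimal area-time integral (ATI) sketch with made-up echo areas.
    import numpy as np

    echo_area_km2 = np.array([0, 40, 120, 310, 420, 380, 260, 90, 10])  # area >= threshold
    dt_hours = 0.25                                    # time between radar scans

    ati = float(np.sum(echo_area_km2) * dt_hours)      # km^2 * h
    k = 3.0e4                                          # hypothetical m^3 per (km^2 * h)
    print(f"ATI = {ati:.1f} km^2 h, nowcast rain volume ≈ {k * ati:.2e} m^3")
    ```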

  4. TimeSet: A computer program that accesses five atomic time services on two continents

    NASA Technical Reports Server (NTRS)

    Petrakis, P. L.

    1993-01-01

    TimeSet is a shareware program for accessing digital time services by telephone. At its initial release, it was capable of capturing time signals only from the U.S. Naval Observatory to set a computer's clock. Later the ability to synchronize with the National Institute of Standards and Technology was added. Now, in Version 7.10, TimeSet is able to access three additional telephone time services in Europe - in Sweden, Austria, and Italy - making a total of five official services addressable by the program. A companion program, TimeGen, allows yet another source of telephone time data strings for callers equipped with TimeSet version 7.10. TimeGen synthesizes UTC time data strings in the Naval Observatory's format from an accurately set and maintained DOS computer clock, and transmits them to callers. This allows an unlimited number of 'freelance' time generating stations to be created. Timesetting from TimeGen is made feasible by the advent of Becker's RighTime, a shareware program that learns the drift characteristics of a computer's clock and continuously applies a correction to keep it accurate, and also brings .01 second resolution to the DOS clock. With clock regulation by RighTime and periodic update calls by the TimeGen station to an official time source via TimeSet, TimeGen offers the same degree of accuracy within the resolution of the computer clock as any official atomic time source.

  5. Screen time viewing behaviors and isometric trunk muscle strength in youth.

    PubMed

    Grøntved, Anders; Ried-Larsen, Mathias; Froberg, Karsten; Wedderkopp, Niels; Brage, Søren; Kristensen, Peter Lund; Andersen, Lars Bo; Møller, Niels Christian

    2013-10-01

    The objective of this study was to examine the association of screen time viewing behavior with isometric trunk muscle strength in youth. A cross-sectional study was carried out including 606 adolescents (14-16 yr old) participating in the Danish European Youth Heart Study, a population-based study with assessments conducted in either 1997/1998 or 2003/2004. Maximal voluntary contractions during isometric back extension and abdominal flexion were determined using a strain gauge dynamometer, and cardiorespiratory fitness (CRF) was obtained using a maximal cycle ergometer test. TV viewing time, computer use, and other lifestyle behaviors were obtained by self-report. Analyses of association of screen use behaviors with isometric trunk muscle strength were carried out using multivariable adjusted linear regression. The mean (SD) isometric strength was 0.87 (0.16) N·kg⁻¹. TV viewing, computer use, and total screen time use were inversely associated with isometric trunk muscle strength in analyses adjusted for lifestyle and sociodemographic factors. After further adjustment for CRF and waist circumference, associations remained significant for computer use and total screen time, but TV viewing was only marginally associated with muscle strength after these additional adjustments (-0.05 SD (95% confidence interval, -0.11 to 0.005) difference in strength per 1 h·d⁻¹ difference in TV viewing time, P = 0.08). Each 1 h·d⁻¹ difference in total screen time use was associated with -0.09 SD (95% confidence interval, -0.14 to -0.04) lower isometric trunk muscle strength in the fully adjusted model (P = 0.001). There were no indications that the association of screen time use with isometric trunk muscle strength was attenuated among highly fit individuals (P = 0.91 for CRF by screen time interaction). Screen time use was inversely associated with isometric trunk muscle strength independent of CRF and other confounding factors.

  6. Digital computing cardiotachometer

    NASA Technical Reports Server (NTRS)

    Smith, H. E.; Rasquin, J. R.; Taylor, R. A. (Inventor)

    1973-01-01

    A tachometer is described which instantaneously measures heart rate. During the two intervals between three succeeding heart beats, the electronic system: (1) measures the interval by counting cycles from a fixed frequency source occurring between the two beats; and (2) computes heart rate during the interval between the next two beats by counting the number of times that the interval count must be counted down to zero in order to equal a total count of sixty times (to convert to beats per minute) the frequency of the fixed frequency source.
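    The counting scheme described above amounts to heart rate (beats/min) = 60 * f_clock / N, where N is the number of clock cycles counted between two beats. A quick numerical illustration with an assumed clock frequency:

    ```python
    f_clock = 1000.0        # Hz, fixed frequency source (assumed value)
    interval_count = 800    # cycles counted between two successive beats (0.8 s)

    heart_rate = 60.0 * f_clock / interval_count
    print(heart_rate)       # 75.0 beats per minute
    ```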

  7. Dynamic Transfers Of Tasks Among Computers

    NASA Technical Reports Server (NTRS)

    Liu, Howard T.; Silvester, John A.

    1989-01-01

    An allocation scheme gives jobs to idle computers. An ideal resource-sharing algorithm should have the following characteristics: dynamic, decentralized, and heterogeneous. The proposed enhanced receiver-initiated dynamic algorithm (ERIDA) for resource sharing fulfills all of the above criteria. It provides a method for balancing the workload among hosts, resulting in improved response time and throughput performance of the total system, and it adjusts dynamically to the traffic load of each station.
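    A toy illustration of the receiver-initiated idea, an idle node pulling work from a busy peer, is sketched below; it is a schematic example only and does not reproduce the actual ERIDA protocol or its messaging.

    ```python
    # Receiver-initiated load sharing, toy version: the idle node requests one
    # job from the most heavily loaded peer.
    from collections import deque

    queues = {"A": deque([1, 2, 3, 4, 5]), "B": deque([6]), "C": deque()}

    def receiver_initiated_step(queues, idle_node):
        """Idle node pulls one job from the busiest peer, if any peer has work."""
        donors = [n for n in queues if n != idle_node and queues[n]]
        if not donors:
            return None
        donor = max(donors, key=lambda n: len(queues[n]))
        job = queues[donor].popleft()
        queues[idle_node].append(job)
        return donor, job

    print(receiver_initiated_step(queues, "C"))     # C pulls a job from A
    print({n: list(q) for n, q in queues.items()})
    ```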

  8. Talking with the alien: interaction with computers in the GP consultation.

    PubMed

    Dowell, Anthony; Stubbe, Maria; Scott-Dowell, Kathy; Macdonald, Lindsay; Dew, Kevin

    2013-01-01

    This study examines New Zealand GPs' interaction with computers in routine consultations. Twenty-eight video-recorded consultations from 10 GPs were analysed in micro-detail to explore: (i) how doctors divide their time and attention between computer and patient; (ii) the different roles ascribed to the computer; and (iii) how computer use influences the interactional flow of the consultation. All GPs engaged with the computer in some way for at least 20% of each consultation, and on average spent 12% of time totally focussed on the computer. Patterns of use varied; most GPs inputted all or most notes during the consultation, but a few set aside dedicated time afterwards. The computer acted as an additional participant enacting roles like information repository and legitimiser of decisions. Computer use also altered some of the normal 'rules of engagement' between doctor and patient. Long silences and turning away interrupted the smooth flow of conversation, but various 'multitasking' strategies allowed GPs to remain engaged with patients during episodes of computer use (e.g. signposting, online commentary, verbalising while typing, social chat). Conclusions were that use of computers has many benefits but also significantly influences the fine detail of the GP consultation. Doctors must consciously develop strategies to manage this impact.

  9. Single machine total completion time minimization scheduling with a time-dependent learning effect and deteriorating jobs

    NASA Astrophysics Data System (ADS)

    Wang, Ji-Bo; Wang, Ming-Zheng; Ji, Ping

    2012-05-01

    In this article, we consider a single machine scheduling problem with a time-dependent learning effect and deteriorating jobs. By the effects of time-dependent learning and deterioration, we mean that the job processing time is defined by a function of its starting time and total normal processing time of jobs in front of it in the sequence. The objective is to determine an optimal schedule so as to minimize the total completion time. This problem remains open for the case of -1 < a < 0, where a denotes the learning index; we show that an optimal schedule of the problem is V-shaped with respect to job normal processing times. Three heuristic algorithms utilising the V-shaped property are proposed, and computational experiments show that the last heuristic algorithm performs effectively and efficiently in obtaining near-optimal solutions.
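    To make the objective concrete, the sketch below evaluates the total completion time of a given sequence when each job's actual processing time depends on its starting time and on the total normal processing time of the jobs before it, and builds a V-shaped sequence (longest jobs at the ends, shortest in the middle). The specific model function is an illustrative assumption, not the model analysed in the article.

    ```python
    # Total completion time for a sequence under a start-time / preceding-work model.
    def total_completion_time(sequence, normal, actual_time):
        t, done_normal, total = 0.0, 0.0, 0.0
        for j in sequence:
            p = actual_time(normal[j], t, done_normal)
            t += p                      # completion time of job j
            total += t
            done_normal += normal[j]
        return total

    def example_model(p_normal, start, preceding_normal, a=-0.5, b=0.01):
        # learning from accumulated normal work, deterioration with start time
        return p_normal * (1.0 + preceding_normal) ** a * (1.0 + b * start)

    normal = [5.0, 2.0, 8.0, 3.0, 6.0]
    v_shaped = [2, 4, 0, 1, 3]   # normal times 8, 6, 5, 2, 3: decreasing then increasing
    print(total_completion_time(v_shaped, normal, example_model))
    ```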

  10. Block-structured grids for complex aerodynamic configurations: Current status

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Sanetrik, Mark D.; Parlette, Edward B.

    1995-01-01

    The status of CFD methods based on the use of block-structured grids for analyzing viscous flows over complex configurations is examined. The objective of the present study is to make a realistic assessment of the usability of such grids for routine computations typically encountered in the aerospace industry. It is recognized at the very outset that the total turnaround time, from the moment the configuration is identified until the computational results have been obtained and postprocessed, is more important than just the computational time. Pertinent examples will be cited to demonstrate the feasibility of solving flow over practical configurations of current interest on block-structured grids.

  11. Simplified methods for calculating photodissociation rates

    NASA Technical Reports Server (NTRS)

    Shimazaki, T.; Ogawa, T.; Farrell, B. C.

    1977-01-01

    Simplified methods for calculating the transmission of solar UV radiation and the dissociation coefficients of various molecules are compared. A significant difference sometimes appears in calculations of the individual band, but the total transmission and the total dissociation coefficients integrated over the entire SR (solar radiation) band region agree well between the methods. The ambiguities in the solar flux data affect the calculated dissociation coefficients more strongly than does the method. A simpler method is developed for the purpose of reducing the computation time and computer memory size necessary for storing coefficients of the equations. The new method can reduce the computation time by a factor of more than 3 and the memory size by a factor of more than 50 compared with the Hudson-Mahle method, and yet the result agrees within 10 percent (in most cases much less) with the original Hudson-Mahle results, except for H2O and CO2. A revised method is necessary for these two molecules, whose absorption cross sections change very rapidly over the SR band spectral range.

  12. Inelastic strain analogy for piecewise linear computation of creep residues in built-up structures

    NASA Technical Reports Server (NTRS)

    Jenkins, Jerald M.

    1987-01-01

    An analogy between inelastic strains caused by temperature and those caused by creep is presented in terms of isotropic elasticity. It is shown how the theoretical aspects can be blended with existing finite-element computer programs to exact a piecewise linear solution. The creep effect is determined by using the thermal stress computational approach, if appropriate alterations are made to the thermal expansion of the individual elements. The overall transient solution is achieved by consecutive piecewise linear iterations. The total residue caused by creep is obtained by accumulating creep residues for each iteration and then resubmitting the total residues for each element as an equivalent input. A typical creep law is tested for incremental time convergence. The results indicate that the approach is practical, with a valid indication of the extent of creep after approximately 20 hr of incremental time. The general analogy between body forces and inelastic strain gradients is discussed with respect to how an inelastic problem can be worked as an elastic problem.

  13. Ageostrophic winds in the severe storm environment

    NASA Technical Reports Server (NTRS)

    Moore, J. T.

    1982-01-01

    The period from 1200 GMT 10 April to 0000 GMT 11 April 1979, during which several major tornadoes and severe thunderstorms occurred, including the Wichita Falls tornado, was studied. A time-adjusted, isentropic data set was used to analyze key parameters. Fourth-order centered finite differences were used to compute the isallobaric, inertial advective, tendency, inertial advective geostrophic and ageostrophic winds. Explicit isentropic trajectories were computed through the isentropic, inviscid equations of motion using a 15-minute time step. Ageostrophic, geostrophic and total vertical motion fields were computed to judge the relative importance of ageostrophy in enhancing the vertical motion field. It is found that ageostrophy is symptomatic of those mass adjustments which take place during upper-level jet streak propagation and can, in a favorable environment, act to increase and release potential instability over meso-alpha time periods.
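    The fourth-order centered finite difference used above has the standard stencil f'(x) ≈ (-f(x+2h) + 8 f(x+h) - 8 f(x-h) + f(x-2h)) / (12 h) on a uniform grid; the 1-D sketch below is a generic illustration, not the study's gridded implementation.

    ```python
    # Fourth-order centered first derivative at interior points of a 1-D field.
    import numpy as np

    def ddx_4th_order(f, h):
        df = np.full_like(f, np.nan)
        df[2:-2] = (-f[4:] + 8 * f[3:-1] - 8 * f[1:-3] + f[:-4]) / (12.0 * h)
        return df

    x = np.linspace(0, 2 * np.pi, 101)
    # Error of d/dx sin(x) against cos(x): should be tiny at this resolution.
    print(np.nanmax(np.abs(ddx_4th_order(np.sin(x), x[1] - x[0]) - np.cos(x))))
    ```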

  14. The Impact of IBM Cell Technology on the Programming Paradigm in the Context of Computer Systems for Climate and Weather Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Shujia; Duffy, Daniel; Clune, Thomas

    The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive to fulfill this requirement. However, the Cell's characteristics, 256KB local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity: the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25 percent of total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.

  15. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units.

    PubMed

    Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi

    2011-11-01

    Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. A computer program to trace seismic ray distribution in complex two-dimensional geological models

    USGS Publications Warehouse

    Yacoub, Nazieh K.; Scott, James H.

    1970-01-01

    A computer program has been developed to trace seismic rays and their amplitudes and energies through complex two-dimensional geological models, for which boundaries between elastic units are defined by a series of digitized X-, Y-coordinate values. Input data for the program includes problem identification, control parameters, model coordinates and elastic parameter for the elastic units. The program evaluates the partitioning of ray amplitude and energy at elastic boundaries, computes the total travel time, total travel distance and other parameters for rays arising at the earth's surface. Instructions are given for punching program control cards and data cards, and for arranging input card decks. An example of printer output for a simple problem is presented. The program is written in FORTRAN IV language. The listing of the program is shown in the Appendix, with an example output from a CDC-6600 computer.

  17. Accelerating EPI distortion correction by utilizing a modern GPU-based parallel computation.

    PubMed

    Yang, Yao-Hao; Huang, Teng-Yi; Wang, Fu-Nien; Chuang, Tzu-Chao; Chen, Nan-Kuei

    2013-04-01

    The combination of phase demodulation and field mapping is a practical method to correct echo planar imaging (EPI) geometric distortion. However, since phase dispersion accumulates in each phase-encoding step, the calculation complexity of phase modulation is Ny-fold higher than conventional image reconstructions. Thus, correcting EPI images via phase demodulation is generally a time-consuming task. Parallel computing by employing general-purpose calculations on graphics processing units (GPU) can accelerate scientific computing if the algorithm is parallelized. This study proposes a method that incorporates the GPU-based technique into phase demodulation calculations to reduce computation time. The proposed parallel algorithm was applied to a PROPELLER-EPI diffusion tensor data set. The GPU-based phase demodulation method reduced the EPI distortion correctly, and accelerated the computation. The total reconstruction time of the 16-slice PROPELLER-EPI diffusion tensor images with matrix size of 128 × 128 was reduced from 1,754 seconds to 101 seconds by utilizing the parallelized 4-GPU program. GPU computing is a promising method to accelerate EPI geometric correction. The resulting reduction in computation time of phase demodulation should accelerate postprocessing for studies performed with EPI, and should effectuate the PROPELLER-EPI technique for clinical practice. Copyright © 2011 by the American Society of Neuroimaging.

  18. A technical innovation for improving identification of the trackers by the LED cameras in navigation-assisted total knee arthroplasty.

    PubMed

    Darmanis, Spyridon; Toms, Andrew; Durman, Robert; Moore, Donna; Eyres, Keith

    2007-07-01

    To reduce the operating time in computer-assisted navigated total knee replacement (TKR), by improving communication between the infrared camera and the trackers placed on the patient. The innovation involves placing a routinely used laser pointer on top of the camera, so that the infrared cameras focus precisely on the trackers located on the knee to be operated on. A prospective randomized study was performed involving 40 patients divided into two groups, A and B. Both groups underwent navigated TKR, but for group B patients a laser pointer was used to improve the targeting capabilities of the cameras. Without the laser pointer, the camera had to move a mean 9.2 times in order to identify the trackers. With the introduction of the laser pointer, this was reduced to 0.9 times. Accordingly, the additional mean time required without the laser pointer was 11.6 minutes. Time delays are a major problem in computer-assisted surgery, and our technical suggestion can contribute towards reducing the delays associated with this particular application.

  19. 42 CFR Appendix E to Part 5 - Criteria for Designation of Areas Having Shortages of Podiatric Professional(s)

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... their time to foot care, the total available foot care practitioners will be computed as follows: Number... for the delivery of podiatric services. 2. The area's ratio of population to foot care practitioners... Count. The population count used will be the total permanent resident civilian population of the area...

  20. Modelling total solar irradiance since 1878 from simulated magnetograms

    NASA Astrophysics Data System (ADS)

    Dasi-Espuig, M.; Jiang, J.; Krivova, N. A.; Solanki, S. K.

    2014-10-01

    Aims: We present a new model of total solar irradiance (TSI) based on magnetograms simulated with a surface flux transport model (SFTM) and the Spectral And Total Irradiance REconstructions (SATIRE) model. Our model provides daily maps of the distribution of the photospheric field and the TSI starting from 1878. Methods: The modelling is done in two main steps. We first calculate the magnetic flux on the solar surface emerging in active and ephemeral regions. The evolution of the magnetic flux in active regions (sunspots and faculae) is computed using a surface flux transport model fed with the observed record of sunspot group areas and positions. The magnetic flux in ephemeral regions is treated separately using the concept of overlapping cycles. We then use a version of the SATIRE model to compute the TSI. The area coverage and the distribution of different magnetic features as a function of time, which are required by SATIRE, are extracted from the simulated magnetograms and the modelled ephemeral region magnetic flux. Previously computed intensity spectra of the various types of magnetic features are employed. Results: Our model reproduces the PMOD composite of TSI measurements starting from 1978 at daily and rotational timescales more accurately than the previous version of the SATIRE model computing TSI over this period of time. The simulated magnetograms provide a more realistic representation of the evolution of the magnetic field on the photosphere and also allow us to make use of information on the spatial distribution of the magnetic fields before the times when observed magnetograms were available. We find that the secular increase in TSI since 1878 is fairly stable to modifications of the treatment of the ephemeral region magnetic flux.

  1. Materials constitutive models for nonlinear analysis of thermally cycled structures

    NASA Technical Reports Server (NTRS)

    Kaufman, A.; Hunt, L. E.

    1982-01-01

    Effects of inelastic materials models on computed stress-strain solutions for thermally loaded structures were studied by performing nonlinear (elastoplastic creep) and elastic structural analyses on a prismatic, double edge wedge specimen of IN 100 alloy that was subjected to thermal cycling in fluidized beds. Four incremental plasticity creep models (isotropic, kinematic, combined isotropic kinematic, and combined plus transient creep) were exercised for the problem by using the MARC nonlinear, finite element computer program. Maximum total strain ranges computed from the elastic and nonlinear analyses agreed within 5 percent. Mean cyclic stresses, inelastic strain ranges, and inelastic work were significantly affected by the choice of inelastic constitutive model. The computing time per cycle for the nonlinear analyses was more than five times that required for the elastic analysis.

  2. Television viewing, computer use, time driving and all-cause mortality: the SUN cohort.

    PubMed

    Basterra-Gortari, Francisco Javier; Bes-Rastrollo, Maira; Gea, Alfredo; Núñez-Córdoba, Jorge María; Toledo, Estefanía; Martínez-González, Miguel Ángel

    2014-06-25

    Sedentary behaviors have been directly associated with all-cause mortality. However, little is known about different types of sedentary behaviors in relation to overall mortality. Our objective was to assess the association between different sedentary behaviors and all-cause mortality. In this prospective, dynamic cohort study (the SUN Project) 13 284 Spanish university graduates with a mean age of 37 years were followed-up for a median of 8.2 years. Television, computer, and driving time were assessed at baseline. Poisson regression models were fitted to examine the association between each sedentary behavior and total mortality. All-cause mortality incidence rate ratios (IRRs) per 2 hours per day were 1.40 (95% confidence interval (CI): 1.06 to 1.84) for television viewing, 0.96 (95% CI: 0.79 to 1.18) for computer use, and 1.14 (95% CI: 0.90 to 1.44) for driving, after adjustment for age, sex, smoking status, total energy intake, Mediterranean diet adherence, body mass index, and physical activity. The risk of mortality was twofold higher for participants reporting ≥ 3 h/day of television viewing than for those reporting <1 h/d (IRR: 2.04 [95% CI 1.16 to 3.57]). Television viewing was directly associated with all-cause mortality. However, computer use and time spent driving were not significantly associated with higher mortality. Further cohort studies and trials designed to assess whether reductions in television viewing are able to reduce mortality are warranted. The lack of association between computer use or time spent driving and mortality needs further confirmation. © 2014 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  3. Computer Assisted Surgery and Current Trends in Orthopaedics Research and Total Joint Replacements

    NASA Astrophysics Data System (ADS)

    Amirouche, Farid

    2008-06-01

    Musculoskeletal research has brought about revolutionary changes in our ability to perform high-precision surgery in joint replacement procedures. Recent advances in computer-assisted surgery as well as better materials have led to reduced wear and greatly enhanced the quality of life of patients. New surgical techniques that reduce the size of the incision and the damage to underlying structures have been the primary advance toward this goal; these techniques are known as MIS, or minimally invasive surgery. Total hip and knee arthroplasties are at an all-time high, reaching 1.2 million surgeries per year in the USA. Primary joint failures are usually due to osteoarthritis, rheumatoid arthritis, osteonecrosis and other inflammatory arthritis conditions. The methods for THR and TKA are critical to the initial stability and longevity of the prostheses. This research aims at understanding the fundamental mechanics of joint arthroplasty and providing insight into current challenges in patient-specific fitting, fixation, and stability. Both experimental and analytical work will be presented. We will examine the success of cementless total hip arthroplasty over the last 10 years and the role computer-assisted navigation is playing in follow-up studies. Cementless total hip arthroplasty attains permanent fixation by the ingrowth of bone into a porous-coated surface. Loosening of an ingrown total hip arthroplasty occurs as a result of osteolysis of the periprosthetic bone and degradation of the bone-prosthesis interface. The osteolytic process occurs as a result of polyethylene wear particles produced by the metal-polyethylene articulation of the prosthesis. The total hip arthroplasty is a congruent joint, and the submicron wear particles produced are phagocytized by macrophages, initiating an inflammatory cascade. This cascade produces cytokines ultimately implicated in osteolysis. The resulting bone loss on both the acetabular and femoral sides eventually leads to component instability. As patients live longer and total hip arthroplasty is performed in younger patients, the risk of osteolysis associated with cumulative wear increases. Computer-assisted surgery is based on sensing feedback, vision and imaging that help surgeons align the patient's joints during total knee or hip replacement with a degree of accuracy not possible with the naked eye. For the first time, computer feedback is essential for ligament balancing and the longevity of the implants. Computer navigation systems also help surgeons to use smaller incisions instead of the traditional larger openings. Small-incision surgery offers the potential for faster recovery, less bleeding and less pain for patients. The development of the SESCAN imaging technique to create a patient-based model of a 3D joint will be presented to show an effective solution for the complex geometry of joints.

  4. The economics of time shared computing: Congestion, user costs and capacity

    NASA Technical Reports Server (NTRS)

    Agnew, C. E.

    1982-01-01

    Time shared systems permit the fixed costs of computing resources to be spread over large numbers of users. However, bottleneck results in the theory of closed queueing networks can be used to show that this economy of scale will be offset by the increased congestion that results as more users are added to the system. If one considers the total costs, including the congestion cost, there is an optimal number of users for a system which equals the saturation value usually used to define system capacity.
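    The saturation value mentioned above corresponds to a standard bound from operational analysis of closed queueing networks: with think time Z and per-visit service demands D_i, throughput is limited by 1/D_max and the knee occurs near N* = (Z + sum D_i) / D_max. The numbers below are illustrative assumptions, and this is the textbook bound rather than necessarily the exact model used in the paper.

    ```python
    # Worked example of the closed-network saturation point N*.
    Z = 15.0                    # user think time (s), assumed
    D = [0.40, 0.12, 0.06]      # service demands at CPU and two disks (s), assumed

    D_max = max(D)
    n_star = (Z + sum(D)) / D_max
    print(f"Saturation (optimal) number of users ≈ {n_star:.1f}")   # ≈ 38.9
    ```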

  5. 47 CFR 69.153 - Presubscribed interexchange carrier charge (PICC).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CARRIER SERVICES (CONTINUED) ACCESS CHARGES Computation of Charges for Price Cap Local Exchange Carriers... to recover revenues totaling Average Price Cap CMT Revenues per Line month times the number of base...

  6. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system. 15 figs.

  7. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1999-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system.

  8. Electromagnetic Navigational Bronchoscopy Reduces the Time Required for Localization and Resection of Lung Nodules.

    PubMed

    Bolton, William David; Cochran, Thomas; Ben-Or, Sharon; Stephenson, James E; Ellis, William; Hale, Allyson L; Binks, Andrew P

    The aims of the study were to evaluate electromagnetic navigational bronchoscopy (ENB) and computed tomography-guided placement as localization techniques for minimally invasive resection of small pulmonary nodules and determine whether electromagnetic navigational bronchoscopy is a safer and more effective method than computed tomography-guided localization. We performed a retrospective review of our thoracic surgery database to identify patients who underwent minimally invasive resection for a pulmonary mass and used either electromagnetic navigational bronchoscopy or computed tomography-guided localization techniques between July 2011 and May 2015. Three hundred eighty-three patients had a minimally invasive resection during our study period, 117 of whom underwent electromagnetic navigational bronchoscopy or computed tomography localization (electromagnetic navigational bronchoscopy = 81; computed tomography = 36). There was no significant difference between computed tomography and electromagnetic navigational bronchoscopy patient groups with regard to age, sex, race, pathology, nodule size, or location. Both computed tomography and electromagnetic navigational bronchoscopy were 100% successful at localizing the mass, and there was no difference in the type of definitive surgical resection (wedge, segmentectomy, or lobectomy) (P = 0.320). Postoperative complications occurred in 36% of all patients, but there were no complications related to the localization procedures. In terms of localization time and surgical time, there was no difference between groups. However, the down/wait time between localization and resection was significant (computed tomography = 189 minutes; electromagnetic navigational bronchoscopy = 27 minutes); this explains why the difference in total time (sum of localization, down, and surgery) was significant (P < 0.001). We found electromagnetic navigational bronchoscopy to be as safe and effective as computed tomography-guided wire placement and to provide a significantly decreased down time between localization and surgical resection.

  9. Comprehensive, Multi-Source Cyber-Security Events Data Set

    DOE Data Explorer

    Kent, Alexander D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-05-21

    This data set represents 58 consecutive days of de-identified event data collected from five sources within Los Alamos National Laboratory’s corporate, internal computer network. The data sources include Windows-based authentication events from both individual computers and centralized Active Directory domain controller servers; process start and stop events from individual Windows computers; Domain Name Service (DNS) lookups as collected on internal DNS servers; network flow data as collected at several key router locations; and a set of well-defined red teaming events that represent bad behavior within the 58 days. In total, the data set is approximately 12 gigabytes compressed across the five data elements and presents 1,648,275,307 events in total for 12,425 users, 17,684 computers, and 62,974 processes. Specific users that are well-known system accounts (SYSTEM, Local Service) were not de-identified, though well-known administrator accounts were still de-identified. In the network flow data, well-known ports (e.g., 80, 443) were not de-identified. All other users, computers, processes, ports, times, and other details were de-identified as a unified set across all the data elements (e.g., U1 is the same U1 in all of the data). The specific timeframe used is not disclosed for security purposes. In addition, no data that allows association outside of LANL’s network is included. All data starts with a time epoch of 1 using a time resolution of 1 second. In the authentication data, failed authentication events are only included for users that had a successful authentication event somewhere within the data set.

  10. Randomized Approaches for Nearest Neighbor Search in Metric Space When Computing the Pairwise Distance Is Extremely Expensive

    NASA Astrophysics Data System (ADS)

    Wang, Lusheng; Yang, Yong; Lin, Guohui

    Finding the closest object for a query in a database is a classical problem in computer science. For some modern biological applications, computing the similarity between two objects might be very time consuming. For example, it takes a long time to compute the edit distance between two whole chromosomes and the alignment cost of two 3D protein structures. In this paper, we study the nearest neighbor search problem in metric space, where the pairwise distance between two objects in the database is known and we want to minimize the number of distances computed on-line between the query and objects in the database in order to find the closest object. We have designed two randomized approaches for indexing metric space databases, where objects are purely described by their distances with each other. Analysis and experiments show that our approaches only need to compute distances to O(log n) objects in order to find the closest object, where n is the total number of objects in the database.
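
    As an illustration of the kind of saving such indexing aims for, the sketch below prunes candidates with the triangle inequality using the precomputed pairwise distances. This is a generic pruning scheme, not the authors' specific randomized index, and the function names (query_dist, pairwise) are placeholders of ours.

    ```python
    import random

    def nearest_neighbor(query_dist, pairwise, n):
        """Return (index, distance, #on-line distance computations).

        query_dist(i)  -- expensive on-line distance between the query and object i
        pairwise[i][j] -- precomputed distance between objects i and j
        Objects whose lower bound |d(q,i) - d(i,j)| already exceeds the best
        distance found so far are skipped (triangle inequality).
        """
        order = list(range(n))
        random.shuffle(order)                  # random probing order (illustrative)
        best, best_d = None, float("inf")
        lower = [0.0] * n                      # lower bounds on d(query, object)
        computed = 0
        for i in order:
            if lower[i] >= best_d:
                continue                       # cannot beat the current best
            d = query_dist(i)                  # the expensive computation
            computed += 1
            if d < best_d:
                best, best_d = i, d
            for j in range(n):                 # tighten every lower bound
                lower[j] = max(lower[j], abs(d - pairwise[i][j]))
        return best, best_d, computed
    ```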

  11. VALIDATION TESTING OF NEW MECHANISMS WITH OUTDOOR CHAMBER DATA, VOLUME 3: CALCULATION OF PHOTOCHEMICAL REACTION PHOTOLYSIS RATES IN THE UNC OUTDOOR CHAMBER

    EPA Science Inventory

    A new model is described for computing in-chamber actinic flux using site specific conditions that include time of day, air pressure, total column ozone, total column water vapor, relative humidity, aerosol type, aerosol optical density at 500 nm, and the spectral albedo of the g...

  12. Urban storm-runoff modelling; Madison, Wisconsin

    USGS Publications Warehouse

    Grant, R. Stephen; Goddard, Gerald

    1979-01-01

    A brief inconclusive evaluation of the water-quality subroutines of the model was made. Close agreement was noted between observed and simulated loads for nitrates, organic nitrogen, total phosphate, and total solids. Ammonia nitrogen and orthophosphate computed by the model ranged from 7 to 11 times greater than the observed loads. The observed loads are doubtful because of the sparsity of water-quality data.

  13. Computer image-guided surgery for total maxillectomy.

    PubMed

    Homma, Akihiro; Saheki, Masahiko; Suzuki, Fumiyuki; Fukuda, Satoshi

    2008-12-01

    In total maxillectomy, the entire upper jaw including the tumor is removed en bloc from the facial skeleton. An intraoperative computed tomographic guidance system (ICTGS) can improve orientation during surgical procedures. However, its efficacy in head and neck surgery remains controversial. This study evaluated the use of an ICTGS in total maxillectomy. Five patients with maxillary sinus neoplasms underwent surgery using a StealthStation ICTGS. The headset was used for anatomic registration during the preoperative CT scan and surgical procedure. The average accuracy was 0.95 mm. The ICTGS provided satisfactory accuracy until the end of resection in all cases, and helped the surgeon to confirm the anatomical location and decide upon the extent of removal in real time. It was particularly useful when the zygoma, maxillary frontal process, orbital floor, and pterygoid process were divided. All patients remained alive and disease free during short-term follow-up. The ICTGS played a supplementary role in total maxillectomy, helping the surgeon to recognize target points accurately in real time, to determine the minimum accurate bone-resection line, and to use the most direct route to reach the lesion. It could also reduce the extent of the skin incision and removal, thus maintaining oncological safety.

  14. Vehicle routing problem with time windows using natural inspired algorithms

    NASA Astrophysics Data System (ADS)

    Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.

    2018-03-01

    The distribution of goods requires a strategy that minimizes the total cost of operational activities while satisfying several constraints, namely the capacity of the vehicles and the service times of the customers. The resulting Vehicle Routing Problem with Time Windows (VRPTW) is a heavily constrained problem. This paper proposes nature-inspired algorithms for handling the constraints of the VRPTW, based on the Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing: the worst solution found by the Bat Algorithm is replaced by the solution from Simulated Annealing. Cat Swarm Optimization, an algorithm based on the behavior of cats, is improved using the Crow Search Algorithm to obtain simpler and faster convergence. Computational results show that these algorithms perform well in minimizing the total distance. A larger population size yields better computational performance. The improved Cat Swarm Optimization with Crow Search performs better than the hybrid of the Bat Algorithm and Simulated Annealing when dealing with big data.

  15. A statistical probe into variability within total ozone time series over Arosa, Switzerland (9.68°E, 46.78°N)

    NASA Astrophysics Data System (ADS)

    Chakraborthy, Parthasarathi; Chattopadhyay, Surajit

    2013-02-01

    The aim of the present paper is to investigate the statistical properties of the total ozone concentration time series over Arosa, Switzerland (9.68°E, 46.78°N). For this purpose, different statistical data analysis procedures have been employed to analyze the mean monthly total ozone concentration data collected over a period of 40 years (1932-1971) at this location. Based on computations on the available data set, the study reports different degrees of variation in different months. July is reported as the month of lowest variability. April and May are found to be the most correlated months with respect to total ozone concentration.

  16. GPU-accelerated iterative reconstruction for limited-data tomography in CBCT systems.

    PubMed

    de Molina, Claudia; Serrano, Estefania; Garcia-Blas, Javier; Carretero, Jesus; Desco, Manuel; Abella, Monica

    2018-05-15

    Standard cone-beam computed tomography (CBCT) involves the acquisition of at least 360 projections rotating through 360 degrees. Nevertheless, there are cases in which only a few projections can be taken in a limited angular span, such as during surgery, where rotation of the source-detector pair is limited to less than 180 degrees. Reconstruction of limited data with the conventional method proposed by Feldkamp, Davis and Kress (FDK) results in severe artifacts. Iterative methods may compensate for the lack of data by including additional prior information, although they imply a high computational burden and memory consumption. We present an accelerated implementation of an iterative method for CBCT following the Split Bregman formulation, which reduces computational time through GPU-accelerated kernels. The implementation enables the reconstruction of large volumes (>1024^3 pixels) using partitioning strategies in forward- and back-projection operations. We evaluated the algorithm on small-animal data for different scenarios with different numbers of projections, angular span, and projection size. Reconstruction time varied linearly with the number of projections and quadratically with projection size but remained almost unchanged with angular span. Forward- and back-projection operations represent 60% of the total computational burden. Efficient implementation using parallel processing and large-memory management strategies together with GPU kernels enables the use of advanced reconstruction approaches which are needed in limited-data scenarios. Our GPU implementation showed a significant time reduction (up to 48x) compared to a CPU-only implementation, reducing the total reconstruction time from several hours to a few minutes.

  17. The relationship between playing computer or video games with mental health and social relationships among students in guidance schools, Kermanshah.

    PubMed

    Reshadat, S; Ghasemi, S R; Ahmadian, M; RajabiGilan, N

    2014-01-09

    Computer or video games are a popular recreational activity and playing them may constitute a large part of leisure time. This cross-sectional study aimed to evaluate the relationship between playing computer or video games with mental health and social relationships among students in guidance schools in Kermanshah, Islamic Republic of Iran, in 2012. Our total sample was 573 students, and our tools were the General Health Questionnaire (GHQ-28) and a social relationships questionnaire. Survey respondents reported spending an average of 71.07 (SD 72.1) min/day on computer or video games. There was a significant relationship between time spent playing games and general mental health (P < 0.04) and depression (P < 0.03). There was also a significant difference between playing and not playing computer or video games in social relationships and their subscales, including trans-local relationships (P < 0.0001) and association relationships (P < 0.01) among all participants. There was also a significant relationship between social relationships and time spent playing games (P < 0.02) and its dimensions, except for family relationships.

  18. Computations of total sediment discharge, Niobrara River near Cody, Nebraska

    USGS Publications Warehouse

    Colby, Bruce R.; Hembree, C.H.

    1955-01-01

    A natural chute in the Niobrara River near Cody, Nebr., constricts the flow of the river except at high stages to a narrow channel in which the turbulence is sufficient to suspend nearly the total sediment discharge. Because much of the flow originates in the sandhills area of Nebraska, the water discharge and sediment discharge are relatively uniform. Sediment discharges based on depth-integrated samples at a contracted section in the chute and on streamflow records at a recording gage about 1,900 feet upstream are available for the period from April 1948 to September 1953 but are not given directly as continuous records in this report. Sediment measurements have been made periodically near the gage and at other nearby relatively unconfined sections of the stream for comparison with measurements at the contracted section. Sediment discharge at these relatively unconfined sections was computed from formulas for comparison with measured sediment discharges at the contracted section. A form of the Du Boys formula gave computed tonnages of sediment that were unsatisfactory. Sediment discharges as computed from the Schoklitsch formula agreed well with measured sediment discharges that were low, but they were much too low at measured sediment discharges that were higher. The Straub formula gave computed discharges, presumably of bed material, that were several times larger than measured discharges of sediment coarser than 0.125 millimeter. All three of these formulas gave computed sediment discharges that increased with water discharges much less rapidly than the measured discharges of sediment coarser than 0.125 millimeter. The Einstein procedure when applied to a reach that included 10 defined cross sections gave much better agreement between computed sediment discharge and measured sediment discharge than did any one of the three other formulas that were used. This procedure does not compute the discharge of sediment that is too small to be found in the stream bed in appreciable quantities. Hence, total sediment discharges were obtained by adding computed discharges of sediment larger than 0.125 millimeter to measured discharges of sediment smaller than 0.125 millimeter. The size distributions of the computed sediment discharge compared poorly with the size distributions of sediment discharge at the contracted section. Ten sediment discharges computed from the Einstein procedure as applied to a single section averaged several times the measured sediment discharge for the contracted section and gave size distributions that were unsatisfactory. The Einstein procedure was modified to compute total sediment discharge at an alluvial section from readily measurable field data. The modified procedure uses measurements of bed-material particle sizes, suspended-sediment concentrations and particle sizes from depth-integrated samples, streamflow, and water temperatures. Computations of total sediment discharge were made by using this modified procedure, some for the section at the gaging station and some for each of two other relatively unconfined sections. The size distributions of the computed and the measured sediment discharges agreed reasonably well. Major advantages of this modified procedure include applicability to a single section rather than to a reach of channel, use of measured velocity instead of water-surface slope, use of depth-integrated samples, and apparently fair accuracy for computing both total sediment discharge and approximate size distribution of the sediment.
Because of these advantages this modified procedure is being further studied to increase its accuracy, to simplify the required computations, and to define its limitations. In the development of the modified procedure, some relationships concerning theories of sediment transport were reviewed and checked against field data. Vertical distributions of suspended sediment at relatively unconfined sections did not agree well with theoretical dist
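
    In symbols (our notation, not the report's), the partitioning used above to obtain the total sediment discharge can be written as

    \[ Q_{s,\mathrm{total}} = Q_{s}^{\mathrm{computed}}(d > 0.125\ \mathrm{mm}) + Q_{s}^{\mathrm{measured}}(d < 0.125\ \mathrm{mm}), \]

    where d is the sediment particle diameter: the Einstein-type computation supplies the coarse fraction found in the bed, and depth-integrated samples supply the fine fraction.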

  19. Cloud computing for comparative genomics

    PubMed Central

    2010-01-01

    Background Large comparative genomics studies and tools are becoming increasingly compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. Results We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time was just under 70 hours and cost a total of $6,302 USD. Conclusions The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provide a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems. PMID:20482786
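
    A rough back-of-the-envelope check (our arithmetic, assuming the quoted cost covers all 100 nodes running for the full ~70 hours):

    \[ \frac{\$6{,}302}{100 \times 70\ \mathrm{node\ hours}} \approx \$0.90\ \text{per node-hour}, \qquad \frac{\$6{,}302}{300{,}000\ \text{processes}} \approx \$0.02\ \text{per process}. \]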

  20. Cloud computing for comparative genomics.

    PubMed

    Wall, Dennis P; Kudtarkar, Parul; Fusaro, Vincent A; Pivovarov, Rimma; Patil, Prasad; Tonellato, Peter J

    2010-05-18

    Large comparative genomics studies and tools are becoming increasingly compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time was just under 70 hours and cost a total of $6,302 USD. The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provide a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems.

  1. Computer usage and task-switching during resident's working day: Disruptive or not?

    PubMed

    Méan, Marie; Garnier, Antoine; Wenger, Nathalie; Castioni, Julien; Waeber, Gérard; Marques-Vidal, Pedro

    2017-01-01

    Recent implementation of electronic health records (EHR) has dramatically changed medical ward organization. While residents in general internal medicine use EHR systems half of their working time, whether computer usage impacts residents' workflow remains uncertain. We aimed to observe the frequency of task-switches occurring during residents' work and to assess whether computer usage was associated with task-switching. In a large Swiss academic university hospital, we conducted, between May 26 and July 24, 2015, a time-motion study to assess how residents in general internal medicine organize their working day. We observed 49 day and 17 evening shifts of 36 residents, amounting to 697 working hours. During day shifts, residents spent 5.4 hours using a computer (mean total working time: 11.6 hours per day). On average, residents switched 15 times per hour from one task to another. Task-switching peaked between 8:00-9:00 and 16:00-17:00. Task-switching was not associated with residents' characteristics, and no association was found between task-switching and extra hours (Spearman r = 0.220, p = 0.137 for day and r = 0.483, p = 0.058 for evening shifts). Computer usage occurred more frequently at the beginning or end of day shifts and was associated with decreased overall task-switching. Task-switching occurs very frequently during residents' working day. Despite the fact that residents used a computer half of their working time, computer usage was associated with decreased task-switching. Whether frequent task-switches and computer usage impact the quality of patient care and residents' work must be evaluated in further studies.

  2. A computational procedure for multibody systems including flexible beam dynamics

    NASA Technical Reports Server (NTRS)

    Downer, J. D.; Park, K. C.; Chiou, J. C.

    1990-01-01

    A computational procedure suitable for the solution of the equations of motion for flexible multibody systems has been developed. The flexible beams are modeled using a fully nonlinear theory which accounts for both finite rotations and large deformations. The present formulation incorporates physical measures of conjugate Cauchy stress and covariant strain increments. As a consequence, the beam model can easily be interfaced with real-time strain measurements and feedback control systems. A distinct feature of the present work is the computational preservation of total energy for undamped systems; this is obtained via an objective strain increment/stress update procedure combined with an energy-conserving time integration algorithm which contains an accurate update of angular orientations. The procedure is demonstrated via several example problems.

  3. Microstructure of cotton fibrous assemblies based on computed tomography

    NASA Astrophysics Data System (ADS)

    Jing, Hui; Yu, Weidong

    2017-12-01

    This paper describes for the first time the analysis of the inner microstructure of cotton fibrous assemblies using computed tomography. Microstructure parameters such as packing density, fractal dimension, and porosity, including open porosity, closed porosity, and total porosity, are calculated based on 2D data from computed tomography. Values of packing density and fractal dimension are stable in randomly oriented fibrous assemblies, and there exists a satisfactory approximately linear relationship between them. Moreover, pore analysis indicates that porosity represents the tightness of fibrous assemblies and that open pores predominate.

  4. Empirical comparison of heuristic load distribution in point-to-point multicomputer networks

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Nazief, Bobby A. A.; Reed, Daniel A.

    1990-01-01

    The study compared several load placement algorithms using instrumented programs and synthetic program models. Salient characteristics of these program traces (total computation time, total number of messages sent, and average message time) span two orders of magnitude. Load distribution algorithms determine the initial placement for processes, a precursor to the more general problem of load redistribution. It is found that desirable workload distribution strategies will place new processes globally, rather than locally, to spread processes rapidly, but that local information should be used to refine global placement.
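
    A minimal sketch of the contrast drawn above, assuming a hypothetical 4-node ring topology and load table (none of this is from the study): a global strategy spreads new processes across all nodes immediately, while a local strategy uses only the spawning node's neighborhood information.

    ```python
    import itertools

    def make_global_round_robin(num_nodes):
        """Global placement: assign each new process to the next node in a
        machine-wide cycle, spreading work rapidly across all nodes."""
        counter = itertools.count()
        return lambda: next(counter) % num_nodes

    def make_local_least_loaded(neighbors, loads):
        """Local placement: assign each new process to the least-loaded node
        among the spawning node and its direct neighbors."""
        def place(parent):
            candidates = [parent] + neighbors[parent]
            return min(candidates, key=lambda node: loads[node])
        return place

    # Hypothetical 4-node ring and per-node load counts (illustrative only).
    neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    loads = {0: 2, 1: 0, 2: 5, 3: 1}

    place_globally = make_global_round_robin(4)
    place_locally = make_local_least_loaded(neighbors, loads)
    print(place_globally(), place_globally())   # 0 1
    print(place_locally(0))                     # 1 (least-loaded neighbor of node 0)
    ```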

  5. Bulk viscosity of the Lennard-Jones fluid for a wide range of states computed by equilibrium molecular dynamics

    NASA Astrophysics Data System (ADS)

    Hoheisel, C.; Vogelsang, R.; Schoen, M.

    1987-12-01

    Accurate data for the bulk viscosity ηv have been obtained by molecular dynamics calculations. Many thermodynamic states of the Lennard-Jones fluid were considered. The Green-Kubo integrand of ηv is analyzed in terms of partial correlation functions constituting the total one. These partial functions behave rather differently from those found for the shear viscosity or the thermal conductivity. Generally the total autocorrelation function of ηv shows a steeper initial decay and a more pronounced long time form than those of the shear viscosity or the thermal conductivity. For states near transition to solid phases, like the pseudotriple point of argon, the Green-Kubo integrand of ηv has a significantly longer ranged time behavior than that of the shear viscosity. Hence, for the latter states, a systematic error is expected for ηv using equilibrium molecular dynamics for its computation.
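
    For reference, the Green-Kubo relation for the bulk viscosity is commonly written as (conventions and prefactors may differ slightly from those used in the paper):

    \[ \eta_v = \frac{V}{k_B T} \int_0^{\infty} \langle\, \delta p(0)\, \delta p(t) \,\rangle\, dt, \qquad \delta p(t) = p(t) - \langle p \rangle, \]

    where p(t) is the instantaneous pressure, V the volume, T the temperature, and the angle brackets denote an equilibrium ensemble average; the integrand is the autocorrelation function analyzed above.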

  6. A real time dynamic data acquisition and processing system for velocity, density, and total temperature fluctuation measurements

    NASA Technical Reports Server (NTRS)

    Clukey, Steven J.

    1991-01-01

    The real time Dynamic Data Acquisition and Processing System (DDAPS) is described which provides the capability for the simultaneous measurement of velocity, density, and total temperature fluctuations. The system of hardware and software is described in context of the wind tunnel environment. The DDAPS replaces both a recording mechanism and a separate data processing system. DDAPS receives input from hot wire anemometers. Amplifiers and filters condition the signals with computer controlled modules. The analog signals are simultaneously digitized and digitally recorded on disk. Automatic acquisition collects necessary calibration and environment data. Hot wire sensitivities are generated and applied to the hot wire data to compute fluctuations. The presentation of the raw and processed data is accomplished on demand. The interface to DDAPS is described along with the internal mechanisms of DDAPS. A summary of operations relevant to the use of the DDAPS is also provided.

  7. Short-term outcome of 1,465 computer-navigated primary total knee replacements 2005-2008.

    PubMed

    Gøthesen, Oystein; Espehaug, Birgitte; Havelin, Leif; Petursson, Gunnar; Furnes, Ove

    2011-06-01

    Background and purpose: Improvement of positioning and alignment by the use of computer-assisted surgery (CAS) might improve longevity and function in total knee replacements, but there is little evidence. In this study, we evaluated the short-term results of computer-navigated knee replacements based on data from the Norwegian Arthroplasty Register. Primary total knee replacements without patella resurfacing, reported to the Norwegian Arthroplasty Register during the years 2005-2008, were evaluated. The 5 most common implants and the 3 most common navigation systems were selected. Cemented, uncemented, and hybrid knees were included. With the risk of revision for any cause as the primary endpoint and intraoperative complications and operating time as secondary outcomes, 1,465 computer-navigated knee replacements (CAS) and 8,214 conventionally operated knee replacements (CON) were compared. Kaplan-Meier survival analysis and Cox regression analysis with adjustment for age, sex, prosthesis brand, fixation method, previous knee surgery, preoperative diagnosis, and ASA category were used. Kaplan-Meier estimated survival at 2 years was 98% (95% CI: 97.5-98.3) in the CON group and 96% (95% CI: 95.0-97.8) in the CAS group. The adjusted Cox regression analysis showed a higher risk of revision in the CAS group (RR = 1.7, 95% CI: 1.1-2.5; p = 0.02). The LCS Complete knee had a higher risk of revision with CAS than with CON (RR = 2.1, 95% CI: 1.3-3.4; p = 0.004). The differences were not statistically significant for the other prosthesis brands. Mean operating time was 15 min longer in the CAS group. With the introduction of computer-navigated knee replacement surgery in Norway, the short-term risk of revision has increased for computer-navigated replacement with the LCS Complete. The mechanisms of failure of these implantations should be explored in greater depth, and in this study we have not been able to draw conclusions regarding causation.

  8. Effects on mortality, treatment, and time management as a result of routine use of total body computed tomography in blunt high-energy trauma patients.

    PubMed

    van Vugt, Raoul; Kool, Digna R; Deunk, Jaap; Edwards, Michael J R

    2012-03-01

    Currently, total body computed tomography (TBCT) is rapidly being implemented in the evaluation of trauma patients. With this review, we aim to evaluate the clinical implications (mortality, change in treatment, and time management) of the routine use of TBCT in adult blunt high-energy trauma patients compared with a conservative approach using conventional radiography, ultrasound, and selective computed tomography. A literature search for original studies on TBCT in blunt high-energy trauma patients was performed. Two independent observers included studies concerning mortality, change of treatment, and/or time management as outcome measures. For each article, relevant data were extracted and analyzed. In addition, the quality according to the Oxford levels of evidence was assessed. From 183 articles initially identified, the observers included nine original studies in consensus. One of three studies described a significant difference in mortality; four described a change of treatment in 2% to 27% of patients because of the use of TBCT. Five studies found a gain in time with the use of immediate routine TBCT. Eight studies scored a level of evidence of 2b and one of 3b. The current literature has a predominantly suboptimal design for proving definitively that the routine use of TBCT results in improved survival of blunt high-energy trauma patients. TBCT can lead to a change of treatment and improves time intervals in the emergency department as compared with its selective use.

  9. Application-oriented offloading in heterogeneous networks for mobile cloud computing

    NASA Astrophysics Data System (ADS)

    Tseng, Fan-Hsun; Cho, Hsin-Hung; Chang, Kai-Di; Li, Jheng-Cong; Shih, Timothy K.

    2018-04-01

    Nowadays, Internet applications have become so complicated that a mobile device needs more computing resources to achieve a shorter execution time, but it is restricted by limited battery capacity. Mobile cloud computing (MCC) has emerged to tackle the finite resource problem of mobile devices. MCC offloads the tasks and jobs of mobile devices to cloud and fog environments by using an offloading scheme. It is vital to MCC to decide which tasks should be offloaded and how to offload them efficiently. In this paper, we formulate the offloading problem between the mobile device and the cloud data center and propose two application-oriented algorithms for minimum execution time, i.e. the Minimum Offloading Time for Mobile device (MOTM) algorithm and the Minimum Execution Time for Cloud data center (METC) algorithm. The MOTM algorithm minimizes offloading time by selecting appropriate offloading links based on application categories. The METC algorithm minimizes execution time in the cloud data center by selecting virtual and physical machines with the corresponding resource requirements of applications. Simulation results show that the proposed mechanism not only minimizes the total execution time for mobile devices but also decreases their energy consumption.
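
    The basic trade-off behind such offloading decisions can be illustrated with the classic rule of thumb below (a sketch of ours, not the MOTM or METC algorithms from the paper; all parameter names are assumptions): offload only when remote execution plus data transfer beats local execution.

    ```python
    def should_offload(cycles, local_speed, cloud_speed, data_bits, bandwidth, rtt=0.0):
        """Return (offload?, local_time_s, remote_time_s) for a single task.

        cycles       -- CPU cycles the task requires
        local_speed  -- device processing speed, cycles per second
        cloud_speed  -- cloud processing speed, cycles per second
        data_bits    -- input plus output data that must cross the link
        bandwidth    -- link throughput, bits per second
        rtt          -- round-trip latency in seconds (assumed parameter)
        """
        t_local = cycles / local_speed
        t_remote = cycles / cloud_speed + data_bits / bandwidth + rtt
        return t_remote < t_local, t_local, t_remote

    # Example: a 2e9-cycle task, 1 GHz device, 10 GHz cloud share, 8 Mb of data, 20 Mb/s link.
    print(should_offload(2e9, 1e9, 1e10, 8e6, 2e7, rtt=0.05))   # (True, 2.0, 0.65)
    ```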

  10. Logistics Control Facility: A Normative Model for Total Asset Visibility in the Air Force Logistics System

    DTIC Science & Technology

    1994-09-01

    Issue: Computers, information systems, and communication systems are being increasingly used in transportation, warehousing, order processing, materials... inventory levels, reduced order processing times, reduced order processing costs, and increased customer satisfaction. While purchasing and transportation... process, the speed at which orders are processed would increase significantly. Lowering the order processing time in turn lowers the lead time, which in

  11. Convergence to equilibrium under a random Hamiltonian.

    PubMed

    Brandão, Fernando G S L; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K; Mozrzymas, Marek

    2012-09-01

    We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.
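
    In symbols (our notation), the stated scaling reads

    \[ T_{\mathrm{eq}} \sim \bar{\omega}^{-1}, \qquad \bar{\omega} = \frac{2}{d(d-1)} \sum_{i<j} \frac{|E_i - E_j|}{\hbar}, \]

    where the E_i are the eigenvalues of the Hamiltonian, d is the Hilbert-space dimension, and \bar{\omega} is the arithmetic average of the Bohr frequencies.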

  12. Convergence to equilibrium under a random Hamiltonian

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K.; Mozrzymas, Marek

    2012-09-01

    We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.

  13. A Hybrid MPI/OpenMP Approach for Parallel Groundwater Model Calibration on Multicore Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan

    2010-01-01

    Groundwater model calibration is becoming increasingly computationally time intensive. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multicore computers with minimal parallelization effort. At first, HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for a uranium transport model with over a hundred species involving nearly a hundred reactions, and a field scale coupled flow and transport model. In the first application, a single parallelizable loop is identified to consume over 97% of the total computational time. With a few lines of OpenMP compiler directives inserted into the code, the computational time is reduced about ten times on a compute node with 16 cores. The performance is further improved by selectively parallelizing a few more loops. For the field scale application, parallelizable loops in 15 of the 174 subroutines in HGC5 are identified to take more than 99% of the execution time. By adding the preconditioned conjugate gradient solver and BICGSTAB, and using a coloring scheme to separate the elements, nodes, and boundary sides, the subroutines for finite element assembly, soil property update, and boundary condition application are parallelized, resulting in a speedup of about 10 on a 16-core compute node. The Levenberg-Marquardt (LM) algorithm is added into HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, compute nodes equal in number to the adjustable parameters (when the forward difference is used for Jacobian approximation), or twice that number (if the center difference is used), are used to reduce the calibration time from days and weeks to a few hours for the two applications. This approach can be extended to global optimization schemes and Monte Carlo analysis, where thousands of compute nodes can be efficiently utilized.
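
    As a quick consistency check (our arithmetic, not from the abstract), Amdahl's law with a parallel fraction f ≈ 0.97 on N = 16 cores gives a speedup of

    \[ S = \frac{1}{(1-f) + f/N} = \frac{1}{0.03 + 0.97/16} \approx 11, \]

    which is in line with the reported roughly ten-fold reduction from parallelizing the dominant loop.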

  14. Computer versus paper--does it make any difference in test performance?

    PubMed

    Karay, Yassin; Schauber, Stefan K; Stosch, Christoph; Schüttpelz-Brauns, Katrin

    2015-01-01

    CONSTRUCT: In this study, we examine the differences in test performance between the paper-based and the computer-based version of the Berlin formative Progress Test. In this context it is the first study that allows controlling for students' prior performance. Computer-based tests make possible a more efficient examination procedure for test administration and review. Although university staff will benefit largely from computer-based tests, the question arises whether computer-based tests influence students' test performance. A total of 266 German students from the 9th and 10th semester of medicine (comparable with the 4th-year North American medical school schedule) participated in the study (paper = 132, computer = 134). The allocation of the test format was conducted as a randomized matched-pair design in which students were first sorted according to their prior test results. The organizational procedure, the examination conditions, the room, and seating arrangements, as well as the order of questions and answers, were identical in both groups. The sociodemographic variables and pretest scores of both groups were comparable. The test results from the paper and computer versions did not differ. The groups remained within the allotted time, but students using the computer version (particularly the high performers) needed significantly less time to complete the test. In addition, we found significant differences in guessing behavior. Low performers using the computer version guess significantly more than low-performing students in the paper-pencil version. Participants in computer-based tests are not at a disadvantage in terms of their test results. The computer-based test required less processing time. The longer processing time for the paper-pencil version might be due to the time needed to write the answer down and to check that it was transferred correctly. It is still not known why students using the computer version (particularly low-performing students) guess at a higher rate. Further studies are necessary to understand this finding.

  15. Emerging technology in surgical education: combining real-time augmented reality and wearable computing devices.

    PubMed

    Ponce, Brent A; Menendez, Mariano E; Oladeji, Lasun O; Fryberger, Charles T; Dantuluri, Phani K

    2014-11-01

    The authors describe the first surgical case adopting the combination of real-time augmented reality and wearable computing devices such as Google Glass (Google Inc, Mountain View, California). A 66-year-old man presented to their institution for a total shoulder replacement after 5 years of progressive right shoulder pain and decreased range of motion. Throughout the surgical procedure, Google Glass was integrated with the Virtual Interactive Presence and Augmented Reality system (University of Alabama at Birmingham, Birmingham, Alabama), enabling the local surgeon to interact with the remote surgeon within the local surgical field. Surgery was well tolerated by the patient and early surgical results were encouraging, with an improvement of shoulder pain and greater range of motion. The combination of real-time augmented reality and wearable computing devices such as Google Glass holds much promise in the field of surgery. Copyright 2014, SLACK Incorporated.

  16. Computations of Flow over a Hump Model Using Higher Order Method with Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Balakumar, P.

    2005-01-01

    Turbulent separated flow over a two-dimensional hump is computed by solving the RANS equations with the k-omega (SST) turbulence model for the baseline, steady suction, and oscillatory blowing/suction flow control cases. The flow equations and the turbulence model equations are solved using a fifth-order accurate weighted essentially nonoscillatory (WENO) scheme for space discretization and a third-order, total variation diminishing (TVD) Runge-Kutta scheme for time integration. Qualitatively, the computed pressure distributions exhibit the same behavior as those observed in the experiments. The computed separation regions are much longer than those observed experimentally. However, the percentage reduction in the separation region in the steady suction case is closer to what was measured in the experiment. The computations did not predict the expected reduction in the separation length in the oscillatory case. The predicted turbulent quantities are two to three times smaller than the measured values, pointing towards the deficiencies in existing turbulence models when they are applied to strong steady/unsteady separated flows.

  17. Cancel and rethink in the Wason selection task: further evidence for the heuristic-analytic dual process theory.

    PubMed

    Wada, Kazushige; Nittono, Hiroshi

    2004-06-01

    The reasoning process in the Wason selection task was examined by measuring card inspection times in the letter-number and drinking-age problems. 24 students were asked to solve the problems presented on a computer screen. Only the card touched with a mouse pointer was visible, and the total exposure time of each card was measured. Participants were allowed to cancel their previous selections at any time. Although rethinking was encouraged, the cards once selected were rarely cancelled (10% of the total selections). Moreover, most of the cancelled cards were reselected (89% of the total cancellations). Consistent with previous findings, inspection times were longer for selected cards than for nonselected cards. These results suggest that card selections are determined largely by initial heuristic processes and rarely reversed by subsequent analytic processes. The present study gives further support for the heuristic-analytic dual process theory.

  18. Development of a distributed-parameter mathematical model for simulation of cryogenic wind tunnels

    NASA Technical Reports Server (NTRS)

    Tripp, J. S.

    1983-01-01

    A one-dimensional distributed-parameter dynamic model of a cryogenic wind tunnel was developed which accounts for internal and external heat transfer, viscous momentum losses, and slotted-test-section dynamics. Boundary conditions imposed by liquid-nitrogen injection, gas venting, and the tunnel fan were included. A time-dependent numerical solution to the resultant set of partial differential equations was obtained on a CDC CYBER 203 vector-processing digital computer at a usable computational rate. Preliminary computational studies were performed by using parameters of the Langley 0.3-Meter Transonic Cryogenic Tunnel. Studies were performed by using parameters from the National Transonic Facility (NTF). The NTF wind-tunnel model was used in the design of control loops for Mach number, total temperature, and total pressure and for determining interactions between the control loops. It was employed in the application of optimal linear-regulator theory and eigenvalue-placement techniques to develop Mach number control laws.

  19. Comparison of the different approaches to generate holograms from data acquired with a Kinect sensor

    NASA Astrophysics Data System (ADS)

    Kang, Ji-Hoon; Leportier, Thibault; Ju, Byeong-Kwon; Song, Jin Dong; Lee, Kwang-Hoon; Park, Min-Chul

    2017-05-01

    Data of real scenes acquired in real time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point cloud approach is that the computation process is well established, since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation makes it possible to reduce the number of elements necessary to represent the object. Then, even though the computation time for the contribution of a single element increases compared to a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex since propagation of elemental polygons between non-parallel planes should be implemented. Finally, since a depth map of the scene is acquired at the same time as the intensity image, a depth layer approach can also be adopted. This technique is appropriate for fast computation since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with the depth layer approach is convenient for real-time applications, but the point cloud method is more appropriate when high resolution is needed. In this study, since the Kinect can be used to obtain both a point cloud and a depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
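
    A minimal sketch of the depth-layer approach mentioned above, using FFT-based angular-spectrum propagation of each layer to the hologram plane; the random diffuser phase, the layer assignment, and all parameter names are illustrative assumptions of ours, not the pipeline used in the paper.

    ```python
    import numpy as np

    def angular_spectrum_propagate(field, wavelength, dx, z):
        """Propagate a complex field between parallel planes separated by z
        using the angular spectrum method (two FFTs per layer)."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=dx)
        fy = np.fft.fftfreq(ny, d=dx)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))  # evanescent part dropped
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

    def depth_layer_hologram(intensity, depth_layer, n_layers, wavelength, dx, z0, dz):
        """Sum the propagated contribution of each depth layer at the hologram plane;
        intensity and depth_layer (integer layer index per pixel) would come from an
        RGB-D frame such as a Kinect capture."""
        rng = np.random.default_rng(0)
        hologram = np.zeros(intensity.shape, dtype=complex)
        for i in range(n_layers):
            mask = (depth_layer == i)
            diffuser = np.exp(1j * 2 * np.pi * rng.random(intensity.shape))
            layer = intensity * mask * diffuser          # object points of layer i
            hologram += angular_spectrum_propagate(layer, wavelength, dx, z0 + i * dz)
        return hologram
    ```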

  20. A comparative study of serial and parallel aeroelastic computations of wings

    NASA Technical Reports Server (NTRS)

    Byun, Chansup; Guruswamy, Guru P.

    1994-01-01

    A procedure for computing the aeroelasticity of wings on parallel multiple-instruction, multiple-data (MIMD) computers is presented. In this procedure, fluids are modeled using Euler equations, and structures are modeled using modal or finite element equations. The procedure is designed in such a way that each discipline can be developed and maintained independently by using a domain decomposition approach. In the present parallel procedure, each computational domain is scalable. A parallel integration scheme is used to compute aeroelastic responses by solving fluid and structural equations concurrently. The computational efficiency issues of parallel integration of both fluid and structural equations are investigated in detail. This approach, which reduces the total computational time by a factor of almost 2, is demonstrated for a typical aeroelastic wing by using various numbers of processors on the Intel iPSC/860.

  1. Measuring costs of data collection at village clinics by village doctors for a syndromic surveillance system-a cross sectional survey from China.

    PubMed

    Ding, Yan; Fei, Yang; Xu, Biao; Yang, Jun; Yan, Weirong; Diwan, Vinod K; Sauerborn, Rainer; Dong, Hengjin

    2015-07-25

    Studies into the costs of syndromic surveillance systems are rare, especially for estimating the direct costs involved in implementing and maintaining these systems. An Integrated Surveillance System in rural China (ISSC project), with the aim of providing an early warning system for outbreaks, was implemented; village clinics were the main surveillance units. Village doctors expressed their willingness to join in the surveillance if a proper subsidy was provided. This study aims to measure the costs of data collection by village clinics to provide a reference regarding the subsidy level required for village clinics to participate in data collection. We conducted a cross-sectional survey with a village clinic questionnaire and a staff questionnaire using a purposive sampling strategy. We tracked reported events using the ISSC internal database. Cost data included staff time, and the annual depreciation and opportunity costs of computers. We measured the village doctors' time costs for data collection by multiplying the number of full-time employment equivalents devoted to the surveillance by the village doctors' annual salaries and benefits, which equaled their net incomes. We estimated the depreciation and opportunity costs of computers by calculating the equivalent annual computer cost and then allocating this to the surveillance based on the percentage usage. The estimated total annual cost of collecting data was 1,423 Chinese Renminbi (RMB) in 2012 (P25 = 857, P75 = 3284), including 1,250 RMB (P25 = 656, P75 = 3000) staff time costs and 134 RMB (P25 = 101, P75 = 335) depreciation and opportunity costs of computers. The total costs of collecting data from the village clinics for the syndromic surveillance system were calculated to be low compared with the individual net income in County A.

  2. Is a computer-assisted design and computer-assisted manufacturing method for mandibular reconstruction economically viable?

    PubMed

    Tarsitano, Achille; Battaglia, Salvatore; Crimi, Salvatore; Ciocca, Leonardo; Scotti, Roberto; Marchetti, Claudio

    2016-07-01

    The design and manufacture of patient-specific mandibular reconstruction plates, particularly in combination with cutting guides, has created many new opportunities for the planning and implementation of mandibular reconstruction. Although this surgical method is being used more widely and the outcomes appear to be improved, the question of the additional cost has to be discussed. To evaluate the cost generated by the management of this technology, we studied a cohort of patients treated for mandibular neoplasms. The population was divided into two groups of 20 patients each who were undergoing a 'traditional' freehand mandibular reconstruction or a computer-aided design/computer-aided manufacturing (CAD-CAM) mandibular reconstruction. Data concerning operation time, complications, and days of hospitalisation were used to evaluate costs related to the management of these patients. The mean operating time for the CAD-CAM group was 435 min, whereas that for the freehand group was 550.5 min. The total difference in terms of average time gain was 115.5 min. No microvascular complication occurred in the CAD-CAM group; two complications (10%) were observed in patients undergoing freehand reconstructions. The mean overall lengths of hospital stay were 13.8 days for the CAD-CAM group and 17 days for the freehand group. Finally, considering that the institutional cost per minute of theatre time is €30, the money saved as a result of the time gained was €3,450. This cost corresponds approximately to the total price of the CAD-CAM surgery. In conclusion, we believe that CAD-CAM technology for mandibular reconstruction will become a widely used reconstructive method and that its cost will be covered by gains in terms of surgical time, quality of reconstruction, and reduced complications. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
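
    The stated saving follows directly from the time gain (our arithmetic):

    \[ 115.5\ \mathrm{min} \times 30\ \text{€/min} \approx \text{€}3{,}465, \]

    which matches the approximately €3,450 reported as the value of the theatre time saved, roughly the total price of the CAD-CAM surgery as noted above.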

  3. Continuous real-time water-quality monitoring and regression analysis to compute constituent concentrations and loads in the North Fork Ninnescah River upstream from Cheney Reservoir, south-central Kansas, 1999–2012

    USGS Publications Warehouse

    Stone, Mandy L.; Graham, Jennifer L.; Gatotho, Jackline W.

    2013-01-01

    Cheney Reservoir, located in south-central Kansas, is the primary water supply for the city of Wichita. The U.S. Geological Survey has operated a continuous real-time water-quality monitoring station since 1998 on the North Fork Ninnescah River, the main source of inflow to Cheney Reservoir. Continuously measured water-quality physical properties include streamflow, specific conductance, pH, water temperature, dissolved oxygen, and turbidity. Discrete water-quality samples were collected during 1999 through 2009 and analyzed for sediment, nutrients, bacteria, and other water-quality constituents. Regression models were developed to establish relations between discretely sampled constituent concentrations and continuously measured physical properties to compute concentrations of those constituents of interest that are not easily measured in real time because of limitations in sensor technology and fiscal constraints. Regression models were published in 2006 that were based on data collected during 1997 through 2003. This report updates those models using discrete and continuous data collected during January 1999 through December 2009. Models also were developed for four new constituents, including additional nutrient species and indicator bacteria. In addition, a conversion factor of 0.68 was established to convert the Yellow Springs Instruments (YSI) model 6026 turbidity sensor measurements to the newer YSI model 6136 sensor at the North Ninnescah River upstream from Cheney Reservoir site. Newly developed models and 14 years of hourly continuously measured data were used to calculate selected constituent concentrations and loads during January 1999 through December 2012. The water-quality information in this report is important to the city of Wichita because it allows the concentrations of many potential pollutants of interest to Cheney Reservoir, including nutrients and sediment, to be estimated in real time and characterized over conditions and time scales that would not be possible otherwise. In general, model forms and the amount of variance explained by the models was similar between the original and updated models. The amount of variance explained by the updated models changed by 10 percent or less relative to the original models. Total nitrogen, nitrate, organic nitrogen, E. coli bacteria, and total organic carbon models were newly developed for this report. Additional data collection over a wider range of hydrological conditions facilitated the development of these models. The nitrate model is particularly important because it allows for comparison to Cheney Reservoir Task Force goals. Mean hourly computed total suspended solids concentration during 1999 through 2012 was 54 milligrams per liter (mg/L). The total suspended solids load during 1999 through 2012 was 174,031 tons. On an average annual basis, the Cheney Reservoir Task Force runoff (550 mg/L) and long-term (100 mg/L) total suspended solids goals were never exceeded, but the base flow goal was exceeded every year during 1999 through 2012. Mean hourly computed nitrate concentration was 1.08 mg/L during 1999 through 2012. The total nitrate load during 1999 through 2012 was 1,361 tons. On an annual average basis, the Cheney Reservoir Task Force runoff (6.60 mg/L) nitrate goal was never exceeded, the long-term goal (1.20 mg/L) was exceeded only in 2012, and the base flow goal of 0.25 mg/L was exceeded every year. 
Mean nitrate concentrations that were higher during base flow, rather than during runoff conditions, suggest that groundwater sources are the main contributors of nitrate to the North Fork Ninnescah River above Cheney Reservoir. Mean hourly computed phosphorus concentration was 0.14 mg/L during 1999 through 2012. The total phosphorus load during 1999 through 2012 was 328 tons. On an average annual basis, the Cheney Reservoir Task Force runoff goal of 0.40 mg/L for total phosphorus was exceeded in 2002, the year with the largest yearly mean turbidity, and the long-term goal (0.10 mg/L) was exceeded in every year except 2011 and 2012, the years with the smallest mean streamflows. The total phosphorus base flow goal of 0.05 mg/L was exceeded every year. Given that base flow goals for total suspended solids, nitrate, and total phosphorus were exceeded every year despite hydrologic conditions, the established base flow goals are either unattainable or substantially more best management practices will need to be implemented to attain them. On an annual average basis, no discernible patterns were evident in total suspended sediment, nitrate, and total phosphorus concentrations or loads over time, in large part because of hydrologic variability. However, more rigorous statistical analyses are required to evaluate temporal trends. A more rigorous analysis of temporal trends will allow evaluation of watershed investments in best management practices.
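
    To illustrate the surrogate-regression idea (a hypothetical sketch with made-up numbers, not one of the report's published models), a constituent such as total suspended solids can be related to a continuously measured property such as turbidity with a simple log-log linear fit and then estimated from each real-time sensor reading:

    ```python
    import numpy as np

    # Illustrative calibration pairs (made-up values, not USGS data):
    # discrete samples of total suspended solids (mg/L) and concurrent turbidity (FNU).
    turbidity = np.array([5.0, 12.0, 30.0, 80.0, 200.0, 450.0])
    tss = np.array([8.0, 20.0, 55.0, 140.0, 380.0, 900.0])

    # Fit log10(TSS) = intercept + slope * log10(turbidity).
    slope, intercept = np.polyfit(np.log10(turbidity), np.log10(tss), 1)

    def estimate_tss(turbidity_fnu):
        """Estimate TSS (mg/L) from a real-time turbidity reading (FNU)."""
        return 10 ** (intercept + slope * np.log10(turbidity_fnu))

    print(round(estimate_tss(100.0), 1))   # estimated concentration at 100 FNU
    ```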

  4. Optimum data analysis procedures for Titan 4 and Space Shuttle payload acoustic measurements during lift-off

    NASA Technical Reports Server (NTRS)

    Piersol, Allan G.

    1991-01-01

    Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 sec for the maximum overall level and T_oi = 4.88 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 sec for the maximum overall level and T_oi = 7.10 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Space Shuttle PLB, where f_i is the 1/3 octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
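
    For example, evaluating the stated expressions at a 1/3 octave band center frequency of f_i = 1000 Hz (our arithmetic):

    \[ T_{oi} = 4.88 \times 1000^{-0.2} \approx 1.2\ \mathrm{s} \ \text{(Titan IV PLF)}, \qquad T_{oi} = 7.10 \times 1000^{-0.2} \approx 1.8\ \mathrm{s} \ \text{(Shuttle PLB)}. \]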

  5. An algorithm for fast elastic wave simulation using a vectorized finite difference operator

    NASA Astrophysics Data System (ADS)

    Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna

    2018-07-01

    Modern geophysical imaging techniques exploit the full wavefield information which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, big size of the derivative stencil and huge model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we have introduced a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid, in a manner similar to the collocated grid scheme and thereby reducing the computational run-time considerably. Here we demonstrate an application of this operator to simulate the seismic wave propagation in elastic media (Marmousi model), by discretizing the equations on a staggered grid. We have compared the performance of this operator on three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 times for FORTRAN and MATLAB; and nearly 100 times for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run-time. We find that there is an additional, though small, computational overhead for each step and it depends on total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
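
    The speedup described above comes from replacing per-node loops with whole-array (vectorized) staggered-grid difference operators. The following minimal NumPy sketch illustrates the idea on a 2-D grid; it is not the authors' FDwave package, and the grid spacing, time step, density, and single stress term used in the sample update are assumptions of the example.

```python
"""Minimal NumPy sketch of a vectorized staggered-grid derivative operator
(illustrative only; not the authors' FDwave implementation).  A 2nd-order
difference is applied to an entire 2-D field in one slicing operation,
which is what makes array-language implementations fast compared with
explicit loops over nodes."""
import numpy as np

def dx_forward(field, dx):
    """d(field)/dx evaluated on the staggered point half a cell to the right."""
    out = np.zeros_like(field)
    out[:-1, :] = (field[1:, :] - field[:-1, :]) / dx
    return out

def dz_backward(field, dz):
    """d(field)/dz evaluated on the staggered point half a cell below."""
    out = np.zeros_like(field)
    out[:, 1:] = (field[:, 1:] - field[:, :-1]) / dz
    return out

# Example: update a velocity component from a stress field in one
# whole-array operation (toy values; only one stress term is shown).
dx = dz = 5.0          # grid spacing (m), assumed
dt = 5e-4              # time step (s), assumed
rho = 2000.0           # density (kg/m^3), assumed
stress_xx = np.random.rand(200, 200)
vx = np.zeros((200, 200))
vx += (dt / rho) * dx_forward(stress_xx, dx)
```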

  6. A transient response analysis of the space shuttle vehicle during liftoff

    NASA Technical Reports Server (NTRS)

    Brunty, J. A.

    1990-01-01

    A proposed transient response method is formulated for the liftoff analysis of the space shuttle vehicles. It uses a power series approximation with unknown coefficients for the interface forces between the space shuttle and mobile launch platform. This allows the equation of motion of the two structures to be solved separately with the unknown coefficients at the end of each step. These coefficients are obtained by enforcing the interface compatibility conditions between the two structures. Once the unknown coefficients are determined, the total response is computed for that time step. The method is validated by a numerical example of a cantilevered beam and by the liftoff analysis of the space shuttle vehicles. The proposed method is compared to an iterative transient response analysis method used by Martin Marietta for their space shuttle liftoff analysis. It is shown that the proposed method uses less computer time than the iterative method and does not require as small a time step for integration. The space shuttle vehicle model is reduced using two different types of component mode synthesis (CMS) methods, the Lanczos method and the Craig and Bampton CMS method. By varying the cutoff frequency in the Craig and Bampton method it was shown that the space shuttle interface loads can be computed with reasonable accuracy. Both the Lanczos CMS method and Craig and Bampton CMS method give similar results. A substantial amount of computer time is saved using the Lanczos CMS method over that of the Craig and Bampton method. However, when trying to compute a large number of Lanczos vectors, input/output computer time increased and increased the overall computer time. The application of several liftoff release mechanisms that can be adapted to the proposed method are discussed.

  7. 15. NBS TOP SIDE CONTROL ROOM. THE SUIT SYSTEMS CONSOLE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. NBS TOP SIDE CONTROL ROOM. THE SUIT SYSTEMS CONSOLE IS USED TO CONTROL AIR FLOW AND WATER FLOW TO THE UNDERWATER SPACE SUIT DURING THE TEST. THE SUIT SYSTEMS ENGINEER MONITORS AIR FLOW ON THE PANEL TO THE LEFT, AND SUIT DATA ON THE COMPUTER MONITOR JUST SLIGHTLY TO HIS LEFT. WATER FLOW IS MONITORED ON THE PANEL JUST SLIGHTLY TO HIS RIGHT AND TEST VIDEO TO HIS FAR RIGHT. THE DECK CHIEF MONITORS THE DIVER'S DIVE TIMES ON THE COMPUTER IN THE UPPER RIGHT. THE DECK CHIEF LOGS THEM IN AS THEY ENTER THE WATER, AND LOGS THEM OUT AS THEY EXIT THE WATER. THE COMPUTER CALCULATES TOTAL DIVE TIME. - Marshall Space Flight Center, Neutral Buoyancy Simulator Facility, Rideout Road, Huntsville, Madison County, AL

  8. A Simulation Model for Purchasing Duplicate Copies in a Library

    ERIC Educational Resources Information Center

    Arms, W. Y.; Walter, T. P.

    1974-01-01

    A method of estimating the number of duplicate copies of books needed based on a computer simulation which takes into account number of copies available, number of loans, total underlying demand, satisfaction level, percentage time on shelf. (LS)
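
    A toy Monte Carlo version of such a duplicate-copy simulation is sketched below: for a given number of copies it estimates the satisfaction level (fraction of requests met from the shelf) and the percentage of time copies spend on the shelf. The demand rate, loan period, and borrowing rule are assumptions of the example, not the Arms and Walter model.

```python
"""Toy Monte Carlo sketch in the spirit of the abstract (not the Arms and
Walter model): estimate satisfaction level and shelf time for a title as a
function of the number of duplicate copies held."""
import numpy as np

def simulate(copies, mean_daily_requests=0.4, loan_days=14, days=365, seed=1):
    rng = np.random.default_rng(seed)
    due_back = []                 # return day for each copy currently on loan
    satisfied = requests = 0
    copy_days_on_shelf = 0
    for day in range(days):
        due_back = [d for d in due_back if d > day]      # process returns
        copy_days_on_shelf += copies - len(due_back)
        for _ in range(rng.poisson(mean_daily_requests)):
            requests += 1
            if len(due_back) < copies:                   # a copy is on the shelf
                satisfied += 1
                due_back.append(day + loan_days)
    return {
        "satisfaction_level": satisfied / requests if requests else 1.0,
        "pct_time_on_shelf": 100 * copy_days_on_shelf / (copies * days),
    }

for n in (1, 2, 3):
    print(n, simulate(n))
```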

  9. Investigations of Sediment Transportation, Middle Loup River at Dunning, Nebraska: With Application of Data from Turbulence Flume

    USGS Publications Warehouse

    Hubbell, David Wellington; Matejka, Donald Quintin

    1959-01-01

    An investigation of fluvial sediments of the Middle Loup River at Dunning, Nebr., was begun in 1946 and expanded in 1949 to provide information on sediment transportation. Construction of an artificial turbulence flume at which the total sediment discharge of the Middle Loup River at Dunning, Nebr., could be measured with suspended-sediment sampling equipment was completed in 1949. Since that time. measurements have been made at the turbulence flume and at several selected sections in a reach upstream and downstream from the flume. The Middle Loup River upstream from Dunning traverses the sandhills region of north-central Nebraska and has a drainage area of approximately 1,760 square miles. The sandhills are underlain by the Ogallala formation of Tertiary age and are mantled by loess and dune sand. The topography is characterized by northwest-trending sand dunes, which are stabilized by grass cover. The valley floor upstream from Dunning is generally about half a mile wide, is about 80 feet lower than the uplands, and is composed of sand that was mostly stream deposited. The channel is defined by low banks. Bank erosion is prevalent and is the source of most of the sediment load. The flow originates mostly from ground-water accretion and varies between about 200 and 600 cfs (cubic feet per second). Measured suspended-sediment loads vary from about 200 to 2,000 tons per day, of which about 20 percent is finer than 0.062 millimeter and 100 percent is finer than 0.50 millimeter. Total sediment discharges vary from about 500 to 3,500 tons per day, of which about 10 percent is finer than 0.062 millimeter, about 90 percent is finer than 0.50 millimeter, and about 98 percent is finer than 2.0 millimeters. The measured suspended-sediment discharge in the reach near Dunning averages about one-half of the total sediment discharge as measured at the turbulence flume. This report contains information collected during the period October 1, 1948, to September 30, 1952. The information includes sediment discharges; particle-size analyses of total load, of measured suspended sediment, and of bed material; water discharges and other hydraulic data for the turbulence flume and the selected sections. Sediment discharges have been computed with several different formulas, and insofar as possible, each computed load has been compared with data from the turbulence flume. Sediment discharges computed with the Einstein procedure did not agree well, in general, with comparable measured loads. However, a satisfactory representative cross section for the reach could not be determined with the cross sections that were selected for this investigation. If the computed cross section was narrower and deeper than a representative cross section for the reach, computed loads were high; and if the computed cross section was wider and shallower than a representative cross section for the reach, computed loads were low. Total sediment discharges computed with the modified Einstein procedure compared very well with the loads of individual size ranges and the measured total loads at the turbulence flume. Sediment discharges computed with the Straub equation averaged about twice the measured total sediment discharge at the turbulence flume. Bed-load discharges computed with the Kalinske equation were of about the right magnitude; however, high computed loads were associated with low total loads, low unmeasured loads, and low concentrations of measured suspended sediment coarser than 0.125 millimeter. 
Bed-load discharges computed with the Schoklitsch equation seemed somewhat high; about one-third of the computed loads were slightly higher than comparable unmeasured loads. Although, in general, high computed discharges with the Schoklitsch equation were associated with high measured total loads, high unmeasured loads, and high concentrations of measured suspended sediment coarser than 0.125 millimeter, the trend was not consistent. Bed-load discharges computed

  10. The position of a standard optical computer mouse affects cardiorespiratory responses during the operation of a computer under time constraints.

    PubMed

    Sako, Shunji; Sugiura, Hiromichi; Tanoue, Hironori; Kojima, Makoto; Kono, Mitsunobu; Inaba, Ryoichi

    2014-08-01

    This study investigated the association between task-induced stress and fatigue by examining the cardiovascular responses of subjects using different mouse positions while operating a computer under time constraints. Sixteen young, healthy men participated in the study, which examined the use of optical mouse devices affixed to laptop computers. Two mouse positions were investigated: (1) the distal position (DP), in which the subjects place their forearms on the desk accompanied by the abduction and flexion of their shoulder joints, and (2) the proximal position (PP), in which the subjects place only their wrists on the desk without using an armrest. The subjects continued each task for 16 min. We assessed differences in several characteristics according to mouse position, including expired gas values, autonomic nerve activities (based on cardiorespiratory responses), operating efficiencies (based on word counts), and fatigue levels (based on the visual analog scale, VAS). Oxygen consumption (VO2), the ratio of inspiration time to total respiration time (Ti/Ttotal), respiratory rate (RR), minute ventilation (VE), and the ratio of expiration to inspiration (Te/Ti) were significantly lower when the participants performed the task in the DP than in the PP. Tidal volume (VT), carbon dioxide output rates (VCO2/VE), and oxygen extraction fractions (VO2/VE) were significantly higher for the DP than they were for the PP. No significant difference in VAS was observed between the positions; however, as the task progressed, autonomic nerve activities were lower and operating efficiencies were significantly higher for the DP than they were for the PP. Our results suggest that the DP has fewer effects on cardiorespiratory functions, causes lower levels of sympathetic nerve activity and mental stress, and produces a higher total workload than the PP. This suggests that the DP is preferable to the PP when operating a computer.

  11. Performance Comparison of Mainframe, Workstations, Clusters, and Desktop Computers

    NASA Technical Reports Server (NTRS)

    Farley, Douglas L.

    2005-01-01

    A performance evaluation of a variety of computers frequently found in a scientific or engineering research environment was conducted using synthetic and application program benchmarks. From a performance perspective, emerging commodity processors have superior performance relative to legacy mainframe computers. In many cases, the PC clusters exhibited performance comparable to traditional mainframe hardware when 8-12 processors were used. The main advantage of the PC clusters was related to their cost. Regardless of whether the clusters were built from new computers or created from retired computers, their performance-to-cost ratio was superior to that of the legacy mainframe computers. Finally, the typical annual maintenance cost of legacy mainframe computers is several times the cost of new equipment such as multiprocessor PC workstations. The savings from eliminating the annual maintenance fee on legacy hardware can result in a yearly increase in total computational capability for an organization.

  12. Investigating the influence of eating habits, body weight and television programme preferences on television viewing time and domestic computer usage.

    PubMed

    Raptou, Elena; Papastefanou, Georgios; Mattas, Konstadinos

    2017-01-01

    The present study explored the influence of eating habits, body weight and television programme preference on television viewing time and domestic computer usage, after adjusting for sociodemographic characteristics and home media environment indicators. In addition, potential substitution or complementarity in screen time was investigated. Individual level data were collected via questionnaires that were administered to a random sample of 2,946 Germans. The econometric analysis employed a seemingly unrelated bivariate ordered probit model to conjointly estimate television viewing time and time engaged in domestic computer usage. Television viewing and domestic computer usage represent two independent behaviours in both genders and across all age groups. Dietary habits have a significant impact on television watching with less healthy food choices associated with increasing television viewing time. Body weight is found to be positively correlated with television screen time in both men and women, and overweight individuals have a higher propensity for heavy television viewing. Similar results were obtained for age groups where an increasing body mass index (BMI) in adults over 24 years old is more likely to be positively associated with a higher duration of television watching. With respect to dietary habits of domestic computer users, participants aged over 24 years of both genders seem to adopt more healthy dietary patterns. A downward trend in the BMI of domestic computer users was observed in women and adults aged 25-60 years. On the contrary, young domestic computer users 18-24 years old have a higher body weight than non-users. Television programme preferences also affect television screen time with clear differences to be observed between genders and across different age groups. In order to reduce total screen time, health interventions should target different types of screen viewing audiences separately.

  13. Epilepsy analytic system with cloud computing.

    PubMed

    Shen, Chia-Ping; Zhou, Weizhi; Lin, Feng-Seng; Sung, Hsiao-Ya; Lam, Yan-Yu; Chen, Wei; Lin, Jeng-Wei; Pan, Ming-Kai; Chiu, Ming-Jang; Lai, Feipei

    2013-01-01

    Biomedical data analytic systems have played an important role in clinical diagnosis for several decades, and analyzing these big data to support physicians' decisions is now an emerging research area. This paper presents a parallelized, web-based tool with a cloud computing service architecture for analyzing epilepsy. Several modern analytic functions, namely the wavelet transform, a genetic algorithm (GA), and a support vector machine (SVM), are cascaded in the system. To demonstrate its effectiveness, the system was verified with two kinds of electroencephalography (EEG) data: short-term EEG and long-term EEG. The results reveal that the approach achieves a total classification accuracy higher than 90%. In addition, the entire training was accelerated by a factor of about 4.66, and the prediction time also meets real-time requirements.
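
    The wavelet-feature-to-SVM cascade described above can be illustrated compactly; the genetic-algorithm parameter search and the cloud parallelization are omitted. The sketch below uses synthetic signals in place of EEG, and the wavelet choice, feature definition, and SVM settings are assumptions of the example, not the authors' system.

```python
"""Minimal sketch of a wavelet-feature -> SVM cascade in the spirit of the
abstract (the GA-based parameter search is omitted).  Synthetic signals
stand in for EEG epochs; this is not the authors' system."""
import numpy as np
import pywt
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_epoch(seizure):
    t = np.linspace(0, 1, 256)
    base = rng.normal(0, 1, t.size)
    # crude stand-in: add a strong 3 Hz rhythm to "seizure" epochs
    return base + (6 * np.sin(2 * np.pi * 3 * t) if seizure else 0)

def wavelet_features(signal, wavelet="db4", level=4):
    """Mean energy of each wavelet sub-band, a common compact EEG feature set."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) / c.size for c in coeffs])

X = np.array([wavelet_features(make_epoch(s)) for s in (0, 1) * 200])
y = np.array([0, 1] * 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```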

  14. A Fast Neural Network Approach to Predict Lung Tumor Motion during Respiration for Radiation Therapy Applications

    PubMed Central

    Slama, Matous; Benes, Peter M.; Bila, Jiri

    2015-01-01

    During radiotherapy treatment for thoracic and abdomen cancers, for example, lung cancers, respiratory motion moves the target tumor and thus badly affects the accuracy of radiation dose delivery into the target. A real-time image-guided technique can be used to monitor such lung tumor motion for accurate dose delivery, but the system latency up to several hundred milliseconds for repositioning the radiation beam also affects the accuracy. In order to compensate the latency, neural network prediction technique with real-time retraining can be used. We have investigated real-time prediction of 3D time series of lung tumor motion on a classical linear model, perceptron model, and on a class of higher-order neural network model that has more attractive attributes regarding its optimization convergence and computational efficiency. The implemented static feed-forward neural architectures are compared when using gradient descent adaptation and primarily the Levenberg-Marquardt batch algorithm as the ones of the most common and most comprehensible learning algorithms. The proposed technique resulted in fast real-time retraining, so the total computational time on a PC platform was equal to or even less than the real treatment time. For one-second prediction horizon, the proposed techniques achieved accuracy less than one millimeter of 3D mean absolute error in one hundred seconds of total treatment time. PMID:25893194

  15. A fast neural network approach to predict lung tumor motion during respiration for radiation therapy applications.

    PubMed

    Bukovsky, Ivo; Homma, Noriyasu; Ichiji, Kei; Cejnek, Matous; Slama, Matous; Benes, Peter M; Bila, Jiri

    2015-01-01

    During radiotherapy treatment for thoracic and abdomen cancers, for example, lung cancers, respiratory motion moves the target tumor and thus badly affects the accuracy of radiation dose delivery into the target. A real-time image-guided technique can be used to monitor such lung tumor motion for accurate dose delivery, but the system latency up to several hundred milliseconds for repositioning the radiation beam also affects the accuracy. In order to compensate the latency, neural network prediction technique with real-time retraining can be used. We have investigated real-time prediction of 3D time series of lung tumor motion on a classical linear model, perceptron model, and on a class of higher-order neural network model that has more attractive attributes regarding its optimization convergence and computational efficiency. The implemented static feed-forward neural architectures are compared when using gradient descent adaptation and primarily the Levenberg-Marquardt batch algorithm as the ones of the most common and most comprehensible learning algorithms. The proposed technique resulted in fast real-time retraining, so the total computational time on a PC platform was equal to or even less than the real treatment time. For one-second prediction horizon, the proposed techniques achieved accuracy less than one millimeter of 3D mean absolute error in one hundred seconds of total treatment time.
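
    The core idea of latency compensation by prediction with real-time retraining can be sketched with the simplest of the compared models, a linear predictor refit over a sliding window at every step. The breathing trace below is a synthetic 1-D stand-in for the 3-D tumor trajectory, and the sampling interval, window length, and model order are assumptions of the example; the higher-order neural units and Levenberg-Marquardt training used in the paper are not reproduced.

```python
"""Minimal sketch of latency compensation: predict the tumor position one
horizon ahead with a model that is refit in real time.  Only the linear
baseline idea is shown; the breathing trace is synthetic and 1-D."""
import numpy as np

dt = 0.1                      # sampling interval (s), assumed
horizon = 10                  # predict 1 s ahead = 10 samples
order = 20                    # number of past samples used as inputs
t = np.arange(0, 100, dt)
rng = np.random.default_rng(0)
pos = 10 * np.sin(2 * np.pi * t / 4) + rng.normal(0, 0.2, t.size)   # mm

errors = []
for k in range(200, t.size - horizon):
    # Refit on a sliding window of recent (input, future-position) pairs,
    # using only samples already observed at time step k.
    rows, targets = [], []
    for j in range(k - 150, k - horizon):
        rows.append(pos[j - order:j])
        targets.append(pos[j + horizon])
    w, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    prediction = pos[k - order:k] @ w            # position 1 s from now
    errors.append(abs(prediction - pos[k + horizon]))

print("mean absolute 1-s-ahead error (mm):", np.mean(errors))
```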

  16. Evaluating the impact of computer-generated rounding reports on physician workflow in the nursing home: a feasibility time-motion study.

    PubMed

    Thorpe-Jamison, Patrice T; Culley, Colleen M; Perera, Subashan; Handler, Steven M

    2013-05-01

    To determine the feasibility and impact of a computer-generated rounding report on physician rounding time and perceived barriers to providing clinical care in the nursing home (NH) setting. Three NHs located in Pittsburgh, PA. Ten attending NH physicians. Time-motion method to record the time taken to gather data (pre-rounding), to evaluate patients (rounding), and document their findings/develop an assessment and plan (post-rounding). Additionally, surveys were used to determine the physicians' perception of barriers to providing optimal clinical care, as well as physician satisfaction before and after the use of a computer-generated rounding report. Ten physicians were observed during half-day sessions both before and 4 weeks after they were introduced to a computer-generated rounding report. A total of 69 distinct patients were evaluated during the 20 physician observation sessions. Each physician evaluated, on average, four patients before implementation and three patients after implementation. The observations showed a significant increase (P = .03) in the pre-rounding time, and no significant difference in the rounding (P = .09) or post-rounding times (P = .29). Physicians reported that information was more accessible (P = .03) following the implementation of the computer-generated rounding report. Most (80%) physicians stated that they would prefer to use the computer-generated rounding report rather than the paper-based process. The present study provides preliminary data suggesting that the use of a computer-generated rounding report can decrease some perceived barriers to providing optimal care in the NH. Although the rounding report did not improve rounding time efficiency, most NH physicians would prefer to use the computer-generated report rather than the current paper-based process. Improving the accuracy and harmonization of medication information with the electronic medication administration record and rounding reports, as well as improving facility network speeds might improve the effectiveness of this technology. Copyright © 2013 American Medical Directors Association, Inc. Published by Elsevier Inc. All rights reserved.

  17. Method for simultaneous overlapped communications between neighboring processors in a multiple

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1991-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  18. The impact of home computer use on children's activities and development.

    PubMed

    Subrahmanyam, K; Kraut, R E; Greenfield, P M; Gross, E F

    2000-01-01

    The increasing amount of time children are spending on computers at home and school has raised questions about how the use of computer technology may make a difference in their lives--from helping with homework to causing depression to encouraging violent behavior. This article provides an overview of the limited research on the effects of home computer use on children's physical, cognitive, and social development. Initial research suggests, for example, that access to computers increases the total amount of time children spend in front of a television or computer screen at the expense of other activities, thereby putting them at risk for obesity. At the same time, cognitive research suggests that playing computer games can be an important building block to computer literacy because it enhances children's ability to read and visualize images in three-dimensional space and track multiple images simultaneously. The limited evidence available also indicates that home computer use is linked to slightly better academic performance. The research findings are more mixed, however, regarding the effects on children's social development. Although little evidence indicates that the moderate use of computers to play games has a negative impact on children's friendships and family relationships, recent survey data show that increased use of the Internet may be linked to increases in loneliness and depression. Of most concern are the findings that playing violent computer games may increase aggressiveness and desensitize a child to suffering, and that the use of computers may blur a child's ability to distinguish real life from simulation. The authors conclude that more systematic research is needed in these areas to help parents and policymakers maximize the positive effects and to minimize the negative effects of home computers in children's lives.

  19. Computer Technology for Industry

    NASA Technical Reports Server (NTRS)

    1979-01-01

    In this age of the computer, more and more business firms are automating their operations for increased efficiency in a great variety of jobs, from simple accounting to managing inventories, from precise machining to analyzing complex structures. In the interest of national productivity, NASA is providing assistance both to longtime computer users and newcomers to automated operations. Through a special technology utilization service, NASA saves industry time and money by making available already developed computer programs which have secondary utility. A computer program is essentially a set of instructions which tells the computer how to produce desired information or effect by drawing upon its stored input. Developing a new program from scratch can be costly and time-consuming. Very often, however, a program developed for one purpose can readily be adapted to a totally different application. To help industry take advantage of existing computer technology, NASA operates the Computer Software Management and Information Center (COSMIC)(registered TradeMark),located at the University of Georgia. COSMIC maintains a large library of computer programs developed for NASA, the Department of Defense, the Department of Energy and other technology-generating agencies of the government. The Center gets a continual flow of software packages, screens them for adaptability to private sector usage, stores them and informs potential customers of their availability.

  20. Descriptive epidemiology of screen and non-screen sedentary time in adolescents: a cross sectional study

    PubMed Central

    2010-01-01

    Background Much attention has been paid to adolescents' screen time, however very few studies have examined non-screen sedentary time (NSST). This study aimed to (1) describe the magnitude and composition of screen sedentary time (SST) and NSST in Australian adolescents, (2) describe the socio-demographic correlates of SST and NSST, and (3) determine whether screen time is an adequate surrogate for total sedentary behaviour in this population. Methods 2200 9-16 year old Australians provided detailed use of time data for four days. Non-screen sedentary time (NSST) included time spent participating in activities expected to elicit <3 METs whilst seated or lying down (other than sleeping), excluding screen-based activities (television, playing videogames or using computers). Total sedentary time was the sum of screen time and NSST. Results Adolescents spent a mean (SD) of 345 (105) minutes/day in NSST, which constituted 60% of total sedentary time. School activities contributed 42% of NSST, socialising 19%, self-care (mainly eating) 16%, and passive transport 15%. Screen time and NSST showed opposite patterns in relation to key socio-demographic characteristics, including sex, age, weight status, household income, parental education and day type. Because screen time was negatively correlated with NSST (r = -0.58), and exhibited a moderate correlation (r = 0.53) with total sedentary time, screen time was only a moderately effective surrogate for total sedentary time. Conclusions To capture a complete picture of young people's sedentary time, studies should endeavour to measure both screen time and NSST. PMID:21194427

  1. Towards a Low-Cost Remote Memory Attestation for the Smart Grid

    PubMed Central

    Yang, Xinyu; He, Xiaofei; Yu, Wei; Lin, Jie; Li, Rui; Yang, Qingyu; Song, Houbing

    2015-01-01

    In the smart grid, measurement devices may be compromised by adversaries, and their operations could be disrupted by attacks. A number of schemes to efficiently and accurately detect these compromised devices remotely have been proposed. Nonetheless, most of the existing schemes detecting compromised devices depend on the incremental response time in the attestation process, which are sensitive to data transmission delay and lead to high computation and network overhead. To address the issue, in this paper, we propose a low-cost remote memory attestation scheme (LRMA), which can efficiently and accurately detect compromised smart meters considering real-time network delay and achieve low computation and network overhead. In LRMA, the impact of real-time network delay on detecting compromised nodes can be eliminated via investigating the time differences reported from relay nodes. Furthermore, the attestation frequency in LRMA is dynamically adjusted with the compromised probability of each node, and then, the total number of attestations could be reduced while low computation and network overhead can be achieved. Through a combination of extensive theoretical analysis and evaluations, our data demonstrate that our proposed scheme can achieve better detection capacity and lower computation and network overhead in comparison to existing schemes. PMID:26307998

  2. Towards a Low-Cost Remote Memory Attestation for the Smart Grid.

    PubMed

    Yang, Xinyu; He, Xiaofei; Yu, Wei; Lin, Jie; Li, Rui; Yang, Qingyu; Song, Houbing

    2015-08-21

    In the smart grid, measurement devices may be compromised by adversaries, and their operations could be disrupted by attacks. A number of schemes to efficiently and accurately detect these compromised devices remotely have been proposed. Nonetheless, most of the existing schemes detecting compromised devices depend on the incremental response time in the attestation process, which are sensitive to data transmission delay and lead to high computation and network overhead. To address the issue, in this paper, we propose a low-cost remote memory attestation scheme (LRMA), which can efficiently and accurately detect compromised smart meters considering real-time network delay and achieve low computation and network overhead. In LRMA, the impact of real-time network delay on detecting compromised nodes can be eliminated via investigating the time differences reported from relay nodes. Furthermore, the attestation frequency in LRMA is dynamically adjusted with the compromised probability of each node, and then, the total number of attestations could be reduced while low computation and network overhead can be achieved. Through a combination of extensive theoretical analysis and evaluations, our data demonstrate that our proposed scheme can achieve better detection capacity and lower computation and network overhead in comparison to existing schemes.

  3. A simple analytical formula to compute clear sky total and photosynthetically available solar irradiance at the ocean surface

    NASA Technical Reports Server (NTRS)

    Frouin, Robert; Lingner, David W.; Gautier, Catherine; Baker, Karen S.; Smith, Ray C.

    1989-01-01

    A simple but accurate analytical formula was developed for computing the total and the photosynthetically available solar irradiances at the ocean surface under clear skies, which takes into account the processes of scattering by molecules and aerosols within the atmosphere and of absorption by water vapor, ozone, and aerosols. These processes are parameterized as a function of solar zenith angle, aerosol type, atmospheric visibility, and vertically integrated water-vapor and ozone amounts. Comparisons of the calculated and measured total and photosynthetically available solar irradiances for several experiments in tropical and mid-latitude ocean regions show 39 and 14 W/sq m rms errors (6.5 and 4.7 percent of the average measured values) on an hourly time scale, respectively. The proposed formula is unique in its ability to predict surface solar irradiance in the photosynthetically active spectral interval.

  4. The relationship between TV/computer time and adolescents' health-promoting behavior: a secondary data analysis.

    PubMed

    Chen, Mei-Yen; Liou, Yiing-Mei; Wu, Jen-Yee

    2008-03-01

    Television and computers provide significant benefits for learning about the world. Some studies have linked excessive television (TV) watching or computer game playing to poorer health status or unhealthy behaviors among adolescents. However, evidence on the relationships between watching TV or playing computer games and adolescents' adoption of health-promoting behavior is limited. This study aimed to discover the relationship between time spent watching TV and on leisure use of computers and adolescents' health-promoting behavior, and associated factors. This paper used secondary data analysis from part of a health promotion project in Taoyuan County, Taiwan. A cross-sectional design was used and purposive sampling was conducted among adolescents in the original project. A total of 660 participants answered the questions appropriately for this work between January and June 2004. Findings showed the mean age of the respondents was 15.0 +/- 1.7 years. The mean numbers of TV watching hours were 2.28 and 4.07 on weekdays and weekends respectively. The mean hours of leisure (non-academic) computer use were 1.64 and 3.38 on weekdays and weekends respectively. Results indicated that adolescents spent significant time watching TV and using the computer, which was negatively associated with adopting health-promoting behaviors such as life appreciation, health responsibility, social support and exercise behavior. Moreover, being a boy, being overweight, living in a rural area, and being a middle-school student were significantly associated with spending long periods watching TV and using the computer. Therefore, primary health care providers should record the TV and non-academic computer time of youths when conducting health promotion programs, and educate parents on how to become good and healthy electronic media users.

  5. Fast Determination of Distribution-Connected PV Impacts Using a Variable Time-Step Quasi-Static Time-Series Approach: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry

    The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase. Tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics, such as the highest and lowest voltage occurring on the feeder, number of voltage regulator tap operations, and total amount of losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91 percent reduction with less than 5 percent error when predicting voltage regulator operations.
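
    The idea behind a variable-time-step QSTS solver can be illustrated with a toy loop that performs a full circuit solution only when the load and PV inputs have moved by more than a tolerance since the last solved point, and otherwise reuses the previous result. The stand-in solve_circuit function, the profiles, and the tolerance below are assumptions of the example, not the solver presented in the paper.

```python
"""Toy illustration of the variable-time-step QSTS idea: a full circuit
solution is computed only when the inputs (load and PV output) have moved
more than a tolerance since the last solved point; other time steps reuse
the previous solution.  `solve_circuit` is a stand-in, not a real power
flow, and the one-month profiles are synthetic."""
import numpy as np

rng = np.random.default_rng(1)
n = 30 * 24 * 60                        # one month at 1-minute resolution
minutes = np.arange(n)
load = 1.0 + 0.3 * np.sin(2 * np.pi * minutes / (24 * 60)) + rng.normal(0, 0.01, n)
pv = np.clip(np.sin(2 * np.pi * (minutes % (24 * 60)) / (24 * 60)), 0, None)

def solve_circuit(load_pu, pv_pu):
    """Stand-in for a full power-flow solution (returns a fake feeder voltage)."""
    return 1.0 - 0.03 * load_pu + 0.02 * pv_pu

tolerance = 0.02                        # per-unit input change that forces a re-solve
voltages = np.empty(n)
last_inputs = None
solves = 0
for k in range(n):
    inputs = (load[k], pv[k])
    if last_inputs is None or max(abs(inputs[0] - last_inputs[0]),
                                  abs(inputs[1] - last_inputs[1])) > tolerance:
        voltages[k] = solve_circuit(*inputs)
        last_inputs = inputs
        solves += 1
    else:
        voltages[k] = voltages[k - 1]   # inputs barely moved; reuse last solution

print(f"full solves: {solves} of {n} steps ({100 * (1 - solves / n):.0f}% skipped)")
```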

  6. LIFESPAN: A tool for the computer-aided design of longitudinal studies

    PubMed Central

    Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Hertzog, Christopher; Lindenberger, Ulman

    2015-01-01

    Researchers planning a longitudinal study typically search, more or less informally, a multivariate space of possible study designs that include dimensions such as the hypothesized true variance in change, indicator reliability, the number and spacing of measurement occasions, total study time, and sample size. The main search goal is to select a research design that best addresses the guiding questions and hypotheses of the planned study while heeding applicable external conditions and constraints, including time, money, feasibility, and ethical considerations. Because longitudinal study selection ultimately requires optimization under constraints, it is amenable to the general operating principles of optimization in computer-aided design. Based on power equivalence theory (MacCallum et al., 2010; von Oertzen, 2010), we propose a computational framework to promote more systematic searches within the study design space. Starting with an initial design, the proposed framework generates a set of alternative models with equal statistical power to detect hypothesized effects, and delineates trade-off relations among relevant parameters, such as total study time and the number of measurement occasions. We present LIFESPAN (Longitudinal Interactive Front End Study Planner), which implements this framework. LIFESPAN boosts the efficiency, breadth, and precision of the search for optimal longitudinal designs. Its initial version, which is freely available at http://www.brandmaier.de/lifespan, is geared toward the power to detect variance in change as specified in a linear latent growth curve model. PMID:25852596

  7. Accelerating next generation sequencing data analysis with system level optimizations.

    PubMed

    Kathiresan, Nagarajan; Temanni, Ramzi; Almabrazi, Hakeem; Syed, Najeeb; Jithesh, Puthen V; Al-Ali, Rashid

    2017-08-22

    Next generation sequencing (NGS) data analysis is highly compute intensive. In-memory computing, vectorization, bulk data transfer, and CPU frequency scaling are some of the hardware features of modern computing architectures. To get the best execution time and utilize these hardware features, it is necessary to tune system-level parameters before running the application. We studied GATK HaplotypeCaller, which is part of common NGS workflows and consumes more than 43% of the total execution time. Multiple GATK 3.x versions were benchmarked, and the execution time of HaplotypeCaller was optimized by various system-level parameters, which included: (i) tuning the parallel garbage collection and kernel shared memory to simulate in-memory computing, (ii) architecture-specific tuning in the PairHMM library for vectorization, (iii) including Java 1.8 features through GATK source code compilation and building a runtime environment for parallel sorting and bulk data transfer, and (iv) switching the default 'on-demand' CPU frequency scaling mode to 'performance' mode to accelerate the Java multi-threads. As a result, the HaplotypeCaller execution time was reduced by 82.66% in GATK 3.3 and 42.61% in GATK 3.7. Overall, the execution time of the NGS pipeline was reduced to 70.60% and 34.14% for GATK 3.3 and GATK 3.7 respectively.
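
    The kind of system-level tuning listed above can be wrapped in a small launcher, sketched below. The heap size, thread counts, and file paths are hypothetical, and although the JVM flags and GATK 3.x options shown (-XX:+UseParallelGC, -T HaplotypeCaller, -nct) are standard, they should be checked against the local installation rather than taken as the authors' exact configuration.

```python
"""Illustrative launcher showing the kind of system-level tuning the paper
discusses: parallel garbage collection, a fixed heap, and the 'performance'
CPU-frequency governor.  Paths, heap size, and thread counts are
hypothetical; flags should be verified against the local GATK 3.x install."""
import subprocess

def set_performance_governor():
    # Typically requires root; equivalent to `cpupower frequency-set -g performance`.
    subprocess.run(["cpupower", "frequency-set", "-g", "performance"], check=False)

def run_haplotype_caller(reference, bam, out_vcf, threads=16):
    cmd = [
        "java",
        "-Xmx32g",                              # fixed heap (assumed size)
        "-XX:+UseParallelGC",                   # parallel garbage collection
        f"-XX:ParallelGCThreads={threads}",
        "-jar", "GenomeAnalysisTK.jar",         # GATK 3.x jar (hypothetical path)
        "-T", "HaplotypeCaller",
        "-R", reference,
        "-I", bam,
        "-o", out_vcf,
        "-nct", str(threads),                   # HaplotypeCaller compute threads
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    set_performance_governor()
    run_haplotype_caller("ref.fasta", "sample.bam", "sample.vcf")
```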

  8. Robust Real-Time Wide-Area Differential GPS Navigation

    NASA Technical Reports Server (NTRS)

    Yunck, Thomas P. (Inventor); Bertiger, William I. (Inventor); Lichten, Stephen M. (Inventor); Mannucci, Anthony J. (Inventor); Muellerschoen, Ronald J. (Inventor); Wu, Sien-Chong (Inventor)

    1998-01-01

    The present invention provides a method and a device for providing superior differential GPS positioning data. The system includes a group of GPS receiving ground stations covering a wide area of the Earth's surface. Unlike other differential GPS systems wherein the known position of each ground station is used to geometrically compute an ephemeris for each GPS satellite, the present system utilizes real-time computation of satellite orbits based on GPS data received from fixed ground stations through a Kalman-type filter/smoother whose output adjusts a real-time orbital model. The orbital model produces and outputs orbital corrections allowing satellite ephemerides to be known with considerably greater accuracy than from the GPS system broadcasts. The modeled orbits are propagated ahead in time and differenced with actual pseudorange data to compute clock offsets at rapid intervals to compensate for SA clock dither. The orbital and clock calculations are based on dual frequency GPS data which allow computation of estimated signal delay at each ionospheric point. These delay data are used in real time to construct and update an ionospheric shell map of total electron content which is output as part of the orbital correction data, thereby allowing single frequency users to estimate ionospheric delay with an accuracy approaching that of dual frequency users.
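
    The dual-frequency step mentioned above rests on the standard relation that ionospheric group delay scales as the inverse square of frequency, so differencing the L1 and L2 pseudoranges yields both the L1 delay and the slant total electron content. The sketch below applies that textbook formula; it is not the patent's algorithm, and the pseudorange values are hypothetical.

```python
"""Textbook dual-frequency relations used to illustrate the abstract's point
(the standard GPS formula, not the patent's algorithm): ionospheric group
delay scales as 1/f^2, so differencing L1 and L2 pseudoranges yields both
the L1 delay and the slant total electron content (TEC)."""
F1 = 1575.42e6   # GPS L1 frequency (Hz)
F2 = 1227.60e6   # GPS L2 frequency (Hz)
K = 40.3         # ionospheric constant (m * Hz^2 per electron/m^2)

def l1_iono_delay(p1, p2):
    """L1 ionospheric delay (m) from L1/L2 pseudoranges p1, p2 (m)."""
    return (F2 ** 2 / (F1 ** 2 - F2 ** 2)) * (p2 - p1)

def slant_tec(p1, p2):
    """Slant TEC (electrons/m^2) from the same pseudorange pair."""
    return l1_iono_delay(p1, p2) * F1 ** 2 / K

# Hypothetical pseudoranges differing by 5.0 m due to ionospheric dispersion:
p1, p2 = 22_000_000.0, 22_000_005.0
print(f"L1 delay: {l1_iono_delay(p1, p2):.2f} m, TEC: {slant_tec(p1, p2):.3e} el/m^2")
```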

  9. A Fourier-based total-field/scattered-field technique for three-dimensional broadband simulations of elastic targets near a water-sand interface.

    PubMed

    Shao, Yu; Wang, Shumin

    2016-12-01

    The numerical simulation of acoustic scattering from elastic objects near a water-sand interface is critical to underwater target identification. Frequency-domain methods are computationally expensive, especially for large-scale broadband problems. A numerical technique is proposed to enable the efficient use of finite-difference time-domain method for broadband simulations. By incorporating a total-field/scattered-field boundary, the simulation domain is restricted inside a tightly bounded region. The incident field is further synthesized by the Fourier transform for both subcritical and supercritical incidences. Finally, the scattered far field is computed using a half-space Green's function. Numerical examples are further provided to demonstrate the accuracy and efficiency of the proposed technique.

  10. Resource Provisioning in SLA-Based Cluster Computing

    NASA Astrophysics Data System (ADS)

    Xiong, Kaiqi; Suh, Sang

    Cluster computing is excellent for parallel computation. It has become increasingly popular. In cluster computing, a service level agreement (SLA) is a set of quality of services (QoS) and a fee agreed between a customer and an application service provider. It plays an important role in an e-business application. An application service provider uses a set of cluster computing resources to support e-business applications subject to an SLA. In this paper, the QoS includes percentile response time and cluster utilization. We present an approach for resource provisioning in such an environment that minimizes the total cost of cluster computing resources used by an application service provider for an e-business application that often requires parallel computation for high service performance, availability, and reliability while satisfying a QoS and a fee negotiated between a customer and the application service provider. Simulation experiments demonstrate the applicability of the approach.
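
    One simple way to picture this kind of provisioning problem is a sizing loop that adds identical servers until a queueing-delay percentile meets the SLA and then reports the resulting cost and utilization. The sketch below uses an M/M/c (Erlang C) model with made-up rates and prices; it is only an illustration of the trade-off, not the authors' method, which works with percentile response time and cluster utilization.

```python
"""Toy sizing loop in the spirit of the abstract (not the authors' method):
pick the smallest number of identical servers whose 95th-percentile queueing
delay in an M/M/c model meets the SLA, then report the resulting cost and
utilization.  Arrival/service rates, the SLA, and prices are made up."""
import math

def erlang_c(c, offered_load):
    """Probability that an arriving job must wait in an M/M/c queue."""
    if offered_load >= c:
        return 1.0
    summation = sum(offered_load ** k / math.factorial(k) for k in range(c))
    last = offered_load ** c / math.factorial(c) * c / (c - offered_load)
    return last / (summation + last)

def wait_percentile(c, lam, mu, p=0.95):
    """p-th percentile of queueing delay (s) in an M/M/c queue."""
    if lam >= c * mu:
        return math.inf                      # unstable: the SLA cannot be met
    pw = erlang_c(c, lam / mu)
    if pw <= 1 - p:
        return 0.0
    return math.log(pw / (1 - p)) / (c * mu - lam)

lam, mu = 40.0, 5.0                          # arrival rate, per-server service rate (1/s)
sla_seconds, price_per_server = 0.5, 1.0     # delay target and unit cost (assumed)
c = max(1, math.ceil(lam / mu))              # start from the stability bound
while wait_percentile(c, lam, mu) > sla_seconds:
    c += 1
print(f"servers: {c}, cost: {c * price_per_server:.1f}, utilization: {lam / (c * mu):.2f}")
```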

  11. Computed Tomography Scanner Productivity and Entry-Level Models in the Global Market

    PubMed Central

    Almeida, R. M. V. R.

    2017-01-01

    Objective This study evaluated the productivity of computed tomography (CT) models and characterized their simplest (entry-level) models' supply in the world market. Methods CT exam times were measured in eight health facilities in the state of Rio de Janeiro, Brazil. Exams were divided into six stages: (1) arrival of patient records to the examination room; (2) patient arrival; (3) patient positioning; (4) data input prior to exam; (5) image acquisition; and (6) patient departure. CT exam productivity was calculated by dividing the total weekly working time by the total exam time for each model. Additionally, an internet search identified full-body CT manufacturers and their offered entry-level models. Results The time durations of 111 CT exams were obtained. Differences among average exam times were not large, and they were mainly due to stages not directly related to data acquisition or image reconstruction. The survey identified that most manufacturers offer 2- to 4-slice models for Asia, South America, and Africa, and one offers single-slice models (Asia). In the USA, two manufacturers offer models below 16-slice. Conclusion Productivity gains are not linearly related to “slice” number. It is suggested that the use of “shareable platforms” could make CTs cheaper, increasing their availability. PMID:29093804
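
    The productivity measure defined above (total weekly working time divided by the total time of one exam, summed over the six stages) reduces to simple arithmetic, sketched below with hypothetical stage durations rather than the measured Rio de Janeiro data.

```python
"""Productivity calculation as defined in the abstract: total weekly working
time divided by the total time of one exam (sum of the six stages).  The
stage durations and weekly schedule below are hypothetical."""
stage_minutes = {
    "records to exam room": 2.0,
    "patient arrival": 3.0,
    "patient positioning": 4.0,
    "data input": 2.5,
    "image acquisition": 3.5,
    "patient departure": 2.0,
}
total_exam_minutes = sum(stage_minutes.values())      # one complete exam
weekly_minutes = 5 * 12 * 60                          # e.g. 12 h/day, 5 days/week

exams_per_week = weekly_minutes / total_exam_minutes
print(f"{total_exam_minutes:.1f} min per exam -> {exams_per_week:.0f} exams/week")
```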

  12. Comparisons of Physicians' and Nurses' Attitudes towards Computers.

    PubMed

    Brumini, Gordana; Ković, Ivor; Zombori, Dejvid; Lulić, Ileana; Bilic-Zulle, Lidija; Petrovecki, Mladen

    2005-01-01

    Before starting the implementation of integrated hospital information systems, the physicians' and nurses' attitudes towards computers were measured by means of a questionnaire. The study was conducted in Dubrava University Hospital, Zagreb, Croatia. Out of 194 randomly selected respondents, 141 were nurses and 53 were physicians. They were surveyed with an anonymous questionnaire consisting of 8 closed questions about demographic data, computer science education and computer usage, and 30 statements on attitudes towards computers. The statements were adapted to a Likert-type scale. Differences in attitudes towards computers between groups were compared using the Kruskal-Wallis test, with the Mann-Whitney test for post-hoc analysis. The total score represented attitudes toward computers. The physicians' total score was 130 (97-144), while the nurses' total score was 123 (88-141). This indicates that the average answer to all statements was between "agree" and "strongly agree", and these high total scores indicated positive attitudes. Age, computer science education, and computer usage were important factors which enhanced the total score. Younger physicians and nurses with computer science education and previous computer experience had more positive attitudes towards computers than others. Our results are important for the planning and implementation of integrated hospital information systems in Croatia.

  13. Closed-form integrator for the quaternion (euler angle) kinematics equations

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A. (Inventor)

    2000-01-01

    The invention is embodied in a method of integrating kinematics equations for updating a set of vehicle attitude angles of a vehicle using 3-dimensional angular velocities of the vehicle, which includes computing an integrating factor matrix from quantities corresponding to the 3-dimensional angular velocities, computing a total integrated angular rate from the quantities corresponding to the 3-dimensional angular velocities, computing a state transition matrix as a sum of (a) a first complementary function of the total integrated angular rate and (b) the integrating factor matrix multiplied by a second complementary function of the total integrated angular rate, and updating the set of vehicle attitude angles using the state transition matrix. Preferably, the method further includes computing a quaternion vector from the quantities corresponding to the 3-dimensional angular velocities, in which case the updating of the set of vehicle attitude angles using the state transition matrix is carried out by (a) updating the quaternion vector by multiplying the quaternion vector by the state transition matrix to produce an updated quaternion vector and (b) computing an updated set of vehicle attitude angles from the updated quaternion vector. The first and second trigonometric functions are complementary, such as a sine and a cosine. The quantities corresponding to the 3-dimensional angular velocities include respective averages of the 3-dimensional angular velocities over plural time frames. The updating of the quaternion vector preserves the norm of the vector, whereby the updated set of vehicle attitude angles are virtually error-free.
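
    Assuming constant body rates over a step, the closed-form update described above can be written as a 4x4 state transition matrix: cos(phi/2) times the identity, plus the integrating factor matrix (the angular-rate skew matrix divided by the rate magnitude) times sin(phi/2), where phi is the total integrated angular rate. The sketch below implements that update for a scalar-first quaternion; it is a minimal illustration, not the patented implementation (which, for example, uses rates averaged over plural time frames).

```python
"""Compact sketch of the closed-form quaternion update the abstract
describes, assuming body rates are constant over each step (not the
patented implementation).  The state transition matrix is
cos(phi/2)*I + (sin(phi/2)/|w|)*Omega(w), with phi = |w|*dt the total
integrated angular rate; the quaternion norm is preserved exactly."""
import numpy as np

def omega_matrix(w):
    """4x4 skew matrix for scalar-first quaternion kinematics qdot = 0.5*Omega*q."""
    p, q, r = w
    return np.array([[0.0,  -p,  -q,  -r],
                     [p,   0.0,   r,  -q],
                     [q,    -r, 0.0,   p],
                     [r,     q,  -p, 0.0]])

def propagate(quat, w, dt):
    """Advance a unit quaternion [w, x, y, z] through one step of body rates w (rad/s)."""
    rate = np.linalg.norm(w)
    if rate < 1e-12:
        return quat                              # no rotation this step
    phi = rate * dt                              # total integrated angular rate
    transition = (np.cos(phi / 2) * np.eye(4)
                  + (np.sin(phi / 2) / rate) * omega_matrix(w))
    return transition @ quat

q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(1000):
    q = propagate(q, np.array([0.1, -0.05, 0.2]), 0.01)
print("norm after 1000 steps:", np.linalg.norm(q))   # stays 1 to machine precision
```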

  14. Methodology of modeling and measuring computer architectures for plasma simulations

    NASA Technical Reports Server (NTRS)

    Wang, L. P. T.

    1977-01-01

    A brief introduction to plasma simulation using computers and the difficulties on currently available computers is given. Through the use of an analyzing and measuring methodology - SARA, the control flow and data flow of a particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of a code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have implicitly parallel nature.

  15. Total Navigation in Spine Surgery; A Concise Guide to Eliminate Fluoroscopy Using a Portable Intraoperative Computed Tomography 3-Dimensional Navigation System.

    PubMed

    Navarro-Ramirez, Rodrigo; Lang, Gernot; Lian, Xiaofeng; Berlin, Connor; Janssen, Insa; Jada, Ajit; Alimi, Marjan; Härtl, Roger

    2017-04-01

    Portable intraoperative computed tomography (iCT) with integrated 3-dimensional navigation (NAV) offers new opportunities for more precise navigation in spinal surgery, eliminates radiation exposure for the surgical team, and accelerates surgical workflows. We present the concept of "total navigation" using iCT NAV in spinal surgery. Therefore, we propose a step-by-step guideline demonstrating how total navigation can eliminate fluoroscopy with time-efficient workflows integrating iCT NAV into daily practice. A prospective study was conducted on collected data from patients undergoing iCT NAV-guided spine surgery. Number of scans, radiation exposure, and workflow of iCT NAV (e.g., instrumentation, cage placement, localization) were documented. Finally, the accuracy of pedicle screws and time for instrumentation were determined. iCT NAV was successfully performed in 117 cases for various indications and in all regions of the spine. More than half (61%) of cases were performed in a minimally invasive manner. Navigation was used for skin incision, localization of index level, and verification of implant position. iCT NAV was used to evaluate neural decompression achieved in spinal fusion surgeries. Total navigation eliminates fluoroscopy in 75%, thus reducing staff radiation exposure entirely. The average times for iCT NAV setup and pedicle screw insertion were 12.1 and 3.1 minutes, respectively, achieving a pedicle screw accuracy of 99%. Total navigation makes spine surgery safer and more accurate, and it enhances efficient and reproducible workflows. Fluoroscopy and radiation exposure for the surgical staff can be eliminated in the majority of cases. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. New Mexico district work-effort analysis computer program

    USGS Publications Warehouse

    Hiss, W.L.; Trantolo, A.P.; Sparks, J.L.

    1972-01-01

    The computer program (CAN 2) described in this report is one of several related programs used in the New Mexico District cost-analysis system. The work-effort information used in these programs is accumulated and entered to the nearest hour on forms completed by each employee. Tabulating cards are punched directly from these forms after visual examinations for errors are made. Reports containing detailed work-effort data itemized by employee within each project and account and by account and project for each employee are prepared for both current-month and year-to-date periods by the CAN 2 computer program. An option allowing preparation of reports for a specified 3-month period is provided. The total number of hours worked on each account and project and a grand total of hours worked in the New Mexico District is computed and presented in a summary report for each period. Work effort not chargeable directly to individual projects or accounts is considered as overhead and can be apportioned to the individual accounts and projects on the basis of the ratio of the total hours of work effort for the individual accounts or projects to the total New Mexico District work effort at the option of the user. The hours of work performed by a particular section, such as General Investigations or Surface Water, are prorated and charged to the projects or accounts within the particular section. A number of surveillance or buffer accounts are employed to account for the hours worked on special events or on those parts of large projects or accounts that require a more detailed analysis. Any part of the New Mexico District operation can be separated and analyzed in detail by establishing an appropriate buffer account. With the exception of statements associated with word size, the computer program is written in FORTRAN IV in a relatively low and standard language level to facilitate its use on different digital computers. The program has been run only on a Control Data Corporation 6600 computer system. Central processing computer time has seldom exceeded 5 minutes on the longest year-to-date runs.
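
    The overhead proration described above (apportioning non-chargeable hours to projects in proportion to each project's share of total direct hours) is easy to illustrate. The sketch below uses hypothetical hours; it is not the CAN 2 FORTRAN program.

```python
"""Small sketch of the overhead proration described in the report: overhead
hours are apportioned to each project in proportion to that project's share
of total direct work-effort hours.  The hours are hypothetical; this is not
the CAN 2 FORTRAN program."""
direct_hours = {          # direct hours charged to each project this month
    "Surface Water": 320.0,
    "General Investigations": 180.0,
    "Ground Water": 250.0,
}
overhead_hours = 150.0    # hours not chargeable to any individual project

total_direct = sum(direct_hours.values())
charged = {
    project: hours + overhead_hours * hours / total_direct
    for project, hours in direct_hours.items()
}
for project, hours in charged.items():
    print(f"{project:25s} {hours:7.1f} h (incl. prorated overhead)")
print(f"{'Total':25s} {sum(charged.values()):7.1f} h")
```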

  17. Electron impact scattering study of hypohalous acids HOX (X = F, Cl, Br, I)

    NASA Astrophysics Data System (ADS)

    Yadav, Hitesh; Bhutadia, Harshad; Prajapati, Dinesh; Desai, Hardik; Vinodkumar, Minaxi; Vinodkumar, P. C.

    2018-05-01

    In this article we report total cross sections (TCS) QT, total elastic cross sections (Qel), and total inelastic cross sections (Qinel), i.e., the sum of total ionization cross sections (Qion) and total electronic excitation cross sections (Qexc), from the threshold of the target up to 5000 eV. We have used the well-defined Spherical Complex Optical Potential (SCOP) methodology to compute QT, Qel, and Qinel, and the Complex Scattering Potential - ionization contribution (CSP-ic) method to report Qion. Cross-section data for the hypohalous acids are reported here for the first time, and the present data can serve as a guideline for experimentalists studying these targets.

  18. Associations between parental rules, style of communication and children's screen time.

    PubMed

    Bjelland, Mona; Soenens, Bart; Bere, Elling; Kovács, Éva; Lien, Nanna; Maes, Lea; Manios, Yannis; Moschonis, George; te Velde, Saskia J

    2015-10-01

    Research suggests an inverse association between parental rules and screen time in pre-adolescents, and that parents' style of communication with their children is related to the children's time spent watching TV. The aims of this study were to examine associations of parental rules and parental style of communication with children's screen time and perceived excessive screen time in five European countries. UP4FUN was a multi-centre, cluster randomised controlled trial with pre- and post-test measurements in each of five countries; Belgium, Germany, Greece, Hungary and Norway. Questionnaires were completed by the children at school and the parent questionnaire was brought home. Three structural equation models were tested based on measures of screen time and parental style of communication from the pre-test questionnaires. Of the 152 schools invited, 62 (41 %) schools agreed to participate. In total 3325 children (average age 11.2 years and 51 % girls) and 3038 parents (81 % mothers) completed the pre-test questionnaire. The average TV/DVD times across the countries were between 1.5 and 1.8 h/day, while less time was used for computer/games console (0.9-1.4 h/day). The children's perceived parental style of communication was quite consistent for TV/DVD and computer/games console. The presence of rules was significantly associated with less time watching TV/DVD and use of computer/games console time. Moreover, the use of an autonomy-supportive style was negatively related to both time watching TV/DVD and use of computer/games console time. The use of a controlling style was related positively to perceived excessive time used on TV/DVD and excessive time used on computer/games console. With a few exceptions, results were similar across the five countries. This study suggests that an autonomy-supportive style of communicating rules for TV/DVD or computer/ games console use is negatively related to children's time watching TV/DVD and use of computer/games console time. In contrast, a controlling style is associated with more screen time and with more perceived excessive screen time in particular. Longitudinal research is needed to further examine effects of parental style of communication on children's screen time as well as possible reciprocal effects. International Standard Randomized Controlled Trial Number Register, registration number: ISRCTN34562078 . Date applied29/07/2011, Date assigned11/10/2011.

  19. Allocation of Internal Medicine Resident Time in a Swiss Hospital: A Time and Motion Study of Day and Evening Shifts.

    PubMed

    Wenger, Nathalie; Méan, Marie; Castioni, Julien; Marques-Vidal, Pedro; Waeber, Gérard; Garnier, Antoine

    2017-04-18

    Little current evidence documents how internal medicine residents spend their time at work, particularly with regard to the proportions of time spent in direct patient care versus using computers. To describe how residents allocate their time during day and evening hospital shifts. Time and motion study. Internal medicine residency at a university hospital in Switzerland, May to July 2015. 36 internal medicine residents with an average of 29 months of postgraduate training. Trained observers recorded the residents' activities using a tablet-based application. Twenty-two activities were categorized as directly related to patients, indirectly related to patients, communication, academic, nonmedical tasks, and transition. In addition, the presence of a patient or colleague and the use of a computer or telephone during each activity were recorded. Residents were observed for a total of 696.7 hours. Day shifts lasted 11.6 hours (1.6 hours more than scheduled). During these shifts, activities indirectly related to patients accounted for 52.4% of the time, and activities directly related to patients accounted for 28.0%. Residents spent an average of 1.7 hours with patients, 5.2 hours using computers, and 13 minutes doing both. Time spent using a computer was scattered throughout the day, with the heaviest use after 6:00 p.m. The study involved a small sample from 1 institution. At this Swiss teaching hospital, internal medicine residents spent more time at work than scheduled. Activities indirectly related to patients predominated, and about half the workday was spent using a computer. Information Technology Department and Department of Internal Medicine of Lausanne University Hospital.

  20. Three-dimensional particle-particle simulations: Dependence of relaxation time on plasma parameter

    NASA Astrophysics Data System (ADS)

    Zhao, Yinjian

    2018-05-01

    A particle-particle simulation model is applied to investigate the dependence of the relaxation time on the plasma parameter in a three-dimensional unmagnetized plasma. It is found that the relaxation time increases linearly as the plasma parameter increases within the plasma-parameter range from 2 to 10; when the plasma parameter equals 2, the relaxation time is independent of the total number of particles, but when the plasma parameter equals 10, the relaxation time slightly increases as the total number of particles increases, which indicates the transition of a plasma from collisional to collisionless. In addition, ions with an initial Maxwell-Boltzmann (MB) distribution are found to remain in the MB distribution during the whole simulation time, and the mass of the ions does not significantly affect the relaxation time of the electrons. This work also shows the feasibility of the particle-particle model when using GPU parallel computing techniques.
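
    A minimal sketch of the O(N^2) pairwise force evaluation at the heart of such a particle-particle model (the quantity that GPU parallelization accelerates) is shown below; charges, masses, units and the softening length are placeholder choices, not the paper's parameters.

        # Minimal particle-particle (PP) force evaluation: every particle interacts
        # with every other particle via a softened Coulomb force.
        import numpy as np

        def pp_accelerations(pos, charge, mass, eps=1e-3):
            """Pairwise Coulomb accelerations for N particles (arbitrary units)."""
            diff = pos[:, None, :] - pos[None, :, :]           # r_i - r_j, shape (N, N, 3)
            dist2 = np.sum(diff**2, axis=-1) + eps**2          # softened squared distances
            np.fill_diagonal(dist2, np.inf)                    # exclude self-interaction
            inv_r3 = dist2 ** -1.5
            forces = (charge[:, None] * charge[None, :] * inv_r3)[..., None] * diff
            return forces.sum(axis=1) / mass[:, None]

        rng = np.random.default_rng(0)
        N = 256
        pos = rng.random((N, 3))
        charge = np.where(rng.random(N) < 0.5, -1.0, 1.0)      # electrons and ions
        mass = np.where(charge < 0, 1.0, 1836.0)
        acc = pp_accelerations(pos, charge, mass)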

  1. Updated computations and estimates of streamflows tributary to Carson Valley, Douglas County, Nevada, and Alpine County, California, 1990-2002

    USGS Publications Warehouse

    Maurer, Douglas K.; Watkins, Sharon A.; Burrowws, Robert L.

    2004-01-01

    Rapid population growth in Carson Valley has caused concern over the continued availability of water resources to sustain future growth. The U.S. Geological Survey, in cooperation with Douglas County, began a study to update estimates of water-budget components in Carson Valley for current climatic conditions. Data collected at 19 sites included 9 continuous records of tributary streamflows, 1 continuous record of outflow from the valley, and 408 measurements of 10 perennially flowing but ungaged drainages. These data were compiled and analyzed to provide updated computations and estimates of streamflows tributary to Carson Valley, 1990-2002. Mean monthly and annual flows were computed from continuous records for the period 1990-2002 for five streams, and for the period available, 1990-97, for four streams. Daily mean flow from ungaged drainages was estimated using multivariate regressions of individual discharge measurements against measured flow at selected continuous gages. From the estimated daily mean flows, monthly and annual mean flows were calculated from 1990 to 2002. These values were used to compute estimates of mean monthly and annual flows for the ungaged perennial drainages. Using the computed and estimated mean annual flows, annual unit-area runoff was computed for the perennial drainages; it ranged from 0.30 to 2.02 feet. For the period 1990-2002, estimated inflow of perennial streams tributary to Carson Valley totaled about 25,900 acre-feet per year. Inflow computed from gaged perennial drainages totaled 10,300 acre-feet per year, and estimated inflow from ungaged perennial drainages totaled 15,600 acre-feet per year. The annual flow of perennial streams ranges from 4,210 acre-feet at Clear Creek to 450 acre-feet at Stutler Canyon Creek. Differences in unit-area runoff and in the seasonal timing of flow likely are caused by differences in geologic setting, altitude, slope, or aspect of the individual drainages. The remaining drainages are ephemeral and supply inflow to the valley floor only during spring runoff in wet years or during large precipitation events. Annual unit-area runoff for the perennial drainages was used to estimate inflow from ephemeral drainages totaling 11,700 acre-feet per year. The totaled estimate of perennial and ephemeral tributary inflows to Carson Valley is 37,600 acre-feet per year. Gaged perennial inflow is 27 percent of the total, ungaged perennial inflow is 42 percent, and ephemeral inflow is 31 percent. The estimate is 50 to 60 percent greater than three previous estimates, one of which was made for a larger area, and is similar to two other estimates made for larger areas. The combined uncertainty of the estimates totaled about 33 percent of the total inflow, or about 12,000 acre-feet per year.
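
    For orientation, the unit-area runoff quoted above is simply annual flow volume divided by drainage area (acre-feet per acre gives feet of runoff). The short check below uses the reported annual flows with hypothetical drainage areas, since the areas are not given in this abstract.

        # Unit-area runoff check: acre-feet per year / acres = feet of runoff per year.
        flows_af = {"Clear Creek": 4210, "Stutler Canyon Creek": 450}        # from the report
        areas_acres = {"Clear Creek": 10000, "Stutler Canyon Creek": 1500}   # hypothetical areas

        for name in flows_af:
            runoff_ft = flows_af[name] / areas_acres[name]
            print(f"{name}: {runoff_ft:.2f} ft of runoff per year")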

  2. Short-term outcome of 1,465 computer-navigated primary total knee replacements 2005–2008

    PubMed Central

    2011-01-01

    Background and purpose Improvement of positioning and alignment by the use of computer-assisted surgery (CAS) might improve longevity and function in total knee replacements, but there is little evidence. In this study, we evaluated the short-term results of computer-navigated knee replacements based on data from the Norwegian Arthroplasty Register. Patients and methods Primary total knee replacements without patella resurfacing, reported to the Norwegian Arthroplasty Register during the years 2005–2008, were evaluated. The 5 most common implants and the 3 most common navigation systems were selected. Cemented, uncemented, and hybrid knees were included. With the risk of revision for any cause as the primary endpoint and intraoperative complications and operating time as secondary outcomes, 1,465 computer-navigated knee replacements (CAS) and 8,214 conventionally operated knee replacements (CON) were compared. Kaplan-Meier survival analysis and Cox regression analysis with adjustment for age, sex, prosthesis brand, fixation method, previous knee surgery, preoperative diagnosis, and ASA category were used. Results Kaplan-Meier estimated survival at 2 years was 98% (95% CI: 97.5–98.3) in the CON group and 96% (95% CI: 95.0–97.8) in the CAS group. The adjusted Cox regression analysis showed a higher risk of revision in the CAS group (RR = 1.7, 95% CI: 1.1–2.5; p = 0.02). The LCS Complete knee had a higher risk of revision with CAS than with CON (RR = 2.1, 95% CI: 1.3–3.4; p = 0.004). The differences were not statistically significant for the other prosthesis brands. Mean operating time was 15 min longer in the CAS group. Interpretation With the introduction of computer-navigated knee replacement surgery in Norway, the short-term risk of revision has increased for computer-navigated replacement with the LCS Complete. The mechanisms of failure of these implantations should be explored in greater depth, and in this study we have not been able to draw conclusions regarding causation. PMID:21504309
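
    For readers who want to reproduce this kind of registry analysis in outline, the sketch below fits a Kaplan-Meier curve and a covariate-adjusted Cox model on synthetic data using the lifelines package; it is illustrative only and is not the register's actual dataset or model specification.

        # Illustrative survival analysis on synthetic data (not the registry data).
        import numpy as np
        import pandas as pd
        from lifelines import KaplanMeierFitter, CoxPHFitter

        rng = np.random.default_rng(1)
        n = 500
        cas = rng.integers(0, 2, n)                       # 1 = computer-navigated
        age = rng.normal(70, 8, n)
        hazard = 0.05 * np.exp(0.5 * cas + 0.01 * (age - 70))
        time_to_revision = rng.exponential(1.0 / hazard)
        follow_up = np.minimum(time_to_revision, 2.0)     # administrative censoring at 2 years
        revised = (time_to_revision <= 2.0).astype(int)

        df = pd.DataFrame({"years": follow_up, "revised": revised, "cas": cas, "age": age})

        kmf = KaplanMeierFitter()
        kmf.fit(df["years"], event_observed=df["revised"])   # unadjusted survival curve

        cph = CoxPHFitter()
        cph.fit(df, duration_col="years", event_col="revised")
        cph.print_summary()                                   # adjusted hazard ratios for cas and age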

  3. Pair plasma relaxation time scales.

    PubMed

    Aksenov, A G; Ruffini, R; Vereshchagin, G V

    2010-04-01

    By numerically solving the relativistic Boltzmann equations, we compute the time scale for relaxation to thermal equilibrium for an optically thick electron-positron plasma with baryon loading. We focus on the time scales of electromagnetic interactions. The collisional integrals are obtained directly from the corresponding QED matrix elements. Thermalization time scales are computed for a wide range of values of both the total-energy density (over 10 orders of magnitude) and of the baryonic loading parameter (over 6 orders of magnitude). This also allows us to study such interesting limiting cases as the almost purely electron-positron plasma or electron-proton plasma as well as intermediate cases. These results appear to be important both for laboratory experiments aimed at generating optically thick pair plasmas as well as for astrophysical models in which electron-positron pair plasmas play a relevant role.

  4. System Level Applications of Adaptive Computing (SLAAC)

    DTIC Science & Technology

    2003-11-01

    saved clock cycles, as the computation cycle time was directly proportional to the number of bitplanes in the image. The simulation was undertaken in... [Figure 3: PPI algorithm architecture] ...parallel processing of data. The total throughput in these extended architectures is directly proportional to the amount of resources (CLB slices

  5. Patient-specific non-linear finite element modelling for predicting soft organ deformation in real-time: application to non-rigid neuroimage registration.

    PubMed

    Wittek, Adam; Joldes, Grand; Couton, Mathieu; Warfield, Simon K; Miller, Karol

    2010-12-01

    Long computation times of non-linear (i.e. accounting for geometric and material non-linearity) biomechanical models have been regarded as one of the key factors preventing application of such models in predicting organ deformation for image-guided surgery. This contribution presents real-time patient-specific computation of the deformation field within the brain for six cases of brain shift induced by craniotomy (i.e. surgical opening of the skull) using specialised non-linear finite element procedures implemented on a graphics processing unit (GPU). In contrast to commercial finite element codes that rely on an updated Lagrangian formulation and implicit integration in the time domain for steady-state solutions, our procedures utilise the total Lagrangian formulation with explicit time stepping and dynamic relaxation. We used patient-specific finite element meshes consisting of hexahedral and non-locking tetrahedral elements, together with realistic material properties for the brain tissue and appropriate contact conditions at the boundaries. The loading was defined by prescribing deformations on the brain surface under the craniotomy. Application of the computed deformation fields to register (i.e. align) the preoperative and intraoperative images indicated that the models very accurately predict the intraoperative deformations within the brain. For each case, computing the brain deformation field took less than 4 s using an NVIDIA Tesla C870 GPU, which is a two-orders-of-magnitude reduction in computation time in comparison to our previous study in which the brain deformation was predicted using a commercial finite element solver executed on a personal computer. Copyright © 2010 Elsevier Ltd. All rights reserved.
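
    The phrase "explicit time stepping and dynamic relaxation" can be illustrated with a toy one-dimensional analogue: a damped mass-spring chain stepped explicitly until it settles at its static solution. The sketch below is only a conceptual stand-in for the authors' GPU-based total Lagrangian finite element procedures; all parameters are made up.

        # Toy dynamic relaxation: explicit time stepping with damping drives a
        # mass-spring chain to the static solution for a prescribed end displacement.
        import numpy as np

        n, k, m, c, dt = 20, 100.0, 1.0, 5.0, 1e-3   # nodes, stiffness, mass, damping, time step
        u = np.zeros(n)                               # nodal displacements; node 0 fixed
        v = np.zeros(n)
        u[-1] = 0.1                                   # prescribed displacement at the far end

        for _ in range(50000):
            f = np.zeros(n)
            elong = u[1:] - u[:-1]                    # spring elongations
            f[:-1] += k * elong                       # spring pulls each node toward its neighbour
            f[1:]  -= k * elong
            a = (f - c * v) / m                       # damped acceleration (dynamic relaxation)
            v += dt * a
            u += dt * v
            u[0], u[-1] = 0.0, 0.1                    # re-impose boundary conditions
            v[0] = v[-1] = 0.0

        # At convergence u approximates a linear ramp between the two fixed ends.
        print(u)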

  6. Clinical impact and value of workstation single sign-on.

    PubMed

    Gellert, George A; Crouch, John F; Gibson, Lynn A; Conklin, George S; Webster, S Luke; Gillean, John A

    2017-05-01

    CHRISTUS Health began implementation of computer workstation single sign-on (SSO) in 2015. SSO technology utilizes a badge reader placed at each workstation where clinicians swipe or "tap" their identification badges. The aims were to assess the impact of SSO implementation in reducing clinician time spent logging in to various clinical software programs, and in financial savings from migrating to a thin client that enabled replacement of traditional hard-drive computer workstations. Following implementation of SSO, a total of 65,202 logins by 2,256 active clinical end users across 6 facilities were sampled systematically during a 7-day period and compared with pre-implementation to determine time saved. Dollar values were assigned to the time saved by 3 groups of clinical end users: physicians, nurses and ancillary service providers. The reduction of total clinician login time over the 7-day period showed a net gain of 168.3 hours per week of clinician time, or 28.1 hours (2.3 shifts) per facility per week. Annualized, 1,461.2 hours of mixed physician and nursing time are liberated per facility per annum (121.8 shifts of 12 hours per year). The annual dollar cost savings from this reduction of time spent logging in are $92,146 per hospital per annum and $1,658,745 per annum in the first-phase implementation of 18 hospitals. Computer hardware equipment savings due to desktop virtualization increase annual savings to $2,333,745. Qualitative value contributions to clinician satisfaction, reduction in staff turnover, facilitation of adoption of EHR applications, and other benefits of SSO are discussed. SSO had a positive impact on clinician efficiency and productivity in the 6 hospitals evaluated, and is an effective and cost-effective method to liberate clinician time from repetitive and time-consuming logins to clinical software applications. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
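
    The reported figures can be approximately reproduced with simple arithmetic; the sketch below follows the abstract's numbers, with small differences attributable to rounding in the source.

        # Back-of-the-envelope reproduction of the reported savings (approximate).
        hours_per_week_total = 168.3
        facilities = 6
        per_facility_week = hours_per_week_total / facilities   # 28.05 h, reported as 28.1 h
        per_facility_year = 28.1 * 52                            # 1,461.2 h per facility per annum
        shifts_per_year = per_facility_year / 12                 # ~121.8 twelve-hour shifts
        per_hospital_usd = 92_146
        first_phase_usd = per_hospital_usd * 18                  # 1,658,628 vs the reported 1,658,745
        print(per_facility_week, per_facility_year, shifts_per_year, first_phase_usd)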

  7. Accessible high performance computing solutions for near real-time image processing for time critical applications

    NASA Astrophysics Data System (ADS)

    Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek

    2009-09-01

    High Performance Computing (HPC) hardware solutions such as grid computing and general-purpose computing on graphics processing units (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming commonplace, and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: (1) critical information can be provided faster, and (2) more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index, which is based on analysis of image texture measures extracted using anisotropic, rotation-invariant GLCM statistics. This index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs, and (2) a CUDA-enabled GPU workstation. The reference platform is a dual-CPU quad-core workstation, and the total computing time of the PANTEX workflow is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring the various hardware solutions and the related software coding effort are presented.
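
    A rough software analogue of a moving-window GLCM texture measure (in the spirit of, but not identical to, the PANTEX built-up index) is sketched below using scikit-image; the window size, grey-level count and threshold are placeholder choices.

        # Moving-window GLCM contrast as a crude built-up-area indicator.
        # Uses scikit-image >= 0.19; older releases spell these 'greycomatrix'/'greycoprops'.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def windowed_glcm_contrast(img, win=9, levels=32):
            """Minimum GLCM contrast over several offsets, computed per window."""
            img = np.uint8(np.floor(img / img.max() * (levels - 1)))
            half = win // 2
            out = np.zeros(img.shape, dtype=float)
            angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
            for i in range(half, img.shape[0] - half):
                for j in range(half, img.shape[1] - half):
                    patch = img[i - half:i + half + 1, j - half:j + half + 1]
                    glcm = graycomatrix(patch, distances=[1, 2], angles=angles,
                                        levels=levels, symmetric=True, normed=True)
                    out[i, j] = graycoprops(glcm, "contrast").min()  # min over offsets, PANTEX-style
            return out

        mask = windowed_glcm_contrast(np.random.rand(64, 64) * 255) > 1.0   # toy threshold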

  8. Current And Future Directions Of Lens Design Software

    NASA Astrophysics Data System (ADS)

    Gustafson, Darryl E.

    1983-10-01

    The most effective environment for doing lens design continues to evolve as new computer hardware and software tools become available. Important recent hardware developments include low-cost but powerful interactive multi-user 32-bit computers with virtual memory that are totally software-compatible with prior, larger and more expensive members of the family, and a rapidly growing variety of graphics devices for both hard-copy and screen graphics, including many with color capability. In addition, with optical design software readily accessible in many forms, optical design has become a part-time activity for a large number of engineers instead of being restricted to a small number of full-time specialists. A designer interface that is friendly for the part-time user while remaining efficient for the full-time designer is thus becoming more important as well as more practical. Along with these developments, software tools in other scientific and engineering disciplines are proliferating. Thus, the optical designer is less and less unique in his use of computer-aided techniques and faces the challenge and opportunity of efficiently communicating his designs to other computer-aided design (CAD), computer-aided manufacturing (CAM), structural, thermal, and mechanical software tools. This paper will address the impact of these developments on the current and future directions of the CODE V™ optical design software package, its implementation, and the resulting lens design environment.

  9. Computational Study of Scenarios Regarding Explosion Risk Mitigation

    NASA Astrophysics Data System (ADS)

    Vlasin, Nicolae-Ioan; Mihai Pasculescu, Vlad; Florea, Gheorghe-Daniel; Cornel Suvar, Marius

    2016-10-01

    Exploration to discover new deposits of natural gas, upgraded techniques to exploit these resources, and new ways to convert the heat content of these gases into industrially usable energy are research areas of great interest around the globe. However, all activities involving the handling of natural gas (exploitation, transport, combustion) are subject to the same type of risk: the risk of explosion. Experiments on physical scenarios to determine ways to reduce this risk can be extremely costly, requiring suitable premises, equipment and apparatus, manpower and time, and, not least, presenting a risk of personnel injury. Taking the above into account, the present paper deals with the possibility of studying gas-explosion scenarios in a virtual domain, exemplified by a computer simulation of a stoichiometric air-methane explosion (methane is the main component of natural gas). The advantages of computer-assisted simulation include the possibility of using complex virtual geometries of any form as the domain in which the phenomenon unfolds, the use of the same geometry for an unlimited number of initial-parameter settings, total elimination of the risk of personnel injury, and reduced execution time. Although computer simulations consume hardware resources and require specialized personnel to apply CFD (Computational Fluid Dynamics) techniques, the costs and risks associated with these methods are greatly diminished while, at the same time, offering a major benefit in terms of execution time.
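
    For reference, the stoichiometric air-methane mixture mentioned above follows from the global reaction CH4 + 2 (O2 + 3.76 N2) -> CO2 + 2 H2O + 7.52 N2, which puts methane at roughly 9.5 percent by volume; a quick check:

        # Stoichiometric methane fraction in air from the global combustion reaction.
        o2_per_fuel = 2.0
        n2_per_o2 = 3.76                       # approximate molar N2/O2 ratio of air
        air_per_fuel = o2_per_fuel * (1 + n2_per_o2)
        ch4_vol_fraction = 1.0 / (1.0 + air_per_fuel)
        print(f"stoichiometric CH4 in air: {100 * ch4_vol_fraction:.1f} vol%")   # ~9.5 %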

  10. Association of sedentary behavior time with ideal cardiovascular health: the ORISCAV-LUX study.

    PubMed

    Crichton, Georgina E; Alkerwi, Ala'a

    2014-01-01

    Recently attention has been drawn to the health impacts of time spent engaging in sedentary behaviors. No studies have examined sedentary behaviors in relation to the newly defined construct of ideal cardiovascular health, which incorporates three health factors (blood pressure, total cholesterol, fasting plasma glucose) and four behaviors (physical activity, smoking, body mass index, diet). The purpose of this study was to examine associations between sedentary behaviors, including sitting time, and time spent viewing television and in front of a computer, with cardiovascular health, in a representative sample of adults from Luxembourg. A cross-sectional analysis of 1262 participants in the Observation of Cardiovascular Risk Factors in Luxembourg study was conducted, who underwent objective cardiovascular health assessments and completed the International Physical Activity Questionnaire. A Cardiovascular Health Score was calculated based on the number of health factors and behaviors at ideal levels. Sitting time on a weekday, television time, and computer time (both on a workday and a day off), were related to the Cardiovascular Health Score. Higher weekday sitting time was significantly associated with a poorer Cardiovascular Health Score (p = 0.002 for linear trend), after full adjustment for age, gender, education, income and occupation. Television time was inversely associated with the Cardiovascular Health Score, on both a workday and a day off (p = 0.002 for both). A similar inverse relationship was observed between the Cardiovascular Health Score and computer time, only on a day off (p = 0.04). Higher time spent sitting, viewing television, and using a computer during a day off may be unfavorably associated with ideal cardiovascular health.
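
    A toy version of a "number of metrics at ideal levels" score of the kind described above is sketched below; the cutoffs are indicative placeholders rather than the study's exact definitions.

        # Toy Cardiovascular Health Score: count of metrics at ideal levels (0-7).
        IDEAL = {
            "blood_pressure": lambda v: v < 120,        # systolic, mmHg (untreated)
            "total_cholesterol": lambda v: v < 200,     # mg/dL (untreated)
            "fasting_glucose": lambda v: v < 100,       # mg/dL (untreated)
            "physical_activity": lambda v: v >= 150,    # min/week of moderate activity
            "smoking": lambda v: v == "never",
            "bmi": lambda v: v < 25,
            "diet_components": lambda v: v >= 4,        # healthy diet components out of 5
        }

        def cardiovascular_health_score(person: dict) -> int:
            """Count how many of the seven metrics are at the ideal level."""
            return sum(1 for name, is_ideal in IDEAL.items() if is_ideal(person[name]))

        example = {"blood_pressure": 118, "total_cholesterol": 190, "fasting_glucose": 95,
                   "physical_activity": 180, "smoking": "never", "bmi": 23.5, "diet_components": 4}
        print(cardiovascular_health_score(example))   # 7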

  11. Symplectic multi-particle tracking on GPUs

    NASA Astrophysics Data System (ADS)

    Liu, Zhicong; Qiang, Ji

    2018-05-01

    A symplectic multi-particle tracking model is implemented on Graphic Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) language. The symplectic tracking model can preserve phase space structure and reduce non-physical effects in long-term simulation, which is important for beam property evaluation in particle accelerators. Though this model is computationally expensive, it is very suitable for parallelization and can be accelerated significantly by using GPUs. In this paper, we optimized the implementation of the symplectic tracking model on both a single GPU and multiple GPUs. Using a single GPU processor, the code achieves a factor of 2-10 speedup for a range of problem sizes compared with the time on a single state-of-the-art Central Processing Unit (CPU) node with similar power consumption and semiconductor technology. It also shows good scalability on a multi-GPU cluster at the Oak Ridge Leadership Computing Facility. In an application to beam dynamics simulation, the GPU implementation saves more than a factor of two in total computing time compared with the CPU implementation.
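
    A minimal sketch of a second-order symplectic (kick-drift-kick) tracking step for many particles is shown below; it uses a simple linear-focusing kick as a stand-in for real accelerator elements and is not the paper's GPU code.

        # Kick-drift-kick (leapfrog) map: symplectic, so it preserves phase-space structure.
        import numpy as np

        def kick_drift_kick(x, px, L, k, steps):
            """Second-order symplectic map: half kick, full drift, half kick per step."""
            dt = L / steps
            for _ in range(steps):
                px -= 0.5 * dt * k * x          # half kick from an assumed linear focusing force
                x  += dt * px                   # drift
                px -= 0.5 * dt * k * x          # second half kick
            return x, px

        rng = np.random.default_rng(0)
        x, px = rng.normal(0, 1e-3, 100_000), rng.normal(0, 1e-4, 100_000)
        x, px = kick_drift_kick(x, px, L=100.0, k=0.5, steps=1000)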

  12. Quantifying regional cerebral blood flow by N-isopropyl-P-[I-123]iodoamphetamine (IMP) using a ring type single-photon emission computed tomography system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takahashi, N.; Odano, I.; Ohkubo, M.

    1994-05-01

    We developed a more accurate quantitative measurement of regional cerebral blood flow (rCBF) with the microsphere model using N-isopropyl-p-[I-123]iodoamphetamine (IMP) and a ring-type single-photon emission computed tomography (SPECT) system. SPECT studies were performed in 17 patients with brain diseases. A dose of 222 MBq (6 mCi) of [I-123]IMP was injected i.v.; at the same time, a 5-min period of arterial blood withdrawal was begun. SPECT data were acquired from 25 min to 60 min after tracer injection. To obtain the brain activity concentration at 5 min after IMP injection, total-brain count collections and one-minute short-time SPECT studies were performed at 5, 20, and 60 min. Values of rCBF were calculated using short-time SPECT images at 5 min (rCBF), static SPECT images corrected with total cerebral counts (rCBF_Ct), and static images corrected with reconstructed counts on the short-time SPECT images (rCBF_Cb). There was a good relationship (r=0.69) between rCBF and rCBF_Ct; however, rCBF_Ct tends to be underestimated in high-flow areas and overestimated in low-flow areas. There was a better relationship between rCBF and rCBF_Cb (r=0.92). The overestimation and underestimation seen in rCBF_Ct were considered to be due to the correction of reconstructed counts using a total cerebral time-activity curve, because the kinetic behavior of [I-123]IMP differed in each region. We concluded that more accurate rCBF values could be obtained using the regional time-activity curves.

  13. Watershed Effects on Streamflow Quantity and Quality in Six Watersheds of Gwinnett County, Georgia

    USGS Publications Warehouse

    Landers, Mark N.; Ankcorn, Paul D.; McFadden, Keith W.

    2007-01-01

    Watershed management is critical for the protection and enhancement of streams that provide multiple benefits for Gwinnett County, Georgia, and downstream communities. Successful watershed management requires an understanding of how stream quality is affected by watershed characteristics. The influence of watershed characteristics on stream quality is complex, particularly for the nonpoint sources of pollutants that affect urban watersheds. The U.S. Geological Survey (USGS), in cooperation with Gwinnett County Department of Water Resources (formerly known as Public Utilities), established a water-quality monitoring program during late 1996 to collect comprehensive, consistent, high-quality data for use by watershed managers. Between 1996 and 2003, more than 10,000 analyses were made for more than 430 water-quality samples. Continuous-flow and water-quality data have been collected since 1998. Loads have been computed for selected constituents from 1998 to 2003. Changing stream hydrology is a primary driver for many other water-quality and aquatic habitat effects. Primary factors affecting stream hydrology (after watershed size and climate) within Gwinnett County are watershed slope and land uses. For the six study watersheds in Gwinnett County, watershedwide imperviousness up to 12 percent does not have a well-defined influence on stream hydrology, whereas two watersheds with 21- and 35-percent impervious area are clearly impacted. In the stream corridor, however, imperviousness from 1.6 to 4.4 percent appears to affect baseflow and stormflow for all six watersheds. Relations of concentrations to discharge are used to develop regression models to compute constituent loads using the USGS LOAD ESTimator model. A unique method developed in this study is used to calibrate the model using separate baseflow and stormflow sample datasets. The method reduced model error and provided estimates of the load associated with the baseflow and stormflow parts of the hydrograph. Annual load of total suspended sediment is a performance criterion in Gwinnett County's Watershed Protection Plan. Median concentrations of total suspended solids in stormflow range from 30 to 180 times greater than in baseflow. This increase in total suspended solids concentration with increasing discharge has a multiplied effect on total suspended solids load, 97 to 99 percent of which is transported during stormflow. Annual total suspended solids load is highly dependent on annual precipitation; between 1998 and 2003 load for the wettest year was up to 28 times greater than for the driest year. Average annual total suspended solids yield from 1998-2003 in the six watersheds increased with high-density and transportation/utility land uses, and generally decreased with low-density residential, estate/park, and undeveloped land uses. Watershed characteristics also were related to annual loads of total phosphorus, dissolved phosphorus, total nitrogen, total dissolved solids, biochemical oxygen demand, and total zinc, as well as stream alkalinity. Flow-adjusted total suspended solids, total phosphorus, and total zinc stormflow concentrations between 1996 and 2003 have a seasonal pattern in five of the six watersheds. Flow-adjusted concentrations typically peak during late summer, between July and August. The seasonal pattern is stronger for more developed watersheds and may be related to seasonal land-disturbance activities and/or to seasonal rainfall intensity, both of which increase in summer. 
    Adjusting for seasonality in the computation of constituent load caused the standard error of annual total suspended solids load to improve by an average of 11 percent, increased computed summer total suspended solids loads by an average of 45 percent, and decreased winter total suspended solids loads by an average of 40 percent. Total annual loads changed by less than 5 percent on average. Graphical and statistical analyses do not indicate a time trend.
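
    The rating-curve-with-seasonality idea behind the LOAD ESTimator regressions can be sketched in a few lines; the model below is a reduced form with only a discharge term and annual sine/cosine terms, fitted to synthetic data, and is not the calibrated Gwinnett County model.

        # Reduced LOADEST-style regression: ln(load) ~ ln(Q) plus annual seasonality.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 300
        t = rng.uniform(0, 6, n)                          # decimal time, years
        lnQ = rng.normal(3.0, 1.0, n)                     # ln(discharge)
        lnL_true = 1.0 + 1.3 * lnQ + 0.4 * np.sin(2 * np.pi * t) - 0.2 * np.cos(2 * np.pi * t)
        lnL = lnL_true + rng.normal(0, 0.3, n)            # ln(load) with noise

        # Design matrix: intercept, ln Q, and annual sine/cosine terms.
        X = np.column_stack([np.ones(n), lnQ, np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
        coef, *_ = np.linalg.lstsq(X, lnL, rcond=None)
        print(coef)                                       # recovers roughly [1.0, 1.3, 0.4, -0.2]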

  14. Parallel Implementation of the Terrain Masking Algorithm

    DTIC Science & Technology

    1994-03-01

    contains behavior rules which can define a computation or an algorithm. It can communicate with other process nodes, it can contain local data, and it can...terrain masking calculation is being performed. It is this algorithm that consumes about seventy percent of the total terrain masking calculation time

  15. Software Surface Modeling and Grid Generation Steering Committee

    NASA Technical Reports Server (NTRS)

    Smith, Robert E. (Editor)

    1992-01-01

    It is a NASA objective to promote improvements in the capability and efficiency of computational fluid dynamics. Grid generation, the creation of a discrete representation of the solution domain, is an essential part of computational fluid dynamics. However, grid generation about complex boundaries requires sophisticated surface-model descriptions of the boundaries. The surface modeling and the associated computation of surface grids consume an extremely large percentage of the total time required for volume grid generation. Efficient and user friendly software systems for surface modeling and grid generation are critical for computational fluid dynamics to reach its potential. The papers presented here represent the state-of-the-art in software systems for surface modeling and grid generation. Several papers describe improved techniques for grid generation.

  16. Point Judith, Rhode Island, Breakwater Risk Assessment

    DTIC Science & Technology

    2015-08-01

    output stations. Beach zones considered included the sandy beach to the west side of the HoR, which had significant dune features and was fronting... time dependency for crest height and wave parameters is assumed, hc = total damaged crest height of structure from the toe, Lp is the local wave length... computed using linear wave theory and Tp, h is the toe depth, hc' = total undamaged crest height of structure from the toe, At = area of structure enclosed

  17. Summer 2015 measurements of total OH reactivity at a UK coastal site

    NASA Astrophysics Data System (ADS)

    Woodward-Massey, R.; Cryer, D. R.; Whalley, L. K.; Ingham, T.; Crilley, L.; Kramer, L. J.; Reeves, C.; Forster, G.; Oram, D.; Bandy, B.; Reed, C.; Lee, J. D.; Bloss, W.; Heard, D. E.

    2015-12-01

    The hydroxyl radical (OH) plays a central role in the daytime oxidative removal of pollutants and greenhouse gases in the atmosphere. It is essential that all production and loss pathways of OH are understood and included in computer models in order to accurately predict OH concentrations for a range of environments, and in turn the rate of production of secondary products, for example ozone and organic aerosol. Direct measurement of total OH reactivity, the pseudo-first-order rate coefficient for OH loss by reaction with its sinks, is a very useful tool to test how complete our knowledge of OH loss pathways is. Comparison with values of total OH reactivity calculated by computer models using concentrations of simultaneously measured OH 'sinks' and unmeasured intermediates enables environments to be identified where there are unidentified 'missing' OH sinks. Total OH reactivity was measured using laser flash photolysis combined with time-resolved laser-induced fluorescence during the ICOZA (Integrated Chemistry of OZone in the Atmosphere) campaign in July 2015 at the Weybourne Atmospheric Observatory (WAO), Norfolk, UK. Air masses sampled ranged from polluted air from the UK or Europe containing processed urban emissions to very clean air of marine origin. Data for measured and calculated OH reactivity will be presented in addition to a discussion of the magnitude of the 'missing' OH sink determined for each type of air mass.
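
    The calculated OH reactivity referred to above is the sum of rate coefficient times concentration over the measured sinks, k_OH = sum_i k_i [X_i]; a minimal sketch with illustrative species, rate coefficients and mixing ratios follows.

        # Calculated total OH reactivity from a list of sinks (illustrative values only).
        AIR_NUMBER_DENSITY = 2.46e19   # molecules cm^-3 at ~298 K and 1 atm (approximate)

        # {species: (rate coefficient with OH in cm^3 molecule^-1 s^-1, mixing ratio in ppbv)}
        sinks = {
            "CO":       (2.3e-13, 120.0),
            "CH4":      (6.4e-15, 1900.0),
            "NO2":      (1.1e-11, 2.0),
            "isoprene": (1.0e-10, 0.5),
        }

        def total_oh_reactivity(sinks):
            """Return k_OH in s^-1 as the sum of k_i times the sink number density."""
            return sum(k * (ppb * 1e-9 * AIR_NUMBER_DENSITY) for k, ppb in sinks.values())

        print(f"calculated OH reactivity: {total_oh_reactivity(sinks):.2f} s^-1")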

  18. Transforming parts of a differential equations system to difference equations as a method for run-time savings in NONMEM.

    PubMed

    Petersson, K J F; Friberg, L E; Karlsson, M O

    2010-10-01

    Computer models of biological systems grow more complex as computing power increases. Often these models are defined as differential equations for which no analytical solutions exist. Numerical integration is used to approximate the solution; this can be computationally intensive and time consuming, and it can account for a large proportion of the total computer runtime. The performance of different integration methods depends on the mathematical properties of the differential equations system at hand. In this paper we investigate the possibility of runtime gains by calculating parts of, or the whole, differential equations system at given time intervals, outside of the differential equations solver. This approach was tested on nine models defined as differential equations, with the goal of reducing runtime while maintaining model fit, based on the objective function value. The software used was NONMEM. In four models the computational runtime was successfully reduced (by 59-96%). The differences in parameter estimates, compared to using only the differential equations solver, were less than 12% for all fixed-effects parameters. For the variance parameters, estimates were within 10% for the majority of the parameters. Population and individual predictions were similar, and the differences in OFV were between 1 and -14 units. When computational runtime seriously affects the usefulness of a model, we suggest evaluating this approach for repetitive elements of model building and evaluation such as covariate inclusions or bootstraps.
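
    The idea of advancing a slowly varying part of the system as a difference equation outside the solver can be illustrated conceptually; the sketch below is plain Python with SciPy, not NONMEM control-stream code, and all parameter values are made up.

        # Fast state (drug concentration C) integrated by the ODE solver; slow state
        # (enzyme pool E) advanced by an explicit difference equation between intervals.
        import numpy as np
        from scipy.integrate import solve_ivp

        k_el, k_in, k_out, dt_outer = 0.3, 1.0, 0.1, 1.0   # hypothetical rate constants, hours
        C, E, t = 10.0, k_in / k_out, 0.0                  # initial concentration; E at steady state

        history = [(t, C, E)]
        for _ in range(24):                                # 24 one-hour outer steps
            # fast state integrated with E held constant over the interval
            sol = solve_ivp(lambda t, y: [-k_el * E * y[0]], (t, t + dt_outer), [C])
            C = sol.y[0, -1]
            # slow state advanced outside the solver as a difference equation
            E = E + dt_outer * (k_in * (1 + 0.1 * C) - k_out * E)
            t += dt_outer
            history.append((t, C, E))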

  19. The scheduling of tracking times for interplanetary spacecraft on the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Webb, W. A.

    1978-01-01

    The Deep Space Network (DSN) is a network of tracking stations, located throughout the globe, used to track spacecraft for NASA's interplanetary missions. This paper describes a computer program, DSNTRAK, which provides an optimum daily tracking schedule for the DSN given the view periods at each station for a mission set of n spacecraft, where n is between 2 and 6. The objective function is specified in terms of relative total daily tracking time requirements between the n spacecraft. Linear programming is used to maximize the total daily tracking time and determine an optimal daily tracking schedule consistent with DSN station capabilities. DSNTRAK is used as part of a procedure to provide DSN load forecasting information for proposed future NASA mission sets.
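
    A toy version of such a linear-programming formulation, maximizing total daily tracking time subject to station capacities, view-period limits and fixed relative time requirements between spacecraft, is sketched below with SciPy; the numbers are invented and the formulation is only loosely modeled on the description above.

        # Toy DSN-style schedule: x[i, j] = hours station j tracks spacecraft i.
        import numpy as np
        from scipy.optimize import linprog

        n_sc, n_st = 3, 3                              # spacecraft, stations
        view = np.array([[6, 8, 0],                    # view[i, j]: visibility hours per day
                         [4, 6, 6],
                         [0, 5, 9]], float)
        station_cap = np.array([10.0, 12.0, 10.0])     # tracking hours available per station
        ratios = np.array([1.0, 2.0, 1.0])             # required relative total times

        c = -np.ones(n_sc * n_st)                      # maximize total tracked hours
        A_ub = np.zeros((n_st, n_sc * n_st))           # station capacity constraints
        for j in range(n_st):
            A_ub[j, j::n_st] = 1.0
        A_eq = np.zeros((n_sc - 1, n_sc * n_st))       # totals proportional to 'ratios'
        for i in range(1, n_sc):
            A_eq[i - 1, 0:n_st] = ratios[i]
            A_eq[i - 1, i * n_st:(i + 1) * n_st] = -ratios[0]
        bounds = [(0, view[i, j]) for i in range(n_sc) for j in range(n_st)]

        res = linprog(c, A_ub=A_ub, b_ub=station_cap, A_eq=A_eq, b_eq=np.zeros(n_sc - 1),
                      bounds=bounds, method="highs")
        print(res.x.reshape(n_sc, n_st), -res.fun)     # schedule and total tracked hours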

  20. Compressed sensing for ultrasound computed tomography.

    PubMed

    van Sloun, Ruud; Pandharipande, Ashish; Mischi, Massimo; Demi, Libertario

    2015-06-01

    Ultrasound computed tomography (UCT) allows the reconstruction of quantitative tissue characteristics, such as speed of sound, mass density, and attenuation. Lowering its acquisition time would be beneficial; however, this is fundamentally limited by the physical time of flight and the number of transmission events. In this letter, we propose a compressed sensing solution for UCT. The adopted measurement scheme is based on compressed acquisitions, with concurrent randomised transmissions in a circular array configuration. Reconstruction of the image is then obtained by combining the Born iterative method and total variation minimization, thereby exploiting variation sparsity in the image domain. Evaluation using simulated UCT scattering measurements shows that the proposed transmission scheme performs better than uniform undersampling, and is able to reduce acquisition time by almost one order of magnitude, while maintaining high spatial resolution.

  1. Development of a real-time aeroperformance analysis technique for the X-29A advanced technology demonstrator

    NASA Technical Reports Server (NTRS)

    Ray, R. J.; Hicks, J. W.; Alexander, R. I.

    1988-01-01

    The X-29A advanced technology demonstrator has shown the practicality and advantages of the capability to compute and display, in real time, aeroperformance flight results. This capability includes the calculation of the in-flight measured drag polar, lift curve, and aircraft specific excess power. From these elements many other types of aeroperformance measurements can be computed and analyzed. The technique can be used to give an immediate postmaneuver assessment of data quality and maneuver technique, thus increasing the productivity of a flight program. A key element of this new method was the concurrent development of a real-time in-flight net thrust algorithm, based on the simplified gross thrust method. This net thrust algorithm allows for the direct calculation of total aircraft drag.

  2. Efficacy of computer-based endoscope cleaning and disinfection using a hospital management information system.

    PubMed

    Wang, Caixia; Chen, Yuanyuan; Yang, Feng; Ren, Jie; Yu, Xin; Wang, Jiani; Sun, Siyu

    2016-08-01

    The present study aimed to assess the efficacy of computer-based endoscope cleaning and disinfection using a hospital management information system (HMIS). A total of 2,674 gastroscopes were eligible for inclusion in this study. For the processes of disinfection management, the gastroscopes were randomly divided into 2 groups: gastroscope disinfection HMIS (GD-HMIS) group and manual group. In the GD-HMIS group, an integrated circuit card (IC card) chip was installed to monitor and record endoscope cleaning and disinfection automatically and in real time, whereas the endoscope cleaning and disinfection in the manual group was recorded manually. The overall disinfection progresses for both groups were recorded, and the total operational time was calculated. For the GD-HMIS group, endoscope disinfection HMIS software was successfully developed. The time to complete a single session of cleaning and disinfecting on a gastroscope was 15.6 minutes (range, 14.3-17.2 minutes) for the GD-HMIS group and 21.3 minutes (range, 20.2-23.9 minutes) for the manual group. Failure to record information, such as the identification number of the endoscope, occasionally occurred in the manual group, which affected the accuracy and reliability of manual recording. Computer-based gastroscope cleaning and disinfection using a hospital management information system could monitor the process of gastroscope cleaning and disinfection in real time and improve the accuracy and reliability, thereby ensuring the quality of gastroscope cleaning and disinfection. Copyright © 2016 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  3. Quantum Monte Carlo with very large multideterminant wavefunctions.

    PubMed

    Scemama, Anthony; Applencourt, Thomas; Giner, Emmanuel; Caffarel, Michel

    2016-07-01

    An algorithm to efficiently compute the first two derivatives of (very) large multideterminant wavefunctions for quantum Monte Carlo calculations is presented. The calculation of determinants and their derivatives is performed using the Sherman-Morrison formula for updating the inverse Slater matrix. An improved implementation based on the reduction of the number of column substitutions and on a very efficient implementation of the calculation of the scalar products involved is presented. It is emphasized that multideterminant expansions contain in general a large number of identical spin-specific determinants: for typical configuration interaction-type wavefunctions the number of unique spin-specific determinants Ndetσ (σ = ↑,↓) with a non-negligible weight in the expansion is of order O(Ndet). We show that a careful implementation of the calculation of the Ndet-dependent contributions can make this step negligible enough so that in practice the algorithm scales as the total number of unique spin-specific determinants, Ndet↑+Ndet↓, over a wide range of total number of determinants (here, Ndet up to about one million), thus greatly reducing the total computational cost. Finally, a new truncation scheme for the multideterminant expansion is proposed so that larger expansions can be considered without increasing the computational time. The algorithm is illustrated with all-electron fixed-node diffusion Monte Carlo calculations of the total energy of the chlorine atom. Calculations using a trial wavefunction including about 750,000 determinants with a computational cost increase of a factor of ∼400 compared to a single-determinant calculation are shown to be feasible. © 2016 Wiley Periodicals, Inc.
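
    The core of the approach is the rank-1 (Sherman-Morrison) update of an inverse matrix when a single column changes; the sketch below shows the principle on a random matrix and is not the authors' optimized multi-column implementation.

        # Sherman-Morrison column replacement: update an inverse without refactoring.
        import numpy as np

        def sherman_morrison_column_update(A_inv, new_col, j):
            """Inverse of A after replacing column j with new_col, given only A_inv."""
            w = A_inv @ new_col
            R = w[j]                          # determinant ratio det(A') / det(A)
            e_j = np.zeros_like(w)
            e_j[j] = 1.0
            return A_inv - np.outer(w - e_j, A_inv[j, :]) / R

        rng = np.random.default_rng(0)
        n, j = 6, 2
        A = rng.normal(size=(n, n))
        A_inv = np.linalg.inv(A)
        new_col = rng.normal(size=n)
        A_new = A.copy()
        A_new[:, j] = new_col
        fast = sherman_morrison_column_update(A_inv, new_col, j)
        print(np.allclose(fast, np.linalg.inv(A_new)))   # True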

  4. The automation of an inlet mass flow control system

    NASA Technical Reports Server (NTRS)

    Supplee, Frank; Tcheng, Ping; Weisenborn, Michael

    1989-01-01

    The automation of a closed-loop, computer-controlled system for the inlet mass flow system (IMFS) developed for a wind tunnel facility at Langley Research Center is presented. This new PC-based control system is intended to replace the manual control system presently in use in order to fully automate the plug positioning of the IMFS during wind tunnel testing. Provision is also made for communication between the PC and a host computer in order to allow total automation of the plug positioning and data acquisition during the complete sequence of predetermined plug locations. As extensive running time is programmed for the IMFS, this new automated system will save both manpower and tunnel running time.

  5. Rovibrational bound states of SO2 isotopologues. I: Total angular momentum J = 0-10

    NASA Astrophysics Data System (ADS)

    Kumar, Praveen; Ellis, Joseph; Poirier, Bill

    2015-04-01

    Isotopic variation of the rovibrational bound states of SO2 for the four stable sulfur isotopes 32-34,36S is investigated in comprehensive detail. In a two-part series, we compute the low-lying energy levels for all values of total angular momentum in the range J = 0-20. All rovibrational levels are computed to an extremely high level of numerical convergence. The calculations have been carried out using the ScalIT suite of parallel codes. The present study (Paper I) examines the J = 0-10 rovibrational levels, providing unambiguous symmetry and rovibrational label assignments for each computed state. The calculated vibrational energy levels exhibit very good agreement with previously reported experimental and theoretical data. Rovibrational energy levels, calculated without any Coriolis approximations, are reported here for the first time. Among other potential ramifications, these data will facilitate understanding of the origin of mass-independent fractionation of sulfur isotopes in the Archean rock record, which is of great relevance for understanding the "oxygen revolution".

  6. Feasibility and safety of augmented-reality glass for computed tomography-assisted percutaneous revascularization of coronary chronic total occlusion: A single center prospective pilot study.

    PubMed

    Opolski, Maksymilian P; Debski, Artur; Borucki, Bartosz A; Staruch, Adam D; Kepka, Cezary; Rokicki, Jakub K; Sieradzki, Bartosz; Witkowski, Adam

    2017-11-01

    Percutaneous coronary intervention (PCI) of chronic total occlusion (CTO) may be facilitated by projection of coronary computed tomography angiography (CTA) datasets in the catheterization laboratory. There is no data on the feasibility and safety outcomes of CTA-assisted CTO PCI using a wearable augmented-reality glass. A total of 15 patients scheduled for elective antegrade CTO intervention were prospectively enrolled and underwent preprocedural coronary CTA. Three-dimensional and curved multiplanar CT reconstructions were transmitted to a head-mounted hands-free computer worn by interventional cardiologists during CTO PCI to provide additional information on CTO tortuosity and calcification. The results of CTO PCI using a wearable computer were compared with a time-matched prospective angiographic registry of 59 patients undergoing antegrade CTO PCI without a wearable computer. Operators' satisfaction was assessed by a 5-point Likert scale. Mean age was 64 ± 8 years and the mean J-CTO score was 2.1 ± 0.9 in the CTA-assisted group. The voice-activated co-registration and review of CTA images in a wearable computer during CTO PCI were feasible and highly rated by PCI operators (4.7/5 points). There were no major adverse cardiovascular events. Compared with standard CTO PCI, CTA-assisted recanalization of CTO using a wearable computer showed more frequent selection of the first-choice stiff wire (0% vs 40%, p < 0.001) and lower contrast exposure (166 ± 52 vs 134 ± 43 ml, p = 0.03). Overall CTO success rates and safety outcomes remained similar between both groups. CTA-assisted CTO PCI using an augmented-reality glass is feasible and safe, and might reduce the resources required for the interventional treatment of CTO. Copyright © 2017 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.

  7. Intelligent Systems for Assessing Aging Changes: Home-Based, Unobtrusive, and Continuous Assessment of Aging

    PubMed Central

    Maxwell, Shoshana A.; Mattek, Nora; Hayes, Tamara L.; Dodge, Hiroko; Pavel, Misha; Jimison, Holly B.; Wild, Katherine; Boise, Linda; Zitzelberger, Tracy A.

    2011-01-01

    Objectives. To describe a longitudinal community cohort study, Intelligent Systems for Assessing Aging Changes, that has deployed an unobtrusive home-based assessment platform in many seniors' homes in the existing community. Methods. Several types of sensors have been installed in the homes of 265 elderly persons for an average of 33 months. Metrics assessed by the sensors include total daily activity, time out of home, and walking speed. Participants were given a computer as well as training, and computer usage was monitored. Participants are assessed annually with health and function questionnaires, physical examinations, and neuropsychological testing. Results. Mean age was 83.3 years, mean years of education was 15.5, and 73% of the cohort were women. During a 4-week snapshot, participants left their home twice a day on average for a total of 208 min per day. Mean in-home walking speed was 61.0 cm/s. Participants spent 43% of days on the computer, averaging 76 min per day. Discussion. These results demonstrate for the first time the feasibility of engaging seniors in a large-scale deployment of in-home activity assessment technology and the successful collection of these activity metrics. We plan to use this platform to determine if continuous unobtrusive monitoring may detect incident cognitive decline. PMID:21743050

  8. Initial experience with custom-fit total knee replacement: intra-operative events and long-leg coronal alignment.

    PubMed

    Spencer, Brian A; Mont, Michael A; McGrath, Mike S; Boyd, Bradley; Mitrick, Michael F

    2009-12-01

    New technology using magnetic resonance imaging (MRI) allows the surgeon to place total knee replacement components into each patient's pre-arthritic natural alignment. This study evaluated the initial intra-operative experience using this technique. Twenty-one patients had a sagittal MRI of their arthritic knee to determine component placement for a total knee replacement. Cutting guides were machined to control all intra-operative cuts. Intra-operative events were recorded and these knees were compared to a matching cohort of the senior surgeon's previous 30 conventional total knee replacements. Post-operative scanograms were obtained from each patient and coronal alignment was compared to previous studies using conventional and computer-assisted techniques. There were no intra-operative or acute post-operative complications. There were no differences in blood loss and there was a mean decrease in operative time of 14% compared to a cohort of patients with conventional knee replacements. The average deviation from the mechanical axis was 1.2 degrees of varus, which was comparable to previously reported conventional and computer-assisted techniques. Custom-fit total knee replacement appeared to be a safe procedure for uncomplicated cases of osteoarthritis.

  9. Prescribing and formulating neonatal intravenous feeding solutions by microcomputer.

    PubMed Central

    MacMahon, P

    1984-01-01

    This paper describes a computer programme for a low cost microcomputer designed to assist in the task of administering total parenteral nutrition to neonates: no knowledge of computers is necessary to operate the system. The programme displays recommended values for each of the total parenteral nutrition constituents that must be prescribed, based on detailed analysis of all the pertinent variables. The recommended values may be rejected but they do provide a useful prompt, especially for the more junior doctors. The programme includes a number of safeguards that protect against entering potentially dangerous values. As soon as the operator has completed the procedure of entering total parenteral nutrition requirements the calculations necessary to formulate a solution containing these are automatically performed. The print out contains this data plus instructions on the infusion rate and an analysis of the formulation's calorific content. This system makes it easier to vary the quantity of individual total parenteral nutrition constituents and time has been saved which was previously wasted performing laborious calculations. One of the most important contributions has been the virtual elimination of errors in the complex task of prescribing and formulating total parenteral nutrition for sick neonates. PMID:6430246

  10. Demonstration of optical computing logics based on binary decision diagram.

    PubMed

    Lin, Shiyun; Ishikawa, Yasuhiko; Wada, Kazumi

    2012-01-16

    Optical circuits are low-power, high-speed alternatives to current information processing based on transistor circuits. However, because no transistor function is available in optics, an architecture should be chosen that suits optics. One such architecture is the Binary Decision Diagram (BDD), in which a signal is processed by sending an optical signal from the root through a series of switching nodes to the leaf (terminal). The speed of optical computing is limited by either the transmission time of optical signals from the root to the leaf or the switching time of a node. We have designed and experimentally demonstrated 1-bit and 2-bit adders based on the BDD architecture. The switching nodes are silicon ring resonators with a modulation depth of 10 dB, whose states are changed by the plasma dispersion effect. The quality factor Q of the designed rings is 1500, which allows fast signal transmission, e.g., a photon escape time of 1.3 ps. The total processing time is thus estimated to be ~9 ps for a 2-bit adder and would scale linearly with the number of bits. This is two orders of magnitude faster than conventional CMOS circuitry, which has ~ns-scale delays. The presented results show the potential of high-speed optical computing circuits.
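
    A software model of the BDD evaluation described above, in which an input assignment steers a signal from the root through switching nodes to a 0/1 terminal, is sketched below for a 1-bit full adder; it is an illustrative model, not the authors' photonic implementation.

        # BDD-style evaluation of a 1-bit full adder: walk from the root to a terminal.
        # A node is (variable_name, low_branch, high_branch); terminals are 0 or 1.
        def bdd(var, low, high):
            return (var, low, high)

        # sum = a XOR b XOR cin, carry = majority(a, b, cin), built by hand.
        SUM_BDD = bdd("a",
                      bdd("b", bdd("cin", 0, 1), bdd("cin", 1, 0)),
                      bdd("b", bdd("cin", 1, 0), bdd("cin", 0, 1)))
        CARRY_BDD = bdd("a",
                        bdd("b", 0, bdd("cin", 0, 1)),
                        bdd("b", bdd("cin", 0, 1), 1))

        def evaluate(node, inputs):
            """Follow low/high branches according to the inputs until a terminal is hit."""
            while not isinstance(node, int):
                var, low, high = node
                node = high if inputs[var] else low
            return node

        for a in (0, 1):
            for b in (0, 1):
                for cin in (0, 1):
                    env = {"a": a, "b": b, "cin": cin}
                    assert evaluate(SUM_BDD, env) == (a ^ b ^ cin)
                    assert evaluate(CARRY_BDD, env) == ((a & b) | (a & cin) | (b & cin))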

  11. Spaceborne Processor Array

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas

    2008-01-01

    A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor- memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system is comprised of a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.

  12. Developing dementia prevention trials: baseline report of the Home-Based Assessment study.

    PubMed

    Sano, Mary; Egelko, Susan; Donohue, Michael; Ferris, Steven; Kaye, Jeffrey; Hayes, Tamara L; Mundt, James C; Sun, Chung-Kai; Paparello, Silvia; Aisen, Paul S

    2013-01-01

    This report describes the baseline experience of the multicenter, Home-Based Assessment study, designed to develop methods for dementia prevention trials using novel technologies for test administration and data collection. Nondemented individuals of 75 years of age or more were recruited and evaluated in-person using established clinical trial outcomes of cognition and function, and randomized to one of 3 assessment methodologies: (1) mail-in questionnaire/live telephone interviews [mail-in/phone (MIP)]; (2) automated telephone with interactive voice recognition; and (3) internet-based computer Kiosk. Brief versions of cognitive and noncognitive outcomes were adapted to each methodology and administered at baseline and repeatedly over a 4-year period. "Efficiency" measures assessed the time from screening to baseline, and staff time required for each methodology. A total of 713 individuals signed consent and were screened; 640 met eligibility and were randomized to one of 3 assessment arms; and 581 completed baseline. Dropout, time from screening to baseline, and total staff time were highest among those assigned to internet-based computer Kiosk. However, efficiency measures were driven by nonrecurring start-up activities suggesting that differences may be mitigated over a long trial. Performance among Home-Based Assessment instruments collected through different technologies will be compared with established outcomes over this 4-year study.

  13. Aggregate R-R-V Analysis

    EPA Pesticide Factsheets

    The Excel file contains time series data of flow rates and concentrations of alachlor, atrazine, ammonia, total phosphorus, and total suspended solids observed in two watersheds in Indiana from 2002 to 2007. Aggregate time series data corresponding to, or representative of, all these parameters were obtained using a specialized, data-driven technique. The aggregate data are hypothesized in the published paper to represent the overall health of both watersheds with respect to various potential water quality impairments. The time series data for each of the individual water quality parameters were used to compute corresponding risk measures (Rel, Res, and Vul) that are reported in Tables 4 and 5. The aggregation of the risk measures, which is computed from the aggregate time series and the water quality standards in Table 1, is also reported in Tables 4 and 5 of the published paper. Values under the column heading "uncertainty" report uncertainties associated with reconstruction of missing records of the water quality parameters. Long-term records of the water quality parameters were reconstructed in order to estimate the R-R-V and corresponding aggregate risk measures. This dataset is associated with the following publication: Hoque, Y., S. Tripathi, M. Hantush, and R. Govindaraju. Aggregate Measures of Watershed Health from Reconstructed Water Quality Data with Uncertainty. Ed. Gregorich, Journal of Environmental Quality, American Society of Agronomy, Madison, WI.
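
    One common way to compute reliability-resilience-vulnerability (R-R-V) measures from a time series and a standard (in the spirit of Hashimoto and co-workers) is sketched below; the paper's exact formulation, standards and data may differ.

        # Reliability, resilience and vulnerability of a concentration series against a standard.
        import numpy as np

        def rrv(series, standard):
            """'Failure' is an exceedance of the standard at a time step."""
            series = np.asarray(series, dtype=float)
            fail = series > standard
            reliability = 1.0 - fail.mean()
            # resilience: chance a failure step is immediately followed by a non-failure step
            if fail[:-1].sum() > 0:
                resilience = (fail[:-1] & ~fail[1:]).sum() / fail[:-1].sum()
            else:
                resilience = 1.0
            # vulnerability: average exceedance magnitude during failure periods
            vulnerability = (series[fail] - standard).mean() if fail.any() else 0.0
            return reliability, resilience, vulnerability

        tp = [0.05, 0.12, 0.30, 0.22, 0.08, 0.04, 0.40, 0.10]   # e.g. total phosphorus, mg/L
        print(rrv(tp, standard=0.20))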

  14. Multiplexing technique for computer communications via satellite channels

    NASA Technical Reports Server (NTRS)

    Binder, R.

    1975-01-01

    Multiplexing scheme combines technique of dynamic allocation with conventional time-division multiplexing. Scheme is designed to expedite short-duration interactive or priority traffic and to delay large data transfers; as a result, each node has an effective capacity of almost the total channel capacity when other nodes have light traffic loads.

  15. Relation between Video Game Addiction and Interfamily Relationships on Primary School Students

    ERIC Educational Resources Information Center

    Zorbaz, Selen Demirtas; Ulas, Ozlem; Kizildag, Seval

    2015-01-01

    This study seeks to analyze whether or not the following three variables of "Discouraging Family Relations," "Supportive Family Relations," "Total Time Spent on the Computer," and "Grade Point Average (GPA)" predict elementary school students' video game addiction rates, and whether or not there exists a…

  16. A new parallel-vector finite element analysis software on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Qin, Jiangning; Nguyen, Duc T.

    1993-01-01

    A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. A block-skyline storage scheme, along with vector-unrolling techniques, is used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.

  17. A Simplified Model for Detonation Based Pressure-Gain Combustors

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.

    2010-01-01

    A time-dependent model is presented which simulates the essential physics of a detonative or otherwise constant volume, pressure-gain combustor for gas turbine applications. The model utilizes simple, global thermodynamic relations to determine an assumed instantaneous and uniform post-combustion state in one of many envisioned tubes comprising the device. A simple, second order, non-upwinding computational fluid dynamic algorithm is then used to compute the (continuous) flowfield properties during the blowdown and refill stages of the periodic cycle which each tube undergoes. The exhausted flow is averaged to provide mixed total pressure and enthalpy which may be used as a cycle performance metric for benefits analysis. The simplicity of the model allows for nearly instantaneous results when implemented on a personal computer. The results compare favorably with higher resolution numerical codes which are more difficult to configure, and more time consuming to operate.

  18. New horizons in forensic radiology: the 60-second digital autopsy-full-body examination of a gunshot victim by multislice computed tomography.

    PubMed

    Thali, Michael J; Schweitzer, Wolf; Yen, Kathrin; Vock, Peter; Ozdoba, Christoph; Spielvogel, Elke; Dirnhofer, Richard

    2003-03-01

    The goal of this study was the full-body documentation of a gunshot wound victim with multislice helical computed tomography for subsequent comparison with the findings of the standard forensic autopsy. Complete volume data of the head, neck, and trunk were acquired by use of two acquisitions of less than 1 minute of total scanning time. Subsequent two-dimensional multiplanar reformations and three-dimensional shaded surface display reconstructions helped document the gunshot-created skull fractures and brain injuries, including the wound track, and the intracerebral bone fragments. Computed tomography also demonstrated intracardiac air embolism and pulmonary aspiration of blood resulting from bullet wound-related trauma. The "digital autopsy," even when postprocessing time was added, was more rapid than the classic forensic autopsy and, based on the nondestructive approach, offered certain advantages in comparison with the forensic autopsy.

  19. Effect of element size on the solution accuracies of finite-element heat transfer and thermal stress analyses of space shuttle orbiter

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Olona, Timothy

    1987-01-01

    The effect of element size on the solution accuracies of finite-element heat transfer and thermal stress analyses of space shuttle orbiter was investigated. Several structural performance and resizing (SPAR) thermal models and NASA structural analysis (NASTRAN) structural models were set up for the orbiter wing midspan bay 3. The thermal model was found to be the one that determines the limit of finite-element fineness because of the limitation of computational core space required for the radiation view factor calculations. The thermal stresses were found to be extremely sensitive to a slight variation of structural temperature distributions. The minimum degree of element fineness required for the thermal model to yield reasonably accurate solutions was established. The radiation view factor computation time was found to be insignificant compared with the total computer time required for the SPAR transient heat transfer analysis.

  20. Explanation of the computer listings of Faraday factors for INTASAT users

    NASA Technical Reports Server (NTRS)

    Nesterczuk, G.; Llewellyn, S. K.; Bent, R. B.; Schmid, P. E.

    1974-01-01

    Using a simplified form of the Appleton-Hartree formula for the phase refractive index, a relationship was obtained between the Faraday rotation angle along the slant (angular) path and the total electron content along the vertical path intersecting the angular path at the height of maximum electron density. Using the second mean value theorem of integration, the function B cos θ sec χ was removed from under the integral sign and replaced by a 'mean' value. The mean value factors were printed on the computer listing for 39 stations receiving signals from the INTASAT satellite during the specified time period. The data are presented by station and date. Graphs are included to demonstrate the variation of the Faraday factor with local time and season, and with magnetic latitude, elevation, and azimuth angles. Other topics discussed include a description of the Bent ionospheric model, the earth's magnetic field model, and the sample computer listing.
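
    A generic form of the simplified relation described above is sketched below; the symbols and the constant K follow common textbook notation and are not taken from the report itself (f is the signal frequency, N the electron density, and N_T the vertical total electron content):

      \Omega \;=\; \frac{K}{f^{2}} \int N\, B\cos\theta\,\sec\chi\; dh
      \;\approx\; \frac{K}{f^{2}}\;\overline{B\cos\theta\,\sec\chi}\; \int N\; dh
      \;=\; \frac{K}{f^{2}}\; M\, N_{T},

    where M = \overline{B\cos\theta\,\sec\chi} is the tabulated "mean value" Faraday factor, evaluated near the height of maximum electron density.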

  1. Predicting moisture and economic value of solid forest fuel piles for improving the profitability of bioenergy use

    NASA Astrophysics Data System (ADS)

    Lauren, Ari; Kinnunen, Jyrki-Pekko; Sikanen, Lauri

    2016-04-01

    Bioenergy contributes 26% of the total energy use in Finland, and 60% of this is provided by solid forest fuel consisting of small stems and logging residues such as tops, branches, roots and stumps. Typically the logging residues are stored as piles on site before being transported to regional combined heat and power plants for combustion. The profitability of forest fuel use depends on smart control of the feedstock. Fuel moisture, dry matter loss, and the rate of interest during storage are the key variables affecting the economic value of the fuel. The value increases with drying, but decreases with wetting, dry matter loss, and a positive rate of interest. We compiled a simple simulation model computing the moisture change, dry matter loss, transportation costs and present value of feedstock piles. The model was used to predict the time of the maximum value of the stock, and to compose feedstock allocation strategies under the question: how should we choose the piles and the combustion time so that the total energy yield and the economic value of the energy production are maximized? The question was assessed with respect to the demand of the energy plant. The model parameterization was based on field-scale studies. The initial moisture, and the rates of daily moisture change and dry matter loss in the feedstock piles, depended on the day of the year according to empirical field measurements. The time step of the computation was one day. The effects of pile use timing on the total energy yield and profitability were studied using combinatorial optimization. Results show that storing increases the pile's maximum value if natural drying begins soon after harvesting; otherwise dry matter loss and the capital cost of storing overcome the benefits gained by drying. Optimized timing of pile use can slightly improve profitability, owing to the increased total energy yield and because transportation costs per unit of energy decrease as the water content of the biomass decreases.
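
    A minimal sketch of the kind of daily-time-step pile valuation described above follows. Every parameter value and functional form here (drying rate, dry-matter loss, heating value, price, interest rate) is a placeholder assumption, not the authors' calibrated model.

      def pile_value(days, m0=0.55, dry_per_day=0.002, dm_loss_per_day=0.0003,
                     dry_mass0=100.0, price_per_mwh=20.0, annual_rate=0.03,
                     lhv_dry_mwh_per_t=5.3, evap_mwh_per_t_water=0.68):
          """Daily valuation of a forest-fuel pile (illustrative placeholders only)."""
          values = []
          for t in range(days + 1):
              moisture = max(m0 - dry_per_day * t, 0.20)         # wet-basis moisture, with a floor
              dry_mass = dry_mass0 * (1.0 - dm_loss_per_day) ** t
              water = dry_mass * moisture / (1.0 - moisture)      # tonnes of water held in the pile
              energy = dry_mass * lhv_dry_mwh_per_t - water * evap_mwh_per_t_water  # net MWh
              discount = (1.0 + annual_rate) ** (-t / 365.0)      # capital cost of waiting
              values.append(energy * price_per_mwh * discount)
          best_day = max(range(len(values)), key=values.__getitem__)
          return best_day, round(values[best_day], 1)

      print(pile_value(365))   # (day of maximum present value, value in currency units)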

  2. Simplified methods for computing total sediment discharge with the modified Einstein procedure

    USGS Publications Warehouse

    Colby, Bruce R.; Hubbell, David Wellington

    1961-01-01

    A procedure was presented in 1950 by H. A. Einstein for computing the total discharge of sediment particles of sizes that are in appreciable quantities in the stream bed. This procedure was modified by the U.S. Geological Survey and adapted to computing the total sediment discharge of a stream on the basis of samples of bed sediment, depth-integrated samples of suspended sediment, streamflow measurements, and water temperature. This paper gives simplified methods for computing total sediment discharge by the modified Einstein procedure. Each of four nomographs appreciably simplifies a major step in the computations. Within the stated limitations, use of the nomographs introduces much less error than is present in either the basic data or the theories on which the computations of total sediment discharge are based. The results are nearly as accurate mathematically as those that could be obtained from the longer and more complex arithmetic and algebraic computations of the Einstein procedure.

  3. New Approaches to the Computer Simulation of Amorphous Alloys: A Review.

    PubMed

    Valladares, Ariel A; Díaz-Celaya, Juan A; Galván-Colín, Jonathan; Mejía-Mendoza, Luis M; Reyes-Retana, José A; Valladares, Renela M; Valladares, Alexander; Alvarez-Ramirez, Fernando; Qu, Dongdong; Shen, Jun

    2011-04-13

    In this work we review our new methods to computer-generate amorphous atomic topologies of several binary alloys: SiH, SiN, CN; binary systems based on group IV elements like SiC; the GeSe2 chalcogenide; aluminum-based systems: AlN and AlSi, and the CuZr amorphous alloy. We use an ab initio approach based on density functionals and computationally thermally-randomized periodically-continued cells with at least 108 atoms. The computational thermal process to generate the amorphous alloys is the undermelt-quench approach, or one of its variants, that consists in linearly heating the samples to just below their melting (or liquidus) temperatures, and then linearly cooling them afterwards. These processes are carried out from initial crystalline conditions using short and long time steps. We find that a step four times the default time step is adequate for most of the simulations. Radial distribution functions (partial and total) are calculated and compared whenever possible with experimental results, and the agreement is very good. For some materials we report studies of the effect of the topological disorder on their electronic and vibrational densities of states and on their optical properties.

  4. New Approaches to the Computer Simulation of Amorphous Alloys: A Review

    PubMed Central

    Valladares, Ariel A.; Díaz-Celaya, Juan A.; Galván-Colín, Jonathan; Mejía-Mendoza, Luis M.; Reyes-Retana, José A.; Valladares, Renela M.; Valladares, Alexander; Alvarez-Ramirez, Fernando; Qu, Dongdong; Shen, Jun

    2011-01-01

    In this work we review our new methods to computer generate amorphous atomic topologies of several binary alloys: SiH, SiN, CN; binary systems based on group IV elements like SiC; the GeSe2 chalcogenide; aluminum-based systems: AlN and AlSi, and the CuZr amorphous alloy. We use an ab initio approach based on density functionals and computationally thermally-randomized periodically-continued cells with at least 108 atoms. The computational thermal process to generate the amorphous alloys is the undermelt-quench approach, or one of its variants, that consists in linearly heating the samples to just below their melting (or liquidus) temperatures, and then linearly cooling them afterwards. These processes are carried out from initial crystalline conditions using short and long time steps. We find that a step four-times the default time step is adequate for most of the simulations. Radial distribution functions (partial and total) are calculated and compared whenever possible with experimental results, and the agreement is very good. For some materials we report studies of the effect of the topological disorder on their electronic and vibrational densities of states and on their optical properties. PMID:28879948

  5. The Ames Power Monitoring System

    NASA Technical Reports Server (NTRS)

    Osetinsky, Leonid; Wang, David

    2003-01-01

    The Ames Power Monitoring System (APMS) is a centralized system of power meters, computer hardware, and special-purpose software that collects and stores electrical power data from various facilities at Ames Research Center (ARC). This system is needed because of the large and varying nature of the overall ARC power demand, which has been observed to range from 20 to 200 MW. Large portions of peak demand can be attributed to only three wind tunnels (60, 180, and 100 MW, respectively). The APMS helps ARC avoid or minimize costly demand charges by enabling wind-tunnel operators, test engineers, and the power manager to monitor total demand for the center in real time. These persons receive the information they need to manage and schedule energy-intensive research in advance and to adjust loads in real time to ensure that the overall maximum allowable demand is not exceeded. The APMS (see figure) includes a server computer running the Windows NT operating system and can, in principle, include an unlimited number of power meters and client computers. As configured at the time of reporting the information for this article, the APMS includes more than 40 power meters monitoring all the major research facilities, plus 15 Windows-based client personal computers that display real-time and historical data to users via graphical user interfaces (GUIs). The power meters and client computers communicate with the server using Transmission Control Protocol/Internet Protocol (TCP/IP) on Ethernet networks, variously, through dedicated fiber-optic cables or through the pre-existing ARC local-area network (ARCLAN). The APMS has enabled ARC to achieve significant savings ($1.2 million in 2001) in the cost of power and electric energy by helping personnel to maintain total demand below monthly allowable levels, to manage the overall power factor to avoid low-power-factor penalties, and to use historical system data to identify opportunities for additional energy savings. The APMS also provides power engineers and electricians with the information they need to plan modifications in advance and perform day-to-day maintenance of the ARC electric-power distribution system.

  6. Robust Low-dose CT Perfusion Deconvolution via Tensor Total-Variation Regularization

    PubMed Central

    Zhang, Shaoting; Chen, Tsuhan; Sanelli, Pina C.

    2016-01-01

    Acute brain diseases such as acute strokes and transient ischemic attacks are the leading causes of mortality and morbidity worldwide, responsible for 9% of total deaths every year. ‘Time is brain’ is a widely accepted concept in acute cerebrovascular disease treatment. An efficient and accurate computational framework for hemodynamic parameter estimation can save critical time for thrombolytic therapy. Meanwhile, the high level of accumulated radiation dose due to continuous image acquisition in CT perfusion (CTP) has raised concerns about patient safety and public health. However, low radiation leads to increased noise and artifacts, which require more sophisticated and time-consuming algorithms for robust estimation. In this paper, we focus on developing a robust and efficient framework to accurately estimate the perfusion parameters at low radiation dosage. Specifically, we present a tensor total-variation (TTV) technique which fuses the spatial correlation of the vascular structure and the temporal continuation of the blood signal flow. An efficient algorithm is proposed to find the solution with fast convergence and reduced computational complexity. Extensive evaluations are carried out in terms of sensitivity to noise levels, estimation accuracy, and contrast preservation, and performed on digital perfusion phantom estimation as well as in-vivo clinical subjects. Our framework reduces the necessary radiation dose to only 8% of the original level and outperforms the state-of-the-art algorithms with peak signal-to-noise ratio improved by 32%. It reduces the oscillation in the residue functions, corrects over-estimation of cerebral blood flow (CBF) and under-estimation of mean transit time (MTT), and maintains the distinction between the deficit and normal regions. PMID:25706579
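
    The paper's tensor total-variation couples spatial and temporal gradients and uses a dedicated fast solver. Purely as an illustration of the regularization idea, the sketch below applies a smoothed one-dimensional total-variation penalty to a noisy residue-like signal by plain gradient descent; all names and parameter values are illustrative, not the authors' algorithm.

      import numpy as np

      def tv_smooth_1d(b, lam=0.5, eps=1e-2, step=0.04, n_iter=500):
          """Minimize 0.5*||x - b||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)
          by gradient descent -- a 1D stand-in for a total-variation penalty."""
          x = b.astype(float).copy()
          for _ in range(n_iter):
              dx = np.diff(x)
              w = dx / np.sqrt(dx ** 2 + eps)   # gradient of each smoothed |dx| term
              g = np.zeros_like(x)
              g[:-1] -= w                       # each difference pulls on both endpoints
              g[1:] += w
              x -= step * ((x - b) + lam * g)
          return x

      # Noisy piecewise-constant signal standing in for a residue function:
      t = np.arange(60)
      clean = np.where(t < 30, 1.0, 0.2)
      noisy = clean + 0.15 * np.random.default_rng(0).standard_normal(t.size)
      print(np.round(tv_smooth_1d(noisy)[:5], 3))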

  7. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.

  8. Association of Sedentary Behavior Time with Ideal Cardiovascular Health: The ORISCAV-LUX Study

    PubMed Central

    Crichton, Georgina E.; Alkerwi, Ala'a

    2014-01-01

    Background: Recently, attention has been drawn to the health impacts of time spent engaging in sedentary behaviors. No studies have examined sedentary behaviors in relation to the newly defined construct of ideal cardiovascular health, which incorporates three health factors (blood pressure, total cholesterol, fasting plasma glucose) and four behaviors (physical activity, smoking, body mass index, diet). The purpose of this study was to examine associations between sedentary behaviors, including sitting time and time spent viewing television and in front of a computer, with cardiovascular health, in a representative sample of adults from Luxembourg. Methods: A cross-sectional analysis of 1262 participants in the Observation of Cardiovascular Risk Factors in Luxembourg study was conducted, who underwent objective cardiovascular health assessments and completed the International Physical Activity Questionnaire. A Cardiovascular Health Score was calculated based on the number of health factors and behaviors at ideal levels. Sitting time on a weekday, television time, and computer time (both on a workday and a day off) were related to the Cardiovascular Health Score. Results: Higher weekday sitting time was significantly associated with a poorer Cardiovascular Health Score (p = 0.002 for linear trend), after full adjustment for age, gender, education, income and occupation. Television time was inversely associated with the Cardiovascular Health Score, on both a workday and a day off (p = 0.002 for both). A similar inverse relationship was observed between the Cardiovascular Health Score and computer time, only on a day off (p = 0.04). Conclusion: Higher time spent sitting, viewing television, and using a computer during a day off may be unfavorably associated with ideal cardiovascular health. PMID:24925084

  9. Cone-beam computed tomography fusion and navigation for real-time positron emission tomography-guided biopsies and ablations: a feasibility study.

    PubMed

    Abi-Jaoudeh, Nadine; Mielekamp, Peter; Noordhoek, Niels; Venkatesan, Aradhana M; Millo, Corina; Radaelli, Alessandro; Carelsen, Bart; Wood, Bradford J

    2012-06-01

    To describe a novel technique for multimodality positron emission tomography (PET) fusion-guided interventions that combines cone-beam computed tomography (CT) with PET/CT before the procedure. Subjects were selected among patients scheduled for a biopsy or ablation procedure. The lesions were not visible with conventional imaging methods or did not have uniform uptake on PET. Clinical success was defined by adequate histopathologic specimens for molecular profiling or diagnosis and by lack of enhancement on follow-up imaging for ablation procedures. Time to target (time elapsed between the completion of the initial cone-beam CT scan and first tissue sample or treatment), total procedure time (time from the moment the patient was on the table until the patient was off the table), and number of times the needle was repositioned were recorded. Seven patients underwent eight procedures (two ablations and six biopsies). Registration and procedures were completed successfully in all cases. Clinical success was achieved in all biopsy procedures and in one of the two ablation procedures. The needle was repositioned once in one biopsy procedure only. On average, the time to target was 38 minutes (range 13-54 min). Total procedure time was 95 minutes (range 51-240 min, which includes composite ablation). On average, fluoroscopy time was 2.5 minutes (range 1.3-6.2 min). An integrated cone-beam CT software platform can enable PET-guided biopsies and ablation procedures without the need for additional specialized hardware. Copyright © 2012 SIR. Published by Elsevier Inc. All rights reserved.

  10. Computer use and needs of internists: a survey of members of the American College of Physicians-American Society of Internal Medicine.

    PubMed Central

    Lacher, D.; Nelson, E.; Bylsma, W.; Spena, R.

    2000-01-01

    The American College of Physicians-American Society of Internal Medicine conducted a membership survey in late 1998 to assess their activities, needs, and attitudes. A total of 9,466 members (20.9% response rate) reported on 198 items related to computer use and needs of internists. Eighty-two percent of the respondents reported that they use computers for personal or professional reasons. Physicians younger than 50 years old who had full- or part-time academic affiliation reported using computers more frequently for medical applications. About two thirds of respondents who had access to computers connected to the Internet at least weekly, with most using the Internet from home for e-mail and nonmedical uses. Physicians expressed concerns about Internet security, confidentiality, and accuracy, and the lack of time to browse the Internet. In practice settings, internists used computers for administrative and financial functions. Less than 19% of respondents had partial or complete electronic clinical functions in their offices. Less than 7% of respondents exchanged e-mail with their patients on a weekly or daily basis. Also, less than 15% of respondents used computers for continuing medical education (CME). Respondents reported they wanted to increase their general computer skills and enhance their knowledge of computer-based information sources for patient care, electronic medical record systems, computer-based CME, and telemedicine. While most respondents used computers and connected to the Internet, few physicians utilized computers for clinical management. Medical organizations face the challenge of increasing physician use of clinical systems and electronic CME. PMID:11079924

  11. Computer use and needs of internists: a survey of members of the American College of Physicians-American Society of Internal Medicine.

    PubMed

    Lacher, D; Nelson, E; Bylsma, W; Spena, R

    2000-01-01

    The American College of Physicians-American Society of Internal Medicine conducted a membership survey in late 1998 to assess their activities, needs, and attitudes. A total of 9,466 members (20.9% response rate) reported on 198 items related to computer use and needs of internists. Eighty-two percent of the respondents reported that they use computers for personal or professional reasons. Physicians younger than 50 years old who had full- or part-time academic affiliation reported using computers more frequently for medical applications. About two thirds of respondents who had access to computers connected to the Internet at least weekly, with most using the Internet from home for e-mail and nonmedical uses. Physicians expressed concerns about Internet security, confidentiality, and accuracy, and the lack of time to browse the Internet. In practice settings, internists used computers for administrative and financial functions. Less than 19% of respondents had partial or complete electronic clinical functions in their offices. Less than 7% of respondents exchanged e-mail with their patients on a weekly or daily basis. Also, less than 15% of respondents used computers for continuing medical education (CME). Respondents reported they wanted to increase their general computer skills and enhance their knowledge of computer-based information sources for patient care, electronic medical record systems, computer-based CME, and telemedicine. While most respondents used computers and connected to the Internet, few physicians utilized computers for clinical management. Medical organizations face the challenge of increasing physician use of clinical systems and electronic CME.

  12. Evaluating external nutrient and suspended-sediment loads to Upper Klamath Lake, Oregon, using surrogate regressions with real-time turbidity and acoustic backscatter data

    USGS Publications Warehouse

    Schenk, Liam N.; Anderson, Chauncey W.; Diaz, Paul; Stewart, Marc A.

    2016-12-22

    Executive Summary: Suspended-sediment and total phosphorus loads were computed for two sites in the Upper Klamath Basin on the Wood and Williamson Rivers, the two main tributaries to Upper Klamath Lake. High temporal resolution turbidity and acoustic backscatter data were used to develop surrogate regression models to compute instantaneous concentrations and loads on these rivers. Regression models for the Williamson River site showed strong correlations of turbidity with total phosphorus and suspended-sediment concentrations (adjusted coefficients of determination [Adj R2]=0.73 and 0.95, respectively). Regression models for the Wood River site had relatively poor, although statistically significant, relations of turbidity with total phosphorus, and turbidity and acoustic backscatter with suspended sediment concentration, with high prediction uncertainty. Total phosphorus loads for the partial 2014 water year (excluding October and November 2013) were 39 and 28 metric tons for the Williamson and Wood Rivers, respectively. These values are within the low range of phosphorus loads computed for these rivers from prior studies using water-quality data collected by the Klamath Tribes. The 2014 partial year total phosphorus loads on the Williamson and Wood Rivers are assumed to be biased low because of the absence of data from the first 2 months of water year 2014, and the drought conditions that were prevalent during that water year. Therefore, total phosphorus and suspended-sediment loads in this report should be considered as representative of a low-water year for the two study sites. Comparing loads from the Williamson and Wood River monitoring sites for November 2013–September 2014 shows that the Williamson and Sprague Rivers combined, as measured at the Williamson River site, contributed substantially more suspended sediment to Upper Klamath Lake than the Wood River, with 4,360 and 1,450 metric tons measured, respectively. Surrogate techniques have proven useful at the two study sites, particularly in using turbidity to compute suspended-sediment concentrations in the Williamson River. This proof-of-concept effort for computing total phosphorus concentrations using turbidity at the Williamson and Wood River sites also has shown that with additional samples over a wide range of flow regimes, high-temporal-resolution total phosphorus loads can be estimated on a daily, monthly, and annual basis, along with uncertainties for total phosphorus and suspended-sediment concentrations computed using regression models. Sediment-corrected backscatter at the Wood River has potential for estimating suspended-sediment loads from the Wood River Valley as well, with additional analysis of the variable streamflow measured at that site. Suspended-sediment and total phosphorus loads with a high level of temporal resolution will be useful to water managers, restoration practitioners, and scientists in the Upper Klamath Basin working toward the common goal of decreasing nutrient and sediment loads in Upper Klamath Lake.
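
    The report's fitted models and coefficients are not reproduced here; the sketch below only shows the generic form of a turbidity surrogate regression and the unit conversion from concentration and streamflow to a daily load. The paired samples, the turbidity value, and the streamflow are made-up placeholders.

      import numpy as np

      # Hypothetical paired samples: turbidity (FNU) and suspended-sediment conc. (mg/L)
      turb = np.array([3.0, 8.0, 15.0, 40.0, 90.0, 150.0])
      ssc = np.array([4.0, 9.0, 18.0, 55.0, 120.0, 210.0])

      # Fit log10(SSC) = b0 + b1*log10(turbidity), the usual surrogate-model form.
      b1, b0 = np.polyfit(np.log10(turb), np.log10(ssc), 1)
      resid = np.log10(ssc) - (b0 + b1 * np.log10(turb))
      bcf = np.mean(10 ** resid)          # Duan smearing bias-correction factor

      def ssc_from_turbidity(t):
          return bcf * 10 ** (b0 + b1 * np.log10(t))

      # Instantaneous load (metric tons/day) from concentration (mg/L) and streamflow (m^3/s):
      def load_tons_per_day(conc_mg_l, q_m3_s):
          return conc_mg_l * q_m3_s * 86400 * 1e-6   # mg/L * m^3/s -> g/s, then to t/day

      print(load_tons_per_day(ssc_from_turbidity(25.0), q_m3_s=12.0))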

  13. Using a Cloud Computing System to Reduce Door-to-Balloon Time in Acute ST-Elevation Myocardial Infarction Patients Transferred for Percutaneous Coronary Intervention.

    PubMed

    Ho, Chi-Kung; Chen, Fu-Cheng; Chen, Yung-Lung; Wang, Hui-Ting; Lee, Chien-Ho; Chung, Wen-Jung; Lin, Cheng-Jui; Hsueh, Shu-Kai; Hung, Shin-Chiang; Wu, Kuan-Han; Liu, Chu-Feng; Kung, Chia-Te; Cheng, Cheng-I

    2017-01-01

    This study evaluated the impact on clinical outcomes of using a cloud computing system to reduce percutaneous coronary intervention hospital door-to-balloon (DTB) time for ST segment elevation myocardial infarction (STEMI). A total of 369 patients before and after implementation of the transfer protocol were enrolled. Of these patients, 262 were transferred through the protocol while the other 107 patients were transferred through the traditional referral process. There were no significant differences in DTB time, pain to door of STEMI receiving center arrival time, and pain to balloon time between the two groups. Pain to electrocardiography time in patients with Killip I/II and catheterization laboratory to balloon time in patients with Killip III/IV were significantly reduced in the protocol-transfer group compared with the traditional referral group (both p < 0.05). There were also no remarkable differences in the complication rate and 30-day mortality between the two groups. The multivariate analysis revealed that the independent predictors of 30-day mortality were older age, advanced Killip score, and higher troponin-I level. This study showed that the present transfer protocol reduced pain to electrocardiography time in Killip I/II patients and catheterization laboratory to balloon time in Killip III/IV patients. However, using a cloud computing system in the present protocol did not reduce DTB time.

  14. Computer-assisted total hip arthroplasty: coding the next generation of navigation systems for orthopedic surgery.

    PubMed

    Renkawitz, Tobias; Tingart, Markus; Grifka, Joachim; Sendtner, Ernst; Kalteis, Thomas

    2009-09-01

    This article outlines the scientific basis and state-of-the-art application of computer-assisted orthopedic surgery in total hip arthroplasty (THA) and provides a future perspective on this technology. Computer-assisted orthopedic surgery in primary THA has the potential to couple 3D simulations with real-time evaluations of surgical performance, which has brought these developments from the research laboratory all the way to clinical use. Non-image-based (imageless) navigation systems, which require no additional pre- or intra-operative image acquisition, have been shown to significantly reduce the variability in positioning the acetabular component and to provide precise measurement of leg length and offset changes during THA. More recently, computer-assisted orthopedic surgery systems have opened a new frontier for accurate surgical practice in minimally invasive, tissue-preserving THA. The future generation of imageless navigation systems will switch from simple measurement tasks to real navigation tools. These software algorithms will consider the cup and stem as components of a coupled biomechanical system, guiding the orthopedic surgeon toward an optimized complementary component orientation rather than fixed target values intraoperatively, and they are expected to have a high impact on clinical practice and postoperative functionality in modern THA.

  15. Estimating the volume and age of water stored in global lakes using a geo-statistical approach

    PubMed Central

    Messager, Mathis Loïc; Lehner, Bernhard; Grill, Günther; Nedeva, Irena; Schmitt, Oliver

    2016-01-01

    Lakes are key components of biogeochemical and ecological processes, thus knowledge about their distribution, volume and residence time is crucial in understanding their properties and interactions within the Earth system. However, global information is scarce and inconsistent across spatial scales and regions. Here we develop a geo-statistical model to estimate the volume of global lakes with a surface area of at least 10 ha based on the surrounding terrain information. Our spatially resolved database shows 1.42 million individual polygons of natural lakes with a total surface area of 2.67 × 10⁶ km² (1.8% of global land area), a total shoreline length of 7.2 × 10⁶ km (about four times longer than the world's ocean coastline) and a total volume of 181.9 × 10³ km³ (0.8% of total global non-frozen terrestrial water stocks). We also compute mean and median hydraulic residence times for all lakes to be 1,834 days and 456 days, respectively. PMID:27976671
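
    The residence-time figures above rest on the standard hydraulic definition, stored volume divided by outflow rate; a small sketch with a hypothetical lake follows (the volume and outflow values are invented for illustration).

      def residence_time_days(volume_km3, outflow_m3_s):
          """Hydraulic residence time = stored volume / outflow rate (standard definition)."""
          volume_m3 = volume_km3 * 1e9
          return volume_m3 / (outflow_m3_s * 86400.0)

      # Hypothetical lake: 10 km^3 stored, mean outflow 60 m^3/s -> about 1,929 days
      print(round(residence_time_days(10.0, 60.0)))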

  16. Optimal reconfiguration strategy for a degradable multimodule computing system

    NASA Technical Reports Server (NTRS)

    Lee, Yann-Hang; Shin, Kang G.

    1987-01-01

    The present quantitative approach to the problem of reconfiguring a degradable multimodule system assigns some modules to computation and arranges others for reliability. By using expected total reward as the optimality criterion, there emerges an active reconfiguration strategy based not only on the occurrence of failure but also on the progression of the given mission. This reconfiguration strategy requires specification of the times at which the system should undergo reconfiguration, and the configurations to which the system should change. The optimal reconfiguration problem is converted to integer nonlinear knapsack and fractional programming problems.

  17. Reply to “Comment on ‘Axion induced oscillating electric dipole moments’”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, Christopher T.

    A recent paper of Flambaum, Roberts and Stadnik [1] claims there is no induced oscillating electric dipole moment (OEDM), e.g., for the electron, arising from the oscillating cosmic axion background via the anomaly. This claim is based upon the assumption that electric dipoles are always defined by their coupling to static (constant-in-time) electric fields. The relevant Feynman diagram, as computed by [1], then becomes a total divergence and vanishes in momentum space. However, an OEDM does arise from the anomaly, coupled to time-dependent electric fields. It shares the decoupling properties of the anomaly. The full action, in an arbitrary gauge, was computed in [2], [3]. It is nonvanishing with a time-dependent outgoing photon, and yields physics, e.g., electric dipole radiation of an electron immersed in a cosmic axion field.

  18. Cloud Compute for Global Climate Station Summaries

    NASA Astrophysics Data System (ADS)

    Baldwin, R.; May, B.; Cogbill, P.

    2017-12-01

    Global Climate Station Summaries are simple indicators of observational normals which include climatic data summarizations and frequency distributions. These typically are statistical analyses of station data over 5-, 10-, 20-, 30-year or longer time periods. The summaries are computed from the global surface hourly dataset. This dataset, totaling over 500 gigabytes, comprises 40 different types of weather observations from 20,000 stations worldwide. NCEI and the U.S. Navy developed these value-added products in the form of hourly summaries from many of these observations. Enabling this compute functionality in the cloud is the focus of the project. An overview of the approach and the challenges associated with transitioning the application to the cloud will be presented.
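
    As a rough sketch of the kind of station summary described (not NCEI's actual processing chain), monthly normals can be reduced from an hourly record along these lines; the file name, column name, and normals period are hypothetical.

      import pandas as pd

      # Hypothetical hourly station file with columns: timestamp, air_temp_c
      obs = pd.read_csv("station_hourly.csv", parse_dates=["timestamp"],
                        index_col="timestamp")

      # Restrict to a 30-year normals period, then summarize by calendar month.
      period = obs.loc["1991-01-01":"2020-12-31", "air_temp_c"]
      monthly_normals = period.groupby(period.index.month).agg(["mean", "std", "count"])
      print(monthly_normals)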

  19. Machining fixture layout optimization using particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Dou, Jianping; Wang, Xingsong; Wang, Lei

    2011-05-01

    Optimization of fixture layout (locator and clamp locations) is critical to reduce geometric error of the workpiece during the machining process. In this paper, the application of the particle swarm optimization (PSO) algorithm is presented to minimize workpiece deformation in the machining region. A PSO-based approach is developed to optimize the fixture layout by integrating the ANSYS parametric design language (APDL) of finite element analysis to compute the objective function for a given fixture layout. A particle library approach is used to decrease the total computation time. A computational experiment on a 2D case shows that the number of function evaluations is decreased by about 96%. A case study illustrates the effectiveness and efficiency of the PSO-based optimization approach.
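
    A minimal particle swarm optimizer is sketched below to illustrate the search loop. In the paper the objective is an ANSYS/APDL finite-element deformation run and a particle library caches repeated evaluations; here a toy sphere function stands in for the objective, and all parameter values are generic defaults.

      import numpy as np

      def pso(objective, bounds, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
          """Minimal particle swarm optimizer (toy stand-in for the FE-based objective)."""
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          x = rng.uniform(lo, hi, size=(n_particles, lo.size))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
          g = pbest[pbest_f.argmin()].copy()
          for _ in range(n_iter):
              r1, r2 = rng.random((2, *x.shape))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
              x = np.clip(x + v, lo, hi)                              # keep particles in bounds
              f = np.array([objective(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              g = pbest[pbest_f.argmin()].copy()
          return g, pbest_f.min()

      # Toy stand-in objective (sphere function) over a 4-variable "layout":
      best_x, best_f = pso(lambda p: np.sum(p ** 2),
                           (np.full(4, -5.0), np.full(4, 5.0)))
      print(best_x, best_f)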

  20. A Primer on High-Throughput Computing for Genomic Selection

    PubMed Central

    Wu, Xiao-Lin; Beissinger, Timothy M.; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J. M.; Weigel, Kent A.; Gatti, Natalia de Leon; Gianola, Daniel

    2011-01-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long, and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin–Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized genetic gain). Eventually, HTC may change our view of data analysis as well as decision-making in the post-genomic era of selection programs in animals and plants, or in the study of complex diseases in humans. PMID:22303303
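
    The paper is about cluster middleware and batch queues rather than any particular language feature; the sketch below only illustrates the basic throughput idea it describes, evaluating several traits concurrently instead of sequentially, with a sleep standing in for an expensive model fit and all names invented for illustration.

      from concurrent.futures import ProcessPoolExecutor
      import time

      def evaluate_trait(trait):
          """Stand-in for a genomic prediction model fit for one trait."""
          time.sleep(1.0)            # pretend this is an expensive model run
          return trait, "breeding values for " + trait

      if __name__ == "__main__":
          traits = ["milk_yield", "fat_pct", "protein_pct", "fertility"]
          start = time.perf_counter()
          with ProcessPoolExecutor() as pool:      # traits run concurrently, not sequentially
              results = dict(pool.map(evaluate_trait, traits))
          print(sorted(results), "in", round(time.perf_counter() - start, 2), "s")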

  1. Time-resolved computed tomography of the liver: retrospective, multi-phase image reconstruction derived from volumetric perfusion imaging.

    PubMed

    Fischer, Michael A; Leidner, Bertil; Kartalis, Nikolaos; Svensson, Anders; Aspelin, Peter; Albiin, Nils; Brismar, Torkel B

    2014-01-01

    To assess feasibility and image quality (IQ) of a new post-processing algorithm for retrospective extraction of an optimised multi-phase CT (time-resolved CT) of the liver from volumetric perfusion imaging. Sixteen patients underwent clinically indicated perfusion CT using 4D spiral mode of dual-source 128-slice CT. Three image sets were reconstructed: motion-corrected and noise-reduced (MCNR) images derived from 4D raw data; maximum and average intensity projections (time MIP/AVG) of the arterial/portal/portal-venous phases and all phases (total MIP/ AVG) derived from retrospective fusion of dedicated MCNR split series. Two readers assessed the IQ, detection rate and evaluation time; one reader assessed image noise and lesion-to-liver contrast. Time-resolved CT was feasible in all patients. Each post-processing step yielded a significant reduction of image noise and evaluation time, maintaining lesion-to-liver contrast. Time MIPs/AVGs showed the highest overall IQ without relevant motion artefacts and best depiction of arterial and portal/portal-venous phases respectively. Time MIPs demonstrated a significantly higher detection rate for arterialised liver lesions than total MIPs/AVGs and the raw data series. Time-resolved CT allows data from volumetric perfusion imaging to be condensed into an optimised multi-phase liver CT, yielding a superior IQ and higher detection rate for arterialised liver lesions than the raw data series. • Four-dimensional computed tomography is limited by motion artefacts and poor image quality. • Time-resolved-CT facilitates 4D-CT data visualisation, segmentation and analysis by condensing raw data. • Time-resolved CT demonstrates better image quality than raw data images. • Time-resolved CT improves detection of arterialised liver lesions in cirrhotic patients.

  2. Assessing the relationship between computational speed and precision: a case study comparing an interpreted versus compiled programming language using a stochastic simulation model in diabetes care.

    PubMed

    McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P

    2010-01-01

    Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
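
    The sketch below is not the UKPDS-based model from the study; it only illustrates how antithetic variates pair each random draw with its negation so that, for the same number of function evaluations, the estimator's standard error shrinks. The payoff function is an arbitrary monotone stand-in for a simulated patient outcome.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 50_000

      def payoff(z):
          """Stand-in for one simulated outcome (e.g., discounted QALYs)."""
          return np.exp(0.1 * z)      # any monotone function of the random draw

      # Plain Monte Carlo: n draws, n evaluations.
      z = rng.standard_normal(n)
      plain = payoff(z)

      # Antithetic variates: n/2 draws, each paired with its negation (same total work).
      z_half = rng.standard_normal(n // 2)
      anti = 0.5 * (payoff(z_half) + payoff(-z_half))

      print("plain      mean %.4f  std err %.5f" % (plain.mean(), plain.std(ddof=1) / np.sqrt(n)))
      print("antithetic mean %.4f  std err %.5f" % (anti.mean(), anti.std(ddof=1) / np.sqrt(n // 2)))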

  3. Design and implementation of a UNIX based distributed computing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Love, J.S.; Michael, M.W.

    1994-12-31

    We have designed, implemented, and are running a corporate-wide distributed processing batch queue on a large number of networked workstations using the UNIX® operating system. Atlas Wireline researchers and scientists have used the system for over a year. The large increase in available computer power has greatly reduced the time required for nuclear and electromagnetic tool modeling. Use of remote distributed computing has simultaneously reduced computation costs and increased usable computer time. The system integrates equipment from different manufacturers, using various CPU architectures, distinct operating system revisions, and even multiple processors per machine. Various differences between the machines have to be accounted for in the master scheduler. These differences include shells, command sets, swap spaces, memory sizes, CPU sizes, and OS revision levels. Remote processing across a network must be performed in a manner that is seamless from the users' perspective. The system currently uses IBM RISC System/6000®, SPARCstation™, HP9000s700, HP9000s800, and DEC Alpha AXP™ machines. Each CPU in the network has its own speed rating, allowed working hours, and workload parameters. The system is designed so that all of the computers in the network can be optimally scheduled without adversely impacting the primary users of the machines. The increase in the total usable computational capacity by means of distributed batch computing can change corporate computing strategy. The integration of disparate computer platforms eliminates the need to buy one type of computer for computations, another for graphics, and yet another for day-to-day operations. It might be possible, for example, to meet all research and engineering computing needs with existing networked computers.

  4. A fast parallel 3D Poisson solver with longitudinal periodic and transverse open boundary conditions for space-charge simulations

    NASA Astrophysics Data System (ADS)

    Qiang, Ji

    2017-10-01

    A three-dimensional (3D) Poisson solver with longitudinal periodic and transverse open boundary conditions can have important applications in beam physics of particle accelerators. In this paper, we present a fast, efficient method to solve the Poisson equation using a spectral finite-difference method. This method uses a computational domain that contains the charged particle beam only and has a computational complexity of O(N_u log N_mode), where N_u is the total number of unknowns and N_mode is the maximum number of longitudinal or azimuthal modes. This saves both the computational time and the memory usage required when an artificial boundary condition is imposed on a large extended computational domain. The new 3D Poisson solver is parallelized using a message passing interface (MPI) on multi-processor computers and shows a reasonable parallel performance up to hundreds of processor cores.
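
    The paper's solver is a 3D spectral finite-difference method with mixed periodic/open boundaries. As a much simpler illustration of the spectral idea only, the sketch below solves a 1D fully periodic Poisson problem with an FFT; it is not the paper's algorithm.

      import numpy as np

      def poisson_periodic_1d(f, length=2 * np.pi):
          """Solve u'' = f on a periodic domain via FFT (zero-mean f assumed)."""
          n = f.size
          k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)   # angular wavenumbers
          f_hat = np.fft.fft(f)
          u_hat = np.zeros_like(f_hat)
          u_hat[1:] = -f_hat[1:] / k[1:] ** 2               # u_hat = -f_hat / k^2, k != 0
          return np.fft.ifft(u_hat).real

      x = np.linspace(0.0, 2 * np.pi, 128, endpoint=False)
      u = poisson_periodic_1d(-np.sin(x))                   # exact solution is sin(x)
      print(np.max(np.abs(u - np.sin(x))))                  # error near machine precision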

  5. A secure file manager for UNIX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeVries, R.G.

    1990-12-31

    The development of a secure file management system for a UNIX-based computer facility with supercomputers and workstations is described. Specifically, UNIX in its usual form does not address: (1) Operation which would satisfy rigorous security requirements. (2) Online space management in an environment where total data demands would be many times the actual online capacity. (3) Making the file management system part of a computer network in which users of any computer in the local network could retrieve data generated on any other computer in the network. The characteristics of UNIX can be exploited to develop a portable, secure file manager which would operate on computer systems ranging from workstations to supercomputers. Implementation considerations making unusual use of UNIX features, rather than requiring extensive internal system changes, are described, and implementation using the Cray Research Inc. UNICOS operating system is outlined.

  6. Class network routing

    DOEpatents

    Bhanot, Gyan [Princeton, NJ; Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2009-09-08

    Class network routing is implemented in a network such as a computer network comprising a plurality of parallel compute processors at nodes thereof. Class network routing allows a compute processor to broadcast a message to a range (one or more) of other compute processors in the computer network, such as processors in a column or a row. Normally this type of operation requires a separate message to be sent to each processor. With class network routing pursuant to the invention, a single message is sufficient, which generally reduces the total number of messages in the network as well as the latency to do a broadcast. Class network routing is also applied to dense matrix inversion algorithms on distributed memory parallel supercomputers with hardware class function (multicast) capability. This is achieved by exploiting the fact that the communication patterns of dense matrix inversion can be served by hardware class functions, which results in faster execution times.

  7. Energy intensity of computer manufacturing: hybrid assessment combining process and economic input-output methods.

    PubMed

    Williams, Eric

    2004-11-15

    The total energy and fossil fuels used in producing a desktop computer with 17-in. CRT monitor are estimated at 6400 megajoules (MJ) and 260 kg, respectively. This indicates that computer manufacturing is energy intensive: the ratio of fossil fuel use to product weight is 11, an order of magnitude larger than the factor of 1-2 for many other manufactured goods. This high energy intensity of manufacturing, combined with rapid turnover in computers, results in an annual life cycle energy burden that is surprisingly high: about 2600 MJ per year, 1.3 times that of a refrigerator. In contrast with many home appliances, life cycle energy use of a computer is dominated by production (81%) as opposed to operation (19%). Extension of usable lifespan (e.g. by reselling or upgrading) is thus a promising approach to mitigating energy impacts as well as other environmental burdens associated with manufacturing and disposal.
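
    As a rough consistency check, assuming a service life of about three years (an assumption made here, not a figure stated in the abstract), the production share of the annual burden comes out near the reported 81%:

      \frac{6400\ \mathrm{MJ}}{3\ \mathrm{yr}} \approx 2130\ \mathrm{MJ/yr},
      \qquad
      \frac{2130\ \mathrm{MJ/yr}}{2600\ \mathrm{MJ/yr}} \approx 0.82 \approx 81\%.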

  8. Summaries of Papers Presented at the Topical Meeting on Optical Computing Held in Incline Village, Nevada on March 16-18, 1987.

    DTIC Science & Technology

    1988-03-31

    Automation and Electrometry, USSR Academy of Sciences, Siberian Branch, under the direction of Academician Yu. E. Nesterikhin. A number of interesting...switched video surveillance or - studio networks where switch set-up time is unimportant. A totally different class of electrically controlled

  9. Key algorithms used in GR02: A computer simulation model for predicting tree and stand growth

    Treesearch

    Garrett A. Hughes; Paul E. Sendak; Paul E. Sendak

    1985-01-01

    GR02 is an individual tree, distance-independent simulation model for predicting tree and stand growth over time. It performs five major functions during each run: (1) updates diameter at breast height, (2) updates total height, (3) estimates mortality, (4) determines regeneration, and (5) updates crown class.

  10. A Field Study of Employee E-Learning Activity and Outcomes

    ERIC Educational Resources Information Center

    Brown, Kenneth G.

    2005-01-01

    Employees with access to e-learning courses targeting computer skills were tracked during a year-long study. Employees' perceptions of peer and supervisor support, job characteristics (such as workload and autonomy), and motivation to learn were used to predict total time spent using e-learning. Results suggest the importance of motivation to…

  11. Obesity and Breast Cancer

    DTIC Science & Technology

    2005-07-01

    serum INS, IGF-I and binding proteins, triglycerides, HDL-cholesterol, total and free steroids, sex hormone binding globulin, adiponectin, leptin, and...collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources...Bioinformatics, Biostatistics, Computer Science, Digital Mammography, Magnetic Resonance Imaging, Tissue Arrays, Gene Polymorphisms, Animal Models, Clinical

  12. Numerical simulation of long-duration blast wave evolution in confined facilities

    NASA Astrophysics Data System (ADS)

    Togashi, F.; Baum, J. D.; Mestreau, E.; Löhner, R.; Sunshine, D.

    2010-10-01

    The objective of this research effort was to investigate the quasi-steady flow field produced by explosives in confined facilities. In this effort we modeled tests in which a high explosive (HE) cylindrical charge was hung in the center of a room and detonated. The HEs used for the tests were C-4 and AFX 757. While C-4 is just slightly under-oxidized and is typically modeled as an ideal explosive, AFX 757 includes a significant percentage of aluminum particles, so long-time afterburning and energy release must be considered. The Lawrence Livermore National Laboratory (LLNL)-produced thermo-chemical equilibrium algorithm, “Cheetah”, was used to estimate the remaining burnable detonation products. From these remaining species, the afterburning energy was computed and added to the flow field. Computations of the detonation and afterburn of the two HEs in the confined multi-room facility were performed. The results demonstrate excellent agreement with available experimental data in terms of blast wave time of arrival, peak shock amplitude, reverberation, and total impulse (and hence total energy release, via either the detonation or afterburn processes).

  13. A Numerical Method of Calculating Propeller Noise Including Acoustic Nonlinear Effects

    NASA Technical Reports Server (NTRS)

    Korkan, K. D.

    1985-01-01

    Using the transonic flow field(s) generated by the NASPROP-E computer code for an eight-blade SR3-series propeller, a theoretical method is investigated to calculate the total noise values and frequency content in the acoustic near and far field without using the Ffowcs Williams-Hawkings equation. The flow field is numerically generated using an implicit three-dimensional Euler equation solver in weak conservation law form. Numerical damping is required by the differencing method for stability in three dimensions, and the influence of the damping on the calculated acoustic values is investigated. The acoustic near field is solved by integrating with respect to time the pressure oscillations induced at a stationary observer location. The acoustic far field is calculated from the near-field primitive variables as generated by the NASPROP-E computer code, using a method involving a perturbation velocity potential, as suggested by Hawkings, in the calculation of the acoustic pressure time-history at a specified far-field observer location. The methodologies described are valid for calculating total noise levels and are applicable to any propeller geometry for which a flow field solution is available.

  14. Application of two direct runoff prediction methods in Puerto Rico

    USGS Publications Warehouse

    Sepulveda, N.

    1997-01-01

    Two methods for predicting direct runoff from rainfall data were applied to several basins and the resulting hydrographs compared to measured values. The first method uses a geomorphology-based unit hydrograph to predict direct runoff through its convolution with the excess rainfall hyetograph. The second method shows how the resulting hydraulic routing flow equation from a kinematic wave approximation is solved using a spectral method based on the matrix representation of the spatial derivative with Chebyshev collocation and a fourth-order Runge-Kutta time discretization scheme. The calibrated Green-Ampt (GA) infiltration parameters are obtained by minimizing the sum, over several rainfall events, of absolute differences between the total excess rainfall volume computed from the GA equations and the total direct runoff volume computed from a hydrograph separation technique. The improvement made in predicting direct runoff using a geomorphology-based unit hydrograph with the ephemeral and perennial stream network instead of the strictly perennial stream network is negligible. The hydraulic routing scheme presented here is highly accurate in predicting the magnitude and time of the hydrograph peak although the much faster unit hydrograph method also yields reasonable results.
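
    The calibration step described above (choosing Green-Ampt parameters that minimize the sum, over rainfall events, of the absolute differences between GA excess-rainfall volume and observed direct-runoff volume) can be framed as a small optimization problem. The sketch below is illustrative only: the `green_ampt_excess_volume` function is a crude placeholder rather than a full Green-Ampt infiltration routine, and the event data are hypothetical.

    ```python
    # Illustrative sketch: calibrating Green-Ampt (GA) parameters by minimizing the
    # sum of absolute differences between GA excess-rainfall volume and observed
    # direct-runoff volume over several events. The GA routine here is a crude
    # placeholder, not the full infiltration model used in the report.
    import numpy as np
    from scipy.optimize import minimize

    def green_ampt_excess_volume(rain_mm, dt_hr, Ks, psi_theta):
        """Very simplified stand-in: incremental rainfall minus a capped infiltration.
        rain_mm: array of incremental rainfall depths (mm) per time step."""
        # Hypothetical infiltration capacity per step (mm): Ks*dt plus a decaying storage term
        f_cap = Ks * dt_hr + psi_theta / (1.0 + np.arange(len(rain_mm)))
        excess = np.maximum(rain_mm - f_cap, 0.0)
        return excess.sum()

    # Hypothetical events: (incremental rainfall hyetograph [mm], observed direct-runoff volume [mm])
    events = [
        (np.array([2.0, 8.0, 15.0, 6.0, 1.0]), 12.0),
        (np.array([1.0, 4.0, 9.0, 3.0]), 4.5),
        (np.array([5.0, 12.0, 20.0, 10.0, 2.0]), 28.0),
    ]
    dt_hr = 0.5

    def objective(params):
        Ks, psi_theta = params
        return sum(abs(green_ampt_excess_volume(rain, dt_hr, Ks, psi_theta) - v_obs)
                   for rain, v_obs in events)

    res = minimize(objective, x0=[5.0, 10.0], method="Nelder-Mead")
    print("Calibrated (Ks, psi*dtheta):", res.x, "objective:", res.fun)
    ```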

  15. Characterization and Computational Modeling of Minor Phases in Alloy LSHR

    NASA Technical Reports Server (NTRS)

    Jou, Herng-Jeng; Olson, Gregory; Gabb, Timothy; Garg, Anita; Miller, Derek

    2012-01-01

    The minor phases of powder metallurgy disk superalloy LSHR were studied. Samples were consistently heat treated at three different temperatures for long times to approach equilibrium. Additional heat treatments were also performed for shorter times, to assess minor phase kinetics in non-equilibrium conditions. Minor phases including MC carbides, M23C6 carbides, M3B2 borides, and sigma were identified. Their average sizes and total area fractions were determined. CALPHAD thermodynamics databases and PrecipiCalc™, a computational precipitation modeling tool, were employed with Ni-base thermodynamics and diffusion databases to model and simulate the phase microstructural evolution observed in the experiments with an objective to identify the model limitations and the directions of model enhancement.

  16. Kinematics and dynamics of vortex rings in a tube

    NASA Technical Reports Server (NTRS)

    Brasseur, J. G.

    1979-01-01

    Kinematic theory and flow visualization experiments were combined to examine the dynamic processes which control the evolution of vortex rings from very low to very high Reynolds numbers, and to assess the effects of the wall as a vortex ring travels up a tube. The kinematic relationships among the size, shape, speed, and strength of vortex rings in a tube were computed from the theory. Relatively simple flow visualization measurements were used to calculate the total circulation of a vortex ring at a given time. Using this method, the strength was computed and plotted as a function of time for experimentally produced vortex rings. Reynolds number relationships are established and quantitative differences among the three Reynolds number groups are discussed.

  17. A new computer-based Farnsworth Munsell 100-hue test for evaluation of color vision.

    PubMed

    Ghose, Supriyo; Parmar, Twinkle; Dada, Tanuj; Vanathi, Murugesan; Sharma, Sourabh

    2014-08-01

    To evaluate a computer-based Farnsworth-Munsell (FM) 100-hue test and compare it with a manual FM 100-hue test in normal and congenital color-deficient individuals. Fifty color-deficient subjects and 200 normal subjects with a best-corrected visual acuity ≥ 6/12 were compared using a standard manual FM 100-hue test and a computer-based FM 100-hue test under standard operating conditions as recommended by the manufacturer after initial trial testing. Parameters evaluated were total error scores (TES), type of defect and testing time. Pearson's correlation coefficient was used to determine the relationship between the test scores. Cohen's kappa was used to assess agreement of color defect classification between the two tests. A receiver operating characteristic curve was used to determine the optimal cut-off score for the computer-based FM 100-hue test. The mean time was 16 ± 1.5 (range 6-20) min for the manual FM 100-hue test and 7.4 ± 1.4 (range 5-13) min for the computer-based FM 100-hue test, thus reducing testing time to <50% (p < 0.05). For grading color discrimination, Pearson's correlation coefficient for TES between the two tests was 0.91 (p < 0.001). For color defect classification, Cohen's agreement coefficient was 0.98 (p < 0.01). The computer-based FM 100-hue test is an effective and rapid method for detecting, classifying and grading color vision anomalies.

  18. Estimating dust production rate of carbon-rich stars in the Small Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Nanni, A.; Marigo, P.; Groenewegen, M. A. T.; Aringer, B.; Pastorelli, G.; Rubele, S.; Girardi, L.; Bressan, A.; Bladh, S.

    We compute a grid of spectra describing dusty Circumstellar Envelopes of Thermally Pulsing Asymptotic Giant Branch carbon-rich stars by employing a physically grounded description for dust growth. The optical constants for carbon dust have been selected in order to reproduce simultaneously the most important color-color diagrams in the Near and Mid Infrared bands. We fit the Spectral Energy Distribution of ≈2000 carbon-rich stars in the Small Magellanic Cloud and we compute their total dust production rate. We compare our results with the ones in the literature. Different choices of the dust-to-gas ratio and outflow expansion velocity adopted in different works yield, in some cases, a total dust budget about three times lower than the one derived from our scheme, with the same optical data set for carbon dust.

  19. A generalized Condat's algorithm of 1D total variation regularization

    NASA Astrophysics Data System (ADS)

    Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly

    2017-09-01

    A common way to solve the denoising problem is to utilize total variation (TV) regularization. Many efficient numerical algorithms have been developed for solving the TV regularization problem. Condat described a fast direct algorithm to compute the processed 1D signal. There also exists a direct linear-time algorithm for 1D TV denoising, referred to as the taut string algorithm. Condat's algorithm is based on a dual problem to the 1D TV regularization. In this paper, we propose a variant of Condat's algorithm based on the direct 1D TV regularization problem. Using Condat's algorithm together with the taut string approach leads to a clear geometric description of the extremal function. Computer simulation results are provided to illustrate the performance of the proposed algorithm for restoration of degraded signals.
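
    As a point of reference for the 1D TV problem discussed above, the snippet below denoises a piecewise-constant signal with scikit-image's Chambolle-type TV denoiser. This is a generic TV solver used only to make the problem setting concrete; it is not an implementation of Condat's direct algorithm or of the taut string method.

    ```python
    # Illustrative 1D total-variation denoising using a generic TV solver
    # (Chambolle-type), shown only to make the problem setting concrete.
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    rng = np.random.default_rng(0)
    # Piecewise-constant ground truth plus Gaussian noise
    truth = np.concatenate([np.full(100, 0.0), np.full(100, 1.0), np.full(100, 0.3)])
    noisy = truth + 0.15 * rng.standard_normal(truth.size)

    denoised = denoise_tv_chambolle(noisy, weight=0.2)

    print("noisy    RMSE:", np.sqrt(np.mean((noisy - truth) ** 2)))
    print("denoised RMSE:", np.sqrt(np.mean((denoised - truth) ** 2)))
    ```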

  20. Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud

    PubMed Central

    Florence, A. Paulin; Shanthi, V.; Simon, C. B. Sunil

    2016-01-01

    Cloud computing is a new technology which supports resource sharing on a “Pay as you go” basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is a part of IaaS and the entire set of computational requests is to be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme is also used for this purpose. In this paper we have devised a methodology which analyzes the behavior of the given cloud request and identifies the associated type of algorithm. Once the type of algorithm is identified, its time complexity is calculated from its asymptotic notation. Using a best-fit strategy the appropriate host is identified and the incoming job is allocated to it. From the estimated time complexity, the required clock frequency of the host is determined. The CPU frequency is then scaled up or down using the DVFS scheme, enabling energy savings of up to 55% of total power consumption. PMID:27239551
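
    The frequency-selection step described above (estimate the job's time complexity, derive the clock rate needed to finish within a deadline, then pick an available DVFS state) can be sketched roughly as below. The complexity classes, cycle constants, and P-state table are hypothetical placeholders, not values from the paper.

    ```python
    # Rough sketch of DVFS frequency selection from an estimated time complexity.
    # All constants (cycles per abstract operation, available P-states, deadline)
    # are hypothetical; the paper's actual cost model is not reproduced here.
    import math

    P_STATES_GHZ = [0.8, 1.2, 1.6, 2.0, 2.4]   # hypothetical available CPU frequencies
    CYCLES_PER_OP = 4.0                        # hypothetical cycles per abstract operation

    def estimated_ops(n, complexity):
        """Estimate the operation count from an asymptotic complexity class."""
        return {
            "O(n)":       n,
            "O(n log n)": n * math.log2(max(n, 2)),
            "O(n^2)":     n ** 2,
        }[complexity]

    def select_frequency(n, complexity, deadline_s):
        """Pick the lowest P-state that still meets the deadline (scale up if needed)."""
        required_hz = estimated_ops(n, complexity) * CYCLES_PER_OP / deadline_s
        for f in P_STATES_GHZ:
            if f * 1e9 >= required_hz:
                return f
        return P_STATES_GHZ[-1]   # saturate at the highest frequency

    print(select_frequency(n=10_000, complexity="O(n log n)", deadline_s=0.5))  # low P-state suffices
    print(select_frequency(n=50_000, complexity="O(n^2)",     deadline_s=0.5))  # highest P-state
    ```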

  1. Simulating the Gradually Deteriorating Performance of an RTG

    NASA Technical Reports Server (NTRS)

    Wood, Eric G.; Ewell, Richard C.; Patel, Jagdish; Hanks, David R.; Lozano, Juan A.; Snyder, G. Jeffrey; Noon, Larry

    2008-01-01

    Degra (now in version 3) is a computer program that simulates the performance of a radioisotope thermoelectric generator (RTG) over its lifetime. Degra is provided with a graphical user interface that is used to edit input parameters that describe the initial state of the RTG and the time-varying loads and environment to which it will be exposed. Performance is computed by modeling the flows of heat from the radioactive source and through the thermocouples, also allowing for losses, to determine the temperature drop across the thermocouples. This temperature drop is used to determine the open-circuit voltage, electrical resistance, and thermal conductance of the thermocouples. Output power can then be computed by relating the open-circuit voltage and the electrical resistance of the thermocouples to a specified time-varying load voltage. Degra accounts for the gradual deterioration of performance attributable primarily to decay of the radioactive source and secondarily to gradual deterioration of the thermoelectric material. To provide guidance to an RTG designer, given a minimum of input, Degra computes the dimensions, masses, and thermal conductances of important internal structures as well as the overall external dimensions and total mass.
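
    The power calculation summarized above (the thermocouple string's open-circuit voltage and internal resistance driving a specified load voltage) reduces to simple circuit relations. The sketch below uses made-up numbers and is not Degra's actual model.

    ```python
    # Simple circuit relation behind an RTG power estimate: given the thermocouple
    # string's open-circuit voltage V_oc and internal resistance R_int, and a
    # specified load (bus) voltage V_load, the delivered current and power follow
    # from Ohm's law. Numbers below are purely illustrative, not Degra inputs.
    def rtg_output_power(v_oc, r_int, v_load):
        current = (v_oc - v_load) / r_int      # A, current driven into the load
        return v_load * current                # W, electrical power delivered

    # Hypothetical beginning-of-life versus degraded end-of-life states
    print(rtg_output_power(v_oc=60.0, r_int=2.0, v_load=30.0))  # ~450 W
    print(rtg_output_power(v_oc=52.0, r_int=2.6, v_load=30.0))  # ~254 W after degradation
    ```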

  2. Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud.

    PubMed

    Florence, A Paulin; Shanthi, V; Simon, C B Sunil

    2016-01-01

    Cloud computing is a new technology which supports resource sharing on a "Pay as you go" basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is a part of IaaS and the entire set of computational requests is to be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme is also used for this purpose. In this paper we have devised a methodology which analyzes the behavior of the given cloud request and identifies the associated type of algorithm. Once the type of algorithm is identified, its time complexity is calculated from its asymptotic notation. Using a best-fit strategy the appropriate host is identified and the incoming job is allocated to it. From the estimated time complexity, the required clock frequency of the host is determined. The CPU frequency is then scaled up or down using the DVFS scheme, enabling energy savings of up to 55% of total power consumption.

  3. Computer simulation of two-dimensional unsteady flows in estuaries and embayments by the method of characteristics : basic theory and the formulation of the numerical method

    USGS Publications Warehouse

    Lai, Chintu

    1977-01-01

    Two-dimensional unsteady flows of homogeneous density in estuaries and embayments can be described by hyperbolic, quasi-linear partial differential equations involving three dependent and three independent variables. A linear combination of these equations leads to a parametric equation of characteristic form, which consists of two parts: total differentiation along the bicharacteristics and partial differentiation in space. For its numerical solution, the specified-time-interval scheme has been used. The unknown, partial space-derivative terms can be eliminated first by suitable combinations of difference equations, converted from the corresponding differential forms and written along four selected bicharacteristics and a streamline. The other unknowns are thus made solvable from the known variables on the current time plane. The computation is carried to second-order accuracy by using the trapezoidal rule of integration. Means to handle complex boundary conditions are developed for practical application. Computer programs have been written and a mathematical model has been constructed for flow simulation. The favorable computer outputs suggest that further exploration and development of the model are worthwhile. (Woodard-USGS)

  4. Reliability enhancement of Navier-Stokes codes through convergence enhancement

    NASA Technical Reports Server (NTRS)

    Choi, K.-Y.; Dulikravich, G. S.

    1993-01-01

    Reduction of the total computing time required by an iterative algorithm for solving the Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity Based DMR or SBMR method, that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.

  5. Reliability enhancement of Navier-Stokes codes through convergence enhancement

    NASA Astrophysics Data System (ADS)

    Choi, K.-Y.; Dulikravich, G. S.

    1993-11-01

    Reduction of the total computing time required by an iterative algorithm for solving the Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity Based DMR or SBMR method, that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.

  6. [Sedentary behaviour of 13-year-olds and its association with selected health behaviours, parenting practices and body mass].

    PubMed

    Jodkowska, Maria; Tabak, Izabela; Oblacińska, Anna; Stalmach, Magdalena

    2013-01-01

    1. To estimate the time spent in sedentary behaviour (watching TV, using the computer, doing homework). 2. To assess the link between the total time spent on watching TV, using the computer and doing homework and dietary habits, physical activity, parental practices and body mass. A cross-sectional study was conducted in Poland in 2008 among 13-year-olds (n=600). They self-reported their time spent on TV viewing, computer use and homework. Their dietary behaviours, physical activity (MVPA) and parenting practices were also self-reported. Height and weight were measured by school nurses. Descriptive statistics and correlation were used in this analysis. The mean time spent watching television on school days was 2.3 hours for girls and 2.2 for boys. Boys spent significantly more time using the computer than girls (1.8 and 1.5 hours, respectively), while girls spent longer doing homework (1.7 and 1.3 hours, respectively). Mean screen time was about 4 hours on school days and about 6 hours on weekend days, and was significantly longer for boys on weekdays. Screen time was positively associated with intake of sweets, chips, soft drinks and "fast food" and with eating meals while watching TV, and negatively with regularity of meals and parental supervision. There was no correlation between screen time and physical activity or body mass. Sedentary behaviours and physical activity are not competing behaviours in Polish teenagers, but their relationship with unhealthy dietary patterns may lead to the development of obesity. Good parental practices, with supervision by both mother and father, seem to be crucial for limiting screen time in their children. Parents should become aware that monitoring their children's lifestyle is a crucial element of health education in the prevention of civilization diseases. This is a task for both healthcare workers and educational staff.

  7. Training in pathology informatics: implementation at the University of Pittsburgh.

    PubMed

    Harrison, James H; Stewart, Jimmie

    2003-08-01

    Pathology informatics is generally recognized as an important component of pathology training, but the scope, form, and goals of informatics training vary substantially between pathology residency programs. The Training and Education Committee of the Association for Pathology Informatics (API TEC) has developed a standard set of knowledge and skills objectives that are recommended for inclusion in pathology informatics training and may serve to standardize and formalize training programs in this area. The University of Pittsburgh (Pittsburgh, Pa) core rotation in pathology informatics includes most of these goals and is offered as an implementation model for pathology informatics training. The core rotation in pathology informatics is a 3-week, full-time rotation including didactic sessions and hands-on laboratories. Topics include general desktop computing and the Internet, but the primary focus of the rotation is vocabulary and concepts related to enterprise and pathology information systems, pathology practice, and research. The total contact time is 63 hours, and a total of 19 faculty and staff contribute. Pretests and posttests are given at the start and end of the rotation. Performance and course evaluation data were collected for 3 years (a total of 21 residents). The rotation implements 84% of the knowledge objectives and 94% of the skills objectives recommended by the API TEC. Residents scored an average of about 20% on the pretest and about 70% on the posttest for an average increase during the course of 50%. Posttest scores did not correlate with pretest scores or self-assessed computer skill level. The size of the pretest/posttest difference correlated negatively with the pretest scores and self-assessed computing skill level. Pretest scores were generally low regardless of whether residents were familiar with desktop computing and productivity applications, indicating that even residents who are computer "savvy" have limited knowledge of pathology informatics topics. Posttest scores showed that all residents' knowledge increased substantially during the course and that residents who were computing novices were not disadvantaged. In fact, novices tended to have higher pretest/posttest differences, indicating that the rotation effectively supported initially less knowledgeable residents in "catching up" to their peers and achieving an appropriate competency level. This rotation provides a formal training model that implements the API TEC recommendations with demonstrated success.

  8. Automated, computer-guided PASI measurements by digital image analysis versus conventional physicians' PASI calculations: study protocol for a comparative, single-centre, observational study.

    PubMed

    Fink, Christine; Uhlmann, Lorenz; Klose, Christina; Haenssle, Holger A

    2018-05-17

    Reliable and accurate assessment of severity in psoriasis is very important in order to meet indication criteria for initiation of systemic treatment or to evaluate treatment efficacy. The most acknowledged tool for measuring the extent of psoriatic skin changes is the Psoriasis Area and Severity Index (PASI). However, the calculation of PASI can be tedious and subjective and high intraobserver and interobserver variability is an important concern. Therefore, there is a great need for a standardised and objective method that guarantees a reproducible PASI calculation. Within this study we will investigate the precision and reproducibility of automated, computer-guided PASI measurements in comparison to trained physicians to address these limitations. Non-interventional analyses of PASI calculations by either physicians in a prospective versus retrospective setting or an automated computer-guided algorithm in 120 patients with plaque psoriasis. All retrospective PASI calculations by physicians or by the computer algorithm are based on total body digital images. The primary objective of this study is comparison of automated computer-guided PASI measurements by means of digital image analysis versus conventional, prospective or retrospective physicians' PASI assessments. Secondary endpoints include (1) the assessment of physicians' interobserver variance in PASI calculations, (2) the assessment of physicians' intraobserver variance in PASI assessments of the same patients' images after a time interval of at least 4 weeks, (3) the assessment of the deviation between physicians' prospective versus retrospective PASI calculations, and (4) the reproducibility of automated computer-guided PASI measurements by assessment of two sets of total body digital images of the same patients taken at one time point. Ethical approval was provided by the Ethics Committee of the Medical Faculty of the University of Heidelberg (ethics approval number S-379/2016). DRKS00011818; Results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  9. Aerothermodynamic Design Sensitivities for a Reacting Gas Flow Solver on an Unstructured Mesh Using a Discrete Adjoint Formulation

    NASA Astrophysics Data System (ADS)

    Thompson, Kyle Bonner

    An algorithm is described to efficiently compute aerothermodynamic design sensitivities using a decoupled variable set. In a conventional approach to computing design sensitivities for reacting flows, the species continuity equations are fully coupled to the conservation laws for momentum and energy. In this algorithm, the species continuity equations are solved separately from the mixture continuity, momentum, and total energy equations. This decoupling simplifies the implicit system, so that the flow solver can be made significantly more efficient, with very little penalty on overall scheme robustness. Most importantly, the computational cost of the point implicit relaxation is shown to scale linearly with the number of species for the decoupled system, whereas the fully coupled approach scales quadratically. Also, the decoupled method significantly reduces the cost in wall time and memory in comparison to the fully coupled approach. This decoupled approach for computing design sensitivities with the adjoint system is demonstrated for inviscid flow in chemical non-equilibrium around a re-entry vehicle with a retro-firing annular nozzle. The sensitivities of the surface temperature and mass flow rate through the nozzle plenum are computed with respect to plenum conditions and verified against sensitivities computed using a complex-variable finite-difference approach. The decoupled scheme significantly reduces the computational time and memory required to complete the optimization, making this an attractive method for high-fidelity design of hypersonic vehicles.

  10. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and the components were summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
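
    The weighting scheme described above (summing the two Markov-component FIMs, either equally weighted or weighted by the average probability of being awake or asleep) and a D-optimality score can be sketched with plain linear algebra. The matrices below are random positive-definite stand-ins, not the sleep model's actual FIMs, and the probabilities are hypothetical.

    ```python
    # Sketch of combining per-component Fisher information matrices (FIMs) into a
    # total FIM and scoring a design by the D-criterion. The component FIMs here
    # are random positive-definite stand-ins, not those of the sleep model.
    import numpy as np

    rng = np.random.default_rng(1)
    n_par = 6

    def random_fim(scale):
        a = rng.standard_normal((n_par, n_par))
        return scale * (a @ a.T + n_par * np.eye(n_par))   # symmetric positive definite

    fim_awake, fim_asleep = random_fim(1.0), random_fim(0.5)

    # Equal weighting versus weighting by the average probability of each state
    weights = {
        "equal": (0.5, 0.5),
        "probability": (0.62, 0.38),   # hypothetical average P(awake), P(asleep)
    }

    for name, (wa, ws) in weights.items():
        fim_total = wa * fim_awake + ws * fim_asleep
        # D-criterion: determinant of the total FIM, normalized per parameter
        d_crit = np.linalg.det(fim_total) ** (1.0 / n_par)
        print(f"{name:12s} weighting: normalized D-criterion = {d_crit:.3f}")
    ```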

  11. Systolic array IC for genetic computation

    NASA Technical Reports Server (NTRS)

    Anderson, D.

    1991-01-01

    Measuring similarities between large sequences of genetic information is a formidable task requiring enormous amounts of computer time. Geneticists claim that nearly two months of CRAY-2 time are required to run a single comparison of the known database against the new bases that will be found this year, and more than a CRAY-2 year for next year's genetic discoveries, and so on. The DNA IC, designed at HP-ICBD in cooperation with the California Institute of Technology and the Jet Propulsion Laboratory, is being implemented in order to move the task of genetic comparison onto workstations and personal computers, while vastly improving performance. The chip is a systolic (pumped) array comprised of 16 processors, control logic, and global RAM, totaling 400,000 FETs. At 12 MHz, each chip performs 2.7 billion 16-bit operations per second. Using 35 of these chips in series on one PC board (performing nearly 100 billion operations per second), a sequence of 560 bases can be compared against the eventual total genome of 3 billion bases, in minutes--on a personal computer. While the designed purpose of the DNA chip is genetic research, other disciplines requiring similarity measurements between strings of 7-bit encoded data could make use of this chip as well. Cryptography and speech recognition are two examples. A mix of full custom design and standard cells, in CMOS34, was used to achieve these goals. Innovative test methods were developed to enhance controllability and observability in the array. This paper describes these techniques as well as the chip's functionality. This chip was designed in the 1989-90 timeframe.

  12. New computing systems and their impact on structural analysis and design

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1989-01-01

    A review is given of the recent advances in computer technology that are likely to impact structural analysis and design. The computational needs for future structures technology are described. The characteristics of new and projected computing systems are summarized. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed, and a novel partitioning strategy is outlined for maximizing the degree of parallelism. The strategy is designed for computers with a shared memory and a small number of powerful processors (or a small number of clusters of medium-range processors). It is based on approximating the response of the structure by a combination of symmetric and antisymmetric response vectors, each obtained using a fraction of the degrees of freedom of the original finite element model. The strategy was implemented on the CRAY X-MP/4 and the Alliant FX/8 computers. For nonlinear dynamic problems on the CRAY X-MP with four CPUs, it resulted in an order of magnitude reduction in total analysis time, compared with the direct analysis on a single-CPU CRAY X-MP machine.

  13. Real-time Tsunami Inundation Prediction Using High Performance Computers

    NASA Astrophysics Data System (ADS)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently, off-shore tsunami observation stations based on cabled ocean bottom pressure gauges are actively being deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines for disaster mitigation purposes. To receive real benefits from these observations, real-time analysis techniques that make effective use of these data are necessary. A representative study was made by Tsushima et al. (2009), who proposed a method to provide instant tsunami source prediction based on incoming tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of the linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although solving the non-linear shallow water equations for inundation prediction is computationally demanding, it has become feasible through recent developments in high-performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS by using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolutions range from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction for up to 2 hours after the earthquake took about 2 minutes of computation, which would be sufficient for practical tsunami inundation prediction. In the presentation, the computational performance of our faster-than-real-time tsunami inundation model will be shown, and the tsunami source analysis preferable for accurate inundation prediction will also be discussed.
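
    To make the numerical ingredients concrete, the following is a minimal 1D linear long-wave (shallow water) solver on a staggered grid with a leap-frog-style update. It omits grid nesting, non-linear terms, wetting and drying, and everything else an operational inundation model needs, and all parameters are illustrative.

    ```python
    # Minimal 1D linear shallow-water solver on a staggered grid with an explicit
    # leap-frog-style time stepping. Purely illustrative: no nested grids, no
    # non-linear or inundation terms, uniform depth, closed (reflective) ends.
    import numpy as np

    g, depth = 9.81, 4000.0             # gravity (m/s^2), uniform ocean depth (m)
    dx, nx = 2000.0, 500                # grid spacing (m), number of cells
    dt = 0.5 * dx / np.sqrt(g * depth)  # time step satisfying the CFL condition
    nsteps = 2000

    eta = np.zeros(nx)                  # free-surface elevation at cell centers
    u = np.zeros(nx + 1)                # velocity at cell faces (staggered)
    eta += 1.0 * np.exp(-((np.arange(nx) - nx // 2) * dx / 50e3) ** 2)  # initial hump

    for _ in range(nsteps):
        # Momentum update from the surface gradient (interior faces only)
        u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])
        # Continuity update from the flux divergence
        eta -= depth * dt / dx * (u[1:] - u[:-1])

    print("max elevation after run (m):", eta.max())
    ```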

  14. Some single-machine scheduling problems with learning effects and two competing agents.

    PubMed

    Li, Hongjie; Li, Zeyuan; Yin, Yunqiang

    2014-01-01

    This study considers a scheduling environment in which there are two agents and a set of jobs, each of which belongs to one of the two agents and whose actual processing time is defined as a decreasing linear function of its starting time. Each of the two agents competes to process its respective jobs on a single machine and has its own scheduling objective to optimize. The objective is to assign the jobs so that the resulting schedule performs well with respect to the objectives of both agents. The objective functions addressed in this study include the maximum cost, the total weighted completion time, and the discounted total weighted completion time. We investigate three problems arising from different combinations of the objectives of the two agents. The computational complexity of the problems is discussed and solution algorithms are presented where possible.
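
    To illustrate the job model above (actual processing time as a decreasing linear function of the starting time) and one of the objectives (total weighted completion time), here is a small evaluation routine for a given job sequence. The truncation rule that keeps processing times positive and the sample data are assumptions for illustration, not the paper's exact formulation.

    ```python
    # Evaluate a single-machine schedule in which job j's actual processing time is
    # a decreasing linear function of its start time t: p_j(t) = p_j - b_j * t.
    # The max(..., eps) guard and the sample jobs are illustrative assumptions only.
    def total_weighted_completion_time(sequence, eps=0.1):
        """sequence: list of (base time p_j, decay rate b_j, weight w_j)."""
        t, objective = 0.0, 0.0
        for p, b, w in sequence:
            actual = max(p - b * t, eps)   # actual processing time at start time t
            t += actual                    # completion time of this job
            objective += w * t
        return objective

    jobs = [(5.0, 0.10, 2.0), (3.0, 0.05, 1.0), (8.0, 0.20, 3.0)]
    print(total_weighted_completion_time(jobs))                                      # given order
    print(total_weighted_completion_time(sorted(jobs, key=lambda j: j[0] / j[2])))   # WSPT-style order
    ```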

  15. Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)

    NASA Astrophysics Data System (ADS)

    Crowell, B. W.; Bock, Y.; Squibb, M. B.

    2010-12-01

    Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic), and because of this, are ideal for real-time monitoring of fault slip in the region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks, but only measure accelerations or velocities, putting them at a supreme disadvantage for ascertaining the full extent of slip during a large earthquake in real-time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan’s GEONET consisting of about 1200 stations during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore Hokkaido Island and the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.

  16. Investigation of the Unsteady Total Pressure Profile Corresponding to Counter-Rotating Vortices in an Internal Flow Application

    NASA Astrophysics Data System (ADS)

    Gordon, Kathryn; Morris, Scott; Jemcov, Aleksandar; Cameron, Joshua

    2013-11-01

    The interaction of components in a compressible, internal flow often results in unsteady interactions between the wakes and moving blades. A prime example in which this flow feature is of interest is the interaction of the downstream rotor blades in a transonic axial compressor with the wake vortices shed from the upstream inlet guide vane (IGV). Previous work shows that a double row of counter-rotating vortices convects downstream into the rotor passage as a result of the rotor blade bow shock impinging on the IGV. The rotor-relative time-mean total pressure distribution has a region of high total pressure corresponding to the pathline of the vortices. The present work focuses on the relationship between the magnitude of the time-mean rotor-relative total pressure profile and the axial spacing between the IGV and the rotor. A survey of different axial gap sizes is performed in a two-dimensional computational study to obtain the sensitivity of the pressure profile amplitude to IGV-rotor axial spacing.

  17. [Cost analysis for navigation in knee endoprosthetics].

    PubMed

    Cerha, O; Kirschner, S; Günther, K-P; Lützner, J

    2009-12-01

    Total knee arthroplasty (TKA) is one of the most frequent procedures in orthopaedic surgery. The outcome depends on a range of factors including alignment of the leg and the positioning of the implant, in addition to patient-associated factors. Computer-assisted navigation systems can improve the restoration of a neutral leg alignment. This procedure has been established especially in Europe and North America. The additional expenses are not reimbursed in the German DRG system (Diagnosis Related Groups). In the present study a cost analysis of computer-assisted TKA compared to the conventional technique was performed. The acquisition expenses of various navigation systems (5 and 10 year depreciation), annual costs for maintenance and software updates, as well as the accompanying costs per operation (consumables, additional operating time) were considered. The additional operating time was determined on the basis of a meta-analysis of the current literature. Situations with 25, 50, 100, 200 and 500 computer-assisted TKAs per year were simulated. The amount of the incremental costs of computer-assisted TKA depends mainly on the annual volume and the additional operating time. A relevant decrease of the incremental costs was detected between 50 and 100 procedures per year. In a model with 100 computer-assisted TKAs per year, an additional operating time of 14 min and a 10 year depreciation of the investment costs, the incremental expenses amount to 300-395 depending on the navigation system. Computer-assisted TKA is associated with additional costs. From an economic point of view, a volume of more than 50 procedures per year appears to be favourable. The cost-effectiveness could be estimated if long-term results show a reduction in revisions or a better clinical outcome.
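
    The cost structure described above (annualized acquisition cost plus maintenance, spread over the annual case volume, plus per-case consumables and the cost of extra operating time) can be written as a one-line model. Every number below is a hypothetical placeholder, not a figure from the study, and no currency is implied.

    ```python
    # Back-of-the-envelope incremental cost per navigated TKA: annualized system
    # costs divided by annual case volume, plus per-case consumables and the cost
    # of extra operating time. All inputs are hypothetical placeholders.
    def incremental_cost_per_case(acquisition, depreciation_years, maintenance_per_year,
                                  cases_per_year, consumables_per_case,
                                  extra_or_minutes, or_cost_per_minute):
        fixed_per_case = (acquisition / depreciation_years + maintenance_per_year) / cases_per_year
        variable_per_case = consumables_per_case + extra_or_minutes * or_cost_per_minute
        return fixed_per_case + variable_per_case

    for n in (25, 50, 100, 200, 500):   # annual volumes considered in the analysis
        cost = incremental_cost_per_case(acquisition=150_000, depreciation_years=10,
                                         maintenance_per_year=10_000, cases_per_year=n,
                                         consumables_per_case=120,
                                         extra_or_minutes=14, or_cost_per_minute=8)
        print(f"{n:3d} cases/year -> incremental cost per case ~ {cost:,.0f}")
    ```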

  18. Addressing the computational cost of large EIT solutions.

    PubMed

    Boyle, Alistair; Borsic, Andrea; Adler, Andy

    2012-05-01

    Electrical impedance tomography (EIT) is a soft field tomography modality based on the application of electric current to a body and measurement of voltages through electrodes at the boundary. The interior conductivity is reconstructed on a discrete representation of the domain using a finite-element method (FEM) mesh and a parametrization of that domain. The reconstruction requires a sequence of numerically intensive calculations. There is strong interest in reducing the cost of these calculations. An improvement in the compute time for current problems would encourage further exploration of computationally challenging problems such as the incorporation of time series data, widespread adoption of three-dimensional simulations and correlation with other modalities such as CT and ultrasound. Multicore processors offer an opportunity to reduce EIT computation times but may require some restructuring of the underlying algorithms to maximize the use of available resources. This work profiles two EIT software packages (EIDORS and NDRM) to experimentally determine where the computational costs arise in EIT as problems scale. Sparse matrix solvers, a key component for the FEM forward problem and sensitivity estimates in the inverse problem, are shown to take a considerable portion of the total compute time in these packages. A sparse matrix solver performance measurement tool, Meagre-Crowd, is developed to interface with a variety of solvers and compare their performance over a range of two- and three-dimensional problems of increasing node density. Results show that distributed sparse matrix solvers that operate on multiple cores are advantageous up to a limit that increases as the node density increases. We recommend a selection procedure to find a solver and hardware arrangement matched to the problem and provide guidance and tools to perform that selection.
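
    The profiling observation above (sparse linear solves dominating total compute time as the FEM problem scales) is easy to reproduce in miniature. The sketch below times a SciPy sparse direct solve on a 2D Laplacian of increasing size, which is a generic FEM-like stand-in rather than an actual EIT forward problem or either of the profiled packages.

    ```python
    # Miniature version of the profiling exercise: time a sparse direct solve on a
    # 2D Laplacian (a generic FEM-like sparse system, not a real EIT forward model)
    # as the number of unknowns grows.
    import time
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def laplacian_2d(n):
        """Standard 5-point Laplacian on an n x n grid (n^2 unknowns)."""
        main = 4.0 * np.ones(n * n)
        side = -1.0 * np.ones(n * n - 1)
        side[np.arange(1, n * n) % n == 0] = 0.0   # no coupling across grid rows
        updown = -1.0 * np.ones(n * n - n)
        return sp.diags([main, side, side, updown, updown],
                        [0, -1, 1, -n, n], format="csc")

    for n in (50, 100, 200):
        A = laplacian_2d(n)
        b = np.ones(A.shape[0])
        t0 = time.perf_counter()
        x = spla.spsolve(A, b)
        print(f"{A.shape[0]:6d} unknowns: direct solve took {time.perf_counter() - t0:.3f} s")
    ```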

  19. Computer program for analysis of hemodynamic response to head-up tilt test

    NASA Astrophysics Data System (ADS)

    Świątek, Eliza; Cybulski, Gerard; Koźluk, Edward; Piątkowska, Agnieszka; Niewiadomski, Wiktor

    2014-11-01

    The aim of this work was to create a computer program, written in the MATLAB environment, which enables the visualization and analysis of hemodynamic parameters recorded during a passive tilt test using the CNS Task Force Monitor System. The application was created to help in the assessment of the relationship between the values and dynamics of changes of the selected parameters and the risk of orthostatic syncope. The signal analysis included: R-R intervals (RRI), heart rate (HR), systolic blood pressure (sBP), diastolic blood pressure (dBP), mean blood pressure (mBP), stroke volume (SV), stroke index (SI), cardiac output (CO), cardiac index (CI), total peripheral resistance (TPR), total peripheral resistance index (TPRI), ventricular ejection time (LVET) and thoracic fluid content (TFC). The program enables the user to visualize waveforms for a selected parameter and to perform smoothing with selected moving-average parameters. It allows one to construct a graph of means for any range, and a Poincaré plot for a selected time range. The program automatically determines the average value of the parameter before the tilt, its minimum and maximum values immediately after the change of position, and the times of their occurrence. It is possible to correct the automatically detected points manually. For the RR interval, it determines the acceleration index (AI) and the brake index (BI). It is possible to save the calculated values to an XLS file with a name specified by the user. The application has a user-friendly graphical interface and can run on a computer that has no MATLAB software.

  20. Investigation of the effects of storage time on the dimensional accuracy of impression materials using cone beam computed tomography

    PubMed Central

    2016-01-01

    PURPOSE The storage conditions of impressions affect the dimensional accuracy of the impression materials. The aim of the study was to assess the effects of storage time on dimensional accuracy of five different impression materials by cone beam computed tomography (CBCT). MATERIALS AND METHODS Polyether (Impregum), hydrocolloid (Hydrogum and Alginoplast), and silicone (Zetaflow and Honigum) impression materials were used for impressions taken from an acrylic master model. The impressions were poured and subjected to four different storage times: immediate use, and 1, 3, and 5 days of storage. Line 1 (between right and left first molar mesiobuccal cusp tips) and Line 2 (between right and left canine tips) were measured on a CBCT scanned model, and time-dependent mean differences were analyzed by two-way univariate analysis and Duncan's test (α=.05). RESULTS For Line 1, the total mean difference of Impregum and Hydrogum were statistically different from Alginoplast (P<.05), while Zetaflow and Honigum had smaller discrepancies. Alginoplast resulted in more difference than the other impressions (P<.05). For Line 2, the total mean difference of Impregum was statistically different from the other impressions. Significant differences were observed in Line 1 and Line 2 for the different storage periods (P<.05). CONCLUSION The dimensional accuracy of impression material is clinically acceptable if the impression material is stored in suitable conditions. PMID:27826388

  1. Investigation of the effects of storage time on the dimensional accuracy of impression materials using cone beam computed tomography.

    PubMed

    Alkurt, Murat; Yeşıl Duymus, Zeynep; Dedeoglu, Numan

    2016-10-01

    The storage conditions of impressions affect the dimensional accuracy of the impression materials. The aim of the study was to assess the effects of storage time on dimensional accuracy of five different impression materials by cone beam computed tomography (CBCT). Polyether (Impregum), hydrocolloid (Hydrogum and Alginoplast), and silicone (Zetaflow and Honigum) impression materials were used for impressions taken from an acrylic master model. The impressions were poured and subjected to four different storage times: immediate use, and 1, 3, and 5 days of storage. Line 1 (between right and left first molar mesiobuccal cusp tips) and Line 2 (between right and left canine tips) were measured on a CBCT scanned model, and time-dependent mean differences were analyzed by two-way univariate analysis and Duncan's test (α=.05). For Line 1, the total mean difference of Impregum and Hydrogum were statistically different from Alginoplast (P<.05), while Zetaflow and Honigum had smaller discrepancies. Alginoplast resulted in more difference than the other impressions (P<.05). For Line 2, the total mean difference of Impregum was statistically different from the other impressions. Significant differences were observed in Line 1 and Line 2 for the different storage periods (P<.05). The dimensional accuracy of impression material is clinically acceptable if the impression material is stored in suitable conditions.

  2. The prevalence of leisure time sedentary behaviour and physical activity in adolescent boys: an ecological momentary assessment approach.

    PubMed

    Gorely, Trish; Biddle, Stuart J H; Marshall, Simon J; Cameron, Noel

    2009-01-01

    To use ecological momentary assessment to describe how adolescent boys in the United Kingdom spend their leisure time. Cross-sectional, stratified, random sample from secondary schools in 15 regions within the United Kingdom. The data are from a larger study of adolescent lifestyles (Project STIL). A total of 561 boys with a mean age of 14.6 years (range 12.7-16.7 years) took part; the majority were white-European (86.5%). Television viewing occupied the most leisure time on both weekdays (131 minutes) and weekend days (202.5 minutes). On weekdays the five most time-consuming sedentary activities (television viewing, homework, motorised travel, playing computer/video games and shopping/hanging out) occupied on average 272.2 minutes. On weekend days, the five most time-consuming sedentary activities (television viewing, shopping/hanging out, motorised travel, sitting and talking, and playing computer/video games) occupied 405.5 minutes. In total, 54 minutes were occupied by active transport or sports and exercise per weekday and 81 minutes per weekend day. Only a minority watched more than 4 hours of TV per day (8.9% on weekdays and 33.8% on weekend days). Differences were noted in the means and prevalence between weekend days and weekdays, reflecting the greater discretionary time available at the weekend. Adolescent boys engage in a variety of sedentary and active free-time behaviours. It appears prudent to encourage adolescents to adopt overall healthy lifestyles by considering the combination of both active and sedentary pursuits an individual engages in and by moving beyond a focus on any one single behaviour.

  3. 12 CFR Appendix K to Part 226 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 3 2014-01-01 2014-01-01 false Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage...

  4. 12 CFR Appendix K to Part 226 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 3 2013-01-01 2013-01-01 false Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage...

  5. 45 CFR 265.7 - How will we determine if the State is meeting the quarterly reporting requirements?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computational errors and are internally consistent (e.g., items that should add to totals do so); (3) The State... from computational errors and are internally consistent (e.g., items that should add to totals do so... from computational errors and are internally consistent (e.g., items that should add to totals do so...

  6. Improved parallel data partitioning by nested dissection with applications to information retrieval.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, Michael M.; Chevalier, Cedric; Boman, Erik Gunnar

    The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show partitioning time can be substantially reduced by using the SCOTCH software, and quality improves in some cases, too.
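
    As context for the partitioning objective above (balance the computation while minimizing total communication volume in sparse matrix-vector multiplication), the snippet below counts, for a simple 1D block-row partition, how many vector entries each process must receive from other owners. It is a toy illustration of the metric being minimized, not the paper's nested dissection method.

    ```python
    # Toy illustration of the partitioning objective: for a 1D block-row partition
    # of a sparse matrix, count the total communication volume of y = A*x, i.e. the
    # number of distinct x-entries each process needs that are owned elsewhere.
    import numpy as np
    import scipy.sparse as sp

    def communication_volume(A, n_parts):
        n = A.shape[0]
        owner = np.minimum(np.arange(n) * n_parts // n, n_parts - 1)  # block-row owner of each index
        A = A.tocsr()
        volume = 0
        for p in range(n_parts):
            rows = np.where(owner == p)[0]
            cols_needed = np.unique(A[rows, :].indices)          # x-entries touched by this process
            volume += np.count_nonzero(owner[cols_needed] != p)  # entries owned by other processes
        return volume

    A = sp.random(2000, 2000, density=0.002, random_state=0, format="csr")
    A = A + sp.eye(2000)          # ensure a nonzero diagonal, as in typical FEM/IR matrices
    print("total communication volume for 4 processes:", communication_volume(A, 4))
    ```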

  7. Feasibility study of an Integrated Program for Aerospace-vehicle Design (IPAD) system. Volume 6: Implementation schedule, development costs, operational costs, benefit assessment, impact on company organization, spin-off assessment, phase 1, tasks 3 to 8

    NASA Technical Reports Server (NTRS)

    Garrocq, C. A.; Hurley, M. J.; Dublin, M.

    1973-01-01

    A baseline implementation plan, including alternative implementation approaches for critical software elements and variants to the plan, was developed. The basic philosophy was aimed at: (1) a progressive release of capability for three major computing systems, (2) an end product that was a working tool, (3) giving participation to industry, government agencies, and universities, and (4) emphasizing the development of critical elements of the IPAD framework software. The results of these tasks indicate an IPAD first release capability 45 months after go-ahead, a five year total implementation schedule, and a total developmental cost of 2027 man-months and 1074 computer hours. Several areas of operational cost increases were identified mainly due to the impact of additional equipment needed and additional computer overhead. The benefits of an IPAD system were related mainly to potential savings in engineering man-hours, reduction of design-cycle calendar time, and indirect upgrading of product quality and performance.

  8. Improve the efficiency of the Cartesian tensor based fast multipole method for Coulomb interaction using the traces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, He; Luo, Li -Shi; Li, Rui

    To compute the non-oscillating mutual interaction for a system with N points, the fast multipole method (FMM) has an efficiency that scales linearly with the number of points. Specifically, for the Coulomb interaction, FMM can be constructed using either the spherical harmonic functions or the totally symmetric Cartesian tensors. In this paper, we show that the efficiency of the Cartesian tensor-based FMM for the Coulomb interaction can be significantly improved by implementing the traces of the Cartesian tensors in the calculation to reduce the independent elements of the n-th rank totally symmetric Cartesian tensor from (n + 1)(n + 2)/2 to 2n + 1. The computational complexity of the operations in FMM is analyzed and expressed as polynomials of the highest rank of the Cartesian tensors. For most operations, the complexity is reduced by one order. Numerical examples regarding the convergence and the efficiency of the new algorithm are demonstrated. As a result, a reduction of computation time of up to 50% has been observed for a moderate number of points and rank of tensors.

  9. Improve the efficiency of the Cartesian tensor based fast multipole method for Coulomb interaction using the traces

    DOE PAGES

    Huang, He; Luo, Li -Shi; Li, Rui; ...

    2018-05-17

    To compute the non-oscillating mutual interaction for a system with N points, the fast multipole method (FMM) has an efficiency that scales linearly with the number of points. Specifically, for the Coulomb interaction, FMM can be constructed using either the spherical harmonic functions or the totally symmetric Cartesian tensors. In this paper, we show that the efficiency of the Cartesian tensor-based FMM for the Coulomb interaction can be significantly improved by implementing the traces of the Cartesian tensors in the calculation to reduce the independent elements of the n-th rank totally symmetric Cartesian tensor from (n + 1)(n + 2)/2 to 2n + 1. The computational complexity of the operations in FMM is analyzed and expressed as polynomials of the highest rank of the Cartesian tensors. For most operations, the complexity is reduced by one order. Numerical examples regarding the convergence and the efficiency of the new algorithm are demonstrated. As a result, a reduction of computation time of up to 50% has been observed for a moderate number of points and rank of tensors.
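
    The component-count reduction quoted above, from (n+1)(n+2)/2 independent elements of a rank-n totally symmetric Cartesian tensor down to 2n+1 once the traces are exploited, is simple to tabulate; the smaller count matches the number of spherical harmonics of degree n.

    ```python
    # Tabulate the independent-element counts quoted in the abstract: a rank-n
    # totally symmetric Cartesian tensor has (n+1)(n+2)/2 independent elements,
    # reduced to 2n+1 (the traceless part) when the traces are used.
    def symmetric_count(n):
        return (n + 1) * (n + 2) // 2

    def traceless_count(n):
        return 2 * n + 1

    print(f"{'rank n':>6} {'symmetric':>10} {'traceless':>10}")
    for n in range(9):
        print(f"{n:>6} {symmetric_count(n):>10} {traceless_count(n):>10}")
    ```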

  10. Robust feature extraction for rapid classification of damage in composites

    NASA Astrophysics Data System (ADS)

    Coelho, Clyde K.; Reynolds, Whitney; Chattopadhyay, Aditi

    2009-03-01

    The ability to detect anomalies in signals from sensors is imperative for structural health monitoring (SHM) applications. Many of the candidate algorithms for these applications either require a lot of training examples or are very computationally inefficient for large sample sizes. The damage detection framework presented in this paper uses a combination of Linear Discriminant Analysis (LDA) along with Support Vector Machines (SVM) to obtain a computationally efficient classification scheme for rapid damage state determination. LDA was used for feature extraction of damage signals from piezoelectric sensors on a composite plate and these features were used to train the SVM algorithm in parts, reducing the computational intensity associated with the quadratic optimization problem that needs to be solved during training. SVM classifiers were organized into a binary tree structure to speed up classification, which also reduces the total training time required. This framework was validated on composite plates that were impacted at various locations. The results show that the algorithm was able to correctly predict the different impact damage cases in composite laminates using less than 21 percent of the total available training data after data reduction.
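
    The classification pipeline described above (LDA to extract low-dimensional features from sensor signals, then an SVM trained on those features) maps onto standard tooling. The sketch below uses synthetic stand-in "signals" and scikit-learn, and a flat multi-class SVM rather than the paper's binary-tree arrangement or its chunked training scheme.

    ```python
    # Sketch of the LDA -> SVM damage-classification pipeline on synthetic data.
    # Real inputs would be piezoelectric sensor signals from an impacted plate;
    # the binary-tree SVM organization from the paper is replaced here by
    # scikit-learn's default multi-class handling.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_per_class, n_features, n_classes = 60, 100, 4   # e.g. 4 impact-damage states

    # Synthetic stand-in for sensor signals: class-dependent offset plus noise
    X = np.vstack([rng.standard_normal((n_per_class, n_features)) + 0.5 * c
                   for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), n_per_class)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # LDA reduces to at most (n_classes - 1) discriminant features before the SVM
    clf = make_pipeline(LinearDiscriminantAnalysis(n_components=n_classes - 1),
                        SVC(kernel="rbf", C=1.0))
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```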

  11. Computer program system for dynamic simulation and stability analysis of passive and actively controlled spacecraft. Volume 1. Theory

    NASA Technical Reports Server (NTRS)

    Bodley, C. S.; Devers, D. A.; Park, C. A.

    1975-01-01

    A theoretical development and associated digital computer program system is presented. The dynamic system (spacecraft) is modeled as an assembly of rigid and/or flexible bodies not necessarily in a topological tree configuration. The computer program system may be used to investigate total system dynamic characteristics including interaction effects between rigid and/or flexible bodies, control systems, and a wide range of environmental loadings. Additionally, the program system may be used for design of attitude control systems and for evaluation of total dynamic system performance including time domain response and frequency domain stability analyses. Volume 1 presents the theoretical developments including a description of the physical system, the equations of dynamic equilibrium, discussion of kinematics and system topology, a complete treatment of momentum wheel coupling, and a discussion of gravity gradient and environmental effects. Volume 2, is a program users' guide and includes a description of the overall digital program code, individual subroutines and a description of required program input and generated program output. Volume 3 presents the results of selected demonstration problems that illustrate all program system capabilities.

  12. Computer-Aided Design/Computer-Assisted Manufacture-Derived Needle Guide for Injection of Botulinum Toxin into the Lateral Pterygoid Muscle in Patients with Oromandibular Dystonia.

    PubMed

    Yoshida, Kazuya

    2018-01-01

    To evaluate the effectiveness and safety of botulinum toxin administration into the inferior head of the lateral pterygoid muscle of patients with jaw opening dystonia by using a computer-aided design/computer-assisted manufacture (CAD/CAM)-derived needle guide. A total of 17 patients with jaw opening dystonia were enrolled. After the patient's computed tomography (CT) scan was imported and fused with a scan of a plaster cast model of the maxilla, the optimal needle insertion site over the lateral pterygoid muscle was determined using the NobelClinician software. A total of 13 patients were injected both with and without the guide, and 4 patients underwent guided injection alone. The therapeutic effects of botulinum toxin injection and its associated complications were statistically compared between the guided and unguided procedures using a paired t test. Botulinum toxin therapy was performed 42 and 32 times with and without the guides, respectively. The needle was easily inserted without any complications in all procedures. There was a significant difference (P < .001) between the mean comprehensive improvements observed with (66.3%) and without (54.4%) the guides. The findings suggest that the use of needle guides during the injection of botulinum toxin into the inferior head of the lateral pterygoid muscle aids the accurate and safe administration of botulinum toxin therapy for jaw opening dystonia.

  13. Self-guaranteed measurement-based quantum computation

    NASA Astrophysics Data System (ADS)

    Hayashi, Masahito; Hajdušek, Michal

    2018-05-01

    In order to guarantee the output of a quantum computation, we usually assume that the component devices are trusted. However, when the total computation process is large, it is not easy to guarantee the whole system in the presence of scaling effects, unexpected noise, or unaccounted-for correlations between several subsystems. If we do not trust the measurement bases or the prepared entangled state, such uncertainties must be taken into account. To this end, we propose a self-guaranteed protocol for verification of quantum computation under the scheme of measurement-based quantum computation where no prior-trusted devices (measurement basis or entangled state) are needed. The approach we present enables the implementation of verifiable quantum computation using the measurement-based model in the context of a particular instance of delegated quantum computation where the server prepares the initial computational resource and sends it to the client, who drives the computation by single-qubit measurements. Applying self-testing procedures, we are able to verify the initial resource as well as the operation of the quantum devices and hence the computation itself. The overhead of our protocol scales with the size of the initial resource state to the power of 4 times the natural logarithm of the initial state's size.

  14. Relation of stream quality to streamflow, and estimated loads of selected water-quality constituents in the James and Rappahannock rivers near the fall line of Virginia, July 1988 through June 1990

    USGS Publications Warehouse

    Belval, D.L.; Campbell, J.P.; Woodside, M.D.

    1994-01-01

    This report presents the results of a study by the U.S. Geological Survey, in cooperation with the Virginia Department of Environmental Quality--Division of Intergovernmental Coordination, to monitor and estimate loads of selected nutrients and suspended solids discharged to Chesapeake Bay from two major tributaries in Virginia. From July 1988 through June 1990, monitoring consisted of collecting depth-integrated, cross-sectional samples from the James and Rappahannock Rivers during storm-flow conditions and at scheduled intervals. Water-quality constituents that were monitored included total suspended solids (residue, total at 105 degrees Celsius), dissolved nitrite plus nitrate, dissolved ammonia, total Kjeldahl nitrogen (ammonia plus organic), total nitrogen, total phosphorus, dissolved orthophosphorus, total organic carbon, and dissolved silica. Daily mean load estimates of each constituent were computed by month, using a seven-parameter log-linear-regression model that uses variables of time, discharge, and seasonality. Water-quality data and constituent-load estimates are included in the report in tabular and graphic form. The data and load estimates provided in this report will be used to calibrate the computer modeling efforts of the Chesapeake Bay region, evaluate the water quality of the Bay and the major effects on the water quality, and assess the results of best-management practices in Virginia.
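
    For orientation, the seven-parameter log-linear regression referred to is, in its standard USGS form (the report's exact centering conventions may differ), a model of the type:

    ```latex
    \ln L = \beta_0 + \beta_1 \ln Q + \beta_2 (\ln Q)^2
          + \beta_3 \sin(2\pi T) + \beta_4 \cos(2\pi T) + \beta_5 T + \beta_6 T^2,
    ```

    where L is the constituent load, Q the (centered) daily mean discharge, and T the (centered) decimal time; the sine/cosine pair carries the seasonality term.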

  15. An efficient non-dominated sorting method for evolutionary algorithms.

    PubMed

    Fang, Hongbing; Wang, Qian; Tu, Yi-Cheng; Horstemeyer, Mark F

    2008-01-01

    We present a new non-dominated sorting algorithm to generate the non-dominated fronts in multi-objective optimization with evolutionary algorithms, particularly the NSGA-II. The non-dominated sorting algorithm used by NSGA-II has a time complexity of O(MN(2)) in generating non-dominated fronts in one generation (iteration) for a population size N and M objective functions. Since generating non-dominated fronts takes the majority of total computational time (excluding the cost of fitness evaluations) of NSGA-II, making this algorithm faster will significantly improve the overall efficiency of NSGA-II and other genetic algorithms using non-dominated sorting. The new non-dominated sorting algorithm proposed in this study reduces the number of redundant comparisons existing in the algorithm of NSGA-II by recording the dominance information among solutions from their first comparisons. By utilizing a new data structure called the dominance tree and the divide-and-conquer mechanism, the new algorithm is faster than NSGA-II for different numbers of objective functions. Although the number of solution comparisons by the proposed algorithm is close to that of NSGA-II when the number of objectives becomes large, the total computational time shows that the proposed algorithm still has better efficiency because of the adoption of the dominance tree structure and the divide-and-conquer mechanism.
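
    For readers new to the topic, here is a deliberately naive, O(MN^2) sketch of the dominance relation and front extraction that NSGA-II-style sorting performs; the paper's dominance tree and divide-and-conquer machinery is precisely what removes the redundant comparisons this baseline makes.

    ```python
    # Naive non-dominated sorting sketch (minimization): O(M*N^2) pairwise checks.
    from typing import List, Sequence

    def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
        """a dominates b if a is no worse in every objective and better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def non_dominated_fronts(pop: List[Sequence[float]]) -> List[List[int]]:
        remaining = set(range(len(pop)))
        fronts = []
        while remaining:
            front = [i for i in remaining
                     if not any(dominates(pop[j], pop[i]) for j in remaining if j != i)]
            fronts.append(front)
            remaining -= set(front)
        return fronts

    print(non_dominated_fronts([(1, 5), (2, 2), (3, 1), (4, 4)]))
    # -> [[0, 1, 2], [3]]  (first Pareto front, then the remaining dominated point)
    ```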

  16. Low-thrust chemical orbit to orbit propulsion system propellant management study

    NASA Technical Reports Server (NTRS)

    Dergance, R. H.

    1980-01-01

    Propellant requirements, tankage configurations, preferred propellant management techniques, propulsion systems weights, and technology deficiencies for low thrust expendable propulsion systems are examined. A computer program was utilized which provided a complete propellant inventory (including boil-off for cryogenic cases), pressurant and propellant tank dimensions for a given ullage, pressurant requirements, insulation requirements, and miscellaneous masses. The output also includes the masses of all tanks; the mass of the insulation, engines and other components; total wet system and burnout mass; system mass fraction; total impulse and burn time.

  17. SU-D-206-02: Evaluation of Partial Storage of the System Matrix for Cone Beam Computed Tomography Using a GPU Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matenine, D; Cote, G; Mascolo-Fortin, J

    2016-06-15

    Purpose: Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersections between the photons’ trajectories and the object, also called ray-tracing or system matrix computation. This work evaluates different ways to store the system matrix, aiming to reconstruct dense image grids in reasonable time. Methods: We propose an optimized implementation of Siddon’s algorithm using graphics processing units (GPUs) with a novel data storage scheme. The algorithm computes a part of the system matrix on demand, typically, for one projection angle. The proposed method was enhanced with accelerating options: storage of larger subsets of the system matrix, systematic reuse of data via geometric symmetries, an arithmetic-rich parallel code and code configuration via machine learning. It was tested on geometries mimicking a cone beam CT acquisition of a human head. To realistically assess the execution time, the ray-tracing routines were integrated into a regularized Poisson-based reconstruction algorithm. The proposed scheme was also compared to a different approach, where the system matrix is fully pre-computed and loaded at reconstruction time. Results: Fast ray-tracing of realistic acquisition geometries, which often lack spatial symmetry properties, was enabled via the proposed method. Ray-tracing interleaved with projection and backprojection operations required significant additional time. In most cases, ray-tracing was shown to use about 66% of the total reconstruction time. In absolute terms, tracing times varied from 3.6 s to 7.5 min, depending on the problem size. The presence of geometrical symmetries allowed for non-negligible ray-tracing and reconstruction time reduction. Arithmetic-rich parallel code and machine learning permitted a modest reconstruction time reduction, on the order of 1%. Conclusion: Partial system matrix storage permitted the reconstruction of higher 3D image grid sizes and larger projection datasets at the cost of additional time, when compared to the fully pre-computed approach. This work was supported in part by the Fonds de recherche du Quebec - Nature et technologies (FRQ-NT). The authors acknowledge partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council of Canada (Grant No. 432290).

  18. A thermal NO(x) prediction model - Scalar computation module for CFD codes with fluid and kinetic effects

    NASA Technical Reports Server (NTRS)

    Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue

    1993-01-01

    A thermal NO(x) prediction model is developed to interface with a CFD, k-epsilon based code. A converged solution from the CFD code is the input to the postprocessing model for prediction of thermal NO(x). The model uses a decoupled analysis to estimate the equilibrium level of (NO(x))e which is the constant rate limit. This value is used to estimate the flame (NO(x)) and in turn predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is fixed on the NO(x) production rate plot by estimating the time to reach equilibrium by a differential analysis based on the reaction: O + N2 = NO + N. The rate is integrated in the nonequilibrium time space based on the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NO(x) level.
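
    For context, the two-step Zeldovich mechanism invoked above is conventionally written as the reaction pair below; the quasi-steady-nitrogen rate expression is the standard simplification and is stated here only for orientation, not taken from the record.

    ```latex
    \mathrm{O} + \mathrm{N_2} \rightleftharpoons \mathrm{NO} + \mathrm{N}, \qquad
    \mathrm{N} + \mathrm{O_2} \rightleftharpoons \mathrm{NO} + \mathrm{O},
    % with the first step rate-limiting; assuming quasi-steady atomic N and
    % neglecting reverse reactions, the thermal NO source integrated at each node is
    \frac{d[\mathrm{NO}]}{dt} \approx 2\,k_1\,[\mathrm{O}]\,[\mathrm{N_2}].
    ```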

  19. Prediction of elemental creep. [steady state and cyclic data from regression analysis

    NASA Technical Reports Server (NTRS)

    Davis, J. W.; Rummler, D. R.

    1975-01-01

    Cyclic and steady-state creep tests were performed to provide data which were used to develop predictive equations. These equations, describing creep as a function of stress, temperature, and time, were developed through the use of a least squares regression analysis computer program for both the steady-state and cyclic data sets. Comparison of the data from the two types of tests revealed that there was no significant difference between the cyclic and steady-state creep strains for the L-605 sheet under the experimental conditions investigated (for the same total time at load). Attempts to develop a single linear equation describing the combined steady-state and cyclic creep data resulted in standard errors of estimate higher than those obtained for the individual data sets. A proposed approach to predict elemental creep in metals uses the cyclic creep equation and a computer program which applies strain and time hardening theories of creep accumulation.

  20. Financial Aid for Full-Time Undergraduates. Higher Education Panel Report Number 60.

    ERIC Educational Resources Information Center

    Andersen, Charles J.

    The level and composition of student financial aid for undergraduate students were estimated, with attention to estimated number of aid recipients, the total amount they received, the distribution of aided students by their families' income level, the composition of their aid packages, and the use of computers in the administration of aid. In…

  1. Thinking Through Computational Exposure as an Evolving Paradigm Shift for Exposure Science: Development and Application of Predictive Models from Big Data

    EPA Science Inventory

    Symposium Abstract: Exposure science has evolved from a time when the primary focus was on measurements of environmental and biological media and the development of enabling field and laboratory methods. The Total Exposure Assessment Method (TEAM) studies of the 1980s were class...

  2. Less Daily Computer Use is Related to Smaller Hippocampal Volumes in Cognitively Intact Elderly.

    PubMed

    Silbert, Lisa C; Dodge, Hiroko H; Lahna, David; Promjunyakul, Nutta-On; Austin, Daniel; Mattek, Nora; Erten-Lyons, Deniz; Kaye, Jeffrey A

    2016-01-01

    Computer use is becoming a common activity in the daily life of older individuals and declines over time in those with mild cognitive impairment (MCI). The relationship between daily computer use (DCU) and imaging markers of neurodegeneration is unknown. The objective of this study was to examine the relationship between average DCU and volumetric markers of neurodegeneration on brain MRI. Cognitively intact volunteers enrolled in the Intelligent Systems for Assessing Aging Change study underwent MRI. Total in-home computer use per day was calculated using mouse movement detection and averaged over a one-month period surrounding the MRI. Spearman's rank order correlation (univariate analysis) and linear regression models (multivariate analysis) examined hippocampal, gray matter (GM), white matter hyperintensity (WMH), and ventricular cerebral spinal fluid (vCSF) volumes in relation to DCU. A voxel-based morphometry analysis identified relationships between regional GM density and DCU. Twenty-seven cognitively intact participants used their computer for 51.3 minutes per day on average. Less DCU was associated with smaller hippocampal volumes (r = 0.48, p = 0.01), but not total GM, WMH, or vCSF volumes. After adjusting for age, education, and gender, less DCU remained associated with smaller hippocampal volume (p = 0.01). Voxel-wise analysis demonstrated that less daily computer use was associated with decreased GM density in the bilateral hippocampi and temporal lobes. Less daily computer use is associated with smaller brain volume in regions that are integral to memory function and known to be involved early with Alzheimer's pathology and conversion to dementia. Continuous monitoring of daily computer use may detect signs of preclinical neurodegeneration in older individuals at risk for dementia.

  3. Using a Cloud Computing System to Reduce Door-to-Balloon Time in Acute ST-Elevation Myocardial Infarction Patients Transferred for Percutaneous Coronary Intervention

    PubMed Central

    Ho, Chi-Kung; Wang, Hui-Ting; Lee, Chien-Ho; Chung, Wen-Jung; Lin, Cheng-Jui; Hsueh, Shu-Kai; Hung, Shin-Chiang; Wu, Kuan-Han; Liu, Chu-Feng; Kung, Chia-Te

    2017-01-01

    Background This study evaluated the impact on clinical outcomes of using a cloud computing system to reduce percutaneous coronary intervention hospital door-to-balloon (DTB) time for ST segment elevation myocardial infarction (STEMI). Methods A total of 369 patients before and after implementation of the transfer protocol were enrolled. Of these patients, 262 were transferred through the protocol while the other 107 patients were transferred through the traditional referral process. Results There were no significant differences in DTB time, pain to door of STEMI receiving center arrival time, and pain to balloon time between the two groups. Pain to electrocardiography time in patients with Killip I/II and catheterization laboratory to balloon time in patients with Killip III/IV were significantly reduced in the protocol-transfer group compared with the traditional referral group (both p < 0.05). There were also no remarkable differences in the complication rate and 30-day mortality between the two groups. The multivariate analysis revealed that the independent predictors of 30-day mortality were older age, advanced Killip score, and higher troponin-I level. Conclusions This study showed that transfer through the present protocol reduced pain to electrocardiography time in Killip I/II patients and catheterization laboratory to balloon time in Killip III/IV patients. However, this study showed that using a cloud computing system in our present protocol did not reduce DTB time. PMID:28900621

  4. The VLBA correlator: Real-time in the distributed era

    NASA Technical Reports Server (NTRS)

    Wells, D. C.

    1992-01-01

    The correlator is the signal processing engine of the Very Long Baseline Array (VLBA). Radio signals are recorded on special wideband (128 Mb/s) digital recorders at the 10 telescopes, with sampling times controlled by hydrogen maser clocks. The magnetic tapes are shipped to the Array Operations Center in Socorro, New Mexico, where they are played back simultaneously into the correlator. Real-time software and firmware control the playback drives to achieve synchronization, compute models of the wavefront delay, control the numerous modules of the correlator, and record FITS files of the fringe visibilities at the back-end of the correlator. In addition to the more than 3000 custom VLSI chips which handle the massive data flow of the signal processing, the correlator contains a total of more than 100 programmable computers, 8-, 16- and 32-bit CPUs. Code is downloaded into front-end CPUs depending on the operating mode. Low-level code is assembly language; high-level code is C running under an RT OS. We use VxWorks on Motorola MVME147 CPUs. Code development is on a complex of SPARC workstations connected to the RT CPUs by Ethernet. The overall management of the correlation process is dependent on a database management system. We use Ingres running on a Sparcstation-2. We transfer logging information from the database of the VLBA Monitor and Control System to our database using Ingres/NET. Job scripts are computed and are transferred to the real-time computers using NFS, and correlation job execution logs and status flow back by the same route. Operator status and control displays use windows on workstations, interfaced to the real-time processes by network protocols. The extensive network protocol support provided by VxWorks is invaluable. The VLBA Correlator's dependence on network protocols is an example of the radical transformation of the real-time world over the past five years. Real-time is becoming more like conventional computing. Paradoxically, 'conventional' computing is also adopting practices from the real-time world: semaphores, shared memory, light-weight threads, and concurrency. This appears to be a convergence of thinking.

  5. International Ultraviolet Explorer (IUE) satellite mission analysis

    NASA Technical Reports Server (NTRS)

    Cook, R. A.; Griffin, J. H.

    1975-01-01

    The results are presented of the mission analysis performed by Computer Sciences Corporation (CSC) in support of the International Ultraviolet Explorer (IUE) satellite. The launch window is open for three separate periods (for a total time of 7 months) during the year extending from July 20, 1977, to July 20, 1978. The synchronous orbit shadow constraint limits the launch window to approximately 88 minutes per day. Apogee boost motor fuel was computed to be 455 pounds (206 kilograms) and on-station weight was 931 pounds (422 kilograms). The target orbit is elliptical synchronous, with eccentricity 0.272 and 24 hour period.

  6. Spectral analysis of GEOS-3 altimeter data and frequency domain collocation. [to estimate gravity anomalies

    NASA Technical Reports Server (NTRS)

    Eren, K.

    1980-01-01

    The mathematical background in spectral analysis as applied to geodetic applications is summarized. The resolution (cut-off frequency) of the GEOS 3 altimeter data is examined by determining the shortest wavelength (corresponding to the cut-off frequency) recoverable. The data from some 18 profiles are used. The total power (variance) in the sea surface topography with respect to the reference ellipsoid as well as with respect to the GEM-9 surface is computed. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation is explained. This algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS 3 altimeter data. These techniques substantially reduce the computing time and storage requirements associated with conventional least squares collocation.
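
    The computational advantage of exploiting Toeplitz structure can be illustrated with SciPy's Levinson-recursion-based solver; the covariance sequence below is an illustrative placeholder, not the GEOS 3 data.

    ```python
    # Sketch: solving a symmetric Toeplitz system (as arises in collocation with
    # a stationary covariance function) without forming or factoring the full matrix.
    import numpy as np
    from scipy.linalg import solve_toeplitz, toeplitz

    n = 2000
    c = 0.9 ** np.arange(n)                    # first column of a stationary covariance
    b = np.random.default_rng(1).normal(size=n)

    x_fast = solve_toeplitz(c, b)              # Levinson recursion: O(n^2) time, O(n) storage
    x_full = np.linalg.solve(toeplitz(c), b)   # dense solve: O(n^3) time, O(n^2) storage
    print(np.allclose(x_fast, x_full, atol=1e-8))
    ```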

  7. Electromagnetic scattering calculations on the Intel Touchstone Delta

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Patterson, Jean; Scott, David

    1992-01-01

    During the first year's operation of the Intel Touchstone Delta system, software which solves the electric field integral equations for fields scattered from arbitrarily shaped objects has been transferred to the Delta. To fully realize the Delta's resources, an out-of-core dense matrix solution algorithm that utilizes some or all of the 90 Gbyte of concurrent file system (CFS) has been used. The largest calculation completed to date computes the fields scattered from a perfectly conducting sphere modeled by 48,672 unknown functions, resulting in a complex-valued dense matrix needing 37.9 Gbyte of storage. The out-of-core LU matrix factorization algorithm was executed in 8.25 h at a rate of 10.35 Gflops. Total time to complete the calculation was 19.7 h; the additional time was used to compute the 48,672 x 48,672 matrix entries, solve the system for a given excitation, and compute observable quantities. The calculation was performed in 64-bit precision.
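
    The quoted figures are mutually consistent under the usual assumptions of 16-byte complex entries and an (8/3)N^3 flop count for complex LU factorization; this is a consistency check added here, not information from the record.

    ```latex
    N^2 \times 16\ \text{bytes} = 48{,}672^2 \times 16 \approx 3.79\times10^{10}\ \text{bytes} \approx 37.9\ \text{Gbyte},
    \qquad
    \frac{(8/3)\,N^3}{8.25 \times 3600\ \text{s}} \approx \frac{3.07\times10^{14}\ \text{flops}}{2.97\times10^{4}\ \text{s}} \approx 10.4\ \text{Gflops}.
    ```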

  8. [A wireless smart home system based on brain-computer interface of steady state visual evoked potential].

    PubMed

    Zhao, Li; Xing, Xiao; Guo, Xuhong; Liu, Zehua; He, Yang

    2014-10-01

    A brain-computer interface (BCI) system achieves communication and control between humans and computers or other electronic equipment using electroencephalogram (EEG) signals. This paper describes the working principle of a wireless smart home system based on BCI technology. The steady-state visual evoked potential (SSVEP) was elicited with a single-chip microcomputer and LED-based visual stimulation of the subject's eyes. The EEG signals recorded under different stimulation frequencies were then processed in real time by a power spectral transformation implemented on the LabVIEW platform and translated into different instructions. These instructions were received by wireless transceiver equipment to control the household appliances, achieving intelligent control of the specified devices. The experimental results showed that the correct rate for the 10 subjects reached 100% and that the average control time for a single device was 4 seconds, so the design fulfills the original purpose of a smart home system.
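
    A minimal sketch of the frequency-detection step (power spectrum peak picking) follows; the sampling rate, stimulus frequencies, device labels, and synthetic EEG trace are assumptions for illustration only.

    ```python
    # Sketch of SSVEP command decoding: find which stimulus frequency dominates
    # the EEG power spectrum. The signal below is synthetic; real use would read
    # an EEG channel sampled at `fs`.
    import numpy as np

    fs = 250.0                                  # assumed sampling rate (Hz)
    stim_freqs = {8.0: "lamp", 10.0: "fan", 12.0: "tv", 15.0: "door"}

    t = np.arange(0, 4.0, 1.0 / fs)
    eeg = 2.0 * np.sin(2 * np.pi * 10.0 * t) + np.random.default_rng(0).normal(size=t.size)

    spectrum = np.abs(np.fft.rfft(eeg)) ** 2    # power spectrum
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)

    def band_power(f0, half_width=0.3):
        band = (freqs > f0 - half_width) & (freqs < f0 + half_width)
        return spectrum[band].sum()

    command = max(stim_freqs, key=band_power)   # strongest stimulus frequency
    print("detected stimulus:", command, "->", stim_freqs[command])   # 10.0 -> fan
    ```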

  9. Comparative analysis of feature extraction methods in satellite imagery

    NASA Astrophysics Data System (ADS)

    Karim, Shahid; Zhang, Ye; Asif, Muhammad Rizwan; Ali, Saad

    2017-10-01

    Feature extraction techniques are extensively used in satellite imagery and are attracting considerable attention for remote sensing applications. The suitability of state-of-the-art feature extraction methods depends on the categories and structures of the objects to be detected. Based on the distinctive computations of each feature extraction method, different types of images are selected to evaluate the performance of the methods, which include binary robust invariant scalable keypoints (BRISK), the scale-invariant feature transform, speeded-up robust features (SURF), features from accelerated segment test (FAST), the histogram of oriented gradients, and local binary patterns. Total computational time is measured to evaluate the speed of each feature extraction method. The extracted features are counted in shadow regions and preprocessed shadow regions to compare the behavior of each method. We have studied the combination of SURF with FAST and with BRISK individually and found very promising results, with an increased number of features and less computational time. Finally, feature matching is compared for all methods.
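
    A hedged sketch of how per-method keypoint counts and timings can be compared with OpenCV; SURF lives in the non-free contrib module, so FAST, BRISK, and ORB are used here, and the image path is a placeholder.

    ```python
    # Sketch: compare keypoint counts and detection time for a few OpenCV
    # feature detectors on one image. "satellite_tile.png" is a placeholder path.
    import time
    import cv2

    img = cv2.imread("satellite_tile.png", cv2.IMREAD_GRAYSCALE)

    detectors = {
        "FAST": cv2.FastFeatureDetector_create(),
        "BRISK": cv2.BRISK_create(),
        "ORB": cv2.ORB_create(nfeatures=5000),
    }

    for name, det in detectors.items():
        t0 = time.perf_counter()
        keypoints = det.detect(img, None)      # detection only, no descriptors
        dt = time.perf_counter() - t0
        print(f"{name:5s}: {len(keypoints):6d} keypoints in {dt*1e3:7.1f} ms")
    ```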

  10. Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions

    NASA Astrophysics Data System (ADS)

    Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.

    2016-09-01

    Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
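
    The kernel operation in EPI methods, applying a matrix function to a vector without ever forming the function, can be illustrated with SciPy; note that SciPy's routine uses a Taylor/scaling approach rather than the Krylov or KSS methods discussed above, and the 1-D Laplacian, grid size, and time step are assumptions for the sketch.

    ```python
    # Sketch of the core operation in exponential integrators: compute
    # exp(h*A) @ v without forming exp(h*A) explicitly.
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import expm_multiply

    n, h = 500, 1e-3
    dx = 1.0 / (n + 1)
    # Stiff 1-D diffusion operator (Dirichlet boundaries), a typical stiff test case.
    A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2

    v = np.sin(np.pi * dx * np.arange(1, n + 1))      # initial condition
    w = expm_multiply(h * A, v)                       # one exponential-integrator-style step
    # Small residual: v is nearly an eigenvector of A with eigenvalue ~ -pi^2.
    print(np.linalg.norm(w - np.exp(-np.pi**2 * h) * v))
    ```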

  11. Selective cultivation and rapid detection of Staphylococcus aureus by computer vision.

    PubMed

    Wang, Yong; Yin, Yongguang; Zhang, Chaonan

    2014-03-01

    In this paper, we developed a selective growth medium and a more rapid detection method based on computer vision for selective isolation and identification of Staphylococcus aureus from foods. The selective medium consisted of tryptic soy broth basal medium, 3 inhibitors (NaCl, K2TeO3, and phenethyl alcohol), and 2 accelerators (sodium pyruvate and glycine). After 4 h of selective cultivation, bacterial detection was accomplished using computer vision. The total analysis time was 5 h. Compared to the Baird-Parker plate count method, which requires 4 to 5 d, this new detection method offers great time savings. Moreover, our novel method had a correlation coefficient of greater than 0.998 when compared with the Baird-Parker plate count method. The detection range for S. aureus was 10 to 10(7) CFU/mL. Our new, rapid detection method for microorganisms in foods has great potential for routine food safety control and microbiological detection applications. © 2014 Institute of Food Technologists®

  12. Workload Characterization of a Leadership Class Storage Cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Youngjae; Gunasekaran, Raghul; Shipman, Galen M

    2010-01-01

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the scientific workloads of the world's fastest HPC (High Performance Computing) storage cluster, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). Spider provides an aggregate bandwidth of over 240 GB/s with over 10 petabytes of RAID 6 formatted capacity. OLCF's flagship petascale simulation platform, Jaguar, and other large HPC clusters, with over 250 thousand compute cores in total, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, and the distribution of read requests to write requests for the storage system observed over a period of 6 months. From this study we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution.

  13. A microphysical parameterization of aqSOA and sulfate formation in clouds

    NASA Astrophysics Data System (ADS)

    McVay, Renee; Ervens, Barbara

    2017-07-01

    Sulfate and secondary organic aerosol (cloud aqSOA) can be chemically formed in cloud water. Model implementation of these processes represents a computational burden due to the large number of microphysical and chemical parameters. Chemical mechanisms have been condensed by reducing the number of chemical parameters. Here an alternative is presented to reduce the number of microphysical parameters (number of cloud droplet size classes). In-cloud mass formation is surface and volume dependent due to surface-limited oxidant uptake and/or size-dependent pH. Box and parcel model simulations show that using the effective cloud droplet diameter (proportional to total volume-to-surface ratio) reproduces sulfate and aqSOA formation rates within ≤30% as compared to full droplet distributions; other single diameters lead to much greater deviations. This single-class approach reduces computing time significantly and can be included in models when total liquid water content and effective diameter are available.
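
    The effective diameter used above is, in its usual definition (notation mine), the ratio of the third to the second moment of the droplet size distribution, which is what makes it proportional to the total volume-to-surface ratio:

    ```latex
    d_{\mathrm{eff}} = \frac{\sum_i n_i d_i^{3}}{\sum_i n_i d_i^{2}}
                     = 6\,\frac{V_{\mathrm{tot}}}{S_{\mathrm{tot}}},
    \qquad
    V_{\mathrm{tot}} = \frac{\pi}{6}\sum_i n_i d_i^{3}, \quad
    S_{\mathrm{tot}} = \pi \sum_i n_i d_i^{2}.
    ```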

  14. An ant colony optimization heuristic for an integrated production and distribution scheduling problem

    NASA Astrophysics Data System (ADS)

    Chang, Yung-Chia; Li, Vincent C.; Chiang, Chia-Ju

    2014-04-01

    Make-to-order or direct-order business models that require close interaction between production and distribution activities have been adopted by many enterprises in order to be competitive in demanding markets. This article considers an integrated production and distribution scheduling problem in which jobs are first processed by one of the unrelated parallel machines and then distributed to corresponding customers by capacitated vehicles without intermediate inventory. The objective is to find a joint production and distribution schedule so that the weighted sum of total weighted job delivery time and the total distribution cost is minimized. This article presents a mathematical model for describing the problem and designs an algorithm using ant colony optimization. Computational experiments illustrate that the algorithm developed is capable of generating near-optimal solutions. The computational results also demonstrate the value of integrating production and distribution in the model for the studied problem.

  15. Protocol for concomitant temporomandibular joint custom-fitted total joint reconstruction and orthognathic surgery utilizing computer-assisted surgical simulation.

    PubMed

    Movahed, Reza; Teschke, Marcus; Wolford, Larry M

    2013-12-01

    Clinicians who address temporomandibular joint (TMJ) pathology and dentofacial deformities surgically can perform the surgery in 1 stage or 2 separate stages. The 2-stage approach requires the patient to undergo 2 separate operations and anesthesia, significantly prolonging the overall treatment. However, performing concomitant TMJ and orthognathic surgery (CTOS) in these cases requires careful treatment planning and surgical proficiency in the 2 surgical areas. This article presents a new treatment protocol for the application of computer-assisted surgical simulation in CTOS cases requiring reconstruction with patient-fitted total joint prostheses. The traditional and new CTOS protocols are described and compared. The new CTOS protocol helps decrease the preoperative workup time and increase the accuracy of model surgery. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  16. Erratum: ``The Effects of Metallicity and Grain Size on Gravitational Instabilities in Protoplanetary Disks'' (ApJ 636, L149 [2006])

    NASA Astrophysics Data System (ADS)

    Cai, Kai; Durisen, Richard H.; Michael, Scott; Boley, Aaron C.; Mejía, Annie C.; Pickett, Megan K.; D'Alessio, Paola

    2006-05-01

    We have found that the total cumulative radiative energy losses shown in Figure 2 of the above-mentioned Letter were computed for only half the disk. This caused the final global cooling times tcool in the eighth column of the original Table 1 to be too large by a factor of 2. Proper values of tcool are given in the revised Table 1 below. To be more consistent with what the Letter states, we now use instantaneous values for both the total internal energy and the final total net cooling rates to compute final tcool's, instead of averaging the loss rates over an interval of time at the end of the calculations before dividing, as was done in the Letter. We also take this opportunity to make a few other inconsequential corrections to the fourth column of the table. In addition to the changes to Table 1, the approximate initial tcool relation in the fourth paragraph of § 3.2 becomes tcool~Z/Zsolar to within tens of percent. Despite the corrections, our conclusions in the Letter remain unchanged. Most importantly, the final tcool's vary with metallicity and are still too long for disk fragmentation to occur with our equation of state over the range of Z examined. We regret any inconvenience our errors may have caused.

  17. Lagrangian velocity and acceleration correlations of large inertial particles in a closed turbulent flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Machicoane, Nathanaël; Volk, Romain

    We investigate the response of large inertial particles to turbulent fluctuations in an inhomogeneous and anisotropic flow. We conduct a Lagrangian study using particles both heavier and lighter than the surrounding fluid, and whose diameters are comparable to the flow integral scale. Both velocity and acceleration correlation functions are analyzed to compute the Lagrangian integral time and the acceleration time scale of such particles. The knowledge of how size and density affect these time scales is crucial in understanding particle dynamics and may permit stochastic process modeling using two-time models (for instance, Sawford’s). As particles are tracked over long times in the quasi-totality of a closed flow, the mean flow influences their behaviour and also biases the velocity time statistics, in particular the velocity correlation functions. By using a method that allows for the computation of turbulent velocity trajectories, we can obtain an unbiased Lagrangian integral time. This is particularly useful in accessing the scale separation for such particles and in comparing it to the case of fluid particles in a similar configuration.
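
    The Lagrangian integral time mentioned here is the integral of the normalized velocity autocorrelation; a minimal sketch follows, with a synthetic Ornstein-Uhlenbeck trajectory and truncation at the first zero crossing both being choices made for illustration rather than the authors' procedure.

    ```python
    # Sketch: Lagrangian integral time T_L = integral of the normalized velocity
    # autocorrelation rho(tau). The velocity series is synthetic stand-in data.
    import numpy as np

    dt, n, T_true = 1e-3, 200_000, 0.05
    rng = np.random.default_rng(2)
    noise = rng.normal(size=n)
    v = np.zeros(n)
    for i in range(1, n):                  # Ornstein-Uhlenbeck with correlation time T_true
        v[i] = v[i-1] * (1 - dt / T_true) + np.sqrt(2 * dt / T_true) * noise[i]

    def autocorr(x, max_lag):
        x = x - x.mean()
        return np.array([np.dot(x[:x.size - k], x[k:]) / np.dot(x, x) for k in range(max_lag)])

    rho = autocorr(v, max_lag=2000)
    cut = np.argmax(rho <= 0) if np.any(rho <= 0) else rho.size   # integrate to first zero crossing
    T_L = rho[:cut].sum() * dt
    print(f"estimated integral time {T_L:.3f} s (true {T_true} s)")
    ```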

  18. Problematic computer game use as expression of Internet addiction and its association with self-rated health in the Lithuanian adolescent population.

    PubMed

    Ustinavičienė, Ruta; Škėmienė, Lina; Lukšienė, Dalia; Radišauskas, Ričardas; Kalinienė, Gintarė; Vasilavičius, Paulius

    2016-01-01

    Computers and the Internet have become an integral part of today's life. Problematic gaming is related to adolescent's health. The aim of our study was to evaluate the prevalence of Internet addiction among 13-18-year-old schoolchildren and its relation to sex, age, and time spent playing computer games, game type, and subjective health evaluation. A total of 1806 schoolchildren aged 13-18 years were interviewed. The evaluation of Internet addiction was conducted by the Diagnostic Questionnaire according to Young's methodology. The relation between the choice of computer games type, time spent while playing computer games and respondents' Internet addiction were assessed by using multivariate logistic regression analysis. One-tenth (10.6%) of the boys and 7.7% of the girls aged 13-18 years were Internet addicted. Internet addiction was associated with the type of computer game (action or combat vs. logic) among boys (OR=2.42; 95% CI, 1.03-5.67) and with the amount of time spent playing computer games per day during the last month (≥5 vs. <5h) among girls (OR=2.10; 95% CI, 1.19-3.70). The boys who were addicted to the Internet were more likely to rate their health poorer in comparison to their peers who were not addicted to the Internet (OR=2.48; 95% CI, 1.33-4.62). Internet addiction was significantly associated with poorer self-rated health among boys. Copyright © 2016 The Lithuanian University of Health Sciences. Production and hosting by Elsevier Urban & Partner Sp. z o.o. All rights reserved.

  19. Video game playing time and cardiometabolic risk in adolescents: the AFINOS study.

    PubMed

    Martinez-Gómez, David; Gomez-Martinez, Sonia; Ruiz, Jonatan R; Ortega, Francisco B; Marcos, Ascension; Veiga, Oscar L

    2012-09-22

    We aimed to examine the association of video games playing time with cardiometabolic risk biomarkers in adolescents. This study comprised 181 adolescents (88 girls), aged 13 to 17 years. Moderate-to-vigorous physical activity (MVPA) was measured by accelerometry, and video game playing time in computer and console was self-reported. Waist circumference, systolic blood pressure (BP) and diastolic BP, mean arterial pressure, HDL-cholesterol, LDL-cholesterol, total cholesterol, triglycerides, glucose, insulin, and apolipoproteins A-1 and B-100 were measured. Computer games use was not significantly associated with any biomarker (P>0.1) but the time spent using console games was positively associated with diastolic BP, mean arterial pressure, triglycerides, and a clustered cardiometabolic risk score. These results were independent of age, sex, pubertal stage, MVPA, and WC. These results support some evidence regarding a plausible unfavorable role of playing (console) video games on cardiometabolic health in adolescence. Copyright © 2012 Elsevier España, S.L. All rights reserved.

  20. Micro computed tomography evaluation of the Self-adjusting file and ProTaper Universal system on curved mandibular molars.

    PubMed

    Serefoglu, Burcu; Piskin, Beyser

    2017-09-26

    The aim of this investigation was to compare the cleaning and shaping efficiency of the Self-adjusting file and ProTaper, and to assess the correlation between root canal curvature and working time in mandibular molars using micro-computed tomography. Twenty extracted mandibular molars were instrumented with ProTaper and the Self-adjusting file, and the total working time was measured in the mesial canals. The changes in canal volume, surface area and structure model index, transportation, uninstrumented area, and the correlation between working time and the curvature were analyzed. Although no statistically significant difference was observed between the two systems in the distal canals (p>0.05), a significantly higher amount of removed dentin volume and a lower uninstrumented area were provided by ProTaper in the mesial canals (p<0.0001). A correlation between working time and the canal curvature was also observed in the mesial canals for both groups (SAF: r² = 0.792, p<0.0004; PTU: r² = 0.9098, p<0.0001).

  1. Development of urban runoff model FFC-QUAL for first-flush water-quality analysis in urban drainage basins.

    PubMed

    Hur, Sungchul; Nam, Kisung; Kim, Jungsoo; Kwak, Changjae

    2018-01-01

    An urban runoff model that is able to compute the runoff, the pollutant loadings, and the concentrations of water-quality constituents in urban drainages during the first flush was developed. This model, which is referred to as FFC-QUAL, was modified from the existing ILLUDAS model and extended with a water-quality analysis process for dry and rainy periods. For the dry period, specified coefficients for the discharge and water quality were used. During rainfall, we used the Clark and time-area methods for the runoff analyses of pervious and impervious areas to account for the effects of the subbasin shape; moreover, four pollutant accumulation methods and a washoff equation were used to compute the water quality at each time step. According to the verification results, FFC-QUAL provides output generally similar to the measured data for the peak flow, total runoff volume, total loadings, peak concentration, and time of peak concentration for three rainfall events in the Gunja subbasin. In comparison with the ILLUDAS, SWMM, and MOUSE models, there is little difference between these models and the model developed in this study. The proposed model should be useful in urban watersheds because of its simplicity and its capacity to model common pollutants (e.g., biological oxygen demand, chemical oxygen demand, Escherichia coli, suspended solids, and total nitrogen and phosphorus) in runoff. The proposed model can also be used in design studies to determine how changes in infrastructure will affect the runoff and pollution loads. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
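
    As orientation for readers unfamiliar with such formulations (the exact accumulation and washoff equations used in FFC-QUAL may differ), a commonly used exponential buildup/washoff pair is:

    ```latex
    B(t) = B_{\max}\left(1 - e^{-k_b t}\right), \qquad
    W(t) = k_w\, q(t)^{\,n_w}\, B(t),
    ```

    where B is the pollutant mass built up on the surface during dry weather, q the runoff rate, W the washoff load, and k_b, k_w, n_w calibration coefficients.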

  2. A time and imaging cost analysis of low-risk ED observation patients: a conservative 64-section computed tomography coronary angiography "triple rule-out" compared to nuclear stress test strategy.

    PubMed

    Takakuwa, Kevin M; Halpern, Ethan J; Shofer, Frances S

    2011-02-01

    The study aimed to examine time and imaging costs of 2 different imaging strategies for low-risk emergency department (ED) observation patients with acute chest pain or symptoms suggestive of acute coronary syndrome. We compared a "triple rule-out" (TRO) 64-section multidetector computed tomography protocol with nuclear stress testing. This was a prospective observational cohort study of consecutive ED patients who were enrolled in our chest pain observation protocol during a 16-month period. Our standard observation protocol included a minimum of 2 sets of cardiac enzymes at least 6 hours apart followed by a nuclear stress test. Once a week, observation patients were offered a TRO (to evaluate for coronary artery disease, thoracic dissection, and pulmonary embolus) multidetector computed tomography with the option of further stress testing for those patients found to have evidence of coronary artery disease. We analyzed 832 consecutive observation patients including 214 patients who underwent the TRO protocol. Mean total length of stay was 16.1 hours for TRO patients, 16.3 hours for TRO plus other imaging test, 22.6 hours for nuclear stress testing, 23.3 hours for nuclear stress testing plus other imaging tests, and 23.7 hours for nuclear stress testing plus TRO (P < .0001 for TRO and TRO + other test compared to stress test ± other test). Mean imaging times were 3.6, 4.4, 5.9, 7.5, and 6.6 hours, respectively (P < .05 for TRO and TRO + other test compared to stress test ± other test). Mean imaging costs were $1307 for TRO patients vs $945 for nuclear stress testing. Triple rule-out reduced total length of stay and imaging time but incurred higher imaging costs. A per-hospital analysis would be needed to determine if patient time savings justify the higher imaging costs. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Electron-impact vibrational relaxation in high-temperature nitrogen

    NASA Technical Reports Server (NTRS)

    Lee, Jong-Hun

    1992-01-01

    The vibrational relaxation of N2 molecules by electron impact is examined for future planetary entry environments. Multiple-quantum transitions from excited states to higher/lower states are considered for the electronic ground state of the nitrogen molecule N2 (X 1Sigma-g(+)). Vibrational excitation and deexcitation rate coefficients obtained by computational quantum chemistry are incorporated into the 'diffusion model' to evaluate the time variations of the vibrational number densities of each energy state and the total vibrational energy. Results show a non-Boltzmann distribution of number densities at the earlier stage of relaxation, which in turn slows the approach to equilibrium but has little effect on the time variation of the total vibrational energy. An approximate rate equation and a corresponding relaxation time from the excited states, compatible with the system of flow conservation equations, are derived. The relaxation time from the excited states shows only a weak dependence on the initial vibrational temperature. An empirical curve-fit formula for the improved e-V relaxation time is obtained.

  4. Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity

    DOE PAGES

    Gordiz, Kiarash; Singh, David J.; Henry, Asegun

    2015-01-29

    In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times and exhibits similar overall computational effort.
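
    The equilibrium-MD thermal conductivity referred to above is conventionally obtained from the Green-Kubo relation, stated here for context (the standard route for equilibrium simulations; the report itself does not reproduce it):

    ```latex
    \kappa = \frac{1}{3\, V\, k_B\, T^{2}} \int_0^{\infty} \langle \mathbf{J}(0)\cdot\mathbf{J}(t)\rangle\, dt,
    ```

    where J(t) is the instantaneous heat current of the simulation cell and the angle brackets denote exactly the phase-space average whose time-sampling and ensemble-sampling estimates are compared above.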

  5. Intercommunications in Real Time, Redundant, Distributed Computer System

    NASA Technical Reports Server (NTRS)

    Zanger, H.

    1980-01-01

    An investigation into the applicability of fiber optic communication techniques to real-time avionic control systems, in particular the total automatic flight control system used for VSTOL aircraft, is presented. The system consists of spatially distributed microprocessors. The overall control function is partitioned to yield a unidirectional data flow between the processing elements (PE). System reliability is enhanced by the use of triple redundancy. Some general overall system specifications are listed here to provide the necessary background for the requirements of the communications system.

  6. Quantum Algorithms for Computational Physics: Volume 3 of Lattice Gas Dynamics

    DTIC Science & Technology

    2007-01-03

    ... time-dependent state |q(t)〉 of a two-energy-level quantum mechanical system, which is a fermionic qubit and is governed by the Schroedinger wave... on-site ket of size 2^B; |Ψ〉, total system ket of size 2^Q ... 2.2 The quantum state in the number representation: From the previous section, a time-dependent ... duration depends on the particular experimental realization, so that the natural coupling along with the program of externally applied pulses together ...

  7. Television screen time, but not computer use and reading time, is associated with cardio-metabolic biomarkers in a multiethnic Asian population: a cross-sectional study.

    PubMed

    Nang, Ei Ei Khaing; Salim, Agus; Wu, Yi; Tai, E Shyong; Lee, Jeannette; Van Dam, Rob M

    2013-05-30

    Recent evidence shows that sedentary behaviour may be an independent risk factor for cardiovascular diseases, diabetes, cancers and all-cause mortality. However, results are not consistent and different types of sedentary behaviour might have different effects on health. Thus the aim of this study was to evaluate the association between television screen time, computer/reading time and cardio-metabolic biomarkers in a multiethnic urban Asian population. We also sought to understand the potential mediators of this association. The Singapore Prospective Study Program (2004-2007) was a cross-sectional population-based study in a multiethnic population in Singapore. We studied 3305 Singaporean adults of Chinese, Malay and Indian ethnicity who did not have pre-existing diseases and conditions that could affect their physical activity. Multiple linear regression analysis was used to assess the association of television screen time and computer/reading time with cardio-metabolic biomarkers [blood pressure, lipids, glucose, adiponectin, C reactive protein and homeostasis model assessment of insulin resistance (HOMA-IR)]. Path analysis was used to examine the role of mediators of the observed association. Longer television screen time was significantly associated with higher systolic blood pressure, total cholesterol, triglycerides, C reactive protein, HOMA-IR, and lower adiponectin after adjustment for potential socio-demographic and lifestyle confounders. Dietary factors and body mass index, but not physical activity, were potential mediators that explained most of these associations between television screen time and cardio-metabolic biomarkers. The associations of television screen time with triglycerides and HOMA-IR were only partly explained by dietary factors and body mass index. No association was observed between computer/reading time and worse levels of cardio-metabolic biomarkers. In this urban Asian population, television screen time was associated with worse levels of various cardio-metabolic risk factors. This may reflect detrimental effects of television screen time on dietary habits rather than replacement of physical activity.

  8. A hybrid meta-heuristic algorithm for the vehicle routing problem with stochastic travel times considering the driver's satisfaction

    NASA Astrophysics Data System (ADS)

    Tavakkoli-Moghaddam, Reza; Alinaghian, Mehdi; Salamat-Bakhsh, Alireza; Norouzi, Narges

    2012-05-01

    The vehicle routing problem is a significant problem that has attracted great attention from researchers in recent years. The main objectives of the vehicle routing problem are to minimize the traveled distance, total traveling time, number of vehicles, and transportation cost. Reducing these variables decreases the total cost and increases the driver's satisfaction level. This satisfaction, which decreases as the service time increases, is an important logistic concern for a company. Stochastic travel times, governed by a probability distribution, lead to variation in the service time, an effect that is ignored in classical routing problems. This paper addresses the resulting increase in service time by using a stochastic time for each tour such that the total traveling time of the vehicles is kept within a specified limit with a defined probability. Since exact solutions of the vehicle routing problem, which belongs to the category of NP-hard problems, are not practical at a large scale, a hybrid algorithm based on simulated annealing with genetic operators is proposed to obtain an efficient solution with reasonable computational cost and time. Finally, for some small cases, the results of the proposed algorithm were compared with results obtained by the Lingo 8 software. The results indicate the efficiency of the proposed hybrid simulated annealing algorithm.

  9. Consequences of Base Time for Redundant Signals Experiments

    PubMed Central

    Townsend, James T.; Honey, Christopher

    2007-01-01

    We report analytical and computational investigations into the effects of base time on the diagnosticity of two popular theoretical tools in the redundant signals literature: (1) the race model inequality and (2) the capacity coefficient. We show analytically and without distributional assumptions that the presence of base time decreases the sensitivity of both of these measures to model violations. We further use simulations to investigate the statistical power of model selection tools based on the race model inequality, both with and without base time. Base time decreases statistical power, and biases the race model test toward conservatism. The magnitude of this biasing effect increases as we increase the proportion of total reaction time variance contributed by base time. We marshal empirical evidence to suggest that the proportion of reaction time variance contributed by base time is relatively small, and that the effects of base time on the diagnosticity of our model-selection tools are therefore likely to be minor. However, uncertainty remains concerning the magnitude and even the definition of base time. Experimentalists should continue to be alert to situations in which base time may contribute a large proportion of the total reaction time variance. PMID:18670591
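
    For readers outside this literature, the two tools analyzed are, in their standard forms (notation mine, not from the record):

    ```latex
    F_{AB}(t) \le F_A(t) + F_B(t) \quad \text{(race model inequality)}, \qquad
    C(t) = \frac{H_{AB}(t)}{H_A(t) + H_B(t)}, \quad H(t) = -\ln\!\big(1 - F(t)\big) \quad \text{(capacity coefficient)},
    ```

    where F denotes the response-time distribution function in the single-signal (A, B) and redundant (AB) conditions and H the corresponding integrated hazard; adding a common base-time component to all conditions compresses these contrasts, which is the dilution effect studied above.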

  10. Minimally invasive and computer-navigated total hip arthroplasty: a qualitative and systematic review of the literature

    PubMed Central

    2010-01-01

    Background Both minimally invasive surgery (MIS) and computer-assisted surgery (CAS) for total hip arthroplasty (THA) have gained popularity in recent years. We conducted a qualitative and systematic review to assess the effectiveness of MIS, CAS and computer-assisted MIS for THA. Methods An extensive computerised literature search of PubMed, Medline, Embase and OVIDSP was conducted. Both randomised clinical trials and controlled clinical trials on the effectiveness of MIS, CAS and computer-assisted MIS for THA were included. Methodological quality was independently assessed by two reviewers. Effect estimates were calculated and a best-evidence synthesis was performed. Results Four high-quality and 14 medium-quality studies with MIS THA as study contrast, and three high-quality and four medium-quality studies with CAS THA as study contrast were included. No studies with computer-assisted MIS for THA as study contrast were identified. Strong evidence was found for a decrease in operative time and intraoperative blood loss for MIS THA, with no difference in complication rates and risk for acetabular outliers. Strong evidence exists that there is no difference in physical functioning, measured either by questionnaires or by gait analysis. Moderate evidence was found for a shorter length of hospital stay after MIS THA. Conflicting evidence was found for a positive effect of MIS THA on pain in the early postoperative period, but that effect diminished after three months postoperatively. Strong evidence was found for an increase in operative time for CAS THA, and limited evidence was found for a decrease in intraoperative blood loss. Furthermore, strong evidence was found for no difference in complication rates, as well as for a significantly lower risk for acetabular outliers. Conclusions The results indicate that MIS THA is a safe surgical procedure, without increases in operative time, blood loss, operative complication rates and component malposition rates. However, the beneficial effect of MIS THA on functional recovery has to be proven. The results also indicate that CAS THA, though resulting in an increase in operative time, may have a positive effect on operative blood loss and operative complication rates. More importantly, the use of CAS results in better positioning of acetabular component of the prosthesis. PMID:20470443

  11. Productivity associated with visual status of computer users.

    PubMed

    Daum, Kent M; Clore, Katherine A; Simms, Suzanne S; Vesely, Jon W; Wilczek, Dawn D; Spittle, Brian M; Good, Greg W

    2004-01-01

    The aim of this project is to examine the potential connection between the astigmatic refractive corrections of subjects using computers and their productivity and comfort. We hypothesize that improving the visual status of subjects using computers results in greater productivity, as well as improved visual comfort. Inclusion criteria required subjects 19 to 30 years of age with complete vision examinations before being enrolled. Using a double-masked, placebo-controlled, randomized design, subjects completed three experimental tasks designed to assess the effects of refractive error on productivity (time to completion and the number of errors) at a computer. The tasks resembled those commonly undertaken by computer users and involved visual search tasks of: (1) counties and populations; (2) nonsense word search; and (3) a modified text-editing task. Estimates of productivity for time to completion varied from a minimum of 2.5% up to 28.7% with 2 D of cylinder miscorrection. Assuming a conservative estimate of an overall 2.5% increase in productivity with appropriate astigmatic refractive correction, our data suggest a favorable cost-benefit ratio of at least 2.3 for the visual correction of an employee (total cost 268 dollars) with a salary of 25,000 dollars per year. We conclude that astigmatic refractive error affected both productivity and visual comfort under the conditions of this experiment. These data also suggest a favorable cost-benefit ratio for employers who provide computer-specific eyewear to their employees.

  12. Virtual reality neurosurgery: a simulator blueprint.

    PubMed

    Spicer, Mark A; van Velsen, Martin; Caffrey, John P; Apuzzo, Michael L J

    2004-04-01

    This article details preliminary studies undertaken to integrate the most relevant advancements across multiple disciplines in an effort to construct a highly realistic neurosurgical simulator based on a distributed computer architecture. Techniques based on modified computational modeling paradigms incorporating finite element analysis are presented, as are current and projected efforts directed toward the implementation of a novel bidirectional haptic device. Patient-specific data derived from noninvasive magnetic resonance imaging sequences are used to construct a computational model of the surgical region of interest. Magnetic resonance images of the brain may be coregistered with those obtained from magnetic resonance angiography, magnetic resonance venography, and diffusion tensor imaging to formulate models of varying anatomic complexity. The majority of the computational burden is encountered in the presimulation reduction of the computational model and allows realization of the required threshold rates for the accurate and realistic representation of real-time visual animations. Intracranial neurosurgical procedures offer an ideal testing site for the development of a totally immersive virtual reality surgical simulator when compared with the simulations required in other surgical subspecialties. The material properties of the brain as well as the typically small volumes of tissue exposed in the surgical field, coupled with techniques and strategies to minimize computational demands, provide unique opportunities for the development of such a simulator. Incorporation of real-time haptic and visual feedback is approached here and likely will be accomplished soon.

  13. A machine-learning approach for computation of fractional flow reserve from coronary computed tomography.

    PubMed

    Itu, Lucian; Rapaka, Saikiran; Passerini, Tiziano; Georgescu, Bogdan; Schwemmer, Chris; Schoebinger, Max; Flohr, Thomas; Sharma, Puneet; Comaniciu, Dorin

    2016-07-01

    Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions and is clinically obtained using an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in being able to noninvasively estimate FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P < 0.001), and no systematic bias was found in Bland-Altman analysis: mean difference was -0.00081 ± 0.0039. Invasive FFR ≤ 0.80 was found in 38 lesions out of 125 and was predicted by the machine-learning algorithm with a sensitivity of 81.6%, a specificity of 83.9%, and an accuracy of 83.2%. The correlation was 0.729 (P < 0.001). Compared with the physics-based computation, average execution time was reduced by more than 80 times, leading to near real-time assessment of FFR. Average execution time went down from 196.3 ± 78.5 s for the CFD model to ∼2.4 ± 0.44 s for the machine-learning model on a workstation with 3.4-GHz Intel i7 8-core processor. Copyright © 2016 the American Physiological Society.
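
    The following sketch illustrates the general idea of training a regression surrogate on synthetically generated lesion features labelled by a physics-based solver, in the spirit of the approach described above. The feature set, the stand-in "physics" function, and the choice of GradientBoostingRegressor are assumptions made for illustration; they are not the authors' model.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)

    # Hypothetical per-lesion geometric features: healthy reference radius,
    # minimal lumen radius, lesion length, distance from the ostium (all assumed).
    n = 5000
    X = np.column_stack([
        rng.uniform(1.0, 2.5, n),    # reference radius (mm)
        rng.uniform(0.3, 2.5, n),    # minimal lumen radius (mm)
        rng.uniform(1.0, 30.0, n),   # lesion length (mm)
        rng.uniform(0.0, 120.0, n),  # distance from ostium (mm)
    ])

    def physics_surrogate(features):
        """Stand-in for the physics-based (CFD) FFR used to label the synthetic set."""
        area_ratio = (features[:, 1] / features[:, 0]) ** 2
        drop = 0.5 * (1.0 - np.clip(area_ratio, 0.0, 1.0)) * np.tanh(features[:, 2] / 10.0)
        return np.clip(1.0 - drop + 0.01 * rng.standard_normal(len(features)), 0.0, 1.0)

    y = physics_surrogate(X)

    # Train a regression surrogate on the synthetic database, then reuse it for
    # near real-time prediction on new geometries.
    model = GradientBoostingRegressor(random_state=0).fit(X, y)
    print("predicted FFR for first three synthetic lesions:", model.predict(X[:3]).round(3))
    ```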

  14. Application of a hybrid MPI/OpenMP approach for parallel groundwater model calibration using multi-core computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan

    2010-01-01

    Calibration of groundwater models involves hundreds to thousands of forward solutions, each of which may solve many transient coupled nonlinear partial differential equations, resulting in a computationally intensive problem. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multi-core computers. HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for direct solutions for a reactive transport model application and a field-scale coupled flow and transport model application. In the reactive transport model, a single parallelizable loop is identified, using GPROF, to account for over 97% of the total computational time. Addition of a few lines of OpenMP compiler directives to the loop yields a speedup of about 10 on a 16-core compute node. For the field-scale model, parallelizable loops in 14 of 174 HGC5 subroutines that require 99% of the execution time are identified. As these loops are parallelized incrementally, the scalability is found to be limited by a loop where Cray PAT detects cache miss rates of over 90%. With this loop rewritten, a speedup similar to that of the first application is achieved. The OpenMP-parallelized code can be run efficiently on multiple workstations in a network, or on multiple compute nodes of a cluster as slaves under parallel PEST, to speed up model calibration. To run calibration on clusters as a single task, the Levenberg-Marquardt algorithm is added to HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, 100-200 compute cores are used to reduce the calibration time from weeks to a few hours for these two applications. This approach is applicable to most existing groundwater model codes for many applications.
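
    A minimal sketch of the MPI part of such a hybrid scheme is shown below: finite-difference Jacobian columns of a hypothetical forward model are distributed round-robin across ranks with mpi4py and gathered on the root, mirroring the parallel Jacobian calculation described above. The forward model and the column assignment are illustrative assumptions, not the HGC5 implementation.

    ```python
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    def forward_model(params):
        """Hypothetical stand-in for one forward solution of the groundwater model."""
        return np.array([np.sum(params ** 2), np.prod(params), np.sum(np.sin(params))])

    def parallel_jacobian(params, eps=1e-6):
        """Finite-difference Jacobian with columns distributed round-robin over ranks."""
        base = forward_model(params)
        cols = {}
        for j in range(rank, len(params), size):   # this rank's subset of parameters
            perturbed = params.copy()
            perturbed[j] += eps
            cols[j] = (forward_model(perturbed) - base) / eps
        gathered = comm.gather(cols, root=0)       # collect partial Jacobians on rank 0
        if rank != 0:
            return None
        J = np.empty((len(base), len(params)))
        for part in gathered:
            for j, col in part.items():
                J[:, j] = col
        return J

    if __name__ == "__main__":
        J = parallel_jacobian(np.array([1.0, 2.0, 3.0, 4.0]))
        if rank == 0:
            print(J)
    ```

    Under a typical MPI installation this would be launched with something like `mpiexec -n 4 python jacobian_mpi.py`; with a single process it degenerates to a serial finite-difference Jacobian.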

  15. Virtual viewpoint synthesis in multi-view video system

    NASA Astrophysics Data System (ADS)

    Li, Fang; Yang, Shiqiang

    2005-07-01

    In this paper, we present a virtual viewpoint video synthesis algorithm designed to satisfy three aims: low computational cost, real-time interpolation, and acceptable video quality. In contrast with previous approaches, the method recovers only partial 3D structure from neighboring video sources instead of reconstructing complete 3D information from all video sources, which greatly reduces computation and allows us to demonstrate interactive multi-view video synthesis on a personal computer. Furthermore, by selecting feature points to establish correspondences between frames captured by neighboring cameras, we avoid the need for camera calibration. Finally, our method remains usable when the angle between neighboring cameras is 25-30 degrees, much larger than in typical computer vision experiments. As a result, the method can be applied in many settings such as live sports broadcasting and video conferencing.

  16. Analysis of severe storm data

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.

    1983-01-01

    The Mesoscale Analysis and Space Sensor (MASS) Data Management and Analysis System developed by Atsuko Computing International (ACI) on the MASS HP-1000 Computer System within the Systems Dynamics Laboratory of the Marshall Space Flight Center is described. The MASS Data Management and Analysis System was successfully implemented and utilized daily by atmospheric scientists to graphically display and analyze large volumes of conventional and satellite-derived meteorological data. The scientists can interactively process various types of atmospheric data (sounding, single level, grid, and image) using the MASS (AVE80) software, whose components share common data and user inputs, thereby reducing overhead, optimizing execution time, and enhancing the flexibility, usability, and understandability of the total system/software capabilities. In addition, ACI installed eight APPLE III graphics/imaging computer terminals in individual scientists' offices and integrated them into the MASS HP-1000 Computer System, significantly enhancing the overall research environment.

  17. Information Processing Capacity of Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge

    2012-07-01

    Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory.
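
    A small numerical illustration of the capacity idea, under simplifying assumptions: a random fading-memory network is driven by an i.i.d. input, the fraction of variance of each delayed input recoverable by a linear readout is computed, and the sum stays below the number of state variables. The network size, input statistics, and delay range are arbitrary choices for this sketch, not the systems studied in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, T, washout = 50, 5000, 200

    # Random recurrent network scaled to have fading memory (spectral radius < 1).
    W_in = rng.uniform(-0.5, 0.5, N)
    W = rng.standard_normal((N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

    u = rng.uniform(-1, 1, T)                 # i.i.d. input signal
    x = np.zeros((T, N))
    for t in range(1, T):
        x[t] = np.tanh(W @ x[t - 1] + W_in * u[t])
    X = x[washout:]

    def capacity(delay):
        """Fraction of the variance of u(t - delay) recoverable by a linear readout."""
        z = u[washout - delay:T - delay]
        w, *_ = np.linalg.lstsq(X, z, rcond=None)
        return 1.0 - np.mean((z - X @ w) ** 2) / np.var(z)

    total = sum(max(capacity(d), 0.0) for d in range(1, 60))
    print(f"summed linear memory capacity ~ {total:.1f} (bounded by N = {N})")
    ```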

  18. Zero-block mode decision algorithm for H.264/AVC.

    PubMed

    Lee, Yu-Ming; Lin, Yinyi

    2009-03-01

    In a previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based on the number of zero-blocks of 4 x 4 DCT coefficients between the current macroblock and the co-located macroblock. That algorithm achieves a significant reduction in computation, but its benefit is limited for high bit-rate coding. To improve computational efficiency, in this paper we propose an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation, and which incorporates two suitable decision methods for semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to intramode prediction in P frames. The enhanced zero-block decision algorithm yields an average reduction of 27% in total encoding time compared to the original zero-block decision algorithm.
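
    For orientation, the sketch below counts the zero-blocks of a 16x16 residual macroblock by applying a 4x4 DCT and a uniform quantizer to each sub-block. The quantization step and the direct DCT/Q route are illustrative simplifications; the paper's contribution is precisely to avoid this direct DCT/Q computation via early detection.

    ```python
    import numpy as np

    def dct4x4_matrix():
        """Orthonormal 4x4 DCT-II basis matrix."""
        C = np.array([[np.cos(np.pi * (2 * n + 1) * k / 8.0) for n in range(4)]
                      for k in range(4)])
        C[0] *= np.sqrt(1.0 / 4.0)
        C[1:] *= np.sqrt(2.0 / 4.0)
        return C

    C = dct4x4_matrix()

    def count_zero_blocks(residual_mb, qstep=20.0):
        """Count the 4x4 blocks of a 16x16 residual macroblock whose quantized
        DCT coefficients are all zero (the 'zero-blocks')."""
        zero_blocks = 0
        for by in range(0, 16, 4):
            for bx in range(0, 16, 4):
                block = residual_mb[by:by + 4, bx:bx + 4]
                coeff = C @ block @ C.T                    # separable 2-D DCT
                if np.all(np.round(coeff / qstep) == 0):   # uniform quantizer
                    zero_blocks += 1
        return zero_blocks

    mb = np.random.default_rng(2).normal(0.0, 3.0, (16, 16))   # synthetic residual
    print(count_zero_blocks(mb), "of 16 sub-blocks are zero-blocks")
    ```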

  19. On the Fast Evaluation Method of Temperature and Gas Mixing Ratio Weighting Functions for Remote Sensing of Planetary Atmospheres in Thermal IR and Microwave

    NASA Technical Reports Server (NTRS)

    Ustinov, E. A.

    1999-01-01

    Evaluation of weighting functions in the atmospheric remote sensing is usually the most computer-intensive part of the inversion algorithms. We present an analytic approach to computations of temperature and mixing ratio weighting functions that is based on our previous results but the resulting expressions use the intermediate variables that are generated in computations of observable radiances themselves. Upwelling radiances at the given level in the atmosphere and atmospheric transmittances from space to the given level are combined with local values of the total absorption coefficient and its components due to absorption of atmospheric constituents under study. This makes it possible to evaluate the temperature and mixing ratio weighting functions in parallel with evaluation of radiances. This substantially decreases the computer time required for evaluation of weighting functions. Implications for the nadir and limb viewing geometries are discussed.

  20. Study of high speed complex number algorithms [for determining antenna far field radiation patterns]

    NASA Technical Reports Server (NTRS)

    Heisler, R.

    1981-01-01

    A method of evaluating the radiation integral on the curved surface of a reflecting antenna is presented. A three dimensional Fourier transform approach is used to generate a two dimensional radiation cross-section along a planar cut at any angle phi through the far field pattern. Central to the method is an algorithm for evaluating a subset of the full three dimensional discrete Fourier transform results. The subset elements are selectively evaluated to yield data along a geometric plane of constant phi. The algorithm is extremely efficient, so that computation of the induced surface currents via the physical optics approximation dominates the computer time required to compute a radiation pattern. Application to paraboloid reflectors with off-focus feeds is presented, but the method is easily extended to offset antenna systems and reflectors of arbitrary shape. Numerical results were computed for both gain and phase and are compared with other published work.
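
    As a generic illustration of evaluating only a needed subset of a 3-D DFT, the sketch below computes transform values along a single cut at angle phi by direct summation instead of forming the full transform. The sampled "surface currents", the kz = 0 slice, and the radial sampling are assumptions made for the sketch; it does not reproduce the paper's algorithm or the physical-optics model.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N = 32
    samples = rng.standard_normal((N, N, N))   # stand-in for sampled surface currents

    def dft_plane_cut(data, phi, n_points=64):
        """Evaluate the 3-D DFT only at frequency samples lying along a cut at
        angle phi, instead of computing the full N**3 transform."""
        N = data.shape[0]
        n = np.arange(N)
        out = np.zeros(n_points, dtype=complex)
        for i, r in enumerate(np.linspace(0.0, N / 2 - 1, n_points)):
            kx, ky = r * np.cos(phi), r * np.sin(phi)      # in-plane frequency pair
            ex = np.exp(-2j * np.pi * kx * n / N)
            ey = np.exp(-2j * np.pi * ky * n / N)
            ez = np.ones(N)                                # kz = 0 slice of the cut
            out[i] = np.einsum("xyz,x,y,z->", data, ex, ey, ez)
        return out

    cut = dft_plane_cut(samples, phi=np.deg2rad(30.0))
    print(np.abs(cut[:5]).round(2))
    ```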

  1. Information Processing Capacity of Dynamical Systems

    PubMed Central

    Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge

    2012-01-01

    Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory. PMID:22816038

  2. The Direct Lighting Computation in Global Illumination Methods

    NASA Astrophysics Data System (ADS)

    Wang, Changyaw Allen

    1994-01-01

    Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation (the light reaching the viewer from the emitting sources after exactly one reflection), Monte Carlo sampling methods, and light source simplification. Results include a new sample generation method, a framework for predicting the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment which, for the first time, makes ray tracing feasible for highly complex environments.
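
    The sketch below shows a textbook Monte Carlo estimator for direct lighting from a single area light at a Lambertian point, with visibility assumed to be 1 everywhere. The scene values and the uniform area sampling are assumptions made for illustration and are unrelated to the dissertation's new sampling methods.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical scene: a unit square area light on the ceiling, a Lambertian
    # point x with normal n; visibility is assumed to be 1 everywhere.
    Le, rho = 10.0, 0.6                                   # emitted radiance, albedo
    x, n = np.zeros(3), np.array([0.0, 0.0, 1.0])
    light_corner = np.array([-0.5, -0.5, 2.0])
    light_normal, light_area, light_edge = np.array([0.0, 0.0, -1.0]), 1.0, 1.0

    def direct_lighting(num_samples=4096):
        """Monte Carlo estimate of direct lighting by uniform area sampling."""
        total = 0.0
        for _ in range(num_samples):
            u, v = rng.uniform(0.0, light_edge, 2)
            y = light_corner + np.array([u, v, 0.0])      # sample point on the light
            d = y - x
            r2 = d @ d
            w = d / np.sqrt(r2)
            cos_x = max(n @ w, 0.0)
            cos_y = max(light_normal @ (-w), 0.0)
            # contribution: Le * BRDF * geometry term / pdf, with pdf = 1 / area
            total += Le * (rho / np.pi) * cos_x * cos_y / r2 * light_area
        return total / num_samples

    print("direct lighting estimate:", round(direct_lighting(), 4))
    ```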

  3. Thermodynamic characterization of networks using graph polynomials

    NASA Astrophysics Data System (ADS)

    Ye, Cheng; Comin, César H.; Peron, Thomas K. DM.; Silva, Filipi N.; Rodrigues, Francisco A.; Costa, Luciano da F.; Torsello, Andrea; Hancock, Edwin R.

    2015-09-01

    In this paper, we present a method for characterizing the evolution of time-varying complex networks by adopting a thermodynamic representation of network structure computed from a polynomial (or algebraic) characterization of graph structure. Commencing from a representation of graph structure based on a characteristic polynomial computed from the normalized Laplacian matrix, we show how the polynomial is linked to the Boltzmann partition function of a network. This allows us to compute a number of thermodynamic quantities for the network, including the average energy and entropy. Assuming that the system does not change volume, we can also compute the temperature, defined as the rate of change of entropy with energy. All three thermodynamic variables can be approximated using low-order Taylor series that can be computed using the traces of powers of the Laplacian matrix, avoiding explicit computation of the normalized Laplacian spectrum. These polynomial approximations allow a smoothed representation of the evolution of networks to be constructed in the thermodynamic space spanned by entropy, energy, and temperature. We show how these thermodynamic variables can be computed in terms of simple network characteristics, e.g., the total number of nodes and node degree statistics for nodes connected by edges. We apply the resulting thermodynamic characterization to real-world time-varying networks representing complex systems in the financial and biological domains. The study demonstrates that the method provides an efficient tool for detecting abrupt changes and characterizing different stages in network evolution.
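
    A compact sketch of the basic construction, under the simplifying assumption that the Boltzmann weights are taken directly from the full normalized-Laplacian spectrum rather than from the trace-based Taylor approximations used in the paper: eigenvalues act as energy levels, giving a partition function, an average energy, and an entropy for each graph (temperature could then be estimated numerically as the rate of change of entropy with energy across snapshots).

    ```python
    import numpy as np
    import networkx as nx

    def thermodynamic_variables(G, beta=1.0):
        """Average energy and entropy of a graph, treating the eigenvalues of its
        normalized Laplacian as energy levels of a Boltzmann ensemble."""
        L = nx.normalized_laplacian_matrix(G).toarray()
        lam = np.linalg.eigvalsh(L)
        weights = np.exp(-beta * lam)
        Z = weights.sum()                       # partition function
        p = weights / Z                         # occupation probabilities
        energy = float(np.sum(p * lam))
        entropy = float(-np.sum(p * np.log(p)))
        return energy, entropy

    # Two snapshots of an evolving network; the denser snapshot shifts both variables.
    G_sparse = nx.erdos_renyi_graph(100, 0.10, seed=0)
    G_dense = nx.erdos_renyi_graph(100, 0.30, seed=0)
    print("sparse:", thermodynamic_variables(G_sparse))
    print("dense: ", thermodynamic_variables(G_dense))
    ```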

  4. Reciprocating vs Rotary Instrumentation in Pediatric Endodontics: Cone Beam Computed Tomographic Analysis of Deciduous Root Canals using Two Single-file Systems.

    PubMed

    Prabhakar, Attiguppe R; Yavagal, Chandrashekar; Dixit, Kratika; Naik, Saraswathi V

    2016-01-01

    Primary root canals are considered to be most challenging due to their complex anatomy. "Wave one" and "one shape" are single-file systems with reciprocating and rotary motion respectively. The aim of this study was to evaluate and compare dentin thickness, centering ability, canal transportation, and instrumentation time of wave one and one shape files in primary root canals using a cone beam computed tomographic (CBCT) analysis. This is an experimental, in vitro study comparing the two groups. A total of 24 extracted human primary teeth with minimum 7 mm root length were included in the study. Cone beam computed tomographic images were taken before and after the instrumentation for each group. Dentin thickness, centering ability, canal transportation, and instrumentation times were evaluated for each group. A significant difference was found in instrumentation time and canal transportation measures between the two groups. Wave one showed less canal transportation as compared with one shape, and the mean instrumentation time of wave one was significantly less than one shape. Reciprocating single-file systems was found to be faster with much less procedural errors and can hence be recommended for shaping the root canals of primary teeth. How to cite this article: Prabhakar AR, Yavagal C, Dixit K, Naik SV. Reciprocating vs Rotary Instrumentation in Pediatric Endodontics: Cone Beam Computed Tomographic Analysis of Deciduous Root Canals using Two Single-File Systems. Int J Clin Pediatr Dent 2016;9(1):45-49.

  5. Automating NEURON Simulation Deployment in Cloud Resources.

    PubMed

    Stockton, David B; Santamaria, Fidel

    2017-01-01

    Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Cloud Computing, based on Amazon's proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model.

  6. Automating NEURON Simulation Deployment in Cloud Resources

    PubMed Central

    Santamaria, Fidel

    2016-01-01

    Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Cloud Computing, based on Amazon’s proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model. PMID:27655341

  7. Validity of the modified RULA for computer workers and reliability of one observation compared to six.

    PubMed

    Levanon, Yafa; Lerman, Yehuda; Gefen, Amit; Ratzon, Navah Z

    2014-01-01

    Awkward body posture while typing is associated with musculoskeletal disorders (MSDs). Valid rapid assessment of computer workers' body posture is essential for the prevention of MSD among this large population. This study aimed to examine the validity of the modified rapid upper limb assessment (mRULA) which adjusted the rapid upper limb assessment (RULA) for computer workers. Moreover, this study examines whether one observation during a working day is sufficient or more observations are needed. A total of 29 right-handed computer workers were recruited. RULA and mRULA were conducted. The observations were then repeated six times at one-hour intervals. A significant moderate correlation (r = 0.6 and r = 0.7 for mouse and keyboard, respectively) was found between the assessments. No significant differences were found between one observation and six observations per working day. The mRULA was found to be valid for the assessment of computer workers, and one observation was sufficient to assess the work-related risk factor.

  8. Short communication: Analysis of genomic predictor population for Holstein dairy cattle in the United States – Effects of sex and age

    USDA-ARS?s Scientific Manuscript database

    The number of females genotyped in the US has increased to 12,650 per month, comprising 74% of the total genotypes received in 2013. Concerns of increased computing time of the ever-growing predictor population set and linkage decay between the ancestral population and the current animals have arise...

  9. 20 CFR 404.1263 - When fractional part of a cent may be disregarded-for wages paid prior to 1987.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... cent shall be used in computing the total of contributions. If a State Fails To Make Timely Payments... disregarded-for wages paid prior to 1987. 404.1263 Section 404.1263 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Coverage of Employees of State and...

  10. Estimating Cone and Seed Production and Monitoring Pest Damage in Southern Pine Seed Orchards

    Treesearch

    Carl W. Fatzinger; H. David Muse; Thomas Miller; Helen T. Bhattacharyya

    1988-01-01

    Field sampling procedures and computer programs are described for monitoring seed production and pest damage in southern pine seed orchards. The system estimates total orchard yields of female strobili and seeds, quantifies pest damage, determines times of year when losses occur, and produces life tables for female strobili. An example is included to illustrate the...

  11. Adiposity and different types of screen time.

    PubMed

    Falbe, Jennifer; Rosner, Bernard; Willett, Walter C; Sonneville, Kendrin R; Hu, Frank B; Field, Alison E

    2013-12-01

    Few prospective studies have examined separate forms of screen time in relation to adiposity. Our objective was to assess independent relations of television, electronic games (video/computer), and digital versatile disc (DVD)/videos and total screen time with change in adolescent BMI. Using data from the 2004, 2006, and 2008 waves of the ongoing Growing up Today Study II, we assessed baseline and 2-year change in reported screen time in relation to concurrent change in BMI among 4287 girls and 3505 boys aged 9 to 16 years in 2004. Gender-specific models adjusted for previous BMI, age, race/ethnicity, growth/development, months between questionnaires, and physical activity. Among girls and boys, each hour per day increase in reported television viewing was associated with a 0.09 increase in BMI (Ps < .001), and each hour per day increase in total screen time was associated with a 0.07 increase among girls and 0.05 increase among boys (Ps < .001). Among girls only, greater baseline television, games, and total screen time and change in DVDs/videos were associated with gains in BMI (Ps < .05). BMI gains associated with change in television and total screen time were stronger among overweight girls than lean girls (Ps-heterogeneity < .001). Television, which remains the steadiest source of food advertising, was most consistently associated with BMI gains. Among girls, electronic games and DVDs/videos were also related to increased BMI, possibly due to influences of product placements and advergames on diet and/or distracted eating. Adolescents, especially overweight adolescents, may benefit from reduced time with multiple types of media.

  12. Adiposity and Different Types of Screen Time

    PubMed Central

    Rosner, Bernard; Willett, Walter C.; Sonneville, Kendrin R.; Hu, Frank B.; Field, Alison E.

    2013-01-01

    OBJECTIVE: Few prospective studies have examined separate forms of screen time in relation to adiposity. Our objective was to assess independent relations of television, electronic games (video/computer), and digital versatile disc (DVD)/videos and total screen time with change in adolescent BMI. METHODS: Using data from the 2004, 2006, and 2008 waves of the ongoing Growing up Today Study II, we assessed baseline and 2-year change in reported screen time in relation to concurrent change in BMI among 4287 girls and 3505 boys aged 9 to 16 years in 2004. Gender-specific models adjusted for previous BMI, age, race/ethnicity, growth/development, months between questionnaires, and physical activity. RESULTS: Among girls and boys, each hour per day increase in reported television viewing was associated with a 0.09 increase in BMI (Ps < .001), and each hour per day increase in total screen time was associated with a 0.07 increase among girls and 0.05 increase among boys (Ps < .001). Among girls only, greater baseline television, games, and total screen time and change in DVDs/videos were associated with gains in BMI (Ps < .05). BMI gains associated with change in television and total screen time were stronger among overweight girls than lean girls (Ps-heterogeneity < .001). CONCLUSIONS: Television, which remains the steadiest source of food advertising, was most consistently associated with BMI gains. Among girls, electronic games and DVDs/videos were also related to increased BMI, possibly due to influences of product placements and advergames on diet and/or distracted eating. Adolescents, especially overweight adolescents, may benefit from reduced time with multiple types of media. PMID:24276840

  13. Comparison of missing value imputation methods in time series: the case of Turkish meteorological data

    NASA Astrophysics Data System (ADS)

    Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci

    2013-04-01

    This study aims to compare several imputation methods to complete the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria including accuracy, robustness, precision, and efficiency for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, simple arithmetic average, normal ratio (NR), and NR weighted with correlations comprise the simple ones, whereas multilayer perceptron type neural network and multiple imputation strategy adopted by Monte Carlo Markov Chain based on expectation-maximization (EM-MCMC) are computationally intensive ones. In addition, we propose a modification on the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis which takes spatio-temporal dependencies into account for evaluating imputation performances. Depending on the detailed graphical and quantitative analysis, it can be said that although computational methods, particularly EM-MCMC method, are computationally inefficient, they seem favorable for imputation of meteorological time series with respect to different missingness periods considering both measures and both series studied. To conclude, using the EM-MCMC algorithm for imputing missing values before conducting any statistical analyses of meteorological data will definitely decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for the performance evaluation of missing data imputation particularly with computational methods since it gives more precise results in meteorological time series.
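
    For concreteness, the sketch below implements the normal ratio (NR) idea, one of the simple methods compared above: values missing from a target station are estimated from neighboring series rescaled by the ratio of long-term means. The synthetic series and the unweighted averaging are illustrative assumptions, not the study's data or its exact NR variant.

    ```python
    import numpy as np

    def normal_ratio_impute(target, neighbors):
        """Fill NaN gaps in a target series using the normal ratio (NR) method:
        each neighbor series is rescaled by the ratio of long-term means and the
        rescaled values at the missing time steps are averaged."""
        filled = target.astype(float).copy()
        mask = np.isnan(filled)
        target_mean = np.nanmean(filled)
        estimates = [(target_mean / np.nanmean(s)) * s[mask] for s in neighbors]
        filled[mask] = np.nanmean(np.vstack(estimates), axis=0)
        return filled

    rng = np.random.default_rng(5)
    months = np.arange(240)
    base = 50.0 + 10.0 * np.sin(months * 2 * np.pi / 12)          # seasonal signal
    station = base + rng.normal(0.0, 2.0, 240)
    station[[20, 75, 160]] = np.nan                               # artificial gaps
    neighbors = [1.1 * base + rng.normal(0.0, 2.0, 240),
                 0.9 * base + rng.normal(0.0, 2.0, 240)]
    print(normal_ratio_impute(station, neighbors)[[20, 75, 160]].round(1))
    ```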

  14. Screen media time usage of 12-16 year-old Spanish school adolescents: Effects of personal and socioeconomic factors, season and type of day.

    PubMed

    Devís-Devís, José; Peiró-Velert, Carmen; Beltrán-Carrillo, Vicente J; Tomás, José Manuel

    2009-04-01

    This study examined screen media time usage (SMTU) and its association with personal and socioeconomic factors, as well as the effect of season and type of day, in a Spanish sample of 12-16 year-old school adolescents (N=323). The research design was a cross-sectional survey, in which an interviewer-administered recall questionnaire was used. Statistical analyses included repeated measures analyses of variance, analysis of covariance and structural equation models. Results showed an average of 2.52h per day of total SMTU and partial times of 1.73h per day in TV viewing, 0.27h per day in computer/videogames, and 0.52h per day in mobile use. Four significant predictors of SMTU emerged. Firstly, the type of school was associated with the three media of our study, particularly students from state/public school spent more time on them than their private schools counterparts. Secondly, older adolescents (14-16 years old) were more likely to use computer/videogame and mobile phone than younger adolescents. Thirdly, the more accessibility to household technology the more probable computer/videogames and mobile phone were used. Finally, boys spent significantly more time in mobile phone than girls. Additionally, results revealed that adolescents seemed to consume more TV and computer/videogames in autumn than in winter, and more TV and mobile phones on weekends than on weekdays, especially among state school students. Findings from this study contribute to the existing knowledge on adolescents' SMTU patterns that can be transferred to families and policies.

  15. Implementation of bipartite or remote unitary gates with repeater nodes

    NASA Astrophysics Data System (ADS)

    Yu, Li; Nemoto, Kae

    2016-08-01

    We propose some protocols to implement various classes of bipartite unitary operations on two remote parties with the help of repeater nodes in-between. We also present a protocol to implement a single-qubit unitary with parameters determined by a remote party with the help of up to three repeater nodes. It is assumed that the neighboring nodes are connected by noisy photonic channels, and the local gates can be performed quite accurately, while the decoherence of memories is significant. A unitary is often a part of a larger computation or communication task in a quantum network, and to reduce the amount of decoherence in other systems of the network, we focus on the goal of saving the total time for implementing a unitary including the time for entanglement preparation. We review some previously studied protocols that implement bipartite unitaries using local operations and classical communication and prior shared entanglement, and apply them to the situation with repeater nodes without prior entanglement. We find that the protocols using piecewise entanglement between neighboring nodes often require less total time compared to preparing entanglement between the two end nodes first and then performing the previously known protocols. For a generic bipartite unitary, as the number of repeater nodes increases, the total time could approach the time cost for direct signal transfer from one end node to the other. We also prove some lower bounds of the total time when there are a small number of repeater nodes. The application to position-based cryptography is discussed.

  16. Effects of glycolic acid chemical peeling on facial pigment deposition: evaluation using novel computer analysis of digital-camera-captured images.

    PubMed

    Kakudo, Natsuko; Kushida, Satoshi; Suzuki, Kenji; Kusumoto, Kenji

    2013-12-01

    Chemical peeling is becoming increasingly popular for skin rejuvenation in dermatological cosmetic medicine. However, the improvements seen with chemical peeling are often very minor, and it is difficult to conduct a quantitative assessment of pre- and post-treatment appearance. We report the pre- and postpeeling effects for facial pigment deposition using a novel computer analysis method for digital-camera-captured images. Glycolic acid chemical peeling was performed a total of 5 times at 2-week intervals in 23 healthy women. We conducted a computer image analysis by utilizing Robo Skin Analyzer CS 50 and Clinical Suite 2.1 and then reviewed each parameter for the area of facial pigment deposition pre- and post-treatment. Parameters were pigmentation size and four pigmentation categories: little pigmentation and three levels of marked pigmentation (Lv1, 2, and 3) based on detection threshold. Each parameter was measured, and the total area of facial pigmentation was calculated. The total area of little pigmentation and marked pigmentation (Lv1) was significantly reduced. On the other hand, a significant difference was not observed for the total area of marked pigmentation Lv2 and Lv3. This suggests that glycolic acid chemical peeling has an effect on small areas of facial pigment deposition or on light pigment deposition. As the Robo Skin Analyzer is useful for objectively quantifying and analyzing minor changes in facial skin, it is considered to be an effective tool for accumulating treatment evidence in the cosmetic and esthetic skin field. © 2013 Wiley Periodicals, Inc.

  17. Effects of computer-based stress management training on psychological well-being and work performance in Japanese employees: a cluster randomized controlled trial.

    PubMed

    Umanodan, Rino; Shimazu, Akihito; Minami, Masahide; Kawakami, Norito

    2014-01-01

    This study evaluated the effectiveness of a computer-based stress management training (SMT) program in improving employees' psychological well-being and work performance. A total of 12 work units (N=263) were randomly assigned to either an intervention group (8 work units, n=142) or to a wait-list control group (4 work units, n=121). All participants were requested to answer online questionnaires assessing psychological well-being as a primary outcome, and coping style, social support, and knowledge about stress management as secondary outcomes at baseline (T0), immediately after the intervention (T1), and 2 months after the intervention (T2). The group × time interaction was tested using a mixed-model repeated measures ANOVA. Results showed a group × time interaction for "knowledge about stress management" in the entire sample. Among participants who had more than 3 d of training, a significant group × time interaction was observed for "problem-solving" and "avoidance and suppression" as well as "knowledge about stress management." Our computer-based stress management program was effective for improving knowledge about stress management. It was also effective for improving coping skills in instances where participants had enough time (at least 3 d) to complete all sessions.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y. M., E-mail: ymingy@gmail.com; Bednarz, B.; Svatos, M.

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead.

  19. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    PubMed Central

    Svatos, M.; Zankowski, C.; Bednarz, B.

    2016-01-01

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead. PMID:27277051
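
    The toy example below illustrates the optimization ingredients named above (gradient rescaling and momentum applied to noisy, few-history gradient estimates) on a tiny linear dose model. The dose-influence matrix, noise model, and objective are stand-ins chosen for the sketch; they are not the authors' concurrent MC transport platform.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Tiny stand-in problem: dose = A @ w for beamlet weights w; during "transport"
    # only a noisy, few-history estimate of the dose-influence entries is available.
    n_vox, n_beamlets = 200, 40
    A = np.abs(rng.standard_normal((n_vox, n_beamlets)))
    d_target = A @ rng.uniform(0.0, 1.0, n_beamlets)

    w = np.ones(n_beamlets)
    velocity = np.zeros(n_beamlets)
    lr, momentum = 2e-3, 0.9

    for step in range(3000):
        A_noisy = A * rng.poisson(5, A.shape) / 5.0        # few-history estimate of A
        grad = A_noisy.T @ (A_noisy @ w - d_target)        # gradient of 0.5*||A w - d||^2
        grad /= np.linalg.norm(grad) + 1e-12               # rescale the noisy gradient
        velocity = momentum * velocity - lr * grad         # momentum smooths the noise
        w = np.clip(w + velocity, 0.0, None)               # fluence weights stay >= 0

    print("relative dose error:", np.linalg.norm(A @ w - d_target) / np.linalg.norm(d_target))
    ```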

  20. Cyber Contingency Analysis version 1.x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A contingency-analysis-based approach for quantifying and examining the resiliency of a cyber system with respect to confidentiality, integrity and availability. A graph representing an organization's cyber system and related resources is used for the availability contingency analysis. The mission-critical paths associated with an organization are used to determine the consequences of a potential contingency. A node (or combination of nodes) is removed from the graph to analyze a particular contingency. The total value of all mission-critical paths disrupted by that contingency is used to quantify its severity. A total severity score can be calculated from the complete list of all such contingencies. A simple n-1 analysis can be done in which only one node is removed at a time; we can also compute an n-k analysis, where k is the number of nodes removed simultaneously. A contingency risk score can also be computed, which takes the probability of the contingencies into account. In addition to availability, we can also quantify confidentiality and integrity scores for the system. These treat user accounts as potential contingencies. The amount (and type) of files that an account can read is used to compute the confidentiality score, and the amount (and type) of files that an account can write is used to compute the integrity score. As with availability analysis, we can use this information to compute total severity scores with regard to confidentiality and integrity, and we can take probability into account to compute associated risk scores.
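
    A minimal sketch of the availability part of such an analysis, assuming a toy system graph and hypothetical mission-critical paths with assigned values: each k-node contingency is scored by the total value of critical paths it disrupts, and the scores are summed into an overall severity score.

    ```python
    import networkx as nx
    from itertools import combinations

    # Hypothetical cyber-system graph and mission-critical paths with assigned values.
    G = nx.DiGraph()
    G.add_edges_from([("user", "web"), ("web", "app"), ("app", "db"),
                      ("user", "vpn"), ("vpn", "app"), ("app", "backup")])
    critical_paths = {("user", "db"): 10, ("user", "backup"): 5}

    def contingency_severity(graph, removed):
        """Severity = total value of mission-critical paths disrupted when the
        given nodes are removed from the graph."""
        g = graph.copy()
        g.remove_nodes_from(removed)
        severity = 0
        for (src, dst), value in critical_paths.items():
            if src not in g or dst not in g or not nx.has_path(g, src, dst):
                severity += value
        return severity

    def n_k_analysis(graph, k=1):
        """Score every contingency that removes k nodes simultaneously."""
        return {combo: contingency_severity(graph, combo)
                for combo in combinations(graph.nodes, k)}

    scores = n_k_analysis(G, k=1)
    print("total n-1 severity score:", sum(scores.values()))
    print("worst single-node contingency:", max(scores, key=scores.get))
    ```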

  1. Multiscale modeling and distributed computing to predict cosmesis outcome after a lumpectomy

    NASA Astrophysics Data System (ADS)

    Garbey, M.; Salmon, R.; Thanoon, D.; Bass, B. L.

    2013-07-01

    Surgery for early stage breast carcinoma is either total mastectomy (complete breast removal) or surgical lumpectomy (only tumor removal). The lumpectomy or partial mastectomy is intended to preserve a breast that satisfies the woman's cosmetic, emotional and physical needs. But in a fairly large number of cases the cosmetic outcome is not satisfactory. Today, predicting that surgical outcome is still essentially based on heuristics. Modeling such a complex process must encompass multiple scales, in space from cells to tissue, as well as in time, from minutes for the tissue mechanics to months for healing. The goal of this paper is to present a first step toward multiscale modeling for long-time-scale prediction of breast shape after tumor resection. This task requires coupling very different mechanical and biological models with very different computing needs. We provide a simple illustration of the application of heterogeneous distributed computing and modular software design to speed up model development. Our computational framework currently serves to test hypotheses on breast tissue healing in a pilot study with women who have elected to undergo BCT and are being treated at the Methodist Hospital in Houston, TX.

  2. Validation study of an interpolation method for calculating whole lung volumes and masses from reduced numbers of CT-images in ponies.

    PubMed

    Reich, H; Moens, Y; Braun, C; Kneissl, S; Noreikat, K; Reske, A

    2014-12-01

    Quantitative computer tomographic analysis (qCTA) is an accurate but time intensive method used to quantify volume, mass and aeration of the lungs. The aim of this study was to validate a time efficient interpolation technique for application of qCTA in ponies. Forty-one thoracic computer tomographic (CT) scans obtained from eight anaesthetised ponies positioned in dorsal recumbency were included. Total lung volume and mass and their distribution into four compartments (non-aerated, poorly aerated, normally aerated and hyperaerated; defined based on the attenuation in Hounsfield Units) were determined for the entire lung from all 5 mm thick CT-images, 59 (55-66) per animal. An interpolation technique validated for use in humans was then applied to calculate qCTA results for lung volumes and masses from only 10, 12, and 14 selected CT-images per scan. The time required for both procedures was recorded. Results were compared statistically using the Bland-Altman approach. The bias ± 2 SD for total lung volume calculated from interpolation of 10, 12, and 14 CT-images was -1.2 ± 5.8%, 0.1 ± 3.5%, and 0.0 ± 2.5%, respectively. The corresponding results for total lung mass were -1.1 ± 5.9%, 0.0 ± 3.5%, and 0.0 ± 3.0%. The average time for analysis of one thoracic CT-scan using the interpolation method was 1.5-2 h compared to 8 h for analysis of all images of one complete thoracic CT-scan. The calculation of pulmonary qCTA data by interpolation from 12 CT-images was applicable for equine lung CT-scans and reduced the time required for analysis by 75%. Copyright © 2014 Elsevier Ltd. All rights reserved.
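
    The general interpolation idea can be sketched as follows: per-slice cross-sectional areas from a reduced set of CT images are linearly interpolated to all slice positions and integrated along the body axis. The synthetic area profile, slice spacing, and trapezoidal integration are assumptions made for illustration and do not reproduce the validated algorithm applied in the study.

    ```python
    import numpy as np

    def volume_from_subset(slice_areas_mm2, slice_positions_mm, subset_idx):
        """Estimate total lung volume by linearly interpolating per-slice areas
        from a reduced set of CT images, then integrating along the body axis."""
        a_interp = np.interp(slice_positions_mm,
                             slice_positions_mm[subset_idx],
                             slice_areas_mm2[subset_idx])
        return np.trapz(a_interp, slice_positions_mm) / 1000.0    # mm^3 -> mL

    z = np.arange(59) * 5.0                                   # 59 slices, 5 mm apart
    areas = 20000.0 * np.exp(-((z - 150.0) / 90.0) ** 2)      # synthetic area profile
    full_volume = np.trapz(areas, z) / 1000.0
    subset = np.linspace(0, len(z) - 1, 12).astype(int)       # only 12 analysed images
    print(f"all slices: {full_volume:.0f} mL,"
          f" interpolated from 12: {volume_from_subset(areas, z, subset):.0f} mL")
    ```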

  3. Electronic screens in children's bedrooms and adiposity, physical activity and sleep: do the number and type of electronic devices matter?

    PubMed

    Chaput, Jean-Philippe; Leduc, Geneviève; Boyer, Charles; Bélanger, Priscilla; LeBlanc, Allana G; Borghese, Michael M; Tremblay, Mark S

    2014-07-11

    To examine whether the number and type of electronic screens available in children's bedrooms matter in their relationship to adiposity, physical activity and sleep. A cross-sectional study was conducted involving 502 children aged 9-11 years from Ottawa, Ontario. The presence (yes/no) of a television (TV), computer or video game system in the child's bedroom was reported by the parents. Percentage body fat was measured using bioelectrical impedance. An accelerometer was worn over seven days to assess moderate-to-vigorous physical activity (MVPA), total sedentary time, sleep duration and sleep efficiency. Screen time was self-reported by the child. After adjustment for age, sex, ethnicity, annual household income and highest level of parental education, children with 2-3 screens in their bedroom had a significantly higher percentage of body fat than children with no screen in their bedroom. However, while children with 2-3 screens in their bedroom engaged in more screen time overall than those with no screen, total sedentary time and MVPA were not significantly different. Sleep duration was not related to the number of screens in the bedroom, but sleep efficiency was significantly lower in children with at least 2 screens in the bedroom. Finally, children having only a TV in their bedroom had significantly higher adiposity than those having no screen at all. In contrast, the presence of a computer in children's bedrooms was not associated with higher adiposity than that of children with no screen. A higher number of screens in a child's bedroom was associated with higher adiposity, more total screen time and lower sleep efficiency. Having a TV in the bedroom appears to be the type of screen presence associated with higher levels of adiposity. Given the popularity of screens among children, these findings are increasingly relevant to health promotion strategies.

  4. Computer program for the computation of total sediment discharge by the modified Einstein procedure

    USGS Publications Warehouse

    Stevens, H.H.

    1985-01-01

    Two versions of a computer program to compute total sediment discharge by the modified Einstein procedure are presented. The FORTRAN 77 language version is for use on the PRIME computer, and the BASIC language version is for use on most microcomputers. The program contains built-in limitations and input-output options that closely follow the original modified Einstein procedure. Program documentation and listings of both versions of the program are included. (USGS)

  5. A new linear least squares method for T1 estimation from SPGR signals with multiple TRs

    NASA Astrophysics Data System (ADS)

    Chang, Lin-Ching; Koay, Cheng Guan; Basser, Peter J.; Pierpaoli, Carlo

    2009-02-01

    The longitudinal relaxation time, T1, can be estimated from two or more spoiled gradient recalled echo (SPGR) images with two or more flip angles and one or more repetition times (TRs). The function relating signal intensity to the parameters is nonlinear; T1 maps can be computed from SPGR signals using nonlinear least squares regression. A widely used linear method transforms the nonlinear model by assuming a fixed TR in SPGR images. This constraint is not desirable since multiple TRs are a clinically practical way to reduce the total acquisition time, to satisfy the required resolution, and/or to combine SPGR data acquired at different times. A new linear least squares method is proposed using a first-order Taylor expansion. Monte Carlo simulations of SPGR experiments are used to evaluate the accuracy and precision of the estimated T1 from the proposed linear and the nonlinear methods. We show that the new linear least squares method provides T1 estimates comparable in both precision and accuracy to those from the nonlinear method, allowing multiple TRs and reducing computation time significantly.
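
    For context, the sketch below implements the widely used fixed-TR linearization mentioned above (plotting S/sin α against S/tan α gives E1 = exp(-TR/T1) as the slope) on noiseless synthetic data. It is the conventional method, not the paper's new multi-TR Taylor-expansion approach, and the sequence parameters are arbitrary.

    ```python
    import numpy as np

    def spgr_signal(M0, T1, alpha_rad, TR):
        """Ideal SPGR signal: S = M0 sin(a) (1 - E1) / (1 - E1 cos(a)), E1 = exp(-TR/T1)."""
        E1 = np.exp(-TR / T1)
        return M0 * np.sin(alpha_rad) * (1 - E1) / (1 - E1 * np.cos(alpha_rad))

    def t1_linear_fit(signals, alpha_rad, TR):
        """Fixed-TR linearization: y = S/sin(a) and x = S/tan(a) satisfy
        y = E1 x + M0 (1 - E1), so the slope of a line fit gives E1 and
        hence T1 = -TR / ln(E1)."""
        y = signals / np.sin(alpha_rad)
        x = signals / np.tan(alpha_rad)
        slope, _ = np.polyfit(x, y, 1)
        return -TR / np.log(slope)

    TR, T1_true = 15.0, 1000.0                       # ms
    alphas = np.deg2rad([3.0, 8.0, 15.0, 25.0])
    S = spgr_signal(1.0, T1_true, alphas, TR)
    print(f"estimated T1: {t1_linear_fit(S, alphas, TR):.1f} ms (true {T1_true:.0f} ms)")
    ```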

  6. Family structure as a predictor of screen time among youth.

    PubMed

    McMillan, Rachel; McIsaac, Michael; Janssen, Ian

    2015-01-01

    The family plays a central role in the development of health-related behaviors among youth. The objective of this study was to determine whether non-traditional parental structure and shared custody arrangements predict how much time youth spend watching television, using a computer recreationally, and playing video games. Participants were a nationally representative sample of Canadian youth (N = 26,068) in grades 6-10 who participated in the 2009/10 Health Behaviour in School-aged Children Survey. Screen time in youth from single parent and reconstituted families, with or without regular visitation with their non-residential parent, was compared to that of youth from traditional dual-parent families. Multiple imputation was used to account for missing data. After multiple imputation, the relative odds of being in the highest television, computer use, video game, and total screen time quartiles were not different in boys and girls from non-traditional families by comparison to boys and girls from traditional dual-parent families. In conclusion, parental structure and child custody arrangements did not have a meaningful impact on screen time among youth.

  7. Malignant tumors of the maxilla: virtual planning and real-time rehabilitation with custom-made R-zygoma fixtures and carbon-graphite fiber-reinforced polymer prosthesis.

    PubMed

    Ekstrand, Karl; Hirsch, Jan-M

    2008-03-01

    Oral cancer is a mutilating disease. Because of the expanding application of computer technology in medicine, new methods are constantly evolving. This project introduces a new technology in maxillofacial reconstructive therapy using a redesigned zygoma fixture. Previous development experience showed that the procedure was time-consuming and painful for the patients: frequent episodes of sedation or general anesthesia were required, and the rehabilitation was costly. The aim of our new treatment approach was to allow patients to wake up after tumor surgery with a functional rehabilitation in place. Stereolithographic models were introduced to produce a model from the three-dimensional computed tomography (CT). A guide with the proposed resection was fabricated, and the real-time maxillectomy was performed. From the postoperative CT, a second stereolithographic model was manufactured, along with a stent for the optimal positioning of the implants. Customized zygoma implants were installed (R-zygoma, Integration AB, Göteborg, Sweden). A fixed construction was fabricated from a new material based on poly(methylacrylate) reinforced with carbon/graphite fibers and attached to the implants. On the same master cast, a separate obturator was fabricated in permanent soft silicone. The results of this project showed that it was possible to create a virtual plan preoperatively and apply it during surgery so that the patient wakes up functionally rehabilitated. From a quality-of-life perspective, it is an advantage to be rehabilitated quickly. By using new computer technology, pain and discomfort are reduced and the total rehabilitation is faster, which in turn reduces days in hospital and thereby total costs.

  8. NDE Imaging of Time Differential Terahertz Waves

    NASA Technical Reports Server (NTRS)

    Trinh, Long B.

    2008-01-01

    Natural voids are present in the vicinity of a conathane interface that bonds two different foam materials. These voids are out of focus with the terahertz imaging system, and multiple optical reflections also make it difficult to determine their depths. However, waves passing through the top foam article at normal incidence are partially reflected at the denser conathane layer prior to total reflection at the tank's wall. Reflections embedded in the oscillating noise segment preceding the main signals can be extracted with dual applications of filtering and time differentiation. The void's depth is then computed from the direct path's time of flight.
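
    For illustration only, a minimal sketch of the depth-from-time-of-flight relation implied above, depth = c*dt/(2*n), at normal incidence; the foam refractive index and round-trip delay are hypothetical values, not measurements from this work.

```python
# Minimal depth-from-time-of-flight sketch for a reflection at normal incidence:
# depth = c * dt / (2 * n). The refractive index and delay below are hypothetical.
c = 299_792_458.0          # speed of light in vacuum, m/s
n_foam = 1.04              # assumed refractive index of the top foam layer
dt = 12.0e-12              # assumed round-trip delay between reflections, s

depth = c * dt / (2.0 * n_foam)
print(f"void depth ~ {depth * 1e3:.2f} mm")
```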

  9. The relationship between computer games and quality of life in adolescents.

    PubMed

    Dolatabadi, Nayereh Kasiri; Eslami, Ahmad Ali; Mostafavi, Firooze; Hassanzade, Akbar; Moradi, Azam

    2013-01-01

    Playing computer games is growing rapidly among teenagers. This popular phenomenon can cause physical and psychosocial problems in this group. Therefore, this study examined the relationship between computer games and quality-of-life domains in adolescents aged 12-15 years. In a cross-sectional study using a two-stage stratified cluster sampling method, 444 male and female students in Borkhar were selected. The data collection tools were 1) the World Health Organization Quality of Life - BREF questionnaire and 2) a personal information questionnaire. The data were analyzed by Pearson correlation, Spearman correlation, chi-square, independent t-tests and analysis of covariance. The total mean score of quality of life in students was 67.11±13.34. The results showed a significant relationship between the age of starting to play games and the overall quality of life score and its four domains (r ranging from -0.13 to -0.18). The mean overall quality of life score in computer game users was 68.27±13.03, while it was 64.81±13.69 among those who did not play computer games, and the difference was significant (P=0.01). There were significant differences in the environmental and mental health domains between the two groups (P<0.05). However, there was no significant relationship between BMI and either the time spent on or the type of computer games. Playing computer games for a short time under parental supervision can have positive effects on quality of life in adolescents. However, spending long hours playing computer games may have negative long-term effects.

  10. Portable multiplicity counter

    DOEpatents

    Newell, Matthew R [Los Alamos, NM; Jones, David Carl [Los Alamos, NM

    2009-09-01

    A portable multiplicity counter has signal input circuitry, processing circuitry and a user/computer interface disposed in a housing. The processing circuitry, which can comprise a microcontroller integrated circuit operably coupled to shift register circuitry implemented in a field programmable gate array, is configured to be operable via the user/computer interface to count input signal pulses receivable at said signal input circuitry and record time correlations thereof in a total counting mode, a coincidence counting mode and/or a multiplicity counting mode. The user/computer interface can be, for example, an LCD display/keypad and/or a USB interface. The counter can include a battery pack for powering the counter and low/high voltage power supplies for biasing external detectors, so that the counter can be configured as a hand-held device for counting neutron events.

  11. Report on the Total System Computer Program for Medical Libraries.

    ERIC Educational Resources Information Center

    Divett, Robert T.; Jones, W. Wayne

    The objective of this project was to develop an integrated computer program for the total operations of a medical library including acquisitions, cataloging, circulation, reference, a computer catalog, serials controls, and current awareness services. The report describes two systems approaches: the batch system and the terminal system. The batch…

  12. Contact Graph Routing

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott C.

    2011-01-01

    Contact Graph Routing (CGR) is a dynamic routing system that computes routes through a time-varying topology of scheduled communication contacts in a network based on the DTN (Delay-Tolerant Networking) architecture. It is designed to enable dynamic selection of data transmission routes in a space network based on DTN. This dynamic responsiveness in route computation should be significantly more effective and less expensive than static routing, increasing total data return while at the same time reducing mission operations cost and risk. The basic strategy of CGR is to take advantage of the fact that, since flight mission communication operations are planned in detail, the communication routes between any pair of bundle agents in a population of nodes that have all been informed of one another's plans can be inferred from those plans rather than discovered via dialogue (which is impractical over long one-way-light-time space links). Messages that convey this planning information are used to construct contact graphs (time-varying models of network connectivity) from which CGR automatically computes efficient routes for bundles. Automatic route selection increases the flexibility and resilience of the space network, simplifying cross-support and reducing mission management costs. Note that there are no routing tables in Contact Graph Routing. The best route for a bundle destined for a given node may routinely be different from the best route for a different bundle destined for the same node, depending on bundle priority, bundle expiration time, and changes in the current lengths of transmission queues for neighboring nodes; routes must be computed individually for each bundle, from the Bundle Protocol agent's current network connectivity model for the bundle's destination node (the contact graph). Clearly this places a premium on optimizing the implementation of the route computation algorithm. The scalability of CGR to very large networks remains a research topic. The information carried by CGR contact plan messages is useful not only for dynamic route computation, but also for the implementation of rate control, congestion forecasting, transmission episode initiation and termination, timeout interval computation, and retransmission timer suspension and resumption.
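
    The following is a toy sketch, in Python, of the core idea of computing a route from a scheduled contact plan: an earliest-arrival search over time-windowed links. It is not the flight CGR implementation, and it ignores bundle priority, queue backlogs, expiration times, and data rates; the contact plan, node names, and the owlt field are invented.

```python
# Toy earliest-arrival route search over a scheduled contact plan. A simplified
# illustration of routing from planned contacts, not the flight CGR implementation.
import heapq
from collections import namedtuple

Contact = namedtuple("Contact", "frm to start end owlt")  # owlt = one-way light time

def earliest_arrival(contacts, source, dest, t0):
    """Return (earliest arrival time at dest, contacts used), or (None, None)."""
    best = {source: t0}
    heap = [(t0, source, [])]
    while heap:
        t, node, path = heapq.heappop(heap)
        if node == dest:
            return t, path
        if t > best.get(node, float("inf")):
            continue
        for c in contacts:
            if c.frm != node or c.end <= t:
                continue                      # contact unusable: wrong node or already over
            depart = max(t, c.start)          # wait for the contact window to open
            arrive = depart + c.owlt
            if arrive < best.get(c.to, float("inf")):
                best[c.to] = arrive
                heapq.heappush(heap, (arrive, c.to, path + [c]))
    return None, None

plan = [
    Contact("lander", "orbiter", 100, 200, 5),
    Contact("orbiter", "dsn", 150, 400, 80),
    Contact("lander", "relay", 0, 50, 5),
    Contact("relay", "dsn", 60, 90, 80),
]
t, route = earliest_arrival(plan, "lander", "dsn", t0=0)
print(t, [(c.frm, c.to) for c in route])      # 140 via the relay
```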

  13. Relationships between media use, body fatness and physical activity in children and youth: a meta-analysis.

    PubMed

    Marshall, S J; Biddle, S J H; Gorely, T; Cameron, N; Murdey, I

    2004-10-01

    To review the empirical evidence of associations between television (TV) viewing, video/computer game use and (a) body fatness, and (b) physical activity. Meta-analysis. Published English-language studies were located from computerized literature searches, bibliographies of primary studies and narrative reviews, and manual searches of personal archives. Included studies presented at least one empirical association between TV viewing, video/computer game use and body fatness or physical activity among samples of children and youth aged 3-18 y. The outcome measure was the mean sample-weighted corrected effect size (Pearson r). Based on data from 52 independent samples, the mean sample-weighted effect size between TV viewing and body fatness was 0.066 (95% CI=0.056-0.078; total N=44,707). The sample-weighted fully corrected effect size was 0.084. Based on data from six independent samples, the mean sample-weighted effect size between video/computer game use and body fatness was 0.070 (95% CI=-0.048 to 0.188; total N=1,722). The sample-weighted fully corrected effect size was 0.128. Based on data from 39 independent samples, the mean sample-weighted effect size between TV viewing and physical activity was -0.096 (95% CI=-0.080 to -0.112; total N=141,505). The sample-weighted fully corrected effect size was -0.129. Based on data from 10 independent samples, the mean sample-weighted effect size between video/computer game use and physical activity was -0.104 (95% CI=-0.080 to -0.128; total N=119,942). The sample-weighted fully corrected effect size was -0.141. A statistically significant relationship exists between TV viewing and body fatness among children and youth although it is likely to be too small to be of substantial clinical relevance. The relationship between TV viewing and physical activity is small but negative. The strength of these relationships remains virtually unchanged even after correcting for common sources of bias known to impact study outcomes. While the total amount of time per day engaged in sedentary behavior is inevitably prohibitive of physical activity, media-based inactivity may be unfairly implicated in recent epidemiologic trends of overweight and obesity among children and youth. Relationships between sedentary behavior and health are unlikely to be explained using single markers of inactivity, such as TV viewing or video/computer game use.

  14. Helicity conservation under quantum reconnection of vortex rings.

    PubMed

    Zuccher, Simone; Ricca, Renzo L

    2015-12-01

    Here we show that under quantum reconnection, simulated by using the three-dimensional Gross-Pitaevskii equation, self-helicity of a system of two interacting vortex rings remains conserved. By resolving the fine structure of the vortex cores, we demonstrate that the total length of the vortex system reaches a maximum at the reconnection time, while both writhe helicity and twist helicity remain separately unchanged throughout the process. Self-helicity is computed by two independent methods, and topological information is based on the extraction and analysis of geometric quantities such as writhe, total torsion, and intrinsic twist of the reconnecting vortex rings.

  15. The combined use of computer-guided, minimally invasive, flapless corticotomy and clear aligners as a novel approach to moderate crowding: A case report

    PubMed Central

    Cassetta, Michele; Altieri, Federica; Pandolfi, Stefano; Giansanti, Matteo

    2017-01-01

    The aim of this case report was to describe an innovative orthodontic treatment method that combined surgical and orthodontic techniques. The novel method was used to achieve a positive result in a case of moderate crowding by employing a computer-guided piezocision procedure followed by the use of clear aligners. A 23-year-old woman had a malocclusion with moderate crowding. Her periodontal indices, oral health-related quality of life (OHRQoL), and treatment time were evaluated. The treatment included interproximal corticotomy cuts extending through the entire thickness of the cortical layer, without a full-thickness flap reflection. This was achieved with a three-dimensionally printed surgical guide using computer-aided design and computer-aided manufacturing. Orthodontic force was applied to the teeth immediately after surgery by using clear appliances for better control of tooth movement. The total treatment time was 8 months. The periodontal indices improved after crowding correction, but the oral health impact profile showed a slight deterioration of OHRQoL during the 3 days following surgery. At the 2-year retention follow-up, the stability of treatment was excellent. The reduction in surgical time and patient discomfort, increased periodontal safety and patient acceptability, and accurate control of orthodontic movement without the risk of losing anchorage may encourage the use of this combined technique in appropriate cases. PMID:28337422

  16. The combined use of computer-guided, minimally invasive, flapless corticotomy and clear aligners as a novel approach to moderate crowding: A case report.

    PubMed

    Cassetta, Michele; Altieri, Federica; Pandolfi, Stefano; Giansanti, Matteo

    2017-03-01

    The aim of this case report was to describe an innovative orthodontic treatment method that combined surgical and orthodontic techniques. The novel method was used to achieve a positive result in a case of moderate crowding by employing a computer-guided piezocision procedure followed by the use of clear aligners. A 23-year-old woman had a malocclusion with moderate crowding. Her periodontal indices, oral health-related quality of life (OHRQoL), and treatment time were evaluated. The treatment included interproximal corticotomy cuts extending through the entire thickness of the cortical layer, without a full-thickness flap reflection. This was achieved with a three-dimensionally printed surgical guide using computer-aided design and computer-aided manufacturing. Orthodontic force was applied to the teeth immediately after surgery by using clear appliances for better control of tooth movement. The total treatment time was 8 months. The periodontal indices improved after crowding correction, but the oral health impact profile showed a slight deterioration of OHRQoL during the 3 days following surgery. At the 2-year retention follow-up, the stability of treatment was excellent. The reduction in surgical time and patient discomfort, increased periodontal safety and patient acceptability, and accurate control of orthodontic movement without the risk of losing anchorage may encourage the use of this combined technique in appropriate cases.

  17. Intercomparison of Recent Anomaly Time-Series of OLR as Observed by CERES and Computed Using AIRS Products

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Molnar, Gyula; Iredell, Lena; Loeb, Norman G.

    2011-01-01

    This paper compares recent spatial and temporal anomaly time series of OLR as observed by CERES and computed based on AIRS retrieved surface and atmospheric geophysical parameters over the 7 year time period September 2002 through February 2010. This time period is marked by a substantial decrease of OLR, on the order of +/-0.1 W/sq m/yr, averaged over the globe, and very large spatial variations of changes in OLR in the tropics, with local values ranging from -2.8 W/sq m/yr to +3.1 W/sq m/yr. Global and Tropical OLR both began to decrease significantly at the onset of a strong La Niña in mid-2007. Late 2009 is characterized by a strong El Niño, with a corresponding change in sign of both Tropical and Global OLR anomalies. The spatial patterns of the 7 year short term changes in AIRS and CERES OLR have a spatial correlation of 0.97 and slopes of the linear least squares fits of anomaly time series averaged over different spatial regions agree on the order of +/-0.01 W/sq m/yr. This essentially perfect agreement of OLR anomaly time series derived from observations by two different instruments, determined in totally independent and different manners, implies that both sets of results must be highly stable. This agreement also validates the anomaly time series of the AIRS derived products used to compute OLR and furthermore indicates that anomaly time series of AIRS derived products can be used to explain the factors contributing to anomaly time series of OLR.
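
    A small sketch of the anomaly and trend bookkeeping described above: remove a monthly climatology to form anomalies, fit a linear least-squares slope at each grid cell, and spatially correlate the two resulting trend maps. The gridded arrays are random stand-ins, not AIRS or CERES products, and the grid and record length are arbitrary.

```python
# Anomaly time series, per-cell least-squares trends, and the spatial correlation
# of two trend maps. The arrays are random placeholders, not AIRS or CERES data.
import numpy as np

rng = np.random.default_rng(1)
n_months, nlat, nlon = 90, 18, 36                 # 90 months on a coarse grid
olr_a = 240 + rng.normal(0, 2, (n_months, nlat, nlon))
olr_b = olr_a + rng.normal(0, 0.3, olr_a.shape)   # a second, slightly noisier record

def anomalies(x):
    """Subtract the mean annual cycle (per calendar month, per grid cell)."""
    out = x.copy()
    for m in range(12):
        out[m::12] -= x[m::12].mean(axis=0)
    return out

def trend_map(x, dt_years=1.0 / 12.0):
    """Least-squares slope at each grid cell, in units per year."""
    t = np.arange(x.shape[0]) * dt_years
    t = t - t.mean()
    return np.tensordot(t, x - x.mean(axis=0), axes=(0, 0)) / (t @ t)

ta, tb = trend_map(anomalies(olr_a)), trend_map(anomalies(olr_b))
r = np.corrcoef(ta.ravel(), tb.ravel())[0, 1]
print(f"spatial correlation of the two trend maps: r = {r:.2f}")
```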

  18. Axial to transverse energy mixing dynamics in octupole-based magnetostatic antihydrogen traps

    NASA Astrophysics Data System (ADS)

    Zhong, M.; Fajans, J.; Zukor, A. F.

    2018-05-01

    The nature of the trajectories of antihydrogen atoms confined in an octupole minimum-B trap is of great importance for upcoming spectroscopy, cooling, and gravity experiments. Of particular interest is the mixing time between the axial and transverse energies for the antiatoms. Here, using computer simulations, we establish that almost all trajectories are chaotic, and then quantify the characteristic mixing time between the axial and transverse energies. We find that there are two classes of trajectories: for trajectories whose axial energy is higher than about 20% of the total energy, the axial energy substantially mixes within about 10 s, whereas for trajectories whose axial energy is lower than about 10% of the total energy, the axial energy remains nearly constant for 1000 s or longer.

  19. Iterated greedy algorithms to minimize the total family flow time for job-shop scheduling with job families and sequence-dependent set-ups

    NASA Astrophysics Data System (ADS)

    Kim, Ji-Su; Park, Jung-Hyeon; Lee, Dong-Ho

    2017-10-01

    This study addresses a variant of job-shop scheduling in which jobs are grouped into job families, but they are processed individually. The problem can be found in various industrial systems, especially in reprocessing shops of remanufacturing systems. If the reprocessing shop is a job-shop type and has the component-matching requirements, it can be regarded as a job shop with job families since the components of a product constitute a job family. In particular, sequence-dependent set-ups in which set-up time depends on the job just completed and the next job to be processed are also considered. The objective is to minimize the total family flow time, i.e. the maximum among the completion times of the jobs within a job family. A mixed-integer programming model is developed and two iterated greedy algorithms with different local search methods are proposed. Computational experiments were conducted on modified benchmark instances and the results are reported.
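
    A generic iterated greedy skeleton (destruction, greedy reinsertion, acceptance) is sketched below on a deliberately simplified single-machine stand-in for the problem, with job families, sequence-dependent set-ups, and a total-family-flow-time objective. The instance data and parameters are invented, and the paper itself treats a job shop with a mixed-integer model, which this sketch does not reproduce.

```python
# Iterated greedy skeleton on a simplified single-machine instance with job
# families and sequence-dependent set-ups. Illustrative only; data are invented.
import random

random.seed(0)
n_jobs, n_fam = 10, 3
fam = [j % n_fam for j in range(n_jobs)]                  # family of each job
proc = [random.randint(2, 9) for _ in range(n_jobs)]      # processing times
setup = [[0 if fam[i] == fam[j] else random.randint(1, 5)
          for j in range(n_jobs)] for i in range(n_jobs)] # sequence-dependent set-ups

def total_family_flow_time(seq):
    t, done, prev = 0, {}, None
    for j in seq:
        t += (setup[prev][j] if prev is not None else 0) + proc[j]
        done[fam[j]] = t                                   # last completion in family
        prev = j
    return sum(done.values())                              # sum of family flow times

def greedy_insert(partial, jobs):
    for j in jobs:                                         # best-position insertion
        best = min(range(len(partial) + 1),
                   key=lambda p: total_family_flow_time(partial[:p] + [j] + partial[p:]))
        partial.insert(best, j)
    return partial

def iterated_greedy(iters=200, d=3):
    seq = greedy_insert([], list(range(n_jobs)))
    best, best_val = seq[:], total_family_flow_time(seq)
    for _ in range(iters):
        removed = random.sample(seq, d)                    # destruction
        rebuilt = greedy_insert([j for j in seq if j not in removed], removed)
        val = total_family_flow_time(rebuilt)
        if val <= best_val:                                # simple acceptance rule
            seq, best, best_val = rebuilt, rebuilt[:], val
    return best, best_val

print(iterated_greedy())
```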

  20. Thermoelastic damping in bilayered microbar resonators with circular cross-section

    NASA Astrophysics Data System (ADS)

    Liang, Xiaoyao; Li, Pu

    2017-11-01

    It is always a challenge to determine thermoelastic damping (TED) in bilayered microbars precisely. In this paper, a model for TED in a bilayered, cantilevered microbar is proposed, in which the total damping is derived by calculating the energy dissipated in each layer. The temperature distribution in a bilayered microbar with a thermodynamically ideal boundary subjected to a time-harmonic force is obtained. An infinite summation for computing TED in bilayered slender microbars under axial loading is presented, and its convergence rate is discussed. There is little difference between the results computed by our model and those obtained by the finite element method (FEM).

  1. Implementation of Biogas Stations into Smart Heating and Cooling Network

    NASA Astrophysics Data System (ADS)

    Milčák, P.; Konvička, J.; Jasenská, M.

    2016-10-01

    The paper describes the implementation of a biogas station in the software environment for "Smart Heating and Cooling Networks". The aim of this project is the creation of a software tool for planning the operation and optimizing the management of heating and cooling in small regions. In this case, the biogas station represents a renewable energy source which, however, has its own operational specifics that need to be taken into account when creating an implementation project. For a specific biogas station, a detailed computational model was elaborated, which is parameterized in particular to optimize the total computational time.

  2. Application of artificial neural networks to composite ply micromechanics

    NASA Technical Reports Server (NTRS)

    Brown, D. A.; Murthy, P. L. N.; Berke, L.

    1991-01-01

    Artificial neural networks can provide improved computational efficiency relative to existing methods when an algorithmic description of functional relationships is either totally unavailable or is complex in nature. For complex calculations, significant reductions in elapsed computation time are possible. The primary goal is to demonstrate the applicability of artificial neural networks to composite material characterization. As a test case, a neural network was trained to accurately predict composite hygral, thermal, and mechanical properties when provided with basic information concerning the environment, constituent materials, and component ratios used in the creation of the composite. A brief introduction on neural networks is provided along with a description of the project itself.

  3. A radiative transfer model for remote sensing of laser induced fluorescence of phytoplankton in non-homogeneous turbid water

    NASA Technical Reports Server (NTRS)

    Venable, D. D.

    1980-01-01

    A radiative transfer computer model was developed to characterize the total flux of chlorophyll a fluoresced or backscattered photons when laser radiation is incident on turbid water that contains a non-homogeneous suspension of inorganic sediments and phytoplankton. The radiative transfer model is based on the Monte Carlo technique and assumes that: (1) the aquatic medium can be represented by a stratified concentration profile; and (2) appropriate optical parameters can be defined for each layer. The model was designed to minimize the required computer resources and run time. Results are presented for an Anacystis marinus culture.

  4. On the performance of large Gaussian basis sets for the computation of total atomization energies

    NASA Technical Reports Server (NTRS)

    Martin, J. M. L.

    1992-01-01

    The total atomization energies of a number of molecules have been computed using an augmented coupled-cluster method and (5s4p3d2f1g) and (4s3p2d1f) atomic natural orbital (ANO) basis sets, as well as the correlation-consistent valence triple zeta plus polarization (cc-pVTZ) and correlation-consistent valence quadruple zeta plus polarization (cc-pVQZ) basis sets. The performance of ANO and correlation-consistent basis sets is comparable throughout, although the latter can result in significant CPU time savings. Whereas the inclusion of g functions has significant effects on the computed Sigma D(e) values, chemical accuracy is still not reached for molecules involving multiple bonds. A Gaussian-1 (G1) type correction lowers the error, but not much beyond the accuracy of the G1 model itself. Using separate corrections for sigma bonds, pi bonds, and valence pairs brings down the mean absolute error to less than 1 kcal/mol for the spdf basis sets, and to about 0.5 kcal/mol for the spdfg basis sets. Some conclusions on the success of the Gaussian-1 and Gaussian-2 models are drawn.

  5. Using the computer in the clinical consultation; setting the stage, reviewing, recording, and taking actions: multi-channel video study.

    PubMed

    Kumarapeli, Pushpa; de Lusignan, Simon

    2013-06-01

    Electronic patient record (EPR) systems are widely used. This study explores the context and use of systems to provide insights into improving their use in clinical practice. We used video to observe 163 consultations by 16 clinicians using four EPR brands. We made a visual study of the consultation room and coded interactions between clinician, patient, and computer. Few patients (6.9%, n=12) declined to participate. Patients looked at the computer twice as much (47.6 s vs 20.6 s, p<0.001) when it was within their gaze. A quarter of consultations were interrupted (27.6%, n=45); and in half the clinician left the room (12.3%, n=20). The core consultation takes about 87% of the total session time; 5% of time is spent pre-consultation, reading the record and calling the patient in; and 8% of time is spent post-consultation, largely entering notes. Consultations with more than one person and where prescribing took place were longer (R(2) adj=22.5%, p<0.001). The core consultation can be divided into 61% of direct clinician-patient interaction, of which 15% is examination, 25% computer use with no patient involvement, and 14% simultaneous clinician-computer-patient interplay. The proportions of computer use are similar between consultations (mean=40.6%, SD=13.7%). There was more data coding in problem-orientated EPR systems, though clinicians often used vague codes. The EPR system is used for a consistent proportion of the consultation and should be designed to facilitate multi-tasking. Clinicians who want to promote screen sharing should change their consulting room layout.

  6. Validity of a multi-context sitting questionnaire across demographically diverse population groups: AusDiab3.

    PubMed

    Clark, Bronwyn K; Lynch, Brigid M; Winkler, Elisabeth Ah; Gardiner, Paul A; Healy, Genevieve N; Dunstan, David W; Owen, Neville

    2015-12-04

    Sitting time questionnaires have largely been validated in small convenience samples. The validity of this multi-context sitting questionnaire against an accurate measure of sitting time is reported in a large demographically diverse sample allowing assessment of validity in varied demographic subgroups. A subgroup of participants of the third wave of the Australian Diabetes, Obesity, and Lifestyle (AusDiab3) study wore activPAL3™ monitors (7 days, 24 hours/day protocol) and reported their sitting time for work, travel, television viewing, leisure computer use and "other" purposes, on weekdays and weekend days (n = 700, age 36-89 years, 45% men). Correlations (Pearson's r; Spearman's ρ) of the self-report measures (the composite total, contextual measures and items) with monitor-assessed sitting time were assessed in the whole sample and separately in socio-demographic subgroups. Agreement was assessed using Bland-Altman plots. The composite total had a correlation with monitor-assessed sitting time of r = 0.46 (95% confidence interval [CI]: 0.40, 0.52); this correlation did not vary significantly between demographic subgroups (all >0.4). The contextual measure most strongly correlated with monitor-assessed sitting time was work (ρ = 0.25, 95 % CI: 0.17, 0.31), followed by television viewing (ρ = 0.16, 95 % CI: 0.09, 0.24). Agreement of the composite total with monitored sitting time was poor, with a positive bias (B = 0.53, SE 0.04, p < 0.001) and wide limits of agreement (±4.32 h). This multi-context questionnaire provides a total sitting time measure that ranks participants well for the purposes of assessing health associations but has limited accuracy relative to activPAL-assessed sitting time. Findings did not differ in demographic subgroups.
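
    A small sketch of the two validity statistics reported above: the Pearson correlation between self-reported and monitor-assessed sitting time, and the Bland-Altman bias with 95% limits of agreement. The data below are synthetic, not AusDiab3 records.

```python
# Pearson correlation plus Bland-Altman bias and limits of agreement for a
# self-report measure against a monitor. Synthetic data, not AusDiab3 records.
import numpy as np

rng = np.random.default_rng(42)
monitor = rng.normal(9.0, 2.0, 700)                    # monitor-assessed sitting, h/day
self_report = monitor + rng.normal(0.5, 2.2, 700)      # noisy, positively biased report

r = np.corrcoef(self_report, monitor)[0, 1]            # Pearson correlation

diff = self_report - monitor                           # Bland-Altman: report minus monitor
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                          # half-width of the limits of agreement
print(f"r = {r:.2f}, bias = {bias:.2f} h, limits of agreement = bias +/- {loa:.2f} h")
```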

  7. Impact of introduction of an acute surgical unit on management and outcomes of small bowel obstruction.

    PubMed

    Musiienko, Anton M; Shakerian, Rose; Gorelik, Alexandra; Thomson, Benjamin N J; Skandarajah, Anita R

    2016-10-01

    The acute surgical unit (ASU) is a recently established model of care in Australasia and worldwide. Limited data are available regarding its effect on the management of small bowel obstruction. We compared the management of small bowel obstruction before and after introduction of ASU at a major tertiary referral centre. We hypothesized that introduction of ASU would correlate with improved patient outcomes. A retrospective review of prospectively maintained databases was performed over two separate 2-year periods, before and after the introduction of ASU. Data collected included demographics, co-morbidity status, use of water-soluble contrast agent and computed tomography. Outcome measures included surgical intervention, time to surgery, hospital length of stay, complications, 30-day readmissions, use of total parenteral nutrition, intensive care unit admissions and overall mortality. Total emergency admissions to the ASU increased from 2640 to 4575 between the two time periods. A total of 481 cases were identified (225 prior and 256 after introduction of ASU). Mortality decreased from 5.8% to 2.0% (P = 0.03), which remained significant after controlling for confounders with multivariate analysis (odds ratio = 0.24, 95% confidence interval 0.08-0.73, P = 0.012). The proportion of surgically managed patients increased (20.9% versus 32.0%, P = 0.003) and more operations were performed within 5 days from presentation (76.6% versus 91.5%, P = 0.02). Fewer patients received water-soluble contrast agent (27.1% versus 18.4%, P = 0.02), but more patients were investigated with a computed tomography (70.7% versus 79.7%, P = 0.02). The ASU model of care resulted in decreased mortality, shorter time to intervention and increased surgical management. Overall complications rate and length of stay did not change. © 2015 Royal Australasian College of Surgeons.

  8. Ensemble modeling of stochastic unsteady open-channel flow in terms of its time-space evolutionary probability distribution - Part 2: numerical application

    NASA Astrophysics Data System (ADS)

    Dib, Alain; Kavvas, M. Levent

    2018-03-01

    The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.

  9. Computer-aided detection of colorectal polyps: can it improve sensitivity of less-experienced readers? Preliminary findings.

    PubMed

    Baker, Mark E; Bogoni, Luca; Obuchowski, Nancy A; Dass, Chandra; Kendzierski, Renee M; Remer, Erick M; Einstein, David M; Cathier, Pascal; Jerebko, Anna; Lakare, Sarang; Blum, Andrew; Caroline, Dina F; Macari, Michael

    2007-10-01

    To determine whether computer-aided detection (CAD) applied to computed tomographic (CT) colonography can help improve sensitivity of polyp detection by less-experienced radiologist readers, with colonoscopy or consensus used as the reference standard. The release of the CT colonographic studies was approved by the individual institutional review boards of each institution. Institutions from the United States were HIPAA compliant. Written informed consent was waived at all institutions. The CT colonographic studies in 30 patients from six institutions were collected; 24 images depicted at least one confirmed polyp 6 mm or larger (39 total polyps) and six depicted no polyps. By using an investigational software package, seven less-experienced readers from two institutions evaluated the CT colonographic images and marked or scored polyps by using a five-point scale before and after CAD. The time needed to interpret the CT colonographic findings without CAD and then to re-evaluate them with CAD was recorded. For each reader, the McNemar test, adjusted for clustered data, was used to compare sensitivities for readers without and with CAD; a Wilcoxon signed-rank test was used to analyze the number of false-positive results per patient. The average sensitivity of the seven readers for polyp detection was significantly improved with CAD-from 0.810 to 0.908 (P=.0152). The number of false-positive results per patient without and with CAD increased from 0.70 to 0.96 (95% confidence interval for the increase: -0.39, 0.91). The mean total time for the readings was 17 minutes 54 seconds; for interpretation of CT colonographic findings alone, the mean time was 14 minutes 16 seconds; and for review of CAD findings, the mean time was 3 minutes 38 seconds. Results of this feasibility study suggest that CAD for CT colonography significantly improves per-polyp detection for less-experienced readers. Copyright (c) RSNA, 2007.

  10. The influence of age, gender and other information technology use on young people's computer use at school and home.

    PubMed

    Harris, C; Straker, L; Pollock, C

    2013-01-01

    Young people are exposed to a range of information technologies (IT) in different environments, including home and school, however the factors influencing IT use at home and school are poorly understood. The aim of this study was to investigate young people's computer exposure patterns at home and school, and related factors such as age, gender and the types of IT used. 1351 children in Years 1, 6, 9 and 11 from 10 schools in metropolitan Western Australia were surveyed. Most children had access to computers at home and school, with computer exposures comparable to TV, reading and writing. Total computer exposure was greater at home than school, and increased with age. Computer activities varied with age and gender and became more social with increased age, at the same time parental involvement reduced. Bedroom computer use was found to result in higher exposure patterns. High use of home and school computers were associated with each other. Associations varied depending on the type of IT exposure measure (frequency, mean weekly hours, usual and longest duration). The frequency and duration of children's computer exposure were associated with a complex interplay of the environment of use, the participant's age and gender and other IT activities.

  11. Program Helps Decompose Complex Design Systems

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Hall, Laura E.

    1994-01-01

    DeMAID (A Design Manager's Aid for Intelligent Decomposition) computer program is knowledge-based software system for ordering sequence of modules and identifying possible multilevel structure for design problem. Groups modular subsystems on basis of interactions among them. Saves considerable money and time in total design process, particularly in new design problem in which order of modules has not been defined. Available in two machine versions: Macintosh and Sun.

  12. Nurses' attitudes towards computers: cross sectional questionnaire study.

    PubMed

    Brumini, Gordan; Kovic, Ivor; Zombori, Dejvid; Lulic, Ileana; Petrovecki, Mladen

    2005-02-01

    To estimate the attitudes of hospital nurses towards computers and the influence of gender, age, education, and computer usage on these attitudes. The study was conducted in two Croatian hospitals where an integrated hospital information system is being implemented. There were 1,081 nurses surveyed by an anonymous questionnaire consisting of 8 questions about demographic data, education, and computer usage, and 30 statements on attitudes towards computers. The statements were adapted to a Likert-type scale. Differences in attitudes towards computers were compared using one-way ANOVA and the Tukey-b post-hoc test. The total score was 120+/-15 (mean+/-standard deviation) out of a maximal 150. Nurses younger than 30 years had a higher total score than those older than 30 years (124+/-13 vs 119+/-16 for the 30-39 age group and 117+/-15 for the >39 age group, P<0.001). Nurses with a bachelor's degree (119+/-16 vs 122+/-14, P=0.002) and nurses who had attended computer science courses had a higher total score compared to the others (124+/-13 vs 118+/-16, P<0.001). Nurses using computers more than 5 hours per week had a higher total score than those who used computers less than 5 hours (127+/-13 vs 124+/-12 for 1-5 h and 119+/-14 for <1 hour per day, P<0.001, post-hoc test). Nurses in general have positive attitudes towards computers. These results are important for planning and implementing an integrated hospital information system.

  13. Brain-computer interfaces in the continuum of consciousness.

    PubMed

    Kübler, Andrea; Kotchoubey, Boris

    2007-12-01

    To summarize recent developments and look at important future aspects of brain-computer interfaces. Recent brain-computer interface studies are largely targeted at helping severely or even completely paralysed patients. The former are only able to communicate yes or no via a single muscle twitch, and the latter are totally nonresponsive. Such patients can control brain-computer interfaces and use them to select letters, words or items on a computer screen, for neuroprosthesis control or for surfing the Internet. This condition of motor paralysis, in which cognition and consciousness appear to be unaffected, is traditionally opposed to nonresponsiveness due to disorders of consciousness. Although these groups of patients may appear to be very alike, numerous transition states between them are demonstrated by recent studies. All nonresponsive patients can be regarded on a continuum of consciousness which may vary even within short time periods. As overt behaviour is lacking, cognitive functions in such patients can only be investigated using neurophysiological methods. We suggest that brain-computer interfaces may provide a new tool to investigate cognition in disorders of consciousness, and propose a hierarchical procedure entailing passive stimulation, active instructions, volitional paradigms, and brain-computer interface operation.

  14. Parallel Calculation of Sensitivity Derivatives for Aircraft Design using Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Bischof, C. H.; Green, L. L.; Haigler, K. J.; Knauff, T. L., Jr.

    1994-01-01

    Sensitivity derivative (SD) calculation via automatic differentiation (AD) typical of that required for the aerodynamic design of a transport-type aircraft is considered. Two ways of computing SD via code generated by the ADIFOR automatic differentiation tool are compared for efficiency and applicability to problems involving large numbers of design variables. A vector implementation on a Cray Y-MP computer is compared with a coarse-grained parallel implementation on an IBM SP1 computer, employing a Fortran M wrapper. The SD are computed for a swept transport wing in turbulent, transonic flow; the number of geometric design variables varies from 1 to 60 with coupling between a wing grid generation program and a state-of-the-art, 3-D computational fluid dynamics program, both augmented for derivative computation via AD. For a small number of design variables, the Cray Y-MP implementation is much faster. As the number of design variables grows, however, the IBM SP1 becomes an attractive alternative in terms of compute speed, job turnaround time, and total memory available for solutions with large numbers of design variables. The coarse-grained parallel implementation also can be moved easily to a network of workstations.
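
    To make the underlying idea concrete, the snippet below implements forward-mode automatic differentiation in miniature with dual numbers. It only illustrates the principle behind source-transformation tools such as ADIFOR; it is not ADIFOR itself, and the lift_coefficient function is a made-up stand-in for the wing-grid and CFD codes.

```python
# Forward-mode automatic differentiation in miniature, via dual numbers.
# Illustrates the principle behind AD tools; it is not ADIFOR or the CFD code.
import math

class Dual:
    """Value together with its derivative with respect to one design variable."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def sin(x):                          # elementary functions carry their chain rule
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

def lift_coefficient(alpha):         # hypothetical stand-in for a large analysis code
    return 2.0 * math.pi * sin(alpha) + 0.1 * alpha * alpha

# Seed the design variable with derivative 1 to get dCL/dalpha alongside CL.
a = Dual(0.05, 1.0)
cl = lift_coefficient(a)
print(f"CL = {cl.val:.4f}, dCL/dalpha = {cl.der:.4f}")
```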

  15. Investigation and appreciation of optimal output feedback. Volume 1: A convergent algorithm for the stochastic infinite-time discrete optimal output feedback problem

    NASA Technical Reports Server (NTRS)

    Halyo, N.; Broussard, J. R.

    1984-01-01

    The stochastic, infinite time, discrete output feedback problem for time invariant linear systems is examined. Two sets of sufficient conditions for the existence of a stable, globally optimal solution are presented. An expression for the total change in the cost function due to a change in the feedback gain is obtained. This expression is used to show that a sequence of gains can be obtained by an algorithm, so that the corresponding cost sequence is monotonically decreasing and the corresponding sequence of the cost gradient converges to zero. The algorithm is guaranteed to obtain a critical point of the cost function. The computational steps necessary to implement the algorithm on a computer are presented. The results are applied to a digital outer loop flight control problem. The numerical results for this 13th order problem indicate a rate of convergence considerably faster than two other algorithms used for comparison.
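
    A numerical illustration, under invented system matrices, of the monotone-descent idea described above: evaluate the quadratic cost through a discrete Lyapunov equation, take a gradient step, and backtrack until the cost decreases while the closed loop stays stable. The finite-difference gradient and the spectral-radius check are simplifications; they stand in for the paper's exact cost-change expression and stability conditions.

```python
# Monotone cost descent for a static output feedback gain on an invented plant.
# Finite-difference gradient and a spectral-radius check replace the paper's
# exact expressions; this is an illustration, not the published algorithm.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# x_{k+1} = A x_k + B u_k, y_k = C x_k, u_k = -K y_k, cost sum(x'Qx + u'Ru), E[x0 x0'] = I.
A = np.array([[0.90, 0.10], [-0.20, 0.95]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])                         # only the first state is measured
Q, R = np.eye(2), np.array([[1.0]])

def cost(K):
    """J(K) = trace(P), with P from the closed-loop discrete Lyapunov equation."""
    Acl = A - B @ K @ C
    if np.max(np.abs(np.linalg.eigvals(Acl))) >= 1.0:
        return np.inf                              # unstable gain: infinite cost
    Qbar = Q + C.T @ K.T @ R @ K @ C
    return float(np.trace(solve_discrete_lyapunov(Acl.T, Qbar)))

def grad(K, h=1e-6):
    """Central finite-difference gradient of J with respect to each gain entry."""
    g = np.zeros_like(K)
    for i in range(K.shape[0]):
        for j in range(K.shape[1]):
            E = np.zeros_like(K); E[i, j] = h
            g[i, j] = (cost(K + E) - cost(K - E)) / (2.0 * h)
    return g

K = np.zeros((1, 1))                               # open-loop-stable starting gain
for _ in range(50):
    g, step, c0 = grad(K), 1.0, cost(K)
    while cost(K - step * g) >= c0 and step > 1e-12:
        step *= 0.5                                # backtrack until the cost decreases
    if cost(K - step * g) < c0:
        K = K - step * g                           # accept only cost-decreasing steps
print("final gain:", K.ravel(), " final cost:", cost(K))
```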

  16. Model based Computerized Ionospheric Tomography in space and time

    NASA Astrophysics Data System (ADS)

    Tuna, Hakan; Arikan, Orhan; Arikan, Feza

    2018-04-01

    Reconstruction of the ionospheric electron density distribution in space and time not only provide basis for better understanding the physical nature of the ionosphere, but also provide improvements in various applications including HF communication. Recently developed IONOLAB-CIT technique provides physically admissible 3D model of the ionosphere by using both Slant Total Electron Content (STEC) measurements obtained from a GPS satellite - receiver network and IRI-Plas model. IONOLAB-CIT technique optimizes IRI-Plas model parameters in the region of interest such that the synthetic STEC computations obtained from the IRI-Plas model are in accordance with the actual STEC measurements. In this work, the IONOLAB-CIT technique is extended to provide reconstructions both in space and time. This extension exploits the temporal continuity of the ionosphere to provide more reliable reconstructions with a reduced computational load. The proposed 4D-IONOLAB-CIT technique is validated on real measurement data obtained from TNPGN-Active GPS receiver network in Turkey.

  17. SU-E-T-614: Plan Averaging for Multi-Criteria Navigation of Step-And-Shoot IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, M; Gao, H; Craft, D

    2015-06-15

    Purpose: Step-and-shoot IMRT is fundamentally discrete in nature, while multi-criteria optimization (MCO) is fundamentally continuous: the MCO planning consists of continuous sliding across the Pareto surface (the set of plans which represent the tradeoffs between organ-at-risk doses and target doses). In order to achieve close to real-time dose display during this sliding, it is desired that averaged plans share many of the same apertures as the pre-computed plans, since dose computation for apertures generated on-the-fly would be expensive. We propose a method to ensure that neighboring plans on a Pareto surface share many apertures. Methods: Our baseline step-and-shoot sequencing method is that of K. Engel (a method which minimizes the number of segments while guaranteeing the minimum number of monitor units), which we customize to sequence a set of Pareto optimal plans simultaneously. We also add an error tolerance to study the relationship between the number of shared apertures, the total number of apertures needed, and the quality of the fluence map re-creation. Results: We run tests for a 2D Pareto surface trading off rectum and bladder dose versus target coverage for a clinical prostate case. We find that if we enforce exact fluence map recreation, we are not able to achieve much sharing of apertures across plans. The total number of apertures for all seven beams and 4 plans without sharing is 217. With sharing and a 2% error tolerance, this number is reduced to 158 (73%). Conclusion: With the proposed method, total number of apertures can be decreased by 42% (averaging) with no increment of total MU, when an error tolerance of 5% is allowed. With this large amount of sharing, dose computations for averaged plans which occur during Pareto navigation will be much faster, leading to a real-time what-you-see-is-what-you-get Pareto navigation experience. Minghao Guo and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)

  18. First assembly times and equilibration in stochastic coagulation-fragmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D’Orsogna, Maria R.; Department of Mathematics, CSUN, Los Angeles, California 91330-8313; Lei, Qi

    2015-07-07

    We develop a fully stochastic theory for coagulation and fragmentation (CF) in a finite system with a maximum cluster size constraint. The process is modeled using a high-dimensional master equation for the probabilities of cluster configurations. For certain realizations of total mass and maximum cluster sizes, we find exact analytical results for the expected equilibrium cluster distributions. If coagulation is fast relative to fragmentation and if the total system mass is indivisible by the mass of the largest allowed cluster, we find a mean cluster-size distribution that is strikingly broader than that predicted by the corresponding mass-action equations. Combinations of total mass and maximum cluster size under which equilibration is accelerated, eluding late-stage coarsening, are also delineated. Finally, we compute the mean time it takes particles to first assemble into a maximum-sized cluster. Through careful state-space enumeration, the scaling of mean assembly times is derived for all combinations of total mass and maximum cluster size. We find that CF accelerates assembly relative to monomer kinetics only in special cases. All of our results hold in the infinite system limit and can only be derived from a high-dimensional discrete stochastic model, highlighting how classical mass-action models of self-assembly can fail.
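
    A toy Gillespie (kinetic Monte Carlo) sketch of stochastic coagulation-fragmentation with a maximum cluster size, used to estimate a mean first-assembly time, is given below. The rates, total mass, and size cap are invented, and the sketch samples trajectories rather than working with the full master equation used in the paper.

```python
# Toy Gillespie simulation of coagulation-fragmentation with a maximum cluster
# size, estimating the mean first-assembly time (first appearance of a cluster
# of size N). Rates, total mass, and the size cap are invented placeholders.
import math
import random

def first_assembly_time(M=12, N=6, coag=1.0, frag=0.1, rng=None):
    rng = rng or random.Random(0)
    clusters = [1] * M                        # start from M monomers
    t = 0.0
    while max(clusters) < N:
        # coagulation channels: any pair whose merged size respects the cap N
        pairs = [(i, j) for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))
                 if clusters[i] + clusters[j] <= N]
        breakable = [i for i, s in enumerate(clusters) if s > 1]
        a_coag, a_frag = coag * len(pairs), frag * len(breakable)
        a_tot = a_coag + a_frag
        if a_tot == 0.0:
            return math.inf                   # no reaction can fire (guard only)
        t += rng.expovariate(a_tot)           # exponentially distributed waiting time
        if rng.random() < a_coag / a_tot:     # coagulation event
            i, j = rng.choice(pairs)
            clusters[i] += clusters[j]
            clusters.pop(j)
        else:                                 # fragmentation at a uniform break point
            i = rng.choice(breakable)
            s = clusters[i]
            k = rng.randint(1, s - 1)
            clusters[i] = k
            clusters.append(s - k)
    return t

times = [first_assembly_time(rng=random.Random(seed)) for seed in range(200)]
print("mean first-assembly time ~", sum(times) / len(times))
```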

  19. An analysis of the viscous flow through a compact radial turbine by the average passage approach

    NASA Technical Reports Server (NTRS)

    Heidmann, James D.; Beach, Timothy A.

    1990-01-01

    A steady, three-dimensional viscous average passage computer code is used to analyze the flow through a compact radial turbine rotor. The code models the flow as spatially periodic from blade passage to blade passage. Results from the code using varying computational models are compared with each other and with experimental data. These results include blade surface velocities and pressures, exit vorticity and entropy contour plots, shroud pressures, and spanwise exit total temperature, total pressure, and swirl distributions. The three computational models used are inviscid, viscous with no blade clearance, and viscous with blade clearance. It is found that modeling viscous effects improves correlation with experimental data, while modeling hub and tip clearances further improves some comparisons. Experimental results such as a local maximum of exit swirl, reduced exit total pressures at the walls, and exit total temperature magnitudes are explained by interpretation of the flow physics and computed secondary flows. Trends in the computed blade loading diagrams are similarly explained.

  20. A Low Mach Number Model for Moist Atmospheric Flows

    DOE PAGES

    Duarte, Max; Almgren, Ann S.; Bell, John B.

    2015-04-01

    A low Mach number model for moist atmospheric flows is introduced that accurately incorporates reversible moist processes in flows whose features of interest occur on advective rather than acoustic time scales. Total water is used as a prognostic variable, so that water vapor and liquid water are diagnostically recovered as needed from an exact Clausius–Clapeyron formula for moist thermodynamics. Low Mach number models can be computationally more efficient than a fully compressible model, but the low Mach number formulation introduces additional mathematical and computational complexity because of the divergence constraint imposed on the velocity field. Here in this paper, latent heat release is accounted for in the source term of the constraint by estimating the rate of phase change based on the time variation of saturated water vapor subject to the thermodynamic equilibrium constraint. Finally, the authors numerically assess the validity of the low Mach number approximation for moist atmospheric flows by contrasting the low Mach number solution to reference solutions computed with a fully compressible formulation for a variety of test problems.

  1. Understanding I/O workload characteristics of a Peta-scale storage system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Youngjae; Gunasekaran, Raghul

    2015-01-01

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications on one of the world's fastest high performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). OLCF's flagship petascale simulation platform, Titan, and other large HPC clusters, in total over 250 thousand compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests to write requests for the peta-scale storage system. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution. We also study I/O load imbalance problems using I/O performance data collected from the Spider storage system.
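
    As a sketch of the distribution modeling mentioned above, the snippet fits a Pareto distribution to request inter-arrival times by maximum likelihood and checks the fit with a Kolmogorov-Smirnov statistic. The "trace" is synthetic; it is not Spider/OLCF data, and the shape and scale values are arbitrary.

```python
# Fitting a Pareto distribution to request inter-arrival times. The synthetic
# "trace" stands in for real storage-system logs; parameters are arbitrary.
from scipy import stats

# Heavy-tailed inter-arrival times (seconds), generated from a known Pareto law.
inter_arrivals = stats.pareto.rvs(b=1.8, scale=0.01, size=10_000, random_state=7)

# Maximum-likelihood fit of shape (b) and scale, with the location pinned at zero.
b_hat, loc_hat, scale_hat = stats.pareto.fit(inter_arrivals, floc=0.0)
print(f"fitted Pareto shape = {b_hat:.2f}, scale = {scale_hat:.4f}")

# A simple goodness-of-fit check: Kolmogorov-Smirnov statistic against the fit.
ks = stats.kstest(inter_arrivals, "pareto", args=(b_hat, loc_hat, scale_hat))
print(f"KS statistic = {ks.statistic:.3f}")
```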

  2. Computer programs for pressurization (RAMP) and pressurized expulsion from a cryogenic liquid propellant tank

    NASA Technical Reports Server (NTRS)

    Masters, P. A.

    1974-01-01

    An analysis to predict the pressurant gas requirements for the discharge of cryogenic liquid propellants from storage tanks is presented, along with an algorithm and two computer programs. One program deals with the pressurization (ramp) phase of bringing the propellant tank up to its operating pressure. The method of analysis involves a numerical solution of the temperature and velocity functions for the tank ullage at a discrete set of points in time and space. The input requirements of the program are the initial ullage conditions, the initial temperature and pressure of the pressurant gas, and the time for the expulsion or the ramp. Computations are performed which determine the heat transfer between the ullage gas and the tank wall. Heat transfer to the liquid interface and to the hardware components may be included in the analysis. The program output includes predictions of mass of pressurant required, total energy transfer, and wall and ullage temperatures. The analysis, the algorithm, a complete description of input and output, and the FORTRAN 4 program listings are presented. Sample cases are included to illustrate use of the programs.

  3. A Low Mach Number Model for Moist Atmospheric Flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duarte, Max; Almgren, Ann S.; Bell, John B.

    A low Mach number model for moist atmospheric flows is introduced that accurately incorporates reversible moist processes in flows whose features of interest occur on advective rather than acoustic time scales. Total water is used as a prognostic variable, so that water vapor and liquid water are diagnostically recovered as needed from an exact Clausius–Clapeyron formula for moist thermodynamics. Low Mach number models can be computationally more efficient than a fully compressible model, but the low Mach number formulation introduces additional mathematical and computational complexity because of the divergence constraint imposed on the velocity field. Here in this paper, latent heat release is accounted for in the source term of the constraint by estimating the rate of phase change based on the time variation of saturated water vapor subject to the thermodynamic equilibrium constraint. Finally, the authors numerically assess the validity of the low Mach number approximation for moist atmospheric flows by contrasting the low Mach number solution to reference solutions computed with a fully compressible formulation for a variety of test problems.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinuesa, Ricardo; Fick, Lambert; Negi, Prabal

    In the present document we describe a toolbox for the spectral-element code Nek5000, aimed at computing turbulence statistics. The toolbox is presented for a small test case, namely a square duct with Lx = 2h, Ly = 2h and Lz = 4h, where x, y and z are the horizontal, vertical and streamwise directions, respectively. The number of elements in the xy-plane is 16 x 16 = 256, and the number of elements in z is 4, leading to a total of 1,024 spectral elements. A polynomial order of N = 5 is chosen, and the mesh is generated using the Nek5000 tool genbox. The toolbox presented here allows the user to compute mean-velocity components, the Reynolds-stress tensor, as well as turbulent kinetic energy (TKE) and Reynolds-stress budgets. Note that the present toolbox allows the computation of turbulence statistics in turbulent flows with one homogeneous direction (where the statistics are based on time-averaging as well as averaging in the homogeneous direction), as well as in fully three-dimensional flows (with no periodic directions, where only time-averaging is considered).
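
    The snippet below is not the Nek5000 toolbox itself; it is a NumPy sketch of the averaging the toolbox performs for a flow with one homogeneous (streamwise) direction: time averaging plus averaging in z, giving mean velocities, the Reynolds-stress tensor, and the turbulent kinetic energy. The array shapes and the random velocity field are placeholders.

```python
# Time- and homogeneous-direction averaging for turbulence statistics, as a plain
# NumPy sketch (not the Nek5000 toolbox). The velocity field is a random placeholder.
import numpy as np

rng = np.random.default_rng(3)
nt, nx, ny, nz = 200, 16, 16, 32
u = rng.normal(size=(3, nt, nx, ny, nz))        # velocity components (u, v, w)

mean = u.mean(axis=(1, 4))                      # average over time and z: shape (3, nx, ny)
fluct = u - mean[:, None, :, :, None]           # subtract the mean field

# Reynolds stresses <u_i' u_j'> on the xy-plane and the turbulent kinetic energy.
uu = np.einsum("itxyz,jtxyz->ijxy", fluct, fluct) / (nt * nz)
tke = 0.5 * np.trace(uu, axis1=0, axis2=1)      # 0.5 * (<u'u'> + <v'v'> + <w'w'>)
print(uu.shape, tke.shape)                      # (3, 3, 16, 16) (16, 16)
```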

  5. Gray: a ray tracing-based Monte Carlo simulator for PET

    NASA Astrophysics Data System (ADS)

    Freese, David L.; Olcott, Peter D.; Buss, Samuel R.; Levin, Craig S.

    2018-05-01

    Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within % when accounting for differences in peak NECR. We also estimate the peak NECR to be kcps, or within % of published experimental data. The activity concentration of the peak is also estimated within 1.3%.

  6. Computer games and fine motor skills.

    PubMed

    Borecki, Lukasz; Tolstych, Katarzyna; Pokorski, Mieczyslaw

    2013-01-01

    The study seeks to determine the influence of computer games on fine motor skills in young adults, an area of incomplete understanding and verification. We hypothesized that computer gaming could have a positive influence on basic motor skills, such as precision, aiming, speed, dexterity, or tremor. We examined 30 habitual game users (F/M - 3/27; age range 20-25 years) of the highly interactive game Counter Strike, in which players impersonate soldiers on a battlefield, and 30 age- and gender-matched subjects who declared never to play games. Selected tests from the Vienna Test System were used to assess fine motor skills and tremor. The results demonstrate that the game users scored appreciably better than the control subjects in all tests employed. In particular, the players did significantly better in the precision of arm-hand movements, as expressed by a lower time of errors, 1.6 ± 0.6 vs. 2.8 ± 0.6 s, a lower error rate, 13.6 ± 0.3 vs. 20.4 ± 2.2, and a shorter total time of performing a task, 14.6 ± 2.9 vs. 32.1 ± 4.5 s in non-players, respectively; p < 0.001 all. The findings demonstrate a positive influence of computer games on psychomotor functioning. We submit that playing computer games may be a useful training tool to increase fine motor skills and movement coordination.

  7. Analysis of oil-pipeline distribution of multiple products subject to delivery time-windows

    NASA Astrophysics Data System (ADS)

    Jittamai, Phongchai

    This dissertation defines the operational problems of, and develops solution methodologies for, the distribution of multiple products through an oil pipeline subject to delivery time-window constraints. A multiple-product oil pipeline is a pipeline system composed of pipes, pumps, valves and storage facilities used to transport different types of liquids. Typically, products delivered by pipelines are petroleum of different grades moving either from production facilities to refineries or from refineries to distributors. Time-windows, which are generally used in logistics and scheduling, are incorporated in this study. The distribution of multiple products through an oil pipeline subject to delivery time-windows is modeled as a multicommodity network flow structure and formulated mathematically. The main focus of this dissertation is the investigation of operating issues and problem complexity of single-source pipeline problems, and the development of a solution methodology to compute an input schedule that yields the minimum total time violation of the due delivery time-windows. The problem is proved to be NP-complete. A heuristic approach, a reversed-flow algorithm, is developed based on pipeline flow reversibility to compute an input schedule for the pipeline problem. This algorithm runs in no more than O(T·E) time. This dissertation also extends the study to examine some operating attributes and the problem complexity of multiple-source pipelines. The multiple-source pipeline problem is also NP-complete. A heuristic algorithm modified from the one used in single-source pipeline problems is introduced. This algorithm can also be implemented in no more than O(T·E) time. Computational results are presented for both methodologies on randomly generated problem sets. The computational experience indicates that the reversed-flow algorithms provide good solutions in comparison with the optimal solutions. Only 25% of the problems tested were more than 30% greater than the optimal values, and approximately 40% of the tested problems were solved optimally by the algorithms.

  8. Translational, rotational and internal dynamics of amyloid β-peptides (Aβ40 and Aβ42) from molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Bora, Ram Prasad; Prabhakar, Rajeev

    2009-10-01

    In this study, diffusion constants [translational (DT) and rotational (DR)], correlation times [rotational (τrot) and internal (τint)], and the intramolecular order parameters (S2) of the Alzheimer amyloid-β peptides Aβ40 and Aβ42 have been calculated from 150 ns molecular dynamics simulations in aqueous solution. The computed parameters have been compared with the experimentally measured values. The calculated DT of 1.61×10-6 cm2/s and 1.43×10-6 cm2/s for Aβ40 and Aβ42, respectively, at 300 K was found to follow the trend predicted by the Debye-Stokes-Einstein relation, namely that the value should decrease with increasing molecular weight. The estimated DR for Aβ40 and Aβ42 at 300 K are 0.085 and 0.071 ns-1, respectively. The rotational (Crot(t)) and internal (Cint(t)) correlation functions of Aβ40 and Aβ42 were observed to decay on nano- and picosecond time scales, respectively. The significantly different decay times of these functions validate the factorization of the total correlation function (Ctot(t)) of Aβ peptides into Crot(t) and Cint(t). At both short and long time scales, the Clore-Szabo model used as Cint(t) provided the best behavior of Ctot(t) for both Aβ40 and Aβ42. In addition, an effective rotational correlation time of Aβ40 was also computed at 18 °C, and the computed value (2.30 ns) is in close agreement with the experimental value of 2.45 ns. The computed S2 parameters for the central hydrophobic core, the loop region, and the C-terminal domains of Aβ40 and Aβ42 are in accord with previous studies.
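
    For context, the translational diffusion constant of a peptide is commonly obtained from the slope of its center-of-mass mean-square displacement via the Einstein relation, MSD(t) ≈ 6·DT·t in three dimensions. A minimal sketch of that step, using synthetic MSD data rather than the Aβ trajectories:

        import numpy as np

        # Einstein relation: MSD(t) = 6*D_T*t for 3-D diffusion at long times.
        # The MSD values below are synthetic stand-ins for a trajectory analysis.
        t_s = np.linspace(0, 10, 101) * 1e-9                 # time, ns -> s
        D_true = 1.6e-6                                      # cm^2/s
        rng = np.random.default_rng(0)
        msd_cm2 = 6 * D_true * t_s + 1e-15 * rng.standard_normal(t_s.size)

        slope, _ = np.polyfit(t_s, msd_cm2, 1)               # linear fit of MSD vs time
        print(f"estimated D_T = {slope / 6.0:.2e} cm^2/s")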

  9. Improved accuracy of component alignment with the implementation of image-free navigation in total knee arthroplasty.

    PubMed

    Rosenberger, Ralf E; Hoser, Christian; Quirbach, Sebastian; Attal, Rene; Hennerbichler, Alfred; Fink, Christian

    2008-03-01

    Accuracy of implant positioning and reconstruction of the mechanical leg axis are major requirements for achieving good long-term results in total knee arthroplasty (TKA). The purpose of the present study was to determine whether image-free computer navigation technology has the potential to improve the accuracy of component alignment immediately and consistently in TKA cohorts of experienced surgeons. One hundred patients with primary arthritis of the knee underwent unilateral total knee arthroplasty. The cohort of 50 TKAs implanted with conventional instrumentation was directly followed by the cohort of the very first 50 computer-assisted TKAs. All surgeries were performed by two senior surgeons. All patients received the Zimmer NexGen total knee prosthesis (Zimmer Inc., Warsaw, IN, USA). There was no variability regarding surgeons or surgical technique, except for the use of the navigation system (StealthStation Treon plus; Medtronic Inc., Minneapolis, MN, USA). Accuracy of implant positioning was measured on postoperative long-leg standing radiographs and standard lateral X-rays with regard to the valgus angle and the coronal and sagittal component angles. In addition, preoperative deformities of the mechanical leg axis, tourniquet time, age, and gender were correlated. Statistical analyses were performed using the SPSS 15.0 (SPSS Inc., Chicago, IL, USA) software package. Independent t-tests were used, with significance set at P < 0.05 (two-tailed), to compare differences in mean angular values and frontal mechanical alignment between the two cohorts. To compare the rate of optimally implanted prostheses between the two groups we used the chi-square test. The average postoperative radiological frontal mechanical alignment was 1.88 degrees of varus (range 6.1 degrees of valgus to 10.1 degrees of varus; SD 3.68 degrees) in the conventional cohort and 0.28 degrees of varus (range 3.7 degrees of valgus to 6.0 degrees of varus; SD 1.97 degrees) in the navigated cohort. Including all criteria for optimal implant alignment, 16 cases (32%) in the conventional cohort and 31 cases (62%) in the navigated cohort were implanted optimally. The average difference in tourniquet time was modest, with an additional 12.9 min in the navigated cohort compared to the conventional cohort. Our findings suggest that experienced knee surgeons can immediately and consistently improve the accuracy of component orientation using an image-free computer-assisted navigation system in TKA. The computer-assisted technology has been shown to be easy to use, safe, and efficient in routine knee replacement surgery. We believe that navigation is a key technology for various current and future surgical alignment topics and minimally invasive lower limb surgery.

  10. Patient-specific instrumentation for total knee arthroplasty does not match the pre-operative plan as assessed by intra-operative computer-assisted navigation.

    PubMed

    Scholes, Corey; Sahni, Varun; Lustig, Sebastien; Parker, David A; Coolican, Myles R J

    2014-03-01

    The introduction of patient-specific instruments (PSI) for guiding bone cuts could increase the incidence of malalignment in primary total knee arthroplasty. The purpose of this study was to assess the agreement between one type of patient-specific instrumentation (Zimmer PSI) and the pre-operative plan with respect to bone cuts and component alignment during TKR using imageless computer navigation. A consecutive series of 30 femoral and tibial guides were assessed in-theatre by the same surgeon using computer navigation. Following surgical exposure, the PSI cutting guides were placed on the joint surface and alignment was assessed using the navigation tracker. The difference between in-theatre data and the pre-operative plan was recorded and analysed. The error between in-theatre measurements and the pre-operative plan for the femoral and tibial components exceeded 3° for 3% and 17% of the sample, respectively, while the error for total coronal alignment exceeded 3° for 27% of the sample. The present results indicate that alignment with Zimmer PSI cutting blocks, assessed by imageless navigation, does not match the pre-operative plan in a proportion of cases. To prevent unnecessary increases in the incidence of malalignment in primary TKR, it is recommended that these devices should not be used without objective verification of alignment, either in real time or with post-operative imaging. Further work is required to identify the source of discrepancies and validate these devices prior to routine use. Level of evidence: II.

  11. Extreme events in total ozone over Arosa - Part 1: Application of extreme value theory

    NASA Astrophysics Data System (ADS)

    Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Peter, T.; Ribatet, M.; Davison, A. C.; Stübi, R.; Weihs, P.; Holawe, F.

    2010-10-01

    In this study ideas from extreme value theory are for the first time applied in the field of stratospheric ozone research, because statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not adequately address the structure of the extremes. We show that statistical extreme value methods are appropriate to identify ozone extremes and to describe the tails of the Arosa (Switzerland) total ozone time series. In order to accommodate the seasonal cycle in total ozone, a daily moving threshold was determined and used, with tools from extreme value theory, to analyse the frequency of days with extreme low (termed ELOs) and high (termed EHOs) total ozone at Arosa. The analysis shows that the Generalized Pareto Distribution (GPD) provides an appropriate model for the frequency distribution of total ozone above or below a mathematically well-defined threshold, thus providing a statistical description of ELOs and EHOs. The results show an increase in ELOs and a decrease in EHOs during the last decades. The fitted model represents the tails of the total ozone data set with high accuracy over the entire range (including absolute monthly minima and maxima), and enables a precise computation of the frequency distribution of ozone mini-holes (using constant thresholds). Analyzing the tails instead of a small fraction of days below constant thresholds provides deeper insight into the time series properties. Fingerprints of dynamical (e.g. ENSO, NAO) and chemical features (e.g. strong polar vortex ozone loss), and major volcanic eruptions, can be identified in the observed frequency of extreme events throughout the time series. Overall the new approach to analysis of extremes provides more information on time series properties and variability than previous approaches that use only monthly averages and/or mini-holes and mini-highs.
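
    To illustrate the peaks-over-threshold idea used here, the sketch below fits a Generalized Pareto Distribution to exceedances of a daily total-ozone series above a threshold; the series is synthetic and the threshold is a plain quantile rather than the paper's seasonally moving threshold:

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(0)
        ozone = 330 + 25 * rng.standard_normal(5000)     # synthetic daily total ozone, DU

        threshold = np.quantile(ozone, 0.95)             # simple fixed threshold for the sketch
        excess = ozone[ozone > threshold] - threshold    # exceedances above the threshold

        # Fit the GPD to the exceedances; loc is fixed at 0 since we model excesses.
        shape, loc, scale = genpareto.fit(excess, floc=0)
        print(f"threshold = {threshold:.1f} DU, shape = {shape:.3f}, scale = {scale:.2f}")

        # Tail probability of an exceedance more than 20 DU above the threshold.
        print("P(excess > 20 DU) =", genpareto.sf(20, shape, loc=0, scale=scale))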

  13. Physical activity and sedentary behaviour in European children: the IDEFICS study.

    PubMed

    Santaliestra-Pasías, Alba Ma; Mouratidou, Theodora; Verbestel, Vera; Bammann, Karin; Molnar, Dénes; Sieri, Sabina; Siani, Alfonso; Veidebaum, Toomas; Mårild, Staffan; Lissner, Lauren; Hadjigeorgiou, Charalambos; Reisch, Lucia; De Bourdeaudhuij, Ilse; Moreno, Luis A

    2014-10-01

    To estimate the prevalence of physical activity and sedentary behaviours in European children, and to evaluate the relationship between media availability in personal space and physical activity in relation to total screen time. Data from the baseline IDEFICS (Identification and prevention of dietary- and lifestyle-induced health effects in children and infants) cross-sectional survey. Information on hours of television/digital video disk/video viewing and computer/games-console use (weekday and weekend days), media device availability in personal space, sports club membership, hours of active organized play and commuting (to and from school) were assessed via a self-reported parental questionnaire. Total screen time was defined as the sum of daily media use and subsequently dichotomized into meeting or not meeting the guidelines of the American Academy of Pediatrics. Eight survey centres (Italy, Estonia, Cyprus, Belgium, Sweden, Germany, Hungary and Spain). Children (n 15 330; 51% males) aged 2-10 years. Percentage of children engaged in total screen time for >2 h/d was higher on weekend days (52% v. 20% on weekdays) and in the older group (71% in males; 57% in females), varying by country. Children with a television set in their bedroom were more likely not to meet the screen time recommendations (OR = 1·54; 95% CI 1·60, 1·74). Approximately a third of the children failed to meet current screen time recommendations. Availability of a television set in personal space increased the risk of excess total screen time. This information could be used to identify potential targets for public health promotion actions of young population groups.

  14. Accelerating the spin-up of the coupled carbon and nitrogen cycle model in CLM4

    DOE PAGES

    Fang, Yilin; Liu, Chongxuan; Leung, Lai-Yung R.

    2015-03-24

    The commonly adopted biogeochemistry spin-up process in an Earth system model (ESM) is to run the model for hundreds to thousands of years subject to periodic atmospheric forcing to reach dynamic steady state of the carbon–nitrogen (CN) models. A variety of approaches have been proposed to reduce the computation time of the spin-up process. Significant improvement in computational efficiency has been made recently. However, a long simulation time is still required to reach the common convergence criteria of the coupled carbon–nitrogen model. A gradient projection method was proposed and used to further reduce the computation time after examining the trend of the dominant carbon pools. The Community Land Model version 4 (CLM4) with a carbon and nitrogen component was used in this study. From point-scale simulations, we found that the method can reduce the computation time by 20–69% compared to one of the fastest approaches in the literature. We also found that the cyclic stability of total carbon for some cases differs from that of the periodic atmospheric forcing, and some cases even showed instability. Close examination showed that one case has a carbon periodicity much longer than that of the atmospheric forcing due to the annual fire disturbance that is longer than half a year. The rest was caused by the instability of water table calculation in the hydrology model of CLM4. The instability issue is resolved after we replaced the hydrology scheme in CLM4 with a flow model for variably saturated porous media.
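
    The gradient projection step itself is specific to the paper, but the general idea of jumping a slowly converging pool toward its steady state can be illustrated with a geometric extrapolation of year-to-year increments; this is a generic accelerator under an assumed geometric decay, not the authors' method, and all numbers are synthetic:

        import numpy as np

        # If the annual increments of a slow pool shrink geometrically with ratio r,
        # the remaining change is d_last * r / (1 - r), which can be added in one jump.
        def extrapolate_to_steady_state(history):
            c2, c1, c0 = history[-3], history[-2], history[-1]
            d_prev, d_last = c1 - c2, c0 - c1
            r = d_last / d_prev                       # estimated contraction ratio
            if not (0.0 < r < 1.0):
                return c0                             # not converging geometrically; skip
            return c0 + d_last * r / (1.0 - r)

        # Synthetic spin-up trajectory approaching C_ss = 1000 with r = 0.98
        years = np.arange(200)
        pool = 1000.0 - 800.0 * 0.98**years
        print("after 200 yr  :", round(pool[-1], 1))                            # still short of 1000
        print("extrapolated  :", round(extrapolate_to_steady_state(pool), 1))   # ~1000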

  15. Reciprocating vs Rotary Instrumentation in Pediatric Endodontics: Cone Beam Computed Tomographic Analysis of Deciduous Root Canals using Two Single-file Systems

    PubMed Central

    Prabhakar, Attiguppe R; Yavagal, Chandrashekar; Naik, Saraswathi V

    2016-01-01

    Background: Primary root canals are considered to be most challenging due to their complex anatomy. "Wave one" and "one shape" are single-file systems with reciprocating and rotary motion, respectively. The aim of this study was to evaluate and compare dentin thickness, centering ability, canal transportation, and instrumentation time of wave one and one shape files in primary root canals using a cone beam computed tomographic (CBCT) analysis. Study design: This is an experimental, in vitro study comparing the two groups. Materials and methods: A total of 24 extracted human primary teeth with a minimum 7 mm root length were included in the study. Cone beam computed tomographic images were taken before and after the instrumentation for each group. Dentin thickness, centering ability, canal transportation, and instrumentation times were evaluated for each group. Results: A significant difference was found in instrumentation time and canal transportation measures between the two groups. Wave one showed less canal transportation as compared with one shape, and the mean instrumentation time of wave one was significantly less than that of one shape. Conclusion: The reciprocating single-file system was found to be faster, with far fewer procedural errors, and can hence be recommended for shaping the root canals of primary teeth. How to cite this article: Prabhakar AR, Yavagal C, Dixit K, Naik SV. Reciprocating vs Rotary Instrumentation in Pediatric Endodontics: Cone Beam Computed Tomographic Analysis of Deciduous Root Canals using Two Single-File Systems. Int J Clin Pediatr Dent 2016;9(1):45-49. PMID:27274155

  16. GPU-accelerated regularized iterative reconstruction for few-view cone beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Goussard, Yves, E-mail: yves.goussard@polymtl.ca; Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca

    2015-04-15

    Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-view acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are 1–2 min and are compatible with the typical clinical workflow for nonreal-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.
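
    The TV component can be illustrated in isolation: the sketch below takes a few gradient-descent steps that reduce the (smoothed) total variation of a noisy 2-D image, which is the role TV minimization plays between OSC updates; this is an illustration of the regularizer only, not the authors' GPU implementation:

        import numpy as np

        def tv_smooth_step(img, step=0.1, eps=1e-6):
            """One gradient-descent step on the smoothed TV = sum sqrt(dx^2 + dy^2 + eps)."""
            dx = np.diff(img, axis=1, append=img[:, -1:])     # forward differences
            dy = np.diff(img, axis=0, append=img[-1:, :])
            mag = np.sqrt(dx**2 + dy**2 + eps)
            px, py = dx / mag, dy / mag
            # Divergence of the normalized gradient field equals -grad(TV).
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            return img + step * div

        rng = np.random.default_rng(1)
        phantom = np.zeros((64, 64)); phantom[16:48, 16:48] = 1.0     # piecewise-constant image
        noisy = phantom + 0.2 * rng.standard_normal(phantom.shape)

        smoothed = noisy
        for _ in range(20):
            smoothed = tv_smooth_step(smoothed)
        print(f"error std before/after: {(noisy - phantom).std():.3f} / {(smoothed - phantom).std():.3f}")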

  17. Progress of High Efficiency Centrifugal Compressor Simulations Using TURBO

    NASA Technical Reports Server (NTRS)

    Kulkarni, Sameer; Beach, Timothy A.

    2017-01-01

    Three-dimensional, time-accurate, and phase-lagged computational fluid dynamics (CFD) simulations of the High Efficiency Centrifugal Compressor (HECC) stage were generated using the TURBO solver. Changes to the TURBO Parallel Version 4 source code were made in order to properly model the no-slip boundary condition along the spinning hub region for centrifugal impellers. A startup procedure was developed to generate a converged flow field in TURBO. This procedure initialized computations on a coarsened mesh generated by the Turbomachinery Gridding System (TGS) and relied on a method of systematically increasing wheel speed and backpressure. Baseline design-speed TURBO results generally overpredicted total pressure ratio, adiabatic efficiency, and the choking flow rate of the HECC stage as compared with the design-intent CFD results of Code Leo. Including diffuser fillet geometry in the TURBO computation resulted in a 0.6 percent reduction in the choking flow rate and led to a better match with design-intent CFD. Diffuser fillets reduced annulus cross-sectional area but also reduced corner separation, and thus blockage, in the diffuser passage. It was found that the TURBO computations are somewhat insensitive to inlet total pressure changing from the TURBO default inlet pressure of 14.7 pounds per square inch (101.35 kilopascals) down to 11.0 pounds per square inch (75.83 kilopascals), the inlet pressure of the component test. Off-design tip clearance was modeled in TURBO in two computations: one in which the blade tip geometry was trimmed by 12 mils (0.3048 millimeters), and another in which the hub flow path was moved to reflect a 12-mil axial shift in the impeller hub, creating a step at the hub. The one-dimensional results of these two computations indicate non-negligible differences between the two modeling approaches.

  18. Linear scaling computation of the Fock matrix. VI. Data parallel computation of the exchange-correlation matrix

    NASA Astrophysics Data System (ADS)

    Gan, Chee Kwan; Challacombe, Matt

    2003-05-01

    Recently, early onset linear scaling computation of the exchange-correlation matrix has been achieved using hierarchical cubature [J. Chem. Phys. 113, 10037 (2000)]. Hierarchical cubature differs from other methods in that the integration grid is adaptive and purely Cartesian, which allows for a straightforward domain decomposition in parallel computations; the volume enclosing the entire grid may be simply divided into a number of nonoverlapping boxes. In our data parallel approach, each box requires only a fraction of the total density to perform the necessary numerical integrations due to the finite extent of Gaussian-orbital basis sets. This inherent data locality may be exploited to reduce communications between processors as well as to avoid memory and copy overheads associated with data replication. Although the hierarchical cubature grid is Cartesian, naive boxing leads to irregular work loads due to strong spatial variations of the grid and the electron density. In this paper we describe equal time partitioning, which employs time measurement of the smallest sub-volumes (corresponding to the primitive cubature rule) to load balance grid-work for the next self-consistent-field iteration. After start-up from a heuristic center of mass partitioning, equal time partitioning exploits smooth variation of the density and grid between iterations to achieve load balance. With the 3-21G basis set and a medium quality grid, equal time partitioning applied to taxol (62 heavy atoms) attained a speedup of 61 out of 64 processors, while for a 110 molecule water cluster at standard density it achieved a speedup of 113 out of 128. The efficiency of equal time partitioning applied to hierarchical cubature improves as the grid work per processor increases. With a fine grid and the 6-311G(df,p) basis set, calculations on the 26 atom molecule α-pinene achieved a parallel efficiency better than 99% with 64 processors. For more coarse grained calculations, superlinear speedups are found to result from reduced computational complexity associated with data parallelism.
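
    The load-balancing idea can be sketched apart from the quantum-chemistry context: given measured times for an ordered list of smallest sub-volumes, split them into contiguous chunks whose summed times are roughly equal for the next iteration. The greedy 1-D version below is a simplification of equal time partitioning, and the measured times are invented:

        def equal_time_partition(times, n_procs):
            """Greedy contiguous split of per-sub-volume times into n_procs chunks,
            each targeting roughly sum(times)/n_procs of work (simplified 1-D version)."""
            target = sum(times) / n_procs
            chunks, current, acc = [], [], 0.0
            for i, t in enumerate(times):
                current.append(i)
                acc += t
                chunks_left = n_procs - len(chunks)        # including the open chunk
                items_left = len(times) - i - 1
                # Close the chunk at the target, or when the remaining items are
                # only just enough to give every remaining chunk at least one item.
                if chunks_left > 1 and (acc >= target or items_left <= chunks_left - 1):
                    chunks.append(current)
                    current, acc = [], 0.0
            chunks.append(current)
            return chunks

        measured = [1.0, 1.0, 0.7, 0.9, 1.1, 0.7, 1.3, 1.3]     # seconds, hypothetical
        parts = equal_time_partition(measured, 3)
        print(parts, [round(sum(measured[i] for i in p), 2) for p in parts])
        # e.g. [[0, 1, 2], [3, 4, 5], [6, 7]] with per-chunk times [2.7, 2.7, 2.6]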

  19. The relationship between computer games and quality of life in adolescents

    PubMed Central

    Dolatabadi, Nayereh Kasiri; Eslami, Ahmad Ali; Mostafavi, Firooze; Hassanzade, Akbar; Moradi, Azam

    2013-01-01

    Background: Time spent playing computer games among teenagers is growing rapidly. This popular phenomenon can cause physical and psychosocial problems for them. Therefore, this study examined the relationship between computer games and quality of life domains in adolescents aged 12-15 years. Materials and Methods: In a cross-sectional study using the 2-stage stratified cluster sampling method, 444 male and female students in Borkhar were selected. The data collection tool consisted of 1) the World Health Organization Quality Of Life – BREF questionnaire and 2) a personal information questionnaire. The data were analyzed by Pearson correlation, Spearman correlation, chi-square, independent t-tests and analysis of covariance. Findings: The total mean score of quality of life in students was 67.11±13.34. The results showed a significant relationship between the age of starting to play games and the overall quality of life score and its four domains (range r=–0.13 to –0.18). The mean overall quality of life score in computer game users was 68.27±13.03, while it was 64.81±13.69 among those who did not play computer games, and the difference was significant (P=0.01). There were significant differences in the environmental and mental health domains between the two groups (P<0.05). However, there was no significant relationship between BMI and the time spent on or the type of computer games. Conclusion: Playing computer games for a short time under parental supervision can have positive effects on quality of life in adolescents. However, spending long hours playing computer games may have negative long-term effects. PMID:24083270

  20. References and benchmarks for pore-scale flow simulated using micro-CT images of porous media and digital rocks

    NASA Astrophysics Data System (ADS)

    Saxena, Nishank; Hofmann, Ronny; Alpak, Faruk O.; Berg, Steffen; Dietderich, Jesse; Agarwal, Umang; Tandon, Kunj; Hunter, Sander; Freeman, Justin; Wilson, Ove Bjorn

    2017-11-01

    We generate a novel reference dataset to quantify the impact of numerical solvers, boundary conditions, and simulation platforms. We consider a variety of microstructures ranging from idealized pipes to digital rocks. Pore throats of the digital rocks considered are large enough to be well resolved with state-of-the-art micro-computed tomography technology. Permeability is computed using multiple numerical engines, 12 in total, including Lattice-Boltzmann, computational fluid dynamics, voxel-based, fast semi-analytical, and known empirical models. Thus, we provide a measure of the uncertainty associated with flow computations on digital media. Moreover, the reference and standards dataset generated is the first of its kind and can be used to test and improve new fluid flow algorithms. We find that there is overall good agreement between solvers for idealized cross-section shape pipes. As expected, the disagreement increases with increasing complexity of the pore space. Numerical solutions for pipes with sinusoidal variation of cross section show larger variability compared to pipes of constant cross-section shape. We notice relatively larger variability in the computed permeability of digital rocks, with a coefficient of variation of up to 25% in computed values between various solvers. Still, these differences are small given other subsurface uncertainties. The observed differences between solvers can be attributed to several causes, including differences in boundary conditions, numerical convergence criteria, and parameterization of fundamental physics equations. Solvers that perform additional meshing of irregular pore shapes require an additional step in practical workflows which involves skill and can introduce further uncertainty. Computation times for digital rocks vary from minutes to several days depending on the algorithm and available computational resources. We find that more stringent convergence criteria can improve solver accuracy, but at the expense of longer computation time.
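
    The spread between solvers reported above is summarized with the coefficient of variation; a minimal example of that summary statistic, with hypothetical solver names and permeability values:

        import numpy as np

        # Hypothetical permeabilities (millidarcy) for one digital rock sample,
        # as returned by different numerical engines.
        perm = {"lattice_boltzmann": 410.0, "cfd": 455.0, "voxel_based": 392.0,
                "semi_analytical": 470.0, "empirical": 430.0}

        values = np.array(list(perm.values()))
        cv = values.std(ddof=1) / values.mean()      # coefficient of variation
        print(f"mean = {values.mean():.0f} mD, CV = {100 * cv:.1f}%")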

  1. Utility of screening computed tomography of chest, abdomen and pelvis in patients after heart transplantation.

    PubMed

    Dasari, Tarun W; Pavlovic-Surjancev, Biljana; Dusek, Linda; Patel, Nilamkumar; Heroux, Alain L

    2011-12-01

    Malignancy is a late cause of mortality in heart transplant recipients. It is unknown whether a screening computed tomography scan would lead to early detection of such malignancies or serious vascular anomalies after heart transplantation. This is a single-center observational study of patients undergoing surveillance computed tomography of the chest, abdomen and pelvis at least 5 years after transplantation. Abnormal findings included pulmonary nodules, lymphadenopathy, intra-thoracic and intra-abdominal masses, and vascular anomalies such as abdominal aortic aneurysm. The clinical follow-up of each of these major abnormal findings is summarized. A total of 63 patients underwent computed tomography scans of the chest, abdomen and pelvis at least 5 years after transplantation. Of these, 54 (86%) were male and 9 (14%) were female. Mean age was 52±9.2 years. Computed tomography revealed only 1 lung cancer (squamous cell). Non-specific pulmonary nodules were seen in 6 patients (9.5%). The most common incidental finding was abdominal aortic aneurysm (N=6 (9.5%)), which necessitated follow-up computed tomography (N=5) or surgery (N=1). Mean time to detection of abdominal aortic aneurysms from transplantation was 14.6±4.2 years. Mean age at the time of detection of abdominal aortic aneurysms was 74.5±3.2 years. Screening computed tomography in patients 5 years from transplantation revealed only one malignancy but led to increased detection of abdominal aortic aneurysms. Thus, its utility is low in terms of detection of malignancy. Based on this study we do not recommend routine computed tomography post heart transplantation. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  2. Low dose of rectal thiopental sodium for pediatric sedation in spiral computed tomography study.

    PubMed

    Akhlaghpoor, Shahram; Shabestari, Abbas Arjmand; Moghdam, Mohsen Shojaei

    2007-06-01

    The aim of this study was to determine the effectiveness of a new reduced dose of rectal thiopental sodium sedation for computed tomography (CT) diagnostic imaging. A total of 90 children (mean age, 24.21 months +/- 13.63 [standard deviation]) underwent spiral CT study after rectal administration of thiopental sodium injection solution. The new dose ranged from 15 to 25 mg/kg with a total dose of 350 mg. The percentage of success and adverse reactions were evaluated. Sedation was successful in 98% of infants and children, with an average time of 8.04 min +/- 6.87 (standard deviation). One case showed desaturation, two experienced vomiting, 14 had rectal defecation, and two experienced hyperactivity. No prolonged sedation was observed. Rectal administration of thiopental sodium for pediatric CT imaging is safe and effective, even for the hyperextended position, at the new reduced dose of the drug. This procedure can easily be done in the CT department under the supervision of the radiologist.

  3. Preliminary Results on Design and Implementation of a Solar Radiation Monitoring System

    PubMed Central

    Balan, Mugur C.; Damian, Mihai; Jäntschi, Lorentz

    2008-01-01

    The paper presents a solar radiation monitoring system, using two scientific pyranometers and an on-line computer home-made data acquisition system. The first pyranometer measures the global solar radiation and the other one, which is shaded, measures the diffuse radiation. The values of total and diffuse solar radiation are continuously stored in a database on a server. Original software was created for data acquisition and interrogation of the system. The server application acquires the data from the pyranometers and stores it in the database at a rate of one record every 50 seconds. The client-server application queries the database and provides descriptive statistics. A web interface allows any user to define the inclusion criteria and obtain the results. In terms of results, the system is able to provide direct, diffuse and total radiation intensities as time series. The client-server application also computes derived heat quantities. The ability of the system to evaluate the local solar energy potential is highlighted. PMID:27879746

  4. Micro-CT image reconstruction based on alternating direction augmented Lagrangian method and total variation.

    PubMed

    Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David

    2013-01-01

    Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test, hence reducing the radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image sharper by preserving the edges or boundaries more accurately. In this work, the TV regularization problem is addressed by ADAL, which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix-vector multiplications and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient and competitive with the existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Estimated nitrogen loads from selected tributaries in Connecticut draining to Long Island Sound, 1999–2009

    USGS Publications Warehouse

    Mullaney, John R.; Schwarz, Gregory E.

    2013-01-01

    The total nitrogen load to Long Island Sound from Connecticut and contributing areas to the north was estimated for October 1998 to September 2009. Discrete measurements of total nitrogen concentrations and continuous flow data from 37 water-quality monitoring stations in the Long Island Sound watershed were used to compute total annual nitrogen yields and loads. Total annual computed yields and basin characteristics were used to develop a generalized-least squares regression model for use in estimating the total nitrogen yields from unmonitored areas in coastal and central Connecticut. Significant variables in the regression included the percentage of developed land, percentage of row crops, point-source nitrogen yields from wastewater-treatment facilities, and annual mean streamflow. Computed annual median total nitrogen yields at individual monitoring stations ranged from less than 2,000 pounds per square mile in mostly forested basins (typically less than 10 percent developed land) to more than 13,000 pounds per square mile in urban basins (greater than 40 percent developed) with wastewater-treatment facilities and in one agricultural basin. Medians of computed total annual nitrogen yields for water years 1999–2009 at most stations were similar to those previously computed for water years 1988–98. However, computed medians of annual yields at several stations, including the Naugatuck River, Quinnipiac River, and Hockanum River, were lower than during 1988–98. Nitrogen yields estimated for 26 unmonitored areas downstream from monitoring stations ranged from less than 2,000 pounds per square mile to 34,000 pounds per square mile. Computed annual total nitrogen loads at the farthest downstream monitoring stations were combined with the corresponding estimates for the downstream unmonitored areas for a combined estimate of the total nitrogen load from the entire study area. Resulting combined total nitrogen loads ranged from 38 to 68 million pounds per year during water years 1999–2009. Total annual loads from the monitored basins represent 63 to 74 percent of the total load. Computed annual nitrogen loads from four stations near the Massachusetts border with Connecticut represent 52 to 54 percent of the total nitrogen load during water years 2008–9, the only years with data for all the border sites. During the latter part of the 1999–2009 study period, total nitrogen loads to Long Island Sound from the study area appeared to increase slightly. The apparent increase in loads may be due to higher than normal streamflows, which consequently increased nonpoint nitrogen loads during the study, offsetting major reductions of nitrogen from wastewater-treatment facilities. Nitrogen loads from wastewater treatment facilities declined as much as 2.3 million pounds per year in areas of Connecticut upstream from the monitoring stations and as much as 5.8 million pounds per year in unmonitored areas downstream in coastal and central Connecticut.

  6. The association between sedentary behaviors during weekdays and weekend with change in body composition in young adults

    PubMed Central

    Drenowatz, Clemens; DeMello, Madison M.; Shook, Robin P.; Hand, Gregory A.; Burgess, Stephanie; Blair, Steven N.

    2016-01-01

    Background High sedentary time has been considered an important chronic disease risk factor but there is only limited information on the association of specific sedentary behaviors on weekdays and weekend-days with body composition. The present study examines the prospective association of total sedentary time and specific sedentary behaviors during weekdays and the weekend with body composition in young adults. Methods A total of 332 adults (50% male; 27.7 ± 3.7 years) were followed over a period of 1 year. Time spent sedentary, excluding sleep (SED), and in physical activity (PA) during weekdays and weekend-days was objectively assessed every 3 months with a multi-sensor device over a period of at least 8 days. In addition, participants reported sitting time, TV time and non-work related time spent at the computer separately for weekdays and the weekend. Fat mass and fat free mass were assessed via dual x-ray absorptiometry and used to calculate percent body fat (%BF). Energy intake was estimated based on TDEE and change in body composition. Results Cross-sectional analyses showed a significant correlation between SED and body composition (0.18 ≤ r ≤ 0.34). Associations between body weight and specific sedentary behaviors were less pronounced and significant during weekdays only (r ≤ 0.16). Nevertheless, decrease in SED during weekends, rather than during weekdays, was significantly associated with subsequent decrease in %BF (β = 0.06, p <0.01). After adjusting for PA and energy intake, results for SED were no longer significant. Only the association between change in sitting time during weekends and subsequent %BF was independent from change in PA or energy intake (β%BF = 0.04, p = 0.01), while there was no significant association between TV or computer time and subsequent body composition. Conclusions The stronger prospective association between sedentary behavior during weekends with subsequent body composition emphasizes the importance of leisure time behavior in weight management. PMID:29546170

  7. The association between sedentary behaviors during weekdays and weekend with change in body composition in young adults.

    PubMed

    Drenowatz, Clemens; DeMello, Madison M; Shook, Robin P; Hand, Gregory A; Burgess, Stephanie; Blair, Steven N

    2016-01-01

    High sedentary time has been considered an important chronic disease risk factor but there is only limited information on the association of specific sedentary behaviors on weekdays and weekend-days with body composition. The present study examines the prospective association of total sedentary time and specific sedentary behaviors during weekdays and the weekend with body composition in young adults. A total of 332 adults (50% male; 27.7 ± 3.7 years) were followed over a period of 1 year. Time spent sedentary, excluding sleep (SED), and in physical activity (PA) during weekdays and weekend-days was objectively assessed every 3 months with a multi-sensor device over a period of at least 8 days. In addition, participants reported sitting time, TV time and non-work related time spent at the computer separately for weekdays and the weekend. Fat mass and fat free mass were assessed via dual x-ray absorptiometry and used to calculate percent body fat (%BF). Energy intake was estimated based on TDEE and change in body composition. Cross-sectional analyses showed a significant correlation between SED and body composition (0.18 ≤ r ≤ 0.34). Associations between body weight and specific sedentary behaviors were less pronounced and significant during weekdays only ( r ≤ 0.16). Nevertheless, decrease in SED during weekends, rather than during weekdays, was significantly associated with subsequent decrease in %BF ( β = 0.06, p <0.01). After adjusting for PA and energy intake, results for SED were no longer significant. Only the association between change in sitting time during weekends and subsequent %BF was independent from change in PA or energy intake (β %BF = 0.04, p = 0.01), while there was no significant association between TV or computer time and subsequent body composition. The stronger prospective association between sedentary behavior during weekends with subsequent body composition emphasizes the importance of leisure time behavior in weight management.

  8. Direct Measurements of Smartphone Screen-Time: Relationships with Demographics and Sleep.

    PubMed

    Christensen, Matthew A; Bettencourt, Laura; Kaye, Leanne; Moturu, Sai T; Nguyen, Kaylin T; Olgin, Jeffrey E; Pletcher, Mark J; Marcus, Gregory M

    2016-01-01

    Smartphones are increasingly integrated into everyday life, but frequency of use has not yet been objectively measured and compared to demographics, health information, and in particular, sleep quality. The aim of this study was to characterize smartphone use by measuring screen-time directly, determine factors that are associated with increased screen-time, and to test the hypothesis that increased screen-time is associated with poor sleep. We performed a cross-sectional analysis in a subset of 653 participants enrolled in the Health eHeart Study, an internet-based longitudinal cohort study open to any interested adult (≥ 18 years). Smartphone screen-time (the number of minutes in each hour the screen was on) was measured continuously via smartphone application. For each participant, total and average screen-time were computed over 30-day windows. Average screen-time specifically during self-reported bedtime hours and sleeping period was also computed. Demographics, medical information, and sleep habits (Pittsburgh Sleep Quality Index-PSQI) were obtained by survey. Linear regression was used to obtain effect estimates. Total screen-time over 30 days was a median 38.4 hours (IQR 21.4 to 61.3) and average screen-time over 30 days was a median 3.7 minutes per hour (IQR 2.2 to 5.5). Younger age, self-reported race/ethnicity of Black and "Other" were associated with longer average screen-time after adjustment for potential confounders. Longer average screen-time was associated with shorter sleep duration and worse sleep-efficiency. Longer average screen-times during bedtime and the sleeping period were associated with poor sleep quality, decreased sleep efficiency, and longer sleep onset latency. These findings on actual smartphone screen-time build upon prior work based on self-report and confirm that adults spend a substantial amount of time using their smartphones. Screen-time differs across age and race, but is similar across socio-economic strata suggesting that cultural factors may drive smartphone use. Screen-time is associated with poor sleep. These findings cannot support conclusions on causation. Effect-cause remains a possibility: poor sleep may lead to increased screen-time. However, exposure to smartphone screens, particularly around bedtime, may negatively impact sleep.
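
    As an illustration of the screen-time metrics described above (total minutes and average minutes per hour over a 30-day window, plus a bedtime-restricted average), the sketch below aggregates hypothetical per-hour screen-on minutes for one participant:

        import numpy as np

        rng = np.random.default_rng(7)
        # Hypothetical per-hour screen-on minutes: 30 days x 24 hours.
        minutes_per_hour = rng.integers(0, 15, size=(30, 24)).astype(float)
        minutes_per_hour[:, 1:6] = 0.0                 # assume the phone is idle overnight

        total_screen_time_h = minutes_per_hour.sum() / 60.0   # hours over the 30-day window
        average_per_hour = minutes_per_hour.mean()            # minutes per hour

        # Average screen-time restricted to assumed bedtime hours (22:00-24:00).
        bedtime_avg = minutes_per_hour[:, 22:24].mean()

        print(f"total: {total_screen_time_h:.1f} h, average: {average_per_hour:.1f} min/h, "
              f"bedtime average: {bedtime_avg:.1f} min/h")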

  9. SEMICONDUCTOR INTEGRATED CIRCUITS: A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    NASA Astrophysics Data System (ADS)

    Jizhi, Liu; Xingbi, Chen

    2009-12-01

    A new quasi-three-dimensional (quasi-3D) numerical simulation method for a high-voltage level-shifting circuit structure is proposed. The performance of the 3D structure is analyzed by combining several 2D device structures; the 2D devices lie in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy, and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases, with advantages such as saving computing time, requiring no high-end computing hardware, and being easy to operate.

  10. A curve fitting method for solving the flutter equation. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Cooper, J. L.

    1972-01-01

    A curve fitting approach was developed to solve the flutter equation for the critical flutter velocity. The psi versus nu curves are approximated by cubic and quadratic equations. The curve fitting technique utilized the first and second derivatives of psi with respect to nu. The method was tested for two structures, one structure being six times the total mass of the other structure. The algorithm never showed any tendency to diverge from the solution. The average time for the computation of a flutter velocity was 3.91 seconds on an IBM Model 50 computer for an accuracy of five per cent. For values of nu close to the critical root of the flutter equation the algorithm converged on the first attempt. The maximum number of iterations for convergence to the critical flutter velocity was five with an assumed value of nu relatively distant from the actual crossover.
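
    The thesis approximates the psi-versus-nu curve with low-order polynomials using psi and its first derivative; one minimal ingredient of such a scheme is a Hermite cubic through two (nu, psi, dpsi/dnu) samples followed by a search for the crossover psi = 0. The sample values below are invented and the sketch is illustrative, not the thesis algorithm itself:

        import numpy as np

        def hermite_cubic_roots(nu0, psi0, dpsi0, nu1, psi1, dpsi1):
            """Fit p(nu) = a3*nu^3 + a2*nu^2 + a1*nu + a0 from values and first
            derivatives at two points, then return its real roots (psi = 0 crossings)."""
            A = np.array([[nu0**3, nu0**2, nu0, 1.0],
                          [nu1**3, nu1**2, nu1, 1.0],
                          [3*nu0**2, 2*nu0, 1.0, 0.0],
                          [3*nu1**2, 2*nu1, 1.0, 0.0]])
            b = np.array([psi0, psi1, dpsi0, dpsi1])
            coeffs = np.linalg.solve(A, b)            # [a3, a2, a1, a0]
            roots = np.roots(coeffs)
            return roots[np.isreal(roots)].real

        # Hypothetical samples bracketing the crossover (psi changes sign between them).
        print(hermite_cubic_roots(0.8, 0.12, -0.30, 1.2, -0.05, -0.55))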

  11. Message-passing-interface-based parallel FDTD investigation on the EM scattering from a 1-D rough sea surface using uniaxial perfectly matched layer absorbing boundary.

    PubMed

    Li, J; Guo, L-X; Zeng, H; Han, X-B

    2009-06-01

    A message-passing-interface (MPI)-based parallel finite-difference time-domain (FDTD) algorithm for the electromagnetic scattering from a 1-D randomly rough sea surface is presented. The uniaxial perfectly matched layer (UPML) medium is adopted for truncation of FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different processors is illustrated for one sea surface realization, and the computation time of the parallel FDTD algorithm is dramatically reduced compared to a single-process implementation. Finally, some numerical results are shown, including the backscattering characteristics of sea surface for different polarization and the bistatic scattering from a sea surface with large incident angle and large wind speed.

  12. New Method to Calculate the Time Variation of the Force Field Parameter

    NASA Astrophysics Data System (ADS)

    Santiago, A.; Lara, A.; Enríquez-Rivera, O.; Caballero-Lopez, R. A.

    2018-03-01

    Galactic cosmic rays (CRs) entering the heliosphere are affected by interplanetary magnetic fields and solar wind disturbances, resulting in the modulation of the total CR flux observed in the inner heliosphere. The so-called force field model is often used to compute the galactic CR spectrum modulated by solar activity because it characterizes this process by only one parameter (the modulation potential, ϕ). In this work, we present two types of an empirical simplification (ES) method used to reconstruct the time variation of the modulation potential (Δϕ). Our ES offers a simple and fast alternative for computing Δϕ at any desired time. The first ES type is based on the empirical fact that the dependence between Δϕ and neutron monitor (NM) count rates can be parameterized by a second-degree polynomial. The second ES type is based on the assumption that there is an inverse relation between Δϕ and NM count rates. We report the parameters found for the two types, which may be used to compute Δϕ for some NMs in a very fast and efficient way. In order to test the validity of the proposed ES, we compare our results with Δϕ values obtained from the literature. Finally, we apply our method to obtain the proton and helium spectra of primary CRs near the Earth at four randomly selected times.
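
    A minimal sketch of the first ES type, fitting a second-degree polynomial between a neutron-monitor count rate and the modulation potential variation; both series below are synthetic, not the observations or parameters reported in the paper:

        import numpy as np

        rng = np.random.default_rng(3)
        # Synthetic monthly NM count rates (arbitrary units) and modulation potential
        # variations (MV); in the paper these come from observations and the literature.
        nm_rate = np.linspace(5800, 6800, 120) + 20 * rng.standard_normal(120)
        dphi = 2.1e-4 * (7000 - nm_rate)**2 + 15 * rng.standard_normal(120)

        # First ES type: second-degree polynomial dphi(nm_rate).
        p2 = np.polyfit(nm_rate, dphi, 2)
        print("polynomial coefficients:", p2)
        print("predicted dphi at 6200 counts:", round(np.polyval(p2, 6200.0), 1))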

  13. Ionospheric variation observed in Oregon Real-time GNSS network during the total eclipse of 21 August 2017

    NASA Astrophysics Data System (ADS)

    Shahbazi, A.; Park, J.; Kim, S.; Oberg, R.

    2017-12-01

    As ionospheric behavior is highly related to solar activity, the total eclipse passing across North America on 21 August 2017 was expected to significantly affect the electron density in the ionosphere along its path. Taking advantage of GNSS capability for observing total electron content (TEC), this study demonstrates the impact of the total eclipse not only on the TEC variation during the period of the event but also on GNSS positioning. The Oregon Department of Transportation (ODOT) runs a dense real-time GNSS network, referred to as the Oregon Real-time GNSS network (ORGN). From the dual-frequency GPS and GLONASS observations in the ORGN, the TEC over the network area can be extracted. We observe the vertical TEC (VTEC) from the ORGN to analyze the ionospheric condition in the local area affected by the eclipse. To observe the temporal variation, we also observe the slant TEC (STEC) in each ray path and analyze the short-term variation for the different geometry of each ray path. Although STEC depends on the changing geometry of each satellite, this approach provides insight into the ionospheric response to the total eclipse because STEC does not involve the projection error introduced by VTEC computation. During the period of the eclipse, abnormal variations in VTEC and STEC are expected. The experimental results will be presented as time series plots for selected stations as well as a regional TEC map of Oregon. In addition to the TEC monitoring, we also test the positioning results of ORGN stations through Precise Point Positioning (PPP) and relative positioning. The expected result is that both positioning results are degraded during the solar eclipse due to the unstable ionospheric conditions over short time scales.
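
    For reference, the slant TEC used in such an analysis can be estimated from dual-frequency code pseudoranges through the standard geometry-free combination. The sketch below ignores inter-frequency biases and smoothing, and the pseudoranges are invented:

        # Geometry-free combination of GPS L1/L2 code pseudoranges:
        # STEC = (P2 - P1) * f1^2 * f2^2 / (40.3 * (f1^2 - f2^2)), in electrons/m^2.
        F1, F2 = 1575.42e6, 1227.60e6          # GPS L1 and L2 carrier frequencies, Hz

        def slant_tec_tecu(p1_m, p2_m):
            """Slant TEC in TEC units (1 TECU = 1e16 el/m^2); receiver/satellite biases ignored."""
            stec = (p2_m - p1_m) * F1**2 * F2**2 / (40.3 * (F1**2 - F2**2))
            return stec / 1e16

        # Hypothetical pseudoranges differing mainly by the ionospheric delay.
        print(f"{slant_tec_tecu(22345678.10, 22345681.35):.1f} TECU")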

  14. Accounting utility for determining individual usage of production level software systems

    NASA Technical Reports Server (NTRS)

    Garber, S. C.

    1984-01-01

    An accounting package was developed which determines the computer resources utilized by a user during the execution of a particular program and updates a file containing accumulated resource totals. The accounting package is divided into two separate programs. The first program determines the total amount of computer resources utilized by a user during the execution of a particular program. The second program uses these totals to update a file containing accumulated totals of computer resources utilized by a user for a particular program. This package is useful to those persons who have several other users continually accessing and running programs from their accounts. The package provides the ability to determine which users are accessing and running specified programs along with their total level of usage.
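
    A minimal modern sketch of the same two-step idea (measure the resources used by this run, then fold them into a per-user, per-program totals file); the file name, fields, and units are hypothetical, and the resource module is Unix-specific:

        import json, os, resource

        TOTALS_FILE = "usage_totals.json"      # hypothetical accumulated-totals file

        def record_usage(user, program):
            """Add this process's CPU time (seconds) to the user's total for a program."""
            usage = resource.getrusage(resource.RUSAGE_SELF)
            cpu_seconds = usage.ru_utime + usage.ru_stime

            totals = {}
            if os.path.exists(TOTALS_FILE):
                with open(TOTALS_FILE) as f:
                    totals = json.load(f)
            entry = totals.setdefault(f"{user}:{program}", {"runs": 0, "cpu_seconds": 0.0})
            entry["runs"] += 1
            entry["cpu_seconds"] += cpu_seconds
            with open(TOTALS_FILE, "w") as f:
                json.dump(totals, f, indent=2)

        record_usage("alice", "flutter_solver")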

  15. A general method for computing the total solar radiation force on complex spacecraft structures

    NASA Technical Reports Server (NTRS)

    Chan, F. K.

    1981-01-01

    The method circumvents many of the existing difficulties in computational logic presently encountered in the direct analytical or numerical evaluation of the appropriate surface integral. It may be applied to complex spacecraft structures for computing the total force arising from either specular or diffuse reflection or even from non-Lambertian reflection and re-radiation.
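
    For orientation, the per-element contribution that such a surface integral accumulates is often written, for a flat element of area dA with specular and diffuse reflectivities ρs and ρd, in the standard flat-plate form below (a textbook model, not necessarily the paper's exact formulation):

        d\mathbf{F} = -\frac{\Phi}{c}\cos\theta
          \left[ (1-\rho_s)\,\hat{\mathbf{s}}
                 + 2\left(\rho_s\cos\theta + \frac{\rho_d}{3}\right)\hat{\mathbf{n}} \right] dA

    where Φ is the solar flux, c the speed of light, ŝ the unit vector from the surface toward the Sun, n̂ the outward surface normal, and θ the angle between ŝ and n̂; the total force follows by integrating over the illuminated portion of the structure.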

  16. Computation of viscous blast wave flowfields

    NASA Technical Reports Server (NTRS)

    Atwood, Christopher A.

    1991-01-01

    A method to determine unsteady solutions of the Navier-Stokes equations was developed and applied. The structured finite-volume, approximately factored implicit scheme uses Newton subiterations to obtain the spatially and temporally second-order accurate time history of the interaction of blast waves with stationary targets. The inviscid flux is evaluated using MacCormack's modified Steger-Warming flux or Roe flux-difference splittings with total variation diminishing limiters, while the viscous flux is computed using central differences. The use of implicit boundary conditions in conjunction with a telescoping-in-time-and-space method permitted solutions to this strongly unsteady class of problems. Comparisons of numerical, analytical, and experimental results were made in two and three dimensions. These comparisons revealed accurate wave-speed resolution with nonoscillatory discontinuity capturing. The purpose of this effort was to address the three-dimensional, viscous blast-wave problem. Test cases were undertaken to reveal these methods' weaknesses in three regimes: (1) viscous-dominated flow; (2) complex unsteady flow; and (3) three-dimensional flow. Comparisons of these computations to analytic and experimental results provided initial validation of the resultant code. Additional details on the numerical method and on the validation can be found in the appendix. Presently, the code is capable of single-zone computations with selection of any permutation of solid-wall or flow-through boundaries.

  17. LOSCAR: Long-term Ocean-atmosphere-Sediment CArbon cycle Reservoir Model

    NASA Astrophysics Data System (ADS)

    Zeebe, R. E.

    2011-06-01

    The LOSCAR model is designed to efficiently compute the partitioning of carbon between ocean, atmosphere, and sediments on time scales ranging from centuries to millions of years. While a variety of computationally inexpensive carbon cycle models are already available, many are missing a critical sediment component, which is indispensable for long-term integrations. One of LOSCAR's strengths is the coupling of ocean-atmosphere routines to a computationally efficient sediment module. This allows, for instance, adequate computation of CaCO3 dissolution, calcite compensation, and long-term carbon cycle fluxes, including weathering of carbonate and silicate rocks. The ocean component includes various biogeochemical tracers such as total carbon, alkalinity, phosphate, oxygen, and stable carbon isotopes. We have previously published applications of the model tackling future projections of ocean chemistry and weathering, pCO2 sensitivity to carbon cycle perturbations throughout the Cenozoic, and carbon/calcium cycling during the Paleocene-Eocene Thermal Maximum. The focus of the present contribution is the detailed description of the model including numerical architecture, processes and parameterizations, tuning, and examples of input and output. Typical CPU integration times of LOSCAR are of order seconds for several thousand model years on current standard desktop machines. The LOSCAR source code in C can be obtained from the author by sending a request to loscar.model@gmail.com.

  18. Uniform-large Area BaSrTiO3 Growth and Novel Material Designs to Enable Fabrication of High Quality, Affordable, and Performance Consistent Phase Shifters for OTM Phased Array Antennas

    DTIC Science & Technology

    2012-07-11

    molar flux of each precursor entering the reactor. The molar fluxes for Ba, Sr, and Ti are measured and computed in real-time, and these measured values ... allows control of the relative amounts of Ba, Sr, and Ti, and the overall total mass flow in umole/min reaching the substrate. In all, there are three ... is the Ba:Sr ratio with depth (from the top of the film). The ratio of Ba to Sr was controlled from 0.87 to 0.43. The total film thickness is 130 nm

  19. A DAG Scheduling Scheme on Heterogeneous Computing Systems Using Tuple-Based Chemical Reaction Optimization

    PubMed Central

    Jiang, Yuyi; Shao, Zhiqing; Guo, Yi

    2014-01-01

    A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a very recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule for accelerating convergence. In this paper, we have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and graphs for real-world problems. PMID:25143977
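
    Independently of the CRO metaheuristic, every candidate solution is scored by the schedule's total execution time (makespan). The sketch below evaluates that quantity for one task-to-node assignment on a small DAG with heterogeneous per-node execution times, ignoring inter-node communication costs; the graph, costs, and assignment are all hypothetical:

        import collections

        edges = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}   # DAG: A -> {B, C} -> D
        exec_time = {   # task -> {node: execution time}, heterogeneous costs
            "A": {0: 2.0, 1: 3.0}, "B": {0: 4.0, 1: 2.5},
            "C": {0: 3.0, 1: 3.5}, "D": {0: 1.5, 1: 1.0},
        }
        assignment = {"A": 0, "B": 1, "C": 0, "D": 1}    # candidate solution to score

        def topological_order(edges):
            indeg = collections.Counter(v for vs in edges.values() for v in vs)
            queue = [u for u in edges if indeg[u] == 0]
            order = []
            while queue:
                u = queue.pop()
                order.append(u)
                for v in edges[u]:
                    indeg[v] -= 1
                    if indeg[v] == 0:
                        queue.append(v)
            return order

        def makespan(edges, exec_time, assignment):
            parents = collections.defaultdict(list)
            for u, vs in edges.items():
                for v in vs:
                    parents[v].append(u)
            finish, node_free = {}, collections.defaultdict(float)
            for task in topological_order(edges):
                node = assignment[task]
                ready = max((finish[p] for p in parents[task]), default=0.0)
                start = max(ready, node_free[node])          # wait for parents and the node
                finish[task] = start + exec_time[task][node]
                node_free[node] = finish[task]
            return max(finish.values())

        print("makespan:", makespan(edges, exec_time, assignment))   # 6.0 for this assignment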

  20. A DAG scheduling scheme on heterogeneous computing systems using tuple-based chemical reaction optimization.

    PubMed

    Jiang, Yuyi; Shao, Zhiqing; Guo, Yi

    2014-01-01

    A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Finding an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a recently proposed metaheuristic, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is better matched to the problem. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG), and the super molecule to accelerate convergence. Simulation experiments on a large set of randomly generated graphs and on graphs from real-world problems verify the effectiveness and efficiency of TMSCRO.
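
    As a point of reference for the scheduling problem described above (not the TMSCRO metaheuristic itself), the sketch below assigns the tasks of a small, hypothetical DAG to two heterogeneous processors with a plain greedy earliest-finish-time rule; all task costs are made up and inter-task communication costs are ignored.

        # Reference sketch for the scheduling problem above (not TMSCRO): greedily
        # place each task of a small DAG on whichever of two heterogeneous
        # processors finishes it earliest. Costs are invented; communication is ignored.
        from collections import defaultdict

        cost = {"A": [4, 6], "B": [3, 5], "C": [7, 2], "D": [2, 3]}   # time on proc 0 / proc 1
        deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}     # task -> predecessors
        order = ["A", "B", "C", "D"]                                  # a topological order

        proc_ready = defaultdict(float)   # time at which each processor becomes free
        finish, placement = {}, {}        # per-task finish time and chosen processor

        for task in order:
            best = None
            for p, t_exec in enumerate(cost[task]):
                # A task can start once its predecessors are done and the processor is free.
                start = max([proc_ready[p]] + [finish[d] for d in deps[task]])
                if best is None or start + t_exec < best[0]:
                    best = (start + t_exec, p)
            finish[task], placement[task] = best
            proc_ready[best[1]] = best[0]

        print("placement:", placement, "makespan:", max(finish.values()))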

  1. Effects of Nose Bluntness on Hypersonic Boundary-Layer Receptivity and Stability Over Cones

    NASA Technical Reports Server (NTRS)

    Kara, Kursat; Balakumar, Ponnampalam; Kandil, Osama A.

    2011-01-01

    The receptivity to freestream acoustic disturbances and the stability properties of hypersonic boundary layers are numerically investigated for boundary-layer flows over a 5° straight cone at a freestream Mach number of 6.0. To compute the shock and the interaction of the shock with the instability waves, the Navier-Stokes equations in axisymmetric coordinates were solved. In the governing equations, inviscid and viscous flux vectors are discretized using a fifth-order accurate weighted-essentially-non-oscillatory scheme. A third-order accurate total-variation-diminishing Runge-Kutta scheme is employed for time integration. After the mean flow field is computed, disturbances are introduced at the upstream end of the computational domain. The appearance of instability waves near the nose region and the receptivity of the boundary layer with respect to slow mode acoustic waves are investigated. Computations confirm the stabilizing effect of nose bluntness and the role of the entropy layer in delaying boundary-layer transition. The current solutions, compared with experimental observations and other computational results, exhibit good agreement.
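
    The third-order total-variation-diminishing Runge-Kutta time integration mentioned above can be illustrated with the standard three-stage (Shu-Osher) update applied to a generic semi-discrete right-hand side. The sketch below pairs it with a simple first-order upwind discretization of 1-D linear advection rather than the paper's fifth-order WENO scheme; the grid size, time step, and initial pulse are arbitrary choices.

        # Sketch of a third-order TVD (SSP) Runge-Kutta time step, applied here to
        # 1-D linear advection with a first-order upwind flux. The paper's WENO
        # spatial scheme is not reproduced; all parameters are illustrative.
        import numpy as np

        a, nx, dt = 1.0, 200, 0.004          # advection speed, grid points, time step
        dx = 1.0 / nx                        # CFL = a*dt/dx = 0.8
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        u = np.exp(-200.0 * (x - 0.5) ** 2)  # smooth initial pulse

        def rhs(u):
            """Semi-discrete RHS: -a * du/dx with periodic first-order upwind differences."""
            return -a * (u - np.roll(u, 1)) / dx

        for _ in range(100):                 # advance 100 time steps
            u1 = u + dt * rhs(u)
            u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
            u = u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

        print("pulse maximum after 100 steps:", float(u.max()))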

  2. Real-Time, Multiple, Pan/Tilt/Zoom, Computer Vision Tracking, and 3D Position Estimating System for Unmanned Aerial System Metrology

    DTIC Science & Technology

    2013-10-18

    …of the enclosed tasks plus the last parallel task, for a total of five parallel tasks for one iteration i (for j = 1…N, for i = 1…8)…

  3. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining

    PubMed Central

    Mendikute, Alberto; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-01-01

    Photogrammetry methods are increasingly used as a 3D technique for large-scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, with measuring traceability provided by precise off-process pre-calibrated digital cameras and scale bars. From the 2D target image coordinates, the target 3D coordinates and camera views are jointly computed. One application of photogrammetry is the measurement of raw part surfaces prior to machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time-consuming and user-dependent iterative review and re-processing procedures until an adequate set of images is taken, which limits its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer-grade desktop PC, enabling quasi-real-time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach because of its potential for the highest precision when using low-cost, non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special-purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image, up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g., 0.1 mm error in 1 m) and an error RMS below 0.2 pixels at the image plane, matching the performance reported for portable photogrammetry with precise off-process pre-calibrated cameras. PMID:28891946

  4. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining.

    PubMed

    Mendikute, Alberto; Yagüe-Fabra, José A; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-09-09

    Photogrammetry methods are increasingly used as a 3D technique for large-scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, with measuring traceability provided by precise off-process pre-calibrated digital cameras and scale bars. From the 2D target image coordinates, the target 3D coordinates and camera views are jointly computed. One application of photogrammetry is the measurement of raw part surfaces prior to machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time-consuming and user-dependent iterative review and re-processing procedures until an adequate set of images is taken, which limits its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer-grade desktop PC, enabling quasi-real-time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach because of its potential for the highest precision when using low-cost, non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special-purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image, up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g., 0.1 mm error in 1 m) and an error RMS below 0.2 pixels at the image plane, matching the performance reported for portable photogrammetry with precise off-process pre-calibrated cameras.
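
    The bundle adjustment problem at the core of the papers above can be sketched as a nonlinear least-squares fit of camera poses and 3-D target coordinates to observed 2-D image coordinates. The toy example below uses two synthetic views, a known focal length, and no lens distortion, so it omits the self-calibration the paper adds; all poses, targets, and the focal length are placeholders.

        # Toy bundle adjustment: refine one free camera pose and the 3-D target
        # coordinates by minimising 2-D reprojection error over two synthetic views.
        # Camera 1 is fixed at the origin as the gauge. Everything is a placeholder.
        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        f = 1000.0                                            # focal length in pixels, assumed
        rng = np.random.default_rng(0)
        pts_true = rng.uniform(-1.0, 1.0, (12, 3)) + [0.0, 0.0, 6.0]
        rvec2, tvec2 = np.array([0.0, 0.3, 0.0]), np.array([-1.5, 0.0, 0.0])

        def project(points, rvec, tvec):
            cam = Rotation.from_rotvec(rvec).apply(points) + tvec
            return f * cam[:, :2] / cam[:, 2:3]

        obs1 = project(pts_true, np.zeros(3), np.zeros(3))    # "measured" view 1
        obs2 = project(pts_true, rvec2, tvec2)                # "measured" view 2

        def residuals(params):
            rv, tv = params[:3], params[3:6]
            points = params[6:].reshape(-1, 3)
            r1 = project(points, np.zeros(3), np.zeros(3)) - obs1
            r2 = project(points, rv, tv) - obs2
            return np.concatenate([r1.ravel(), r2.ravel()])

        x0 = np.concatenate([np.zeros(6),
                             (pts_true + 0.05 * rng.normal(size=pts_true.shape)).ravel()])
        sol = least_squares(residuals, x0)
        print("RMS reprojection error (px):", float(np.sqrt(np.mean(sol.fun ** 2))))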

  5. Frequency of postural changes during sitting whilst using a desktop computer--exploring an analytical methodology.

    PubMed

    Niekerk, Sjan-Mari van; Louw, Quinette Abigail; Grimmer-Sommers, Karen

    2014-01-01

    Dynamic movement whilst sitting is advocated as a way to reduce musculoskeletal symptoms from seated activities. Conventionally, in ergonomics research, only a 'snapshot' of static sitting posture is captured, which does not provide information on the number or type of movements over a period of time. A novel approach to analysing the number of postural changes whilst sitting was employed in order to describe the sitting behaviour of adolescents whilst undertaking computing activities. A repeated-measures observational study was conducted. A total of 12 high school students were randomly selected from a conveniently selected school. Fifteen minutes of 3D posture measurements were recorded to determine the number of postural changes whilst using computers. Data from 11 students could be analysed. Large intra-subject variation of the median and IQR was observed, indicating frequent postural changes whilst sitting. Better understanding of usual dynamic postural movements whilst sitting will provide new insights into the causes of musculoskeletal symptoms experienced by computer users.

  6. Parallel computing in genomic research: advances and applications

    PubMed Central

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today’s genomic experiments have to process the so-called “biological big data” that now reaches the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use only their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires scientists to have the expertise to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic literature review surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that scientists can consider when running their genomic experiments in order to benefit from parallelism techniques and HPC capabilities. PMID:26604801

  7. Computational Study of Near-limit Propagation of Detonation in Hydrogen-air Mixtures

    NASA Technical Reports Server (NTRS)

    Yungster, S.; Radhakrishnan, K.

    2002-01-01

    A computational investigation of the near-limit propagation of detonation in lean and rich hydrogen-air mixtures is presented. The calculations were carried out over an equivalence ratio range of 0.4 to 5.0, pressures ranging from 0.2 bar to 1.0 bar and ambient initial temperature. The computations involved solution of the one-dimensional Euler equations with detailed finite-rate chemistry. The numerical method is based on a second-order spatially accurate total-variation-diminishing (TVD) scheme, and a point implicit, first-order-accurate, time marching algorithm. The hydrogen-air combustion was modeled with a 9-species, 19-step reaction mechanism. A multi-level, dynamically adaptive grid was utilized in order to resolve the structure of the detonation. The results of the computations indicate that when hydrogen concentrations are reduced below certain levels, the detonation wave switches from a high-frequency, low amplitude oscillation mode to a low frequency mode exhibiting large fluctuations in the detonation wave speed; that is, a 'galloping' propagation mode is established.

  8. Computational Modelling of Patella Femoral Kinematics During Gait Cycle and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Maiti, Raman

    2016-06-01

    The effect of loading and boundary conditions on patellar mechanics is significant due to the complications arising in patella femoral joints during total knee replacements. To understand the patellar mechanics with respect to loading and motion, a computational model representing the patella femoral joint was developed and validated against experimental results. The computational model was created in IDEAS NX and simulated in MSC ADAMS/VIEW software. The results obtained in the form of internal external rotations and anterior posterior displacements for a new and experimentally simulated specimen for patella femoral joint under standard gait condition were compared with experimental measurements performed on the Leeds ProSim knee simulator. A good overall agreement between the computational prediction and the experimental data was obtained for patella femoral kinematics. Good agreement between the model and the past studies was observed when the ligament load was removed and the medial lateral displacement was constrained. The model is sensitive to ±5 % change in kinematics, frictional, force and stiffness coefficients and insensitive to time step.

  9. Computational Modelling of Patella Femoral Kinematics During Gait Cycle and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Maiti, Raman

    2018-06-01

    The effect of loading and boundary conditions on patellar mechanics is significant due to the complications arising in patella femoral joints during total knee replacements. To understand the patellar mechanics with respect to loading and motion, a computational model representing the patella femoral joint was developed and validated against experimental results. The computational model was created in IDEAS NX and simulated in MSC ADAMS/VIEW software. The results obtained in the form of internal external rotations and anterior posterior displacements for a new and experimentally simulated specimen for patella femoral joint under standard gait condition were compared with experimental measurements performed on the Leeds ProSim knee simulator. A good overall agreement between the computational prediction and the experimental data was obtained for patella femoral kinematics. Good agreement between the model and the past studies was observed when the ligament load was removed and the medial lateral displacement was constrained. The model is sensitive to ±5 % change in kinematics, frictional, force and stiffness coefficients and insensitive to time step.

  10. Parallel computing in genomic research: advances and applications.

    PubMed

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process the so-called "biological big data" that now reaches the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use only their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires scientists to have the expertise to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic literature review surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that scientists can consider when running their genomic experiments in order to benefit from parallelism techniques and HPC capabilities.
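
    The basic pattern behind the parallelism techniques surveyed above can be illustrated at desktop scale with Python's multiprocessing module: an independent per-file task is mapped across worker processes, the same embarrassingly parallel structure that cluster and cloud schedulers exploit for much larger genomic workloads. The GC-content task and the tiny FASTA files below are toy placeholders.

        # Desktop-scale illustration of the parallelism idea above: map an
        # independent per-file task across worker processes with multiprocessing.
        # The GC-content task and the tiny FASTA files are toy placeholders.
        from multiprocessing import Pool
        from pathlib import Path

        def gc_content(path):
            """Toy per-file task: fraction of G/C bases in a FASTA file."""
            seq = "".join(line.strip().upper()
                          for line in Path(path).read_text().splitlines()
                          if not line.startswith(">"))
            return path, (seq.count("G") + seq.count("C")) / max(len(seq), 1)

        if __name__ == "__main__":
            # Write three tiny placeholder FASTA files so the sketch is self-contained.
            files = []
            for i, seq in enumerate(["ACGTGGCC", "ATATATCG", "GGGCCCAT"]):
                p = Path(f"sample{i}.fasta")
                p.write_text(f">seq{i}\n{seq}\n")
                files.append(str(p))
            with Pool(processes=3) as pool:
                for path, gc in pool.map(gc_content, files):
                    print(f"{path}: GC fraction {gc:.3f}")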

  11. Polimedication: applicability of a computer tool to reduce polypharmacy in nursing homes.

    PubMed

    García-Caballero, Tomás M; Lojo, Juan; Menéndez, Carlos; Fernández-Álvarez, Roberto; Mateos, Raimundo; Garcia-Caballero, Alejandro

    2018-05-11

    Background: The risks of polypharmacy can be far greater than the benefits, especially in the elderly. Comorbidity makes polypharmacy very prevalent in this population, thus increasing the occurrence of adverse effects. To solve this problem, the most common strategy is to use lists of potentially inappropriate medications. However, this strategy is time-consuming. In order to minimize the expenditure of time, our group devised a pilot computer tool (Polimedication) that automatically processes lists of medication, providing the corresponding Screening Tool of Older Persons' potentially inappropriate Prescriptions (STOPP) alerts and facilitating standardized reports. The drug lists for 115 residents in Santa Marta Nursing Home (Fundación San Rosendo, Ourense, Spain) were processed. The program detected 10.04 alerts/patient, of which 74.29% were not repeated. After reviewing these alerts, 12.12% of the total (1.30 alerts/patient) were considered relevant. The largest number of alerts (41.48%) involved neuroleptic drugs. Finally, the patient's family physician or psychiatrist accepted the alert and made medication changes in 62.86% of the relevant alerts. The largest number of changes (38.64%) also involved neuroleptic drugs. The mean time spent in the generation and review of the warnings was 6.26 minutes/patient. Total changes represented a saving of €32.77 per resident per year in medication. The Polimedication tool detected a high proportion of potentially inappropriate prescriptions in institutionalized elderly patients. The use of the computerized tool achieved significant savings in pharmaceutical expenditure, as well as a reduction in the time taken for medication review.

  12. Tracking real-time neural activation of conceptual knowledge using single-trial event-related potentials.

    PubMed

    Amsel, Ben D

    2011-04-01

    Empirically derived semantic feature norms categorized into different types of knowledge (e.g., visual, functional, auditory) can be summed to create number-of-feature counts per knowledge type. Initial evidence suggests several such knowledge types may be recruited during language comprehension. The present study provides a more detailed understanding of the timecourse and intensity of influence of several such knowledge types on real-time neural activity. A linear mixed-effects model was applied to single trial event-related potentials for 207 visually presented concrete words measured on total number of features (semantic richness), imageability, and number of visual motion, color, visual form, smell, taste, sound, and function features. Significant influences of multiple feature types occurred before 200ms, suggesting parallel neural computation of word form and conceptual knowledge during language comprehension. Function and visual motion features most prominently influenced neural activity, underscoring the importance of action-related knowledge in computing word meaning. The dynamic time courses and topographies of these effects are most consistent with a flexible conceptual system wherein temporally dynamic recruitment of representations in modal and supramodal cortex are a crucial element of the constellation of processes constituting word meaning computation in the brain. Copyright © 2011 Elsevier Ltd. All rights reserved.
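
    A schematic of the single-trial analysis described above is a linear mixed-effects model relating ERP amplitude to feature-type counts with a random intercept per subject. The sketch below uses statsmodels on synthetic data; the column names stand in for the study's predictors and imply nothing about its actual variables or effect sizes.

        # Schematic single-trial analysis: linear mixed-effects model of ERP
        # amplitude on feature-type counts with a random intercept per subject.
        # All data are synthetic; column names are illustrative stand-ins.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 600
        df = pd.DataFrame({
            "subject": rng.integers(0, 20, n),        # 20 hypothetical subjects
            "n_features": rng.poisson(12, n),         # overall semantic richness
            "visual_motion": rng.poisson(2, n),
            "function_feats": rng.poisson(3, n),
        })
        subj_eff = rng.normal(0.0, 1.0, 20)           # simulated subject intercepts
        df["amplitude"] = (0.3 * df["visual_motion"] + 0.2 * df["function_feats"]
                           + subj_eff[df["subject"].to_numpy()] + rng.normal(0.0, 2.0, n))

        model = smf.mixedlm("amplitude ~ n_features + visual_motion + function_feats",
                            df, groups=df["subject"])
        print(model.fit().summary())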

  13. HMI Data Driven Magnetohydrodynamic Model Predicted Active Region Photospheric Heating Rates: Their Scale Invariant, Flare Like Power Law Distributions, and Their Possible Association With Flares

    NASA Technical Reports Server (NTRS)

    Goodman, Michael L.; Kwan, Chiman; Ayhan, Bulent; Shang, Eric L.

    2017-01-01

    There are many flare forecasting models. For an excellent review and comparison of some of them see Barnes et al. (2016). All these models are successful to some degree, but there is a need for better models. We claim the most successful models explicitly or implicitly base their forecasts on various estimates of components of the photospheric current density J, based on observations of the photospheric magnetic field B. However, none of the models we are aware of compute the complete J. We seek to develop a better model based on computing the complete photospheric J. Initial results from this model are presented in this talk. We present a data-driven, near-photospheric, 3D, non-force-free magnetohydrodynamic (MHD) model that computes time series of the total J and the associated resistive heating rate in each pixel at the photosphere in the neutral line regions (NLRs) of 14 active regions (ARs). The model is driven by time series of B measured by the Helioseismic & Magnetic Imager (HMI) on the Solar Dynamics Observatory (SDO) satellite. Spurious Doppler periods due to SDO orbital motion are filtered out of the time series of B in every AR pixel. Errors in B due to these periods can be significant.
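
    The central quantity discussed above, the complete current density J, follows from Ampère's law as J = (1/μ0) ∇×B. The sketch below evaluates this curl by finite differences on a small synthetic, uniformly gridded field and forms an illustrative resistive heating rate η|J|²; the grid spacing, test field, and resistivity are placeholders rather than HMI-derived values.

        # Sketch: current density J = curl(B)/mu0 and a resistive heating rate
        # eta*|J|^2, evaluated by finite differences on a synthetic gridded field.
        import numpy as np

        MU0 = 4e-7 * np.pi
        dx = dy = dz = 1e5            # grid spacing in metres (assumed)

        # Synthetic B components on a small 3-D grid (tesla), indexed (z, y, x).
        z, y, x = np.meshgrid(np.arange(8), np.arange(8), np.arange(8), indexing="ij")
        Bx, By, Bz = 1e-2 * np.sin(0.3 * y), 1e-2 * np.cos(0.3 * x), 0.0 * x

        # np.gradient returns derivatives along each axis, here in the order (z, y, x).
        dBx_dz, dBx_dy, dBx_dx = np.gradient(Bx, dz, dy, dx)
        dBy_dz, dBy_dy, dBy_dx = np.gradient(By, dz, dy, dx)
        dBz_dz, dBz_dy, dBz_dx = np.gradient(Bz, dz, dy, dx)

        Jx = (dBz_dy - dBy_dz) / MU0
        Jy = (dBx_dz - dBz_dx) / MU0
        Jz = (dBy_dx - dBx_dy) / MU0

        eta = 1.0                      # resistivity in ohm*m, placeholder
        Q = eta * (Jx**2 + Jy**2 + Jz**2)
        print("peak heating rate (W/m^3):", float(Q.max()))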

  14. Graphics Processing Unit-Accelerated Nonrigid Registration of MR Images to CT Images During CT-Guided Percutaneous Liver Tumor Ablations.

    PubMed

    Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G; Shekher, Raj; Hata, Nobuhiko

    2015-06-01

    Accuracy and speed are essential for the intraprocedural nonrigid magnetic resonance (MR) to computed tomography (CT) image registration in the assessment of tumor margins during CT-guided liver tumor ablations. Although both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique on the basis of volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of GPU-accelerated volume subdivision-based nonrigid registration technique to the conventional nonrigid B-spline registration technique. Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of ROI was used only for the B-spline technique. Registration accuracies (Dice similarity coefficient [DSC] and 95% Hausdorff distance [HD]) and total processing time including contouring of ROIs and computation were compared using a paired Student t test. Accuracies of the GPU-accelerated registrations and B-spline registrations, respectively, were 88.3 ± 3.7% versus 89.3 ± 4.9% (P = .41) for DSC and 13.1 ± 5.2 versus 11.4 ± 6.3 mm (P = .15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 versus 557 ± 116 seconds (P < .000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (P = .71). The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. The GPU-accelerated volume subdivision technique may enable the implementation of nonrigid registration into routine clinical practice. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
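
    The two accuracy metrics quoted above can be computed as follows: the Dice similarity coefficient from binary segmentation masks, and a 95% Hausdorff distance from the corresponding point sets. The masks below are small synthetic rectangles used purely to exercise the formulas.

        # Sketch of the overlap metrics used above: Dice similarity coefficient on
        # binary masks and a 95% Hausdorff distance on the mask point sets.
        import numpy as np
        from scipy.spatial.distance import cdist

        def dice(mask_a, mask_b):
            inter = np.logical_and(mask_a, mask_b).sum()
            return 2.0 * inter / (mask_a.sum() + mask_b.sum())

        def hausdorff95(pts_a, pts_b):
            d = cdist(pts_a, pts_b)                       # all pairwise distances
            return max(np.percentile(d.min(axis=1), 95),  # a -> b directed distance
                       np.percentile(d.min(axis=0), 95))  # b -> a directed distance

        a = np.zeros((32, 32), bool); a[8:24, 8:24] = True
        b = np.zeros((32, 32), bool); b[10:26, 9:25] = True
        print("DSC:", round(dice(a, b), 3))

        pa = np.argwhere(a).astype(float)                 # voxel coordinates as points
        pb = np.argwhere(b).astype(float)
        print("HD95 (voxels):", round(hausdorff95(pa, pb), 2))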

  15. Neighborhood disorder and screen time among 10-16 year old Canadian youth: A cross-sectional study

    PubMed Central

    2012-01-01

    Background Screen time activities (e.g., television, computers, video games) have been linked to several negative health outcomes among young people. In order to develop evidence-based interventions to reduce screen time, the factors that influence the behavior need to be better understood. High neighborhood disorder, which may encourage young people to stay indoors where screen time activities are readily available, is one potential factor to consider. Methods Results are based on 15,917 youth in grades 6-10 (aged 10-16 years old) who participated in the Canadian 2009/10 Health Behaviour in School-aged Children Survey (HBSC). Total hours per week of television, video games, and computer use were reported by the participating students in the HBSC student questionnaire. Ten items of neighborhood disorder including safety, neighbors taking advantage, drugs/drinking in public, ethnic tensions, gangs, crime, conditions of buildings/grounds, abandoned buildings, litter, and graffiti were measured using the HBSC student questionnaire, the HBSC administrator questionnaire, and Geographic Information Systems. Based upon these 10 items, social and physical neighborhood disorder variables were derived using principal component analysis. Multivariate multilevel logistic regression analyses were used to examine the relationship between social and physical neighborhood disorder and individual screen time variables. Results High (top quartile) social neighborhood disorder was associated with approximately 35-45% increased risk of high (top quartile) television, computer, and video game use. Physical neighborhood disorder was not associated with screen time activities after adjusting for social neighborhood disorder. However, high social and physical neighborhood disorder combined was associated with approximately 40-60% increased likelihood of high television, computer, and video game use. Conclusion High neighborhood disorder is one environmental factor that may be important to consider for future public health interventions and strategies aiming to reduce screen time among youth. PMID:22651908

  16. High-frequency video capture and a computer program with frame-by-frame angle determination functionality as tools that support judging in artistic gymnastics.

    PubMed

    Omorczyk, Jarosław; Nosiadek, Leszek; Ambroży, Tadeusz; Nosiadek, Andrzej

    2015-01-01

    The main aim of this study was to verify the usefulness of selected simple methods of recording and fast biomechanical analysis performed by judges of artistic gymnastics in assessing a gymnast's movement technique. The study participants comprised six artistic gymnastics judges, who assessed back handsprings using two methods: a real-time observation method and a frame-by-frame video analysis method. They also determined flexion angles of the knee and hip joints using the computer program. In the case of the real-time observation method, the judges gave a total of 5.8 error points with an arithmetic mean of 0.16 points for flexion of the knee joints. In the high-speed video analysis method, the total amounted to 8.6 error points and the mean value to 0.24 error points. For excessive flexion of the hip joints, the sum of the error values was 2.2 error points and the arithmetic mean was 0.06 error points during real-time observation; the sum obtained using the frame-by-frame analysis method equaled 10.8 and the mean equaled 0.30 error points. Error values obtained through frame-by-frame video analysis of movement technique were higher than those obtained through real-time observation. The judges were able to indicate the frame in which maximal joint flexion occurred with good accuracy. However, both the real-time observation method and high-speed video analysis performed without determining the exact joint angles were found to be insufficient tools for improving the quality of judging.
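
    The frame-by-frame angle determination used by the judges' software reduces, in its simplest form, to computing the angle at a joint from three digitised landmarks in a video frame. The sketch below does this for a knee from hypothetical 2-D pixel coordinates; it illustrates the geometry only and is not the study's program.

        # Sketch: joint angle from three digitised landmarks (hip, knee, ankle) in
        # one video frame. Coordinates are illustrative pixel positions.
        import numpy as np

        def joint_angle(proximal, joint, distal):
            """Angle (degrees) at `joint` between the two limb segments."""
            v1 = np.asarray(proximal, float) - np.asarray(joint, float)
            v2 = np.asarray(distal, float) - np.asarray(joint, float)
            cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

        hip, knee, ankle = (312, 140), (330, 260), (340, 385)   # pixel coordinates
        angle = joint_angle(hip, knee, ankle)
        print(f"included knee angle: {angle:.1f} deg, flexion: {180 - angle:.1f} deg")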

  17. GPU-accelerated compressed-sensing (CS) image reconstruction in chest digital tomosynthesis (CDT) using CUDA programming

    NASA Astrophysics Data System (ADS)

    Choi, Sunghoon; Lee, Haenghwa; Lee, Donghoon; Choi, Seungyeon; Shin, Jungwook; Jang, Woojin; Seo, Chang-Woo; Kim, Hee-Joung

    2017-03-01

    A compressed-sensing (CS) technique has been rapidly applied in the medical imaging field for retrieving volumetric data from highly under-sampled projections. Among its many variants, the CS technique based on a total-variation (TV) regularization strategy shows reasonable results in cone-beam geometry. In this study, we implemented the TV-based CS image reconstruction strategy in our prototype chest digital tomosynthesis (CDT) R/F system. Due to the iterative nature of the time-consuming process of solving the cost function, we took advantage of parallel computing using graphics processing units (GPUs) with compute unified device architecture (CUDA) programming to accelerate our algorithm. In order to compare the algorithmic performance of our proposed CS algorithm, conventional filtered back-projection (FBP) and simultaneous algebraic reconstruction technique (SART) reconstruction schemes were also studied. The results indicated that CS produced contrast-to-noise ratios (CNRs) in the physical phantom images (Teflon region of interest) that were higher than those of FBP and SART images by factors of 3.91 and 1.93, respectively. The resulting human chest phantom images, which included lung nodules of different diameters, also showed better visual appearance with CS. Our proposed GPU-accelerated CS reconstruction scheme could produce volumetric data up to 80 times faster than CPU programming. The total elapsed time for producing 50 coronal planes with a 1024×1024 image matrix from 41 projection views was 216.74 seconds for the proposed CS algorithm with GPU programming, which matches a clinically feasible time (about 3 min). Consequently, our results demonstrated that the proposed CS method has the potential for additional dose reduction in digital tomosynthesis while maintaining reasonable image quality within a clinically acceptable reconstruction time.
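
    The contrast-to-noise ratio used above to compare the FBP, SART, and CS reconstructions can be computed from a region of interest and a background region of the same slice, as in the sketch below; the synthetic image and mask positions are placeholders for the physical phantom data.

        # Sketch: contrast-to-noise ratio between a region of interest and a
        # background region of a reconstructed slice. Image and masks are synthetic.
        import numpy as np

        def cnr(image, roi_mask, bg_mask):
            roi, bg = image[roi_mask], image[bg_mask]
            return abs(roi.mean() - bg.mean()) / bg.std()

        rng = np.random.default_rng(2)
        slice_img = rng.normal(100.0, 5.0, (256, 256))      # background texture
        slice_img[100:140, 100:140] += 30.0                 # elevated-signal insert

        roi = np.zeros_like(slice_img, bool); roi[105:135, 105:135] = True
        bg = np.zeros_like(slice_img, bool);  bg[20:60, 20:60] = True
        print("CNR:", round(cnr(slice_img, roi, bg), 2))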

  18. Coital frequency and infertility: which male factors predict less frequent coitus among infertile couples?

    PubMed

    Perlis, Nathan; Lo, Kirk C; Grober, Ethan D; Spencer, Leia; Jarvi, Keith

    2013-08-01

    To determine the coital frequency among infertile couples and which factors are associated with less frequent coitus. Cross-sectional study. Tertiary-level male infertility clinic. A total of 1,298 infertile men. Administration of computer-based survey, semen analysis, and serum hormone evaluation. Monthly coital frequency. A total of 1,298 patients presented to clinic for infertility consultation and completed the computer-based survey. The median male age was 35 years (interquartile range [IQR] 32-39 years) and the median duration of infertility was 2 years (IQR 1-4 years) before consultation. Median monthly coital frequency was seven (IQR 5-10; range 0-40); 24% of couples were having intercourse ≤ 4 times per month. Overall, 0.6%, 2.7%, 4.8%, 5.8%, and 10.8% of the men reported having intercourse 0, 1, 2, 3, and 4 times per month, respectively. When simultaneously taking into account the influence of age, libido, erectile function, and semen volume on coital frequency, older patients had 1.05 times higher odds (per year of age) of less frequent coitus (odds ratio 1.05, 95% confidence interval 1.03-1.08). In addition, patients with better erectile function had 1.12 times higher odds (per point on Sexual Health Inventory for Men scale) of more frequent coitus (odds ratio 1.12, 95% confidence interval 1.09-1.18). Similar to the general population, most infertile couples report having coitus more than four times per month. Older male age and erectile dysfunction are independent risk factors for less frequent coitus among infertile men, which could have an impact on fertility. Coital frequency should be considered in infertility assessments. Copyright © 2013 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  19. Study of basic computer competence among public health nurses in Taiwan.

    PubMed

    Yang, Kuei-Feng; Yu, Shu; Lin, Ming-Sheng; Hsu, Chia-Ling

    2004-03-01

    Rapid advances in information technology and media have made distance learning on the Internet possible. This new model of learning allows greater efficiency and flexibility in knowledge acquisition. Since basic computer competence is a prerequisite for this new learning model, this study was conducted to examine the basic computer competence of public health nurses in Taiwan and explore factors influencing computer competence. A national cross-sectional randomized study was conducted with 329 public health nurses. A questionnaire was used to collect data and was delivered by mail. Results indicate that the basic computer competence of public health nurses in Taiwan still needs to be improved (mean = 57.57 ± 2.83; total score range 26-130). Among the five most frequently used software programs, nurses were most knowledgeable about Word and least knowledgeable about PowerPoint. Stepwise multiple regression analysis revealed eight variables (weekly number of hours spent online at home, weekly amount of time spent online at work, weekly frequency of computer use at work, previous computer training, computer at workplace and Internet access, job position, education level, and age) that significantly influenced computer competence, which together accounted for 39.0% of the variance. In conclusion, greater computer competence, broader educational programs regarding computer technology, and a greater emphasis on computers at work are necessary to increase the usefulness of distance learning via the Internet in Taiwan. Building a user-friendly environment is important in developing this new media model of learning for the future.

  20. Novel crystal timing calibration method based on total variation

    NASA Astrophysics Data System (ADS)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process is formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint was added to the linear equation. Moreover, to solve the computer memory problem associated with calculating the timing calibration factors for systems with a large number of crystals, the merge component was used for obtaining the crystal-level timing calibration values. In contrast to conventional methods, the data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, obtained with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.
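
    The optimisation idea described above, a linear calibration problem regularised by total variation, can be illustrated on a toy 1-D example solved by plain (sub)gradient descent, as below. This is only a schematic of a TV-penalised least-squares formulation; it is not the paper's crystal-level 'TV merge' implementation, and the measurement model, penalty weight, and step size are arbitrary.

        # Toy TV-regularised least squares: recover piecewise-constant calibration
        # offsets from a synthetic linear measurement model via (sub)gradient descent.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 100
        x_true = np.repeat([0.4, -0.2, 0.7, 0.1], 25)        # piecewise-constant offsets (ns)
        A = rng.normal(size=(400, n))                        # synthetic linear model
        b = A @ x_true + 0.05 * rng.normal(size=400)

        lam, step = 2.0, 1e-4
        x = np.zeros(n)
        for _ in range(5000):
            grad_fit = A.T @ (A @ x - b)                     # gradient of 0.5*||Ax - b||^2
            diff = np.diff(x)
            grad_tv = np.zeros(n)                            # subgradient of sum |x[i+1]-x[i]|
            grad_tv[:-1] -= np.sign(diff)
            grad_tv[1:] += np.sign(diff)
            x -= step * (grad_fit + lam * grad_tv)

        print("max error vs. true offsets (ns):", float(np.max(np.abs(x - x_true))))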

  1. Double photoionization of Be-like (Be-F5+) ions

    NASA Astrophysics Data System (ADS)

    Abdel Naby, Shahin; Pindzola, Michael; Colgan, James

    2015-04-01

    The time-dependent close-coupling method is used to study the single photon double ionization of Be-like (Be - F5+) ions. Energy and angle differential cross sections are calculated to fully investigate the correlated motion of the two photoelectrons. Symmetric and antisymmetric amplitudes are presented along the isoelectronic sequence for different energy sharing of the emitted electrons. Our total double photoionization cross sections are in good agreement with available theoretical results and experimental measurements along the Be-like ions. This work was supported in part by grants from NSF and US DoE. Computational work was carried out at NERSC in Oakland, California and the National Institute for Computational Sciences in Knoxville, Tennessee.

  2. Development of mpi_EPIC model for global agroecosystem modeling

    DOE PAGES

    Kang, Shujiang; Wang, Dali; Jeff A. Nichols; ...

    2014-12-31

    Models that address policy-maker concerns about multi-scale effects of food and bioenergy production systems are computationally demanding. We integrated the message passing interface algorithm into the process-based EPIC model to accelerate computation of ecosystem effects. Simulation performance was further enhanced by applying the Vampir framework. When this enhanced mpi_EPIC model was tested, total execution time for a global 30-year simulation of a switchgrass cropping system was shortened to less than 0.5 hours on a supercomputer. The results illustrate that mpi_EPIC using parallel design can balance simulation workloads and facilitate large-scale, high-resolution analysis of agricultural production systems, management alternatives and environmental effects.
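
    The message-passing pattern behind mpi_EPIC can be illustrated with a minimal mpi4py scatter/gather skeleton in which rank 0 distributes independent simulation units (e.g., grid cells) across ranks. The per-cell function below is a placeholder, not the EPIC model, and the static chunking is only one of many possible load-balancing choices.

        # Minimal mpi4py scatter/gather skeleton for distributing independent
        # per-cell simulations across ranks. `run_epic_cell` is a placeholder.
        # Run with, e.g.: mpirun -n 4 python this_script.py
        from mpi4py import MPI

        def run_epic_cell(cell_id):
            """Placeholder for a per-cell 30-year cropping-system simulation."""
            return cell_id, cell_id * 0.1     # fake result value

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        if rank == 0:
            cells = list(range(100))
            chunks = [cells[i::size] for i in range(size)]   # simple static load balance
        else:
            chunks = None

        my_cells = comm.scatter(chunks, root=0)
        my_results = [run_epic_cell(c) for c in my_cells]
        all_results = comm.gather(my_results, root=0)

        if rank == 0:
            total = sum(len(part) for part in all_results)
            print(f"collected results for {total} grid cells from {size} ranks")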

  3. Computation of tightly-focused laser beams in the FDTD method

    PubMed Central

    Çapoğlu, İlker R.; Taflove, Allen; Backman, Vadim

    2013-01-01

    We demonstrate how a tightly-focused coherent TEMmn laser beam can be computed in the finite-difference time-domain (FDTD) method. The electromagnetic field around the focus is decomposed into a plane-wave spectrum, and approximated by a finite number of plane waves injected into the FDTD grid using the total-field/scattered-field (TF/SF) method. We provide an error analysis, and guidelines for the discrete approximation. We analyze the scattering of the beam from layered spaces and individual scatterers. The described method should be useful for the simulation of confocal microscopy and optical data storage. An implementation of the method can be found in our free and open source FDTD software (“Angora”). PMID:23388899

  4. Computation of tightly-focused laser beams in the FDTD method.

    PubMed

    Capoğlu, Ilker R; Taflove, Allen; Backman, Vadim

    2013-01-14

    We demonstrate how a tightly-focused coherent TEMmn laser beam can be computed in the finite-difference time-domain (FDTD) method. The electromagnetic field around the focus is decomposed into a plane-wave spectrum, and approximated by a finite number of plane waves injected into the FDTD grid using the total-field/scattered-field (TF/SF) method. We provide an error analysis, and guidelines for the discrete approximation. We analyze the scattering of the beam from layered spaces and individual scatterers. The described method should be useful for the simulation of confocal microscopy and optical data storage. An implementation of the method can be found in our free and open source FDTD software ("Angora").

  5. Optimizing integrated airport surface and terminal airspace operations under uncertainty

    NASA Astrophysics Data System (ADS)

    Bosson, Christabelle S.

    In airports and surrounding terminal airspaces, the integration of surface, arrival and departure scheduling and routing have the potential to improve the operations efficiency. Moreover, because both the airport surface and the terminal airspace are often altered by random perturbations, the consideration of uncertainty in flight schedules is crucial to improve the design of robust flight schedules. Previous research mainly focused on independently solving arrival scheduling problems, departure scheduling problems and surface management scheduling problems and most of the developed models are deterministic. This dissertation presents an alternate method to model the integrated operations by using a machine job-shop scheduling formulation. A multistage stochastic programming approach is chosen to formulate the problem in the presence of uncertainty and candidate solutions are obtained by solving sample average approximation problems with finite sample size. The developed mixed-integer-linear-programming algorithm-based scheduler is capable of computing optimal aircraft schedules and routings that reflect the integration of air and ground operations. The assembled methodology is applied to a Los Angeles case study. To show the benefits of integrated operations over First-Come-First-Served, a preliminary proof-of-concept is conducted for a set of fourteen aircraft evolving under deterministic conditions in a model of the Los Angeles International Airport surface and surrounding terminal areas. Using historical data, a representative 30-minute traffic schedule and aircraft mix scenario is constructed. The results of the Los Angeles application show that the integration of air and ground operations and the use of a time-based separation strategy enable both significant surface and air time savings. The solution computed by the optimization provides a more efficient routing and scheduling than the First-Come-First-Served solution. Additionally, a data driven analysis is performed for the Los Angeles environment and probabilistic distributions of pertinent uncertainty sources are obtained. A sensitivity analysis is then carried out to assess the methodology performance and find optimal sampling parameters. Finally, simulations of increasing traffic density in the presence of uncertainty are conducted first for integrated arrivals and departures, then for integrated surface and air operations. To compare the optimization results and show the benefits of integrated operations, two aircraft separation methods are implemented that offer different routing options. The simulations of integrated air operations and the simulations of integrated air and surface operations demonstrate that significant traveling time savings, both total and individual surface and air times, can be obtained when more direct routes are allowed to be traveled even in the presence of uncertainty. The resulting routings induce however extra take off delay for departing flights. As a consequence, some flights cannot meet their initial assigned runway slot which engenders runway position shifting when comparing resulting runway sequences computed under both deterministic and stochastic conditions. The optimization is able to compute an optimal runway schedule that represents an optimal balance between total schedule delays and total travel times.

  6. A comparison of registration errors with imageless computer navigation during MIS total knee arthroplasty versus standard incision total knee arthroplasty: a cadaveric study.

    PubMed

    Davis, Edward T; Pagkalos, Joseph; Gallie, Price A M; Macgroarty, Kelly; Waddell, James P; Schemitsch, Emil H

    2015-01-01

    Optimal component alignment in total knee arthroplasty has been associated with better functional outcome as well as improved implant longevity. The ability to align components optimally during minimally invasive (MIS) total knee replacement (TKR) has been a cause of concern. Computer navigation is a useful aid in achieving the desired alignment although it is limited by the error during the manual registration of landmarks. Our study aims to compare the registration process error between a standard and a MIS surgical approach. We hypothesized that performing the registration error via an MIS approach would increase the registration process error. Five fresh frozen lower limbs were routinely prepared and draped. The registration process was performed through an MIS approach. This was then extended to the standard approach and the registration was performed again. Two surgeons performed the registration process five times with each approach. Performing the registration process through the MIS approach was not associated with higher error compared to the standard approach in the alignment parameters of interest. This rejects our hypothesis. Image-free navigated MIS TKR does not appear to carry higher risk of component malalignment due to the registration process error. Navigation can be used during MIS TKR to improve alignment without reduced accuracy due to the approach.

  7. Modeling and computational simulation and the potential of virtual and augmented reality associated to the teaching of nanoscience and nanotechnology

    NASA Astrophysics Data System (ADS)

    Ribeiro, Allan; Santos, Helen

    With the advent of new information and communication technologies (ICTs), communicative interaction is changing the way people are and act, and at the same time it is changing the nature of work activities related to education. Among the possibilities provided by the advancement of computational resources, virtual reality (VR) and augmented reality (AR) stand out as new forms of information visualization in computer applications. While VR allows user interaction with an entirely computer-generated virtual environment, in AR virtual images are inserted into the real environment; both create new opportunities to support teaching and learning in formal and informal contexts. Such technologies are able to express representations of reality or of the imagination, such as systems at the nanoscale and of low dimensionality, making it imperative to explore, in the most diverse areas of knowledge, the potential offered by ICT and emerging technologies. In this sense, this work presents virtual and augmented reality computer applications developed using modeling and computational simulation of topics related to nanoscience and nanotechnology, articulated with innovative pedagogical practices.

  8. Evolution of the solar irradiance during the Holocene

    NASA Astrophysics Data System (ADS)

    Vieira, L. E. A.; Solanki, S. K.; Krivova, N. A.; Usoskin, I.

    2011-07-01

    Context. Long-term records of solar radiative output are vital for understanding solar variability and past climate change. Measurements of solar irradiance are available for only the last three decades, which calls for reconstructions of this quantity over longer time scales using suitable models. Aims: We present a physically consistent reconstruction of the total solar irradiance for the Holocene. Methods: We extend the SATIRE (Spectral And Total Irradiance REconstruction) models to estimate the evolution of the total (and partly spectral) solar irradiance over the Holocene. The basic assumption is that the variations of the solar irradiance are due to the evolution of the dark and bright magnetic features on the solar surface. The evolution of the decadally averaged magnetic flux is computed from decadal values of cosmogenic isotope concentrations recorded in natural archives employing a series of physics-based models connecting the processes from the modulation of the cosmic ray flux in the heliosphere to their record in natural archives. We then compute the total solar irradiance (TSI) as a linear combination of the jth and jth + 1 decadal values of the open magnetic flux. In order to evaluate the uncertainties due to the evolution of the Earth's magnetic dipole moment, we employ four reconstructions of the open flux which are based on conceptually different paleomagnetic models. Results: Reconstructions of the TSI over the Holocene, each valid for a different paleomagnetic time series, are presented. Our analysis suggests that major sources of uncertainty in the TSI in this model are the heritage of the uncertainty of the TSI since 1610 reconstructed from sunspot data and the uncertainty of the evolution of the Earth's magnetic dipole moment. The analysis of the distribution functions of the reconstructed irradiance for the last 3000 years, which is the period that the reconstructions overlap, indicates that the estimates based on the virtual axial dipole moment are significantly lower at earlier times than the reconstructions based on the virtual dipole moment. We also present a combined reconstruction, which represents our best estimate of total solar irradiance for any given time during the Holocene. Conclusions: We present the first physics-based reconstruction of the total solar irradiance over the Holocene, which will be of interest for studies of climate change over the last 11 500 years. The reconstruction indicates that the decadally averaged total solar irradiance ranges over approximately 1.5 W/m2 from grand maxima to grand minima. Appendix A is available in electronic form at http://www.aanda.orgThe TSI data is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/531/A6
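
    The stated linear combination can be written schematically as below, where Φ_open denotes the decadally averaged open magnetic flux and the coefficients a, b and the quiet-Sun level TSI_q are placeholders for whatever values the model calibration yields; this is only a rendering of the sentence above, not the paper's fitted relation.

        % Schematic form of the stated relation; a, b and TSI_q are placeholder
        % coefficients, not values taken from the paper.
        \[ \mathrm{TSI}_{j} \;\approx\; \mathrm{TSI}_{q} \;+\; a\,\Phi_{\mathrm{open},\,j} \;+\; b\,\Phi_{\mathrm{open},\,j+1} \]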

  9. Scalable Prediction of Energy Consumption using Incremental Time Series Clustering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmhan, Yogesh; Noor, Muhammad Usman

    2013-10-09

    Time series datasets are a canonical form of high-velocity Big Data, often generated by pervasive sensors such as those found in smart infrastructure. Performing predictive analytics on time series data can be computationally complex, and requires approximation techniques. In this paper, we motivate this problem using a real application from the smart grid domain. We propose an incremental clustering technique, along with a novel affinity score for determining cluster similarity, which help reduce the prediction error for cumulative time series within a cluster. We evaluate this technique, along with optimizations, using real datasets from smart meters totaling ~700,000 data points, and show the efficacy of our techniques in improving the prediction error of time series data within polynomial time.
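
    The incremental flavour of the clustering described above can be sketched as follows: each arriving series is assigned to the nearest running centroid when it is similar enough, otherwise it seeds a new cluster, with centroids updated as running means. The distance, threshold, and synthetic load-shape data below are illustrative and do not reproduce the paper's affinity score.

        # Toy incremental clustering sketch: assign each arriving series to the
        # nearest running centroid if close enough, otherwise start a new cluster.
        import numpy as np

        def assign_incremental(series_stream, threshold=2.0):
            centroids, counts, labels = [], [], []
            for s in series_stream:
                s = np.asarray(s, float)
                if centroids:
                    d = [np.linalg.norm(s - c) for c in centroids]
                    k = int(np.argmin(d))
                    if d[k] < threshold:
                        counts[k] += 1
                        centroids[k] += (s - centroids[k]) / counts[k]   # running mean
                        labels.append(k)
                        continue
                centroids.append(s.copy())        # seed a new cluster
                counts.append(1)
                labels.append(len(centroids) - 1)
            return labels, centroids

        rng = np.random.default_rng(4)
        base = np.sin(np.linspace(0.0, 2.0 * np.pi, 24))       # a daily load shape
        stream = ([base + rng.normal(0.0, 0.1, 24) for _ in range(20)]
                  + [2.0 * base + rng.normal(0.0, 0.1, 24) for _ in range(20)])
        labels, centroids = assign_incremental(stream)
        print("clusters found:", len(centroids))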

  10. Assessment of performances of sun zenith angle and altitude parameterisations of atmospheric radiative transfer for spectral surface downwelling solar irradiance

    NASA Astrophysics Data System (ADS)

    Wald, L.; Blanc, Ph.

    2010-09-01

    Satellite-derived assessments of surface downwelling solar irradiance are more and more used by engineering companies in solar energy. Performances are judged satisfactory for the time being. Nevertheless, requests for more accuracy are increasing, in particular in the spectral definition and in the decomposition of the global radiation into direct and diffuse radiations. One approach to reach this goal is to improve both the modelling of the radiative transfer and the quality of the inputs describing the optical state. Within their joint project Heliosat-4, DLR and MINES ParisTech have adopted this approach to create advanced databases of solar irradiance succeeding to the current ones HelioClim and SolEMi. Regarding the model, we have opted for libRadtran, a well-known model of proven quality. As many similar models, running libRadtran is very time-consuming when it comes to process millions or more pixels or grid cells. This is incompatible with real-time operational process. One may adopt the abacus approach, or look-up tables, to overcome the problem. The model is run for a limited number of cases, covering the whole range of values taken by the various inputs of the model. Abaci are such constructed. For each real case, the irradiance value is computed by interpolating within the abaci. In this way, real-time can be envisioned. Nevertheless, the computation of the abaci themselves requires large computing capabilities. In addition, searching the abaci to find the values to interpolate can be time-consuming as the abaci are very large: several millions of values in total. Moreover, it raises the extrapolation problem of parameter out-of-range during the utilisation of the abaci. Parameterisation, when possible, is a means to reduce the amount of computations to be made and subsequently, the computation effort to create the abaci, the size of the abaci, the extrapolation and the searching time. It describes in analytical manner and with a few parameters the change in irradiance with a specific variable. The communication discusses two parameterisations found in the literature. One deals with the solar zenith angle, the other with the altitude. We assess their performances in retrieving solar irradiance for 32 spectral bands, from 240 nm to 4606 nm. The model libRadtran is run to create data sets for all sun zenith angles (every 5 degrees) and all altitudes (every km). These data sets are considered as a reference. Then, for each parameterisation, we compute the parameters using two irradiance values for specific values of angle (e.g., 0 and 60 degrees) or altitude (e.g., 0 and 3 km). The parameterisations are then applied to other values of angle and altitude. Differences between these assessments and the reference values of irradiance are computed and analysed. We conclude on the level of performances of each parameterisation for each spectral band as well as for the total irradiance. We discuss the possible use of these parameterisations in the future method Heliosat-4 and possible improvements. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement no. 218793 (MACC project).
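
    The abacus (look-up table) strategy discussed above amounts to evaluating the expensive radiative transfer model only on a coarse grid of inputs and interpolating at run time. The sketch below uses a cheap analytic stand-in for libRadtran over a sun-zenith-angle/altitude grid; the grids, the stand-in function, and the query point are illustrative.

        # Sketch of the abacus idea: precompute a coarse look-up table from an
        # expensive model (here a cheap analytic stand-in), then interpolate at
        # run time instead of re-running the model.
        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        def fake_clear_sky_irradiance(sza_deg, altitude_km):
            """Placeholder for a libRadtran run: irradiance vs. zenith angle and altitude."""
            return 1100.0 * np.cos(np.radians(sza_deg)) * (1.0 + 0.03 * altitude_km)

        sza_grid = np.arange(0.0, 90.0 + 1e-9, 5.0)        # every 5 degrees
        alt_grid = np.arange(0.0, 8.0 + 1e-9, 1.0)         # every km
        table = fake_clear_sky_irradiance(sza_grid[:, None], alt_grid[None, :])

        abacus = RegularGridInterpolator((sza_grid, alt_grid), table)

        # Run-time query: interpolate instead of re-running the radiative transfer model.
        print("irradiance at sza=37.3 deg, 2.4 km:",
              float(abacus([[37.3, 2.4]])[0]), "W/m^2 (interpolated)")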

  11. Paramedics on the job: dynamic trunk motion assessment at the workplace.

    PubMed

    Prairie, Jérôme; Corbeil, Philippe

    2014-07-01

    Many paramedics' work accidents are related to physical aspects of the job, and the most affected body part is the low back. This study documents the trunk motion exposure of paramedics on the job. Nine paramedics were observed over 12 shifts (120 h). Trunk postures were recorded with the computer-assisted CUELA measurement system worn on the back like a knapsack. Average duration of an emergency call was 23.5 min. Sagittal trunk flexion of >40° and twisting rotation of >24° were observed in 21% and 17% of time-sampled postures. Medical care on the scene (44% of total time) involved prolonged flexed and twisted postures (∼ 10s). The highest extreme sagittal trunk flexion (63°) and twisting rotation (40°) were observed during lifting activities, which lasted 2% of the total time. Paramedics adopted trunk motions that may significantly increase the risk of low back disorders during medical care and patient-handling activities. Copyright © 2013. Published by Elsevier Ltd.

  12. Real-time individualization of the unified model of performance.

    PubMed

    Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Balkin, Thomas J; Reifman, Jaques

    2017-12-01

    Existing mathematical models for predicting neurobehavioural performance are not suited for mobile computing platforms because they cannot adapt model parameters automatically in real time to reflect individual differences in the effects of sleep loss. We used an extended Kalman filter to develop a computationally efficient algorithm that continually adapts the parameters of the recently developed Unified Model of Performance (UMP) to an individual. The algorithm accomplishes this in real time as new performance data for the individual become available. We assessed the algorithm's performance by simulating real-time model individualization for 18 subjects subjected to 64 h of total sleep deprivation (TSD) and 7 days of chronic sleep restriction (CSR) with 3 h of time in bed per night, using psychomotor vigilance task (PVT) data collected every 2 h during wakefulness. This UMP individualization process produced parameter estimates that progressively approached the solution produced by a post-hoc fitting of model parameters using all data. The minimum number of PVT measurements needed to individualize the model parameters depended upon the type of sleep-loss challenge, with ~30 required for TSD and ~70 for CSR. However, model individualization depended upon the overall duration of data collection, yielding increasingly accurate model parameters with greater number of days. Interestingly, reducing the PVT sampling frequency by a factor of two did not notably hamper model individualization. The proposed algorithm facilitates real-time learning of an individual's trait-like responses to sleep loss and enables the development of individualized performance prediction models for use in a mobile computing platform. © 2017 European Sleep Research Society.
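
    The real-time individualisation described above can be sketched with an extended Kalman filter whose state vector is the set of model parameters, updated whenever a new noisy performance measurement arrives. The toy exponential performance model, noise levels, and sampling schedule below are placeholders and do not represent the UMP equations.

        # Toy EKF for on-line parameter individualisation: the state is a pair of
        # model parameters, updated from noisy performance measurements every 2 h.
        import numpy as np

        def h(theta, t):
            """Toy performance model: baseline p0 degrading exponentially at rate p1."""
            p0, p1 = theta
            return p0 * np.exp(p1 * t)

        def jacobian(theta, t):
            p0, p1 = theta
            return np.array([[np.exp(p1 * t), p0 * t * np.exp(p1 * t)]])

        rng = np.random.default_rng(5)
        theta_true = np.array([250.0, 0.02])               # "true" individual parameters
        theta = np.array([300.0, 0.01])                    # population-average initial guess
        P = np.diag([1e4, 1e-3])                           # initial parameter covariance
        Q = np.diag([1e-2, 1e-7])                          # small process noise (drift)
        R = np.array([[400.0]])                            # measurement noise variance

        for t in np.arange(2.0, 64.0, 2.0):                # a measurement every 2 h awake
            y = h(theta_true, t) + rng.normal(0.0, 20.0)   # noisy PVT-like observation
            P = P + Q                                      # predict (parameters ~ constant)
            H = jacobian(theta, t)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
            theta = theta + (K @ (np.array([[y]]) - h(theta, t))).ravel()
            P = (np.eye(2) - K @ H) @ P

        print("estimated parameters:", np.round(theta, 3), "true:", theta_true)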

  13. A Computing Platform for Parallel Sparse Matrix Computations

    DTIC Science & Technology

    2016-01-05

    Investigators: Ahmed H. Sameh, Alicia Klinvex, Yao Zhu (only report documentation form fields are available for this record).

  14. The effect of real-time teleconsultations between hospital-based nurses and patients with severe COPD discharged after an exacerbation.

    PubMed

    Sorknaes, Anne Dichmann; Bech, Mickael; Madsen, Hanne; Titlestad, Ingrid L; Hounsgaard, Lise; Hansen-Nord, Michael; Jest, Peder; Olesen, Finn; Lauridsen, Joergen; Østergaard, Birte

    2013-12-01

    We investigated the effect of daily real-time teleconsultations for one week between hospital-based nurses specialised in respiratory diseases and patients with severe COPD discharged after acute exacerbation. Patients admitted with acute exacerbation of chronic obstructive pulmonary disease (AECOPD) at two hospitals were recruited at hospital discharge. They were randomly assigned to intervention or control. The telemedicine equipment consisted of a briefcase with built-in computer including a web camera, microphone and measurement equipment. The primary outcome was the mean number of total hospital readmissions within 26 weeks of discharge. A total of 266 patients (mean age 72 years) were allocated to either intervention (n = 132) or control (n = 134). There was no significant difference in the unconditional total mean number of hospital readmissions after 26 weeks: mean 1.4 (SD 2.1) in the intervention group and 1.6 (SD 2.4) in the control group. In a secondary analysis, there was no significant difference between the two groups in mortality, time to readmission, mean number of total hospital readmissions, mean number of readmissions with AECOPD, mean number of total hospital readmission days or mean number of readmission days with AECOPD calculated at 4, 8, 12 and 26 weeks. Thus the addition of one week of teleconsultations between hospital-based nurses and patients with severe COPD discharged after hospitalisation did not significantly reduce readmissions or affect mortality.

  15. EHR use and patient satisfaction: What we learned.

    PubMed

    Farber, Neil J; Liu, Lin; Chen, Yunan; Calvitti, Alan; Street, Richard L; Zuest, Danielle; Bell, Kristin; Gabuzda, Mark; Gray, Barbara; Ashfaq, Shazia; Agha, Zia

    2015-11-01

    Few studies have quantitatively examined the degree to which the use of the computer affects patients' satisfaction with the clinician and the quality of the visit. We conducted a study to examine this association. Twenty-three clinicians (21 internal medicine physicians, 2 nurse practitioners) were recruited from 4 Veteran Affairs Medical Center (VAMC) clinics located in San Diego, Calif. Five to 6 patients for most clinicians (one patient each for 2 of the clinicians) were recruited to participate in a study of patient-physician communication. The clinicians' computer use and the patient-clinician interactions in the exam room were captured in real time via video recordings of the interactions and the computer screen, and through the use of the Morae usability testing software system, which recorded clinician clicks and scrolls on the computer. After the visit, patients were asked to complete a satisfaction survey. The final sample consisted of 126 consultations. Total patient satisfaction (beta=0.014; P=.027) and patient satisfaction with patient-centered communication (beta=0.02; P=.02) were significantly associated with higher clinician “gaze time” at the patient. A higher percentage of gaze time during a visit (controlling for the length of the visit) was significantly associated with greater satisfaction with patient-centered communication (beta=0.628; P=.033). Higher clinician gaze time at the patient predicted greater patient satisfaction. This suggests that clinicians would be well served to refine their multitasking skills so that they communicate in a patient-centered manner while performing necessary computer-related tasks. These findings also have important implications for clinical training with respect to using an electronic health record (EHR) system in ways that do not impede the one-on-one conversation between clinician and patient.

  16. BWM*: A Novel, Provable, Ensemble-based Dynamic Programming Algorithm for Sparse Approximations of Computational Protein Design.

    PubMed

    Jou, Jonathan D; Jain, Swati; Georgiev, Ivelin S; Donald, Bruce R

    2016-06-01

    Sparse energy functions that ignore long range interactions between residue pairs are frequently used by protein design algorithms to reduce computational cost. Current dynamic programming algorithms that fully exploit the optimal substructure produced by these energy functions only compute the GMEC. This disproportionately favors the sequence of a single, static conformation and overlooks better binding sequences with multiple low-energy conformations. Provable, ensemble-based algorithms such as A* avoid this problem, but A* cannot guarantee better performance than exhaustive enumeration. We propose a novel, provable, dynamic programming algorithm called Branch-Width Minimization* (BWM*) to enumerate a gap-free ensemble of conformations in order of increasing energy. Given a branch-decomposition of branch-width w for an n-residue protein design with at most q discrete side-chain conformations per residue, BWM* returns the sparse GMEC in O([Formula: see text]) time and enumerates each additional conformation in merely O([Formula: see text]) time. We define a new measure, Total Effective Search Space (TESS), which can be computed efficiently a priori before BWM* or A* is run. We ran BWM* on 67 protein design problems and found that TESS discriminated between BWM*-efficient and A*-efficient cases with 100% accuracy. As predicted by TESS and validated experimentally, BWM* outperforms A* in 73% of the cases and computes the full ensemble or a close approximation faster than A*, enumerating each additional conformation in milliseconds. Unlike A*, the performance of BWM* can be predicted in polynomial time before running the algorithm, which gives protein designers the power to choose the most efficient algorithm for their particular design problem.

  17. Using the computer in the clinical consultation; setting the stage, reviewing, recording, and taking actions: multi-channel video study

    PubMed Central

    Kumarapeli, Pushpa; de Lusignan, Simon

    2013-01-01

    Background and objective Electronic patient record (EPR) systems are widely used. This study explores the context and use of systems to provide insights into improving their use in clinical practice. Methods We used video to observe 163 consultations by 16 clinicians using four EPR brands. We made a visual study of the consultation room and coded interactions between clinician, patient, and computer. Few patients (6.9%, n=12) declined to participate. Results Patients looked at the computer twice as much (47.6 s vs 20.6 s, p<0.001) when it was within their gaze. A quarter of consultations were interrupted (27.6%, n=45); and in half the clinician left the room (12.3%, n=20). The core consultation takes about 87% of the total session time; 5% of time is spent pre-consultation, reading the record and calling the patient in; and 8% of time is spent post-consultation, largely entering notes. Consultations with more than one person and where prescribing took place were longer (R2 adj=22.5%, p<0.001). The core consultation can be divided into 61% of direct clinician–patient interaction, of which 15% is examination, 25% computer use with no patient involvement, and 14% simultaneous clinician–computer–patient interplay. The proportions of computer use are similar between consultations (mean=40.6%, SD=13.7%). There was more data coding in problem-orientated EPR systems, though clinicians often used vague codes. Conclusions The EPR system is used for a consistent proportion of the consultation and should be designed to facilitate multi-tasking. Clinicians who want to promote screen sharing should change their consulting room layout. PMID:23242763

  18. Development and Validation of the Total HUman Model for Safety (THUMS) Toward Further Understanding of Occupant Injury Mechanisms in Precrash and During Crash.

    PubMed

    Iwamoto, Masami; Nakahira, Yuko; Kimpara, Hideyuki

    2015-01-01

    Active safety devices such as automatic emergency brake (AEB) and precrash seat belt have the potential to accomplish further reduction in the number of fatalities due to automotive accidents. However, their effectiveness should be investigated by more accurate estimations of their interaction with human bodies. Computational human body models are suitable for investigation, especially considering muscular tone effects on occupant motions and injury outcomes. However, the conventional modeling approaches such as multibody models and detailed finite element (FE) models have advantages and disadvantages in computational costs and injury predictions considering muscular tone effects. The objective of this study is to develop and validate a human body FE model with whole body muscles, which can be used for the detailed investigation of interaction between human bodies and vehicular structures including some safety devices precrash and during a crash with relatively low computational costs. In this study, we developed a human body FE model called THUMS (Total HUman Model for Safety) with the body size of a 50th percentile adult male (AM50) and a sitting posture. The model has anatomical structures of bones, ligaments, muscles, brain, and internal organs. The total number of elements is 281,260, which keeps computational costs relatively low. Deformable material models were assigned to all body parts. The muscle-tendon complexes were modeled by truss elements with Hill-type muscle material and seat belt elements with tension-only material. The THUMS was validated against 35 series of cadaver or volunteer test data on frontal, lateral, and rear impacts. Model validations for 15 series of cadaver test data associated with frontal impacts are presented in this article. The THUMS with a vehicle sled model was applied to investigate effects of muscle activations on occupant kinematics and injury outcomes in specific frontal impact situations with AEB. In the validations using 5 series of cadaver test data, force-time curves predicted by the THUMS were quantitatively evaluated using CORrelation and Analysis (CORA), which showed good or acceptable agreement with cadaver test data in most cases. The investigation of muscular effects showed that muscle activation levels and timing had significant effects on occupant kinematics and injury outcomes. Although further studies on accident injury reconstruction are needed, the THUMS has the potential for predictions of occupant kinematics and injury outcomes considering muscular tone effects with relatively low computational costs.

  19. Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments

    PubMed Central

    Zapater, Marina; Sanchez, Cesar; Ayala, Jose L.; Moya, Jose M.; Risco-Martín, José L.

    2012-01-01

    Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, require constantly increasing high computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSNs infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time. PMID:23112621
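
    A toy sketch of the kind of energy-aware placement the abstract describes: low-demand tasks go to the cheapest idle WSN node that can hold them, and anything too large falls back to the data centre. The energy model, node capacities, and task names are invented placeholders, not the paper's policy.

```python
def assign_tasks(tasks, wsn_nodes, dc_energy_per_unit):
    """Greedy energy-aware placement: send each low-demand task to the idle WSN
    node that can run it most cheaply, falling back to the data centre."""
    placement, total_energy = {}, 0.0
    for name, demand in sorted(tasks.items(), key=lambda kv: kv[1]):
        candidates = [(n["energy_per_unit"] * demand, nid)
                      for nid, n in wsn_nodes.items()
                      if n["free_capacity"] >= demand]
        if candidates:
            cost, nid = min(candidates)
            wsn_nodes[nid]["free_capacity"] -= demand
            placement[name] = nid
        else:
            cost, placement[name] = dc_energy_per_unit * demand, "datacenter"
        total_energy += cost
    return placement, total_energy

# Placeholder workloads (arbitrary compute units) and node capacities.
tasks = {"filter": 2.0, "aggregate": 1.0, "train_model": 50.0}
nodes = {"node_a": {"free_capacity": 3.0, "energy_per_unit": 0.4},
         "node_b": {"free_capacity": 5.0, "energy_per_unit": 0.6}}
print(assign_tasks(tasks, nodes, dc_energy_per_unit=1.5))
```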

  20. A computer program to determine the possible daily release window for sky target experiments

    NASA Technical Reports Server (NTRS)

    Michaud, N. H.

    1973-01-01

    A computer program is presented which is designed to determine the daily release window for sky target experiments. Factors considered in the program include: (1) target illumination by the sun at release time and during the tracking period; (2) look angle elevation above local horizon from each tracking station to the target; (3) solar depression angle from the local horizon of each tracking station during the experimental period after target release; (4) lunar depression angle from the local horizon of each tracking station during the experimental period after target release; and (5) total sky background brightness as seen from each tracking station while viewing the target. Program output is produced in both graphic and data form. Output data can be plotted for a single calendar month or year. The numerical values used to generate the plots are furnished to permit a more detailed review of the computed daily release windows.

  1. Simulation of Laboratory Tests of Steel Arch Support

    NASA Astrophysics Data System (ADS)

    Horyl, Petr; Šňupárek, Richard; Maršálek, Pavel; Pacześniowski, Krzysztof

    2017-03-01

    The total load-bearing capacity of yielding steel arch roadway supports is among their most important characteristics. These values can be obtained in two ways: experimental measurements in a specialized laboratory or computer modelling by FEM. Experimental measurements are significantly more expensive and more time-consuming. A computer model, however, is very valuable once properly tuned, and experiments provide the necessary verification. This verification was carried out successfully by the cooperating workplaces of GIG Katowice, VSB-Technical University of Ostrava and the Institute of Geonics ASCR. The present article discusses the conditions and results of this verification for static problems. The output is a tuned computer model, which may be used for further calculations to obtain the load-bearing capacity of other types of steel arch supports. The effects of changes in other parameters, such as the material properties of the steel, the applied torques and the friction coefficient values, can be determined relatively quickly by changing the corresponding properties of the modelled steel arch supports.

  2. Distributed computing for membrane-based modeling of action potential propagation.

    PubMed

    Porras, D; Rogers, J M; Smith, W M; Pollard, A E

    2000-08-01

    Action potential propagation simulations with physiologic membrane currents and macroscopic tissue dimensions are computationally expensive. We, therefore, analyzed distributed computing schemes to reduce execution time in workstation clusters by parallelizing solutions with message passing. Four schemes were considered in two-dimensional monodomain simulations with the Beeler-Reuter membrane equations. Parallel speedups measured with each scheme were compared to theoretical speedups, recognizing the relationship between speedup and code portions that executed serially. A data decomposition scheme based on total ionic current provided the best performance. Analysis of communication latencies in that scheme led to a load-balancing algorithm in which measured speedups at 89 +/- 2% and 75 +/- 8% of theoretical speedups were achieved in homogeneous and heterogeneous clusters of workstations. Speedups in this scheme with the Luo-Rudy dynamic membrane equations exceeded 3.0 with eight distributed workstations. Cluster speedups were comparable to those measured during parallel execution on a shared memory machine.
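
    The data-decomposition idea can be illustrated with a small, hypothetical sketch: estimate a per-row workload (for example, from ionic-current activity) and cut the grid into contiguous strips of roughly equal cost rather than equal size. The workload metric and strip partitioning below are assumptions for illustration, not the authors' scheme.

```python
import numpy as np

def balanced_strips(workload, n_workers):
    """Split the grid rows into contiguous strips so that each worker receives
    roughly the same share of an estimated per-cell workload."""
    n_rows = workload.shape[0]
    cum = np.cumsum(workload.sum(axis=1))        # cumulative cost per row
    target = cum[-1] / n_workers
    bounds, start = [], 0
    for w in range(1, n_workers):
        cut = int(np.searchsorted(cum, w * target)) + 1
        cut = max(cut, start + 1)                # at least one row per strip
        cut = min(cut, n_rows - (n_workers - w)) # leave rows for the rest
        bounds.append((start, cut))
        start = cut
    bounds.append((start, n_rows))
    return bounds

# Example: a 200 x 200 tissue sheet whose "ionic activity" is concentrated in
# the middle rows, so equal-row strips would be badly load-imbalanced.
rng = np.random.default_rng(0)
bump = np.exp(-((np.arange(200) - 100) / 20.0) ** 2)[:, None]
activity = 0.1 * rng.random((200, 200)) + bump
for w, (lo, hi) in enumerate(balanced_strips(activity, 4)):
    print(f"worker {w}: rows {lo}:{hi}, cost {activity[lo:hi].sum():.1f}")
```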

  3. GPU implementation of the linear scaling three dimensional fragment method for large scale electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Jia, Weile; Wang, Jue; Chi, Xuebin; Wang, Lin-Wang

    2017-02-01

    LS3DF, namely the linear scaling three-dimensional fragment method, is an efficient linear scaling ab initio total energy electronic structure calculation code based on a divide-and-conquer strategy. In this paper, we present our GPU implementation of the LS3DF code. Our test results show that the GPU code can calculate systems with about ten thousand atoms fully self-consistently in the order of 10 min using thousands of computing nodes. This makes the electronic structure calculations of 10,000-atom nanosystems routine work. This speed is 4.5-6 times faster than the CPU calculations using the same number of nodes on the Titan machine in the Oak Ridge Leadership Computing Facility (OLCF). Such speedup is achieved by (a) careful re-design of the computationally heavy kernels and (b) redesign of the communication pattern for heterogeneous supercomputers.

  4. Computational models of an inductive power transfer system for electric vehicle battery charge

    NASA Astrophysics Data System (ADS)

    Anele, A. O.; Hamam, Y.; Chassagne, L.; Linares, J.; Alayli, Y.; Djouani, K.

    2015-09-01

    One of the issues to be solved for electric vehicles (EVs) to become a success is the technical solution of their charging systems. In this paper, computational models of an inductive power transfer (IPT) system for EV battery charge are presented. Based on the fundamental principles behind IPT systems, 3 kW single phase and 22 kW three phase IPT systems for the Renault ZOE are designed in MATLAB/Simulink. The results obtained, based on the technical specifications of the lithium-ion battery and charger type of the Renault ZOE, show that the models are able to provide the total voltage required by the battery. Also, considering the charging time for each IPT model, they are capable of delivering the electricity needed to power the ZOE. In conclusion, this study shows that the designed computational IPT models may be employed as a support structure needed to effectively power any viable EV.
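
    A minimal phasor-domain sketch of the mutual-inductance coupling at the heart of any IPT model is shown below; the operating frequency, coil values, series-tuned secondary, and load are placeholder numbers chosen only to land near the 3 kW class, not the Renault ZOE design or the paper's Simulink models.

```python
import numpy as np

# Phasor model of a loosely coupled coil pair with a series-tuned secondary.
f = 85e3                                  # operating frequency (Hz), placeholder
w = 2 * np.pi * f
L1, L2, k = 200e-6, 200e-6, 0.2           # coil self-inductances (H), coupling
M = k * np.sqrt(L1 * L2)                  # mutual inductance
C2 = 1.0 / (w**2 * L2)                    # capacitor tuning the secondary loop
R_load = 5.0                              # equivalent battery-side load (ohm)
I1 = 8.0                                  # assumed primary coil current (A peak)

V2 = 1j * w * M * I1                      # EMF induced in the secondary
Z2 = R_load + 1j * w * L2 + 1.0 / (1j * w * C2)   # series-tuned secondary loop
I2 = V2 / Z2                              # at resonance Z2 collapses to R_load
P_load = 0.5 * abs(I2) ** 2 * R_load      # average power (peak-valued phasors)
# The reflected impedance on the primary side is ignored here: I1 is imposed.
print(f"|V2| = {abs(V2):.0f} V, |I2| = {abs(I2):.1f} A, P = {P_load/1e3:.2f} kW")
```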

  5. Parallelization of implicit finite difference schemes in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Decker, Naomi H.; Naik, Vijay K.; Nicoules, Michel

    1990-01-01

    Implicit finite difference schemes are often the preferred numerical schemes in computational fluid dynamics, requiring less stringent stability bounds than the explicit schemes. Each iteration in an implicit scheme involves global data dependencies in the form of second and higher order recurrences. Efficient parallel implementations of such iterative methods are considerably more difficult and non-intuitive. The parallelization of the implicit schemes that are used for solving the Euler and the thin layer Navier-Stokes equations and that require inversions of large linear systems in the form of block tri-diagonal and/or block penta-diagonal matrices is discussed. Three-dimensional cases are emphasized and schemes that minimize the total execution time are presented. Partitioning and scheduling schemes for alleviating the effects of the global data dependencies are described. An analysis of the communication and the computation aspects of these methods is presented. The effect of the boundary conditions on the parallel schemes is also discussed.
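
    The serial recurrences mentioned above arise in tridiagonal solves. A scalar Thomas-algorithm sketch (not the paper's parallel block variant) makes the forward/backward data dependency explicit; the 1-D implicit diffusion example is an illustrative assumption.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b, super-diagonal
    c and right-hand side d. The forward sweep is an inherently serial recurrence,
    which is why parallelizing implicit schemes needs special partitioning."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):          # back-substitution, also serial
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: one implicit 1-D diffusion step, (I - r*Laplacian) u_new = u_old.
n, r = 8, 0.5
a = np.full(n, -r); a[0] = 0.0
c = np.full(n, -r); c[-1] = 0.0
b = np.full(n, 1.0 + 2.0 * r)
u_old = np.ones(n)
print(thomas(a, b, c, u_old))
```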

  6. Ubiquitous green computing techniques for high demand applications in Smart environments.

    PubMed

    Zapater, Marina; Sanchez, Cesar; Ayala, Jose L; Moya, Jose M; Risco-Martín, José L

    2012-01-01

    Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, require constantly increasing high computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSNs infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.

  7. Associations between Screen Time and Physical Activity among Spanish Adolescents

    PubMed Central

    Serrano-Sanchez, Jose A.; Martí-Trujillo, Sara; Lera-Navarro, Angela; Dorado-García, Cecilia; González-Henríquez, Juan J.; Sanchís-Moysi, Joaquín

    2011-01-01

    Background Excessive time in front of a single or several screens could explain a displacement of physical activity. The present study aimed at determining whether screen-time is associated with a reduced level of moderate to vigorous physical activity (MVPA) in Spanish adolescents living in favorable environmental conditions. Methodology/Principal Findings A multi-stage stratified random sampling method was used to select 3503 adolescents (12–18 years old) from the school population of Gran Canaria, Spain. MVPA, screen-time in front of television, computer, video game console and portable console was assessed in the classroom by fulfilling a standardized questionnaire. Bivariate and multivariate logistic regression analyses adjusted by a set of social-environmental variables were carried out. Forty-six percent of girls (95% CI±2.3%) and 26% of boys (95% CI±2.1%) did not meet the MVPA recommendations for adolescents. Major gender differences were observed in the time devoted to vigorous PA, video games and the total time spent on screen-based activities. Boys who devoted 4 hours•week−1 or more to total screen-time showed a 64% (OR = 0.61, 95% CI, 0.44–0.86) increased risk of failing to achieve the recommended adolescent MVPA level. Participation in organized physical activities and sports competitions was more strongly associated with MVPA than screen-related behaviors. Conclusions/Significance No single screen-related behavior explained the reduction of MVPA in adolescents. However, the total time accumulated through several screen-related behaviors was negatively associated with MVPA level in boys. This association could be due to lower availability of time for exercise as the time devoted to sedentary screen-time activities increases. Participation in organized physical activities seems to counteract the negative impact of excessive time in front of screens on physical activity. PMID:21909435

  8. Associations between screen time and physical activity among Spanish adolescents.

    PubMed

    Serrano-Sanchez, Jose A; Martí-Trujillo, Sara; Lera-Navarro, Angela; Dorado-García, Cecilia; González-Henríquez, Juan J; Sanchís-Moysi, Joaquín

    2011-01-01

    Excessive time in front of a single or several screens could explain a displacement of physical activity. The present study aimed at determining whether screen-time is associated with a reduced level of moderate to vigorous physical activity (MVPA) in Spanish adolescents living in favorable environmental conditions. A multi-stage stratified random sampling method was used to select 3503 adolescents (12-18 years old) from the school population of Gran Canaria, Spain. MVPA, screen-time in front of television, computer, video game console and portable console was assessed in the classroom by fulfilling a standardized questionnaire. Bivariate and multivariate logistic regression analyses adjusted by a set of social-environmental variables were carried out. Forty-six percent of girls (95% CI±2.3%) and 26% of boys (95% CI±2.1%) did not meet the MVPA recommendations for adolescents. Major gender differences were observed in the time devoted to vigorous PA, video games and the total time spent on screen-based activities. Boys who devoted 4 hours•week(-1) or more to total screen-time showed a 64% (OR = 0.61, 95% CI, 0.44-0.86) increased risk of failing to achieve the recommended adolescent MVPA level. Participation in organized physical activities and sports competitions was more strongly associated with MVPA than screen-related behaviors. No single screen-related behavior explained the reduction of MVPA in adolescents. However, the total time accumulated through several screen-related behaviors was negatively associated with MVPA level in boys. This association could be due to lower availability of time for exercise as the time devoted to sedentary screen-time activities increases. Participation in organized physical activities seems to counteract the negative impact of excessive time in front of screens on physical activity.

  9. A design for integration.

    PubMed

    Fenna, D

    1977-09-01

    For nearly two decades, the development of computerized information systems has struggled for acceptable compromises between the unattainable "total system" and the unacceptable separate applications. Integration of related applications is essential if the computer is to be exploited fully, yet relative simplicity is necessary for systems to be implemented in a reasonable time-scale. This paper discusses a system being progressively developed from minimal beginnings but which, from the outset, had a highly flexible and fully integrated system basis. The system is for batch processing, but can accommodate on-line data input; it is similar in its approach to many transaction-processing real-time systems.

  10. New digital capacitive measurement system for blade clearances

    NASA Astrophysics Data System (ADS)

    Moenich, Marcel; Bailleul, Gilles

    This paper presents a totally new concept for blade tip clearance evaluation in turbine engines. The system is able to detect exact 'measurands' even under high temperature and severe conditions such as ionization. The system is based on a heavy duty probe head, a miniaturized thick-film hybrid electronic circuit and a signal processing unit for real time computing. The high frequency individual measurement values are digitally filtered and linearized in real time. The electronics are built in hybrid technology and can therefore be kept extremely small and robust, so that the system can be used on actual flights.

  11. An efficient temporal logic for robotic task planning

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey M.

    1989-01-01

    Computations required for temporal reasoning can be prohibitively expensive if fully general representations are used. Overly simple representations, such as a totally ordered sequence of time points, are inadequate for use in a nonlinear task planning system. A middle ground is identified which is general enough to support a capable nonlinear task planner, but specialized enough that the system can support online task planning in real time. A Temporal Logic System (TLS) was developed during the Intelligent Task Automation (ITA) project to support robotic task planning. TLS is also used within the ITA system to support plan execution, monitoring, and exception handling.

  12. Associations between accelerometer-derived physical activity and regional adiposity in young men and women.

    PubMed

    Smith, H A; Storti, K L; Arena, V C; Kriska, A M; Gabriel, K K Pettee; Sutton-Tyrrell, K; Hames, K C; Conroy, M B

    2013-06-01

    Empirical evidence supports an inverse relationship between physical activity (PA) and adiposity, but studies using detailed measures of both are scarce. The relationship between regional adiposity and accelerometer-derived PA in men and women is described. Cross-sectional analysis included 253 participants from a weight loss study limited to ages 20-45 years and BMI 25-39.9 kg m(-2). PA data were collected with accelerometers and expressed as total accelerometer counts and average amount of time per day accumulated in different intensity levels [sedentary, light-, and moderate-to-vigorous intensity PA (MVPA)]. Accumulation of time spent above 100 counts was expressed as total active time. Computed tomography (CT) was used to measure abdominal adipose tissue (AT). Multivariate linear regression analyses were used to assess the relationship between regional adiposity (dependent variable) and the various PA levels (independent variable), and were executed separately for men and women, adjusting for wear time, age, race, education, and BMI. Among males, light activity was inversely associated with total AT (β = -0.19; P = 0.02) as well as visceral AT (VAT) (β = -0.30; P = 0.03). Among females, sedentary time was positively associated with VAT (β = 0.11; P = 0.04) and total active time was inversely associated with VAT (β = -0.12; P = 0.04). Findings from this study suggest that PA intensity level may influence regional adiposity differently in men and women. Additional research is needed in larger samples to clarify the difference in these associations by sex, create recommendations for the frequency, duration and intensity of PA needed to target fat deposits, and determine if these recommendations should differ by sex. Copyright © 2013 The Obesity Society.

  13. Sparse Representation of Multimodality Sensing Databases for Data Mining and Retrieval

    DTIC Science & Technology

    2015-04-09

    Only report-form metadata survives in this record: a cited publication (Savarese, "Estimating the Aspect Layout of Object Categories," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 19 June 2012) and graduate students supported at 50% FTE (EE: systems PhD candidates Liang Mei, Min Sun, Yu Xiang and Dae Yon Jung).

  14. Rapid Oceanographic Data Gathering: Some Problems in Using Remote Sensing to Determine the Horizontal and Vertical Thermal Distributions in the Northeast Pacific Ocean.

    DTIC Science & Technology

    1981-09-01

  15. Aerial Refueling Process Rescheduling Under Job Related Disruptions

    NASA Technical Reports Server (NTRS)

    Kaplan, Sezgin; Rabadi, Ghaith

    2011-01-01

    The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the refueling completion times for each fighter aircraft (job) on multiple tankers (machines) to minimize the total weighted tardiness. ARSP assumes that the jobs have different release times and due dates. The ARSP operates in a dynamic environment in which unexpected events may occur. In this paper, rescheduling of the aerial refueling process is studied to deal with job-related disruptions such as the arrival of new jobs, the departure of an existing job, large deviations in the release times, and changes in job priorities. In order to maintain stability and avoid excessive computation, a partial schedule repair algorithm is developed and its preliminary results are presented.
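
    A small sketch of the objective and of naive rescheduling is given below: total weighted tardiness is computed from completion times, and a disruption is handled by simply re-dispatching from scratch, which is exactly what a partial repair algorithm tries to avoid doing wholesale. The dispatching rule and job data are illustrative assumptions, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    release: float   # earliest time refueling can start
    duration: float  # refueling time
    due: float
    weight: float

def schedule_on_tankers(jobs, n_tankers):
    """Greedy dispatch: whenever a tanker frees up, start the released job with
    the largest weight/duration ratio (a simple WSPT-style rule)."""
    free_at = [0.0] * n_tankers
    pending = sorted(jobs, key=lambda j: j.release)
    completions = {}
    while pending:
        k = free_at.index(min(free_at))
        t = free_at[k]
        ready = [j for j in pending if j.release <= t] or [pending[0]]
        job = max(ready, key=lambda j: j.weight / j.duration)
        start = max(t, job.release)
        free_at[k] = start + job.duration
        completions[job.name] = free_at[k]
        pending.remove(job)
    return completions

def total_weighted_tardiness(jobs, completions):
    return sum(j.weight * max(0.0, completions[j.name] - j.due) for j in jobs)

jobs = [Job("F1", 0, 10, 15, 3), Job("F2", 2, 8, 20, 1), Job("F3", 5, 12, 20, 2)]
comp = schedule_on_tankers(jobs, n_tankers=2)
print(comp, total_weighted_tardiness(jobs, comp))

# A job-related disruption (e.g., a new arrival) handled naively by rebuilding
# the schedule from scratch; a partial repair algorithm would instead keep most
# of the existing schedule to preserve stability.
jobs.append(Job("F4", 6, 5, 18, 4))
comp = schedule_on_tankers(jobs, n_tankers=2)
print(comp, total_weighted_tardiness(jobs, comp))
```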

  16. Computer program determines gas flow rates in piping systems

    NASA Technical Reports Server (NTRS)

    Franke, R.

    1966-01-01

    Computer program calculates the steady state flow characteristics of an ideal compressible gas in a complex piping system. The program calculates the stagnation and total temperature, static and total pressure, loss factor, and forces on each element in the piping system.

  17. Time-reversed wave mixing in nonlinear optics

    PubMed Central

    Zheng, Yuanlin; Ren, Huaijin; Wan, Wenjie; Chen, Xianfeng

    2013-01-01

    Time-reversal symmetry is important to optics. Optical processes can run in a forward or backward direction through time when such symmetry is preserved. In linear optics, a time-reversed process of laser emission can enable total absorption of coherent light fields inside a lossy optical cavity by time-reversing the original gain medium. Nonlinearity, however, can often destroy such symmetry in nonlinear optics, making it difficult to study time-reversal symmetry with nonlinear optical wave mixings. Here we demonstrate time-reversed wave mixings for optical second harmonic generation (SHG) and optical parametric amplification (OPA) by exploring this well-known but underappreciated symmetry in nonlinear optics. This allows us to observe the annihilation of coherent beams. Our study offers new avenues for flexible control in nonlinear optics and has potential applications in efficient wavelength conversion and all-optical computing. PMID:24247906

  18. 12 CFR Appendix K to Part 1026 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    12 CFR, Banks and Banking (2012-01-01): Bureau of Consumer Financial Protection, Truth in Lending (Regulation Z), Part 1026, Appendix K, Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions.

  19. 12 CFR Appendix K to Part 1026 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    12 CFR, Banks and Banking (2013-01-01): Bureau of Consumer Financial Protection, Truth in Lending (Regulation Z), Part 1026, Appendix K, Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions.

  20. Aligner optimization increases accuracy and decreases compute times in multi-species sequence data.

    PubMed

    Robinson, Kelly M; Hawkins, Aziah S; Santana-Cruz, Ivette; Adkins, Ricky S; Shetty, Amol C; Nagaraj, Sushma; Sadzewicz, Lisa; Tallon, Luke J; Rasko, David A; Fraser, Claire M; Mahurkar, Anup; Silva, Joana C; Dunning Hotopp, Julie C

    2017-09-01

    As sequencing technologies have evolved, the tools to analyze these sequences have made similar advances. However, for multi-species samples, we observed important and adverse differences in alignment specificity and computation time for bwa-mem (Burrows-Wheeler aligner-maximum exact matches) relative to bwa-aln. Therefore, we sought to optimize bwa-mem for alignment of data from multi-species samples in order to reduce alignment time and increase the specificity of alignments. In the multi-species cases examined, there was one majority member (i.e. Plasmodium falciparum or Brugia malayi) and one minority member (i.e. human or the Wolbachia endosymbiont wBm) of the sequence data. Increasing bwa-mem seed length from the default value reduced the number of read pairs from the majority sequence member that incorrectly aligned to the reference genome of the minority sequence member. Combining both source genomes into a single reference genome increased the specificity of mapping, while also reducing the central processing unit (CPU) time. In Plasmodium, at a seed length of 18 nt, 24.1% of reads mapped to the human genome using 1.7±0.1 CPU hours, while 83.6% of reads mapped to the Plasmodium genome using 0.2±0.0 CPU hours (total: 107.7% reads mapping; in 1.9±0.1 CPU hours). In contrast, 97.1% of the reads mapped to a combined Plasmodium-human reference in only 0.7±0.0 CPU hours. Overall, the results suggest that combining all references into a single reference database and using a 23 nt seed length reduces the computational time, while maximizing specificity. Similar results were found for simulated sequence reads from a mock metagenomic data set. We found similar improvements to computation time in a publicly available human-only data set.
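
    A short sketch of the workflow implied above, using placeholder file names: concatenate the two source genomes, index once, and align with a longer minimum seed length via bwa mem -k. Paths, thread count, and read files are assumptions for illustration, not the study's exact commands.

```python
import subprocess

# Placeholder file names: combine both source genomes into one reference so
# that reads compete against all candidate origins in a single alignment pass.
subprocess.run("cat plasmodium.fa human.fa > combined.fa", shell=True, check=True)
subprocess.run(["bwa", "index", "combined.fa"], check=True)

# -k sets the minimum seed length; raising it toward the 23 nt suggested above
# reduces spurious cross-species alignments. -t sets the thread count.
with open("combined.sam", "w") as out:
    subprocess.run(
        ["bwa", "mem", "-k", "23", "-t", "8", "combined.fa",
         "reads_R1.fastq.gz", "reads_R2.fastq.gz"],
        stdout=out, check=True)
```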

  1. Solar radiation measurement project

    NASA Technical Reports Server (NTRS)

    Ioup, J. W.

    1981-01-01

    The Xavier solar radiation measurement project and station are described. Measurements of the total solar radiation on a horizontal surface from an Eppley pyranometer were collected into computer data files. Total radiation in watt hours was converted from ten minute intervals to hourly intervals. Graphs of this total radiation data are included. A computer program in Fortran was written to calculate the total extraterrestrial radiation on a horizontal surface for each day of the month. Educational and social benefits of the project are cited.
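
    For the extraterrestrial part of such a calculation, the usual textbook formula for daily extraterrestrial irradiation on a horizontal surface can be sketched as follows; the latitude and day of year are placeholder inputs, not the program's actual parameters.

```python
import numpy as np

def daily_extraterrestrial_radiation(latitude_deg, day_of_year, Gsc=1367.0):
    """Daily extraterrestrial irradiation on a horizontal surface (J/m^2),
    using the usual textbook declination and sunset-hour-angle expressions."""
    phi = np.radians(latitude_deg)
    n = day_of_year
    decl = np.radians(23.45) * np.sin(2 * np.pi * (284 + n) / 365.0)
    ws = np.arccos(np.clip(-np.tan(phi) * np.tan(decl), -1.0, 1.0))  # sunset hour angle
    E0 = 1.0 + 0.033 * np.cos(2 * np.pi * n / 365.0)                 # eccentricity factor
    H0 = (24 * 3600 / np.pi) * Gsc * E0 * (
        np.cos(phi) * np.cos(decl) * np.sin(ws) + ws * np.sin(phi) * np.sin(decl))
    return H0

# Placeholder site near 30 degrees N latitude, mid-June example day.
H0 = daily_extraterrestrial_radiation(30.0, 172)
print(f"{H0 / 3.6e6:.1f} kWh/m^2 per day")   # convert J/m^2 to kWh/m^2
```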

  2. Optimization of removal function in computer controlled optical surfacing

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Guo, Peiji; Ren, Jianfeng

    2010-10-01

    The technical principle of computer controlled optical surfacing (CCOS) and the common method of optimizing the removal function used in CCOS are introduced in this paper. A new optimization method, time-sharing synthesis of the removal function, is proposed to solve two problems encountered in the planet-motion or translation-rotation modes: a removal function that is far from Gaussian in shape and slow convergence of the removal-function error. A detailed time-sharing synthesis using six removal functions is discussed. For a given region on the workpiece, six positions are selected as the centers of the removal function; the polishing tool, controlled by the executive system of CCOS, revolves around each centre to complete a cycle in proper order. The overall removal function obtained by the time-sharing process is the ratio of the total material removal over the six cycles to the time duration of the six cycles, and it depends on the arrangement and distribution of the six removal functions. Simulations of the synthesized overall removal functions under the two modes of motion, i.e., planet motion and translation-rotation, are performed, from which the optimized combination of tool parameters and the distribution of time-sharing synthesis removal functions are obtained. The evaluation function used during optimization is determined by an approaching factor, defined as the ratio of the material removal within half of the polishing tool coverage area around the polishing center to the total material removal within the full polishing tool coverage area. After optimization, it is found that the removal function obtained by time-sharing synthesis is closer to the ideal Gaussian-type removal function than those obtained by the traditional methods. The time-sharing synthesis method thus provides an efficient way to increase the convergence speed of the surface error in CCOS for the fabrication of aspheric optical surfaces, and to reduce intermediate- and high-frequency errors.
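
    The time-sharing idea can be sketched numerically: the overall removal function is the total material removed by several shifted single-tool functions divided by the total dwell time. The Gaussian-like single-tool profile and the ring of six centres below are illustrative assumptions, not the paper's optimized distribution.

```python
import numpy as np

def single_tool_removal(X, Y, cx, cy, depth=1.0, sigma=2.0):
    """Material removal rate for one cycle centred at (cx, cy); a smooth bump
    stands in for the real planet-motion or translation-rotation profile."""
    return depth * np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * sigma ** 2))

# Grid over the polished region and six centres sharing the dwell time.
x = np.linspace(-10.0, 10.0, 201)
X, Y = np.meshgrid(x, x)
angles = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
centres = [(4.0 * np.cos(a), 4.0 * np.sin(a)) for a in angles]
cycle_time = 10.0                           # seconds per cycle, placeholder

# Overall removal function = total removal over the six cycles / total time.
total_removal = sum(cycle_time * single_tool_removal(X, Y, cx, cy)
                    for cx, cy in centres)
overall = total_removal / (cycle_time * len(centres))
print(f"peak rate {overall.max():.3f}, on-axis rate {overall[100, 100]:.3f}")
```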

  3. Behavior of ionospheric total electron content over Ankara. Final report, 1 January 1975--31 December 1977

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tulunay, Y.K.

    1977-12-31

    Total Electron Content (TEC) was determined between October, 1975 and August, 1976 from measurements of the Faraday rotation of a plane polarized wave transmitted at 140 MHz from the geostationary satellite ATS 6, located at approximately 35 deg E over the equator. The computed results are presented as diurnal variations for single days and monthly means. Maximum daytime TEC values were observed in April (about 20 x 10^16 el/sq m) and minimum in January (about 9 x 10^16 el/sq m); maximum night-time values were observed in January and February (about 3 x 10^16 el/sq m). The response of TEC to the high magnetic activity associated with substorms was found to depend greatly on the time of day when the storm occurred.
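
    A hedged sketch of the first-order conversion commonly used for such measurements, Omega ≈ K·B_parallel·TEC/f² with K ≈ 2.36 × 10^4 in SI units; the rotation angle and effective field below are placeholders, not the ATS 6 data.

```python
import numpy as np

K = 2.36e4   # first-order Faraday constant in SI units (B in tesla, f in Hz)

def tec_from_faraday(rotation_rad, freq_hz, B_parallel_tesla):
    """Invert the first-order relation Omega = K * B_par * TEC / f**2.
    B_parallel is an effective line-of-sight field, usually evaluated near the
    ionospheric pierce point (~400 km altitude)."""
    return rotation_rad * freq_hz**2 / (K * B_parallel_tesla)

# Placeholder numbers (not the ATS 6 measurements): 1.5 full rotations of the
# 140 MHz plane-polarized wave with an effective parallel field of ~40 microtesla.
omega = 1.5 * 2 * np.pi          # observed rotation (rad)
tec = tec_from_faraday(omega, 140e6, 40e-6)
print(f"TEC = {tec:.2e} el/m^2  ({tec / 1e16:.1f} x 10^16 el/m^2)")
```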

  4. [Relationship between employees' management factor of visual display terminal (VDT) work time and 28-item General Health Questionnaire (GHQ-28) at one Japanese IT company's computer worksite].

    PubMed

    Sugimura, Hisamichi; Horiguchi, Itsuko; Shimizu, Takashi; Marui, Eiji

    2007-09-01

    We studied 1365 male workers at a Japanese computer worksite in 2004 to determine the relationship between employees' time management factor of visual display terminal (VDT) work and General Health Questionnaire (GHQ) score. We developed questionnaires concerning age, management factor of VDT work time (total daily VDT work time, duration of continuous work), other work-related conditions (commuting time, job rank, type of job, hours of monthly overtime), lifestyle (smoking, alcohol consumption, exercise, having breakfast, sleeping hours), and the Japanese version of the 28-item General Health Questionnaire (GHQ). Multivariate logistic regression analyses were performed to estimate the odds ratios (ORs) of the high-GHQ groups (>6.0) associated with age and the time management factor of VDT work. Multivariate logistic regression analyses indicated lower ORs for certain groups: workers older than 50 years old had a significantly lower OR than those younger than 30 years old; workers sleeping less than 6 h showed a lower OR than those sleeping more than 6 h. In contrast, significantly higher ORs were shown for workers with continuous work durations of more than 3 h compared with those with less than 1 h, those with more than 25 h/mo of overtime compared with those with less, and those doing VDT work of more than 7.5 h/day compared with those doing less than 4.5 h/day. Male Japanese computer workers' GHQ scores are significantly associated with time management factors of VDT work.

  5. Sedentary behaviors of adults in relation to neighborhood walkability and income.

    PubMed

    Kozo, Justine; Sallis, James F; Conway, Terry L; Kerr, Jacqueline; Cain, Kelli; Saelens, Brian E; Frank, Lawrence D; Owen, Neville

    2012-11-01

    Sedentary (sitting) time is a newly identified risk factor for obesity and chronic diseases, which is behaviorally and physiologically distinct from lack of physical activity. To inform public health approaches to influencing sedentary behaviors, an understanding of correlates is required. Participants were 2,199 adults aged 20-66 years living in King County/Seattle, WA, and Baltimore, MD, regions, recruited from neighborhoods high or low on a "walkability index" (derived from objective built environment indicators) and having high or low median incomes. Cross-sectional associations of walkability and income with total sedentary time (measured by accelerometers and by self-report) and with self-reported time in seven specific sitting-related behaviors were examined. Neighborhood walkability and income were unrelated to measures of total sitting time. Lower neighborhood walkability was significantly associated with more driving time (difference of 18.2 min/day, p < .001) and more self-reported TV viewing (difference of 14.5 min/day, p < .001). Residents of higher income neighborhoods reported more computer/Internet and reading time, and they had more objectively measured sedentary time. Neighborhood walkability was not related to total sedentary time but was related to two specific sedentary behaviors associated with risk for obesity-driving time and TV viewing time. Future research could examine how these prevalent and often prolonged sedentary behaviors mediate relationships between neighborhood walkability and overweight/obesity. Initiatives to reduce chronic disease risk among residents of both higher-and lower-income low-walkable neighborhoods should include a focus on reducing TV viewing time and other sedentary behaviors and enacting policies that can lead to the development or redevelopment of more-walkable neighborhoods. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  6. Total variation-based neutron computed tomography

    NASA Astrophysics Data System (ADS)

    Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick

    2018-05-01

    We perform the neutron computed tomography reconstruction problem via an inverse problem formulation with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to efficiently compute solutions for this problem, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We present the effectiveness of the algorithm in the significantly low-angular sampling case using synthetic test problems as well as data obtained from a high flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles are used.
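
    A compact sketch of split Bregman for the simpler anisotropic TV denoising problem (not the full tomographic reconstruction above) illustrates the error-forgetting structure in which the inner linear solve is done very inexactly, here with a couple of Jacobi sweeps. The parameters and phantom are illustrative assumptions.

```python
import numpy as np

def shrink(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def grad_x(u):  return np.roll(u, -1, axis=1) - u    # forward difference, periodic
def grad_y(u):  return np.roll(u, -1, axis=0) - u
def grad_xT(p): return np.roll(p, 1, axis=1) - p     # adjoint of grad_x
def grad_yT(p): return np.roll(p, 1, axis=0) - p

def tv_denoise_split_bregman(f, mu=10.0, lam=5.0, n_outer=30, n_jacobi=2):
    """Anisotropic TV denoising via split Bregman. The u-subproblem is solved
    very inexactly (a couple of Jacobi sweeps), relying on the method's
    error-forgetting behaviour, in the spirit of the approach described above."""
    u = f.copy()
    dx, dy = np.zeros_like(f), np.zeros_like(f)
    bx, by = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_outer):
        rhs = mu * f + lam * (grad_xT(dx - bx) + grad_yT(dy - by))
        for _ in range(n_jacobi):                    # inexact linear solve
            nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                  np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u = (rhs + lam * nb) / (mu + 4.0 * lam)
        gx, gy = grad_x(u), grad_y(u)
        dx, dy = shrink(gx + bx, 1.0 / lam), shrink(gy + by, 1.0 / lam)
        bx, by = bx + gx - dx, by + gy - dy
    return u

# Noisy piecewise-constant phantom: TV suppresses the noise but keeps the edge.
rng = np.random.default_rng(1)
truth = np.zeros((64, 64)); truth[16:48, 16:48] = 1.0
noisy = truth + 0.2 * rng.standard_normal(truth.shape)
denoised = tv_denoise_split_bregman(noisy)
print(np.abs(noisy - truth).mean(), np.abs(denoised - truth).mean())
```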

  7. Autonomous manipulation on a robot: Summary of manipulator software functions

    NASA Technical Reports Server (NTRS)

    Lewis, R. A.

    1974-01-01

    A six degree-of-freedom computer-controlled manipulator is examined, and the relationships between the arm's joint variables and 3-space are derived. Arm trajectories using sequences of third-degree polynomials to describe the time history of each joint variable are presented and two approaches to the avoidance of obstacles are given. The equations of motion for the arm are derived and then decomposed into time-dependent factors and time-independent coefficients. Several new and simplifying relationships among the coefficients are proven. Two sample trajectories are analyzed in detail for purposes of determining the most important contributions to total force in order that relatively simple approximations to the equations of motion can be used.
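
    The per-joint third-degree polynomial time history can be sketched directly; the joint angles and duration below are placeholders, and the zero-endpoint-velocity form is one common choice rather than the report's exact trajectory generator.

```python
import numpy as np

def cubic_joint_trajectory(q0, qf, T, t):
    """Third-degree polynomial from q0 to qf over duration T with zero start
    and end joint velocities, one per joint variable."""
    s = np.clip(t / T, 0.0, 1.0)
    q = q0 + (qf - q0) * (3 * s**2 - 2 * s**3)
    qdot = (qf - q0) * (6 * s - 6 * s**2) / T
    return q, qdot

# Six joints moving simultaneously between two arm configurations in 2 seconds.
q_start = np.radians([0, 30, -45, 0, 60, 0])
q_goal = np.radians([90, 10, -90, 45, 20, 30])
for t in np.linspace(0.0, 2.0, 5):
    q, qd = cubic_joint_trajectory(q_start, q_goal, 2.0, t)
    print(np.round(np.degrees(q), 1))
```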

  8. Use of a novel extended blink test to evaluate the performance of two polyvinylpyrrolidone-containing, silicone hydrogel contact lenses

    PubMed Central

    Schafer, Jeffery; Reindel, William; Steffen, Robert; Mosehauer, Gary; Chinn, Joseph

    2018-01-01

    Background Sustained digital display viewing reduces eye blink frequency and tear film stability. To retain water and preserve a smooth optical surface, contact lens manufacturers have integrated the humectant polyvinylpyrrolidone (PVP) into silicone hydrogel contact lenses. In this study, extended blink time (EBT) was used to assess visual stability over a prolonged blink interval of two PVP-containing silicone hydrogel lenses, samfilcon A (SAM) and senofilcon A (SEN). Materials and methods This randomized, bilateral, masked, crossover study assessed lens performance in ten subjects after 16 hours of wear. EBT, ie, the time elapsed between cessation of blinking and blur-out of a threshold letter on the acuity chart, was measured. At the end of the wear period, subjects reported duration of computer use and rated visual quality (VQ) and comfort while wearing the assigned lens, and the investigator evaluated lens surface wetting characteristics. Each lens was removed and immediately weighed to determine total water content. Results EBTs were 10.42 seconds for SAM and 8.04 seconds for SEN (p = 0.015). Subjective ratings of VQ after 16 hours of wear were 84.6 for SAM and 74.4 for SEN (p = 0.049). Comfort ratings were 85.9 for SAM and 80.2 for SEN (p > 0.05). Median times of computer use were 6–8 hours for both lens types. Post blink, 70.0% of SAM and 30.0% of SEN lenses were completely wet (p = 0.021). Total water content after wear was 43.7% for SAM and 35.5% for SEN (p < 0.001). Conclusion EBT measurement indicated more stable vision with the PVP-containing SAM polymer compared with the PVP-containing SEN polymer. The SAM polymer also demonstrated better surface wetting and maintained higher water content after a prolonged period of wear. EBT can be valuable in assessing vision stability of patients after hours of computer use. PMID:29765195

  9. GEM: a dynamic tracking model for mesoscale eddies in the ocean

    NASA Astrophysics Data System (ADS)

    Li, Qiu-Yang; Sun, Liang; Lin, Sheng-Fu

    2016-12-01

    The Genealogical Evolution Model (GEM) presented here is an efficient logical model used to track dynamic evolution of mesoscale eddies in the ocean. It can distinguish between different dynamic processes (e.g., merging and splitting) within a dynamic evolution pattern, which is difficult to accomplish using other tracking methods. To this end, the GEM first uses a two-dimensional (2-D) similarity vector (i.e., a pair of ratios of overlap area between two eddies to the area of each eddy) rather than a scalar to measure the similarity between eddies, which effectively solves the "missing eddy" problem (temporarily lost eddy in tracking). Second, for tracking when an eddy splits, the GEM uses both "parent" (the original eddy) and "child" (eddy split from parent) and the dynamic processes are described as the birth and death of different generations. Additionally, a new look-ahead approach with selection rules effectively simplifies computation and recording. All of the computational steps are linear and do not include iteration. Given the pixel number of the target region L, the maximum number of eddies M, the number N of look-ahead time steps, and the total number of time steps T, the total computer time is O(LM(N + 1)T). The tracking of each eddy is very smooth because we require that the snapshots of each eddy on adjacent days overlap one another. Although eddy splitting or merging is ubiquitous in the ocean, they have different geographic distributions in the North Pacific Ocean. Both the merging and splitting rates of the eddies are high, especially at the western boundary, in currents and in "eddy deserts". The GEM is useful not only for satellite-based observational data, but also for numerical simulation outputs. It is potentially useful for studying dynamic processes in other related fields, e.g., the dynamics of cyclones in meteorology.
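
    The 2-D similarity vector is simply the overlap area divided by each eddy's own area, returned as a pair rather than a single scalar; the synthetic pixel masks below are placeholders, not the model's data structures.

```python
import numpy as np

def similarity_vector(mask_a, mask_b):
    """GEM-style 2-D similarity: overlap area divided by the area of each eddy,
    returned as a pair so that merging/splitting remain distinguishable."""
    overlap = np.logical_and(mask_a, mask_b).sum()
    return overlap / mask_a.sum(), overlap / mask_b.sum()

# Two synthetic eddy snapshots on adjacent days (boolean pixel masks).
yy, xx = np.mgrid[0:100, 0:100]
eddy_day1 = (xx - 40) ** 2 + (yy - 50) ** 2 < 15 ** 2
eddy_day2 = (xx - 46) ** 2 + (yy - 52) ** 2 < 17 ** 2    # drifted and grown
r1, r2 = similarity_vector(eddy_day1, eddy_day2)
print(f"overlap/area1 = {r1:.2f}, overlap/area2 = {r2:.2f}")
```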

  10. Variability of differences between two approaches for determining ground-water discharge and pumpage, including effects of time trends, Lower Arkansas River Basin, southeastern Colorado, 1998-2002

    USGS Publications Warehouse

    Troutman, Brent M.; Edelmann, Patrick; Dash, Russell G.

    2005-01-01

    In the mid-1990s, the Colorado Division of Water Resources (CDWR) adopted rules governing measurement of tributary ground-water pumpage for the Arkansas River Basin. The rules allowed ground-water pumpage to be determined using one of two approaches: power conversion coefficient (PCC) or totalizing flowmeters (TFM). In addition, the rules allowed a PCC to be applied to the electrical power usage up to 4 years in the future to estimate ground-water pumpage. As a result of concerns about potential errors in applying the PCC approach forward in time, a study was done by the U.S. Geological Survey, in cooperation with CDWR and Colorado Water Conservation Board, to evaluate the variability in differences in pumpage between the two approaches, including the effects of time trends. This report compared measured ground-water pumpage using TFMs to computed ground-water pumpage using PCCs by developing statistical models of relations between explanatory variables, such as site, time, and pumping water level, and dependent variables, which are based on discharge, PCC, and pumpage. When differences in pumpage (diffP) were computed using PCC measurements and power consumption for the same year (1998-2002), the median diffP, depending on the year, ranged from +0.1 to -2.9 percent; the median diffP for the entire period was -1.5 percent. However, when diffP was computed using PCC measurements applied to the next year's power consumption, the median diffP was -0.3 percent; and when PCC measurements were applied 2, 3, or 4 years into the future, median diffPs were +1.8 percent for a 2-year forward lag and +5.3 percent for a 4-year forward lag, indicating that pumpage computed with the PCC approach, as generally applied under the ground-water pumpage measurement rules by CDWR, tended to overestimate pumpage as compared to pumpage using TFMs when PCC measurement was applied to future years of measured power consumption. Analyses were done to better understand the causes of the time trend; an estimate of the overall trend with time (uncorrected for pumping water-level changes) yielded a trend of about 2.2 percent per lag year for diffP. A separate analysis that incorporated a surface-water diversion term in the statistical model rendered the time-trend term insignificant, indicating that the time trend in the models served as a surrogate for other variables, some of which reflect underlying hydrologic conditions. A more precise explanation of the potential causes of the time trend was not obtained with the available data. However, the model results with the surface-water diversion term indicate that much of the trend of 2.2 percent per lag year in diffP resulted from applying a PCC to estimate pumpage under hydrologic conditions different from those under which the PCC was measured. Although there is no evidence to conclude that the upward time trend determined in the data for this 5-year period would hold in the future, historical static ground-water levels in the study area generally have exhibited small variations over multidecadal time scales. Therefore, the approximately 2 percent per lag year trend determined in these data is expected to be a reasonable guideline for estimating potential errors in the PCC approach resulting from temporally varying hydrologic conditions between time of PCC measurement and pumpage estimation. Comparisons also were made between total, or aggregated, pumpage for a network of wells as computed by the PCC approach and the TFM approach.
For 100 wells and a lag of 4 years between PCC measurement and pumpage estimation, there was a 95-percent probability that the difference between total network pumpage measured by the PCC approach and that measured using a TFM would be between 5.2 and 14.4 percent. These estimates were based on a bias of 2.2 percent per lag year estimated for the period 1998-2002 during which hydrologic conditions were known to have changed. Using the same assumptions, the estimated d

  11. Automatic Modelling of Rubble Mound Breakwaters from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Bueno, M.; Díaz-Vilariño, L.; González-Jorge, H.; Martínez-Sánchez, J.; Arias, P.

    2015-08-01

    Rubble mound breakwater maintenance is critical to the protection of beaches and ports. LiDAR systems provide accurate point clouds from the emerged part of the structure that can be modelled to make it more useful and easy to handle. This work introduces a methodology for the automatic modelling of breakwaters with armour units of cube shape. The algorithm is divided into three main steps: normal vector computation, plane segmentation, and cube reconstruction. Plane segmentation uses the normal orientation of the points and the edge length of the cube. Cube reconstruction uses the intersection of three perpendicular planes and the edge length. Three point clouds cropped from the main point cloud of the structure are used for the tests. The number of cubes detected is around 56% of the total physical cubes for two of the point clouds and 32% for the third. Accuracy assessment is done by comparison with manually drawn cubes, calculating the differences between the vertexes; these differences range between 6.4 cm and 15 cm. Computing time ranges between 578.5 s and 8018.2 s. The computing time increases with the number of cubes and the requirements of collision detection.
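
    Cube reconstruction from three mutually perpendicular segmented faces reduces to intersecting three planes, i.e., solving a 3 × 3 linear system; the normals and offsets below are placeholder values, not output of the segmentation step.

```python
import numpy as np

def plane_intersection(normals, distances):
    """Vertex shared by three planes n_i . x = d_i (e.g., three roughly
    perpendicular cube faces recovered from the segmented point cloud)."""
    return np.linalg.solve(np.asarray(normals), np.asarray(distances))

# Three near-perpendicular unit normals and plane offsets (placeholders).
normals = [[1.0, 0.02, -0.01],
           [-0.03, 1.0, 0.02],
           [0.01, -0.02, 1.0]]
distances = [2.0, 3.5, 1.2]          # n . x = d for each segmented face
vertex = plane_intersection(normals, distances)
print(vertex)

# With the cube edge length known, the remaining vertices follow by stepping
# along the (orthonormalised) face normals from this corner.
```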

  12. Gray: a ray tracing-based Monte Carlo simulator for PET.

    PubMed

    Freese, David L; Olcott, Peter D; Buss, Samuel R; Levin, Craig S

    2018-05-21

    Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a [Formula: see text] speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within [Formula: see text]% when accounting for differences in peak NECR. We also estimate the peak NECR to be [Formula: see text] kcps, or within [Formula: see text]% of published experimental data. The activity concentration of the peak is also estimated within 1.3%.

  13. A 4DCT imaging-based breathing lung model with relative hysteresis

    PubMed Central

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long

    2016-01-01

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce a smoothly deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. PMID:28260811
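    The interpolation idea, i.e., producing a smooth surface deformation between a small number of 4DCT time points, can be sketched as a spline over registered vertex positions. The example below is a hedged toy version under that assumption (the time values, mesh size, and use of a periodic cubic spline are illustrative, not the authors' scheme).

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Toy sketch: interpolate registered airway-surface vertex positions between
    # 13 4DCT time points so a CFD mesh can be deformed continuously in time.
    n_frames, n_vertices = 13, 4
    t_frames = np.linspace(0.0, 4.8, n_frames)               # scan times [s], illustrative
    rng = np.random.default_rng(7)
    vertices = rng.normal(size=(n_frames, n_vertices, 3))    # (time, vertex, xyz)
    vertices[-1] = vertices[0]                               # close the breathing cycle

    # One periodic spline per vertex coordinate, fit along the time axis.
    spline = CubicSpline(t_frames, vertices, axis=0, bc_type="periodic")
    positions = spline(2.35)                                 # deformed positions at t = 2.35 s
    print(positions.shape)                                   # (4, 3)
    ```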

  14. Proteus three-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Bui, Trong T.

    1993-01-01

    A computer code called Proteus 3D was developed to solve the three-dimensional, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This User's Guide describes the program's features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
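    To make the ADI time-marching idea concrete, the sketch below applies a Peaceman-Rachford ADI step to the 2-D heat equation: each step is implicit in one coordinate direction at a time, so only one-dimensional systems are solved. This is a generic illustration of the approach, not the Proteus solver or its fully coupled Navier-Stokes formulation.

    ```python
    import numpy as np

    # Peaceman-Rachford ADI step for u_t = alpha * (u_xx + u_yy) on an n x n interior
    # grid with homogeneous Dirichlet walls.
    n, alpha, h, dt = 20, 1.0, 1.0 / 21, 1e-3
    r = alpha * dt / h**2

    # 1-D second-difference operator on interior nodes.
    D = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    I = np.eye(n)
    A_impl = I - 0.5 * r * D      # implicit operator for the swept direction
    A_expl = I + 0.5 * r * D      # explicit operator for the other direction

    def adi_step(u):
        # Sweep 1: implicit in x (axis 0), explicit in y (axis 1).
        u_half = np.linalg.solve(A_impl, (A_expl @ u.T).T)
        # Sweep 2: implicit in y, explicit in x.
        return np.linalg.solve(A_impl, (A_expl @ u_half).T).T

    u = np.zeros((n, n))
    u[n // 2, n // 2] = 1.0       # hypothetical initial hot spot
    for _ in range(100):
        u = adi_step(u)
    print("peak temperature after 100 steps:", u.max())
    ```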

  15. A 4DCT imaging-based breathing lung model with relative hysteresis

    NASA Astrophysics Data System (ADS)

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long

    2016-12-01

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry.

  16. Predictability Experiments With the Navy Operational Global Atmospheric Prediction System

    NASA Astrophysics Data System (ADS)

    Reynolds, C. A.; Gelaro, R.; Rosmond, T. E.

    2003-12-01

    There are several areas of research in numerical weather prediction and atmospheric predictability, such as targeted observations and ensemble perturbation generation, where it is desirable to combine information about the uncertainty of the initial state with information about potential rapid perturbation growth. Singular vectors (SVs) provide a framework to accomplish this task in a mathematically rigorous and computationally feasible manner. In this study, SVs are calculated using the tangent and adjoint models of the Navy Operational Global Atmospheric Prediction System (NOGAPS). The analysis error variance information produced by the NRL Atmospheric Variational Data Assimilation System is used as the initial-time SV norm. These VAR SVs are compared to SVs for which total energy is used as both the initial- and final-time norm (TE SVs). The incorporation of analysis error variance information has a significant impact on the structure and location of the SVs. This in turn has a significant impact on targeted observing applications. The utility and implications of such experiments in assessing the analysis error variance estimates will be explored. Computing support has been provided by the Department of Defense High Performance Computing Center at the Naval Oceanographic Office Major Shared Resource Center at Stennis, Mississippi.
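    In small dimensions, norm-weighted singular vectors reduce to a generalized eigenproblem: with propagator M, final-time norm E, and initial-time norm C, the fastest-growing perturbations satisfy M^T E M x = lambda C x. The sketch below shows this for random toy matrices; it is a schematic of the mathematics only (all matrices are assumptions), not NOGAPS or its adjoint.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    # Toy norm-weighted singular-vector calculation via a generalized eigenproblem.
    rng = np.random.default_rng(0)
    n = 50
    M = rng.normal(size=(n, n)) / np.sqrt(n)          # hypothetical tangent-linear propagator
    E = np.eye(n)                                     # final-time (total energy) norm
    L = rng.normal(size=(n, n)) / np.sqrt(n)
    C = L @ L.T + np.eye(n)                           # SPD initial-time norm (e.g. analysis error)

    eigvals, eigvecs = eigh(M.T @ E @ M, C)           # solves M^T E M x = lambda C x
    leading_sv = eigvecs[:, -1]                       # fastest-growing initial perturbation
    print(f"leading singular value: {np.sqrt(eigvals[-1]):.3f}")
    ```

    Operational systems cannot form M explicitly; they obtain the same leading vectors iteratively (e.g. Lanczos) using the tangent-linear and adjoint models as operators.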

  17. Respiratory analysis system and method

    NASA Technical Reports Server (NTRS)

    Liu, F. F. (Inventor)

    1973-01-01

    A system is described for monitoring the respiratory process in which the gas flow rate and the frequency of inspiration and expiration cycles can be determined on a real-time basis. A face mask is provided with one-way inlet and outlet valves, with the gas flow routed through independent flowmeters and through a mass spectrometer. The opening and closing of a valve operates an electrical switch, and the combination of the two switches produces a low-frequency electrical signal representing the respiratory inhalation and exhalation cycles. During the time a switch is operated, the corresponding flowmeter produces electrical pulses representative of the flow rate; these pulses are at a higher frequency than the breathing cycle and are combined with the low-frequency signal. The high-frequency pulses are supplied to a conventional analyzer computer, which also receives temperature and pressure inputs and computes the mass flow rate and totalized mass flow of gas. From the mass spectrometer output, the flow rates of individual gas components are computed separately. The electrical switches drive the up-down inputs of a reversible counter, so the respective up and down cycles can be individually monitored and combined for various respiratory measurements.
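    The conversion from pulse counts plus temperature and pressure to mass flow is a straightforward application of the ideal gas law. The sketch below is a hedged illustration of that kind of computation; the calibration factor, gas constant choice, and sample values are assumptions, not taken from the patent.

    ```python
    # Hypothetical analyzer-style calculation: pulses -> volumetric flow -> mass flow,
    # then integration into totalized mass flow.
    R_SPECIFIC_AIR = 287.05          # J/(kg*K), specific gas constant for air (assumed gas)
    PULSES_PER_LITRE = 100.0         # hypothetical flowmeter calibration factor

    def mass_flow(pulse_count, interval_s, pressure_pa, temperature_k):
        volume_m3 = (pulse_count / PULSES_PER_LITRE) * 1e-3       # pulses -> litres -> m^3
        volumetric_rate = volume_m3 / interval_s                  # m^3/s
        density = pressure_pa / (R_SPECIFIC_AIR * temperature_k)  # ideal-gas density, kg/m^3
        return density * volumetric_rate                          # kg/s

    # Totalize over two one-second samples (counts, interval, pressure, temperature).
    samples = [(120, 1.0, 101_325.0, 310.0), (95, 1.0, 101_325.0, 310.0)]
    total_mass = sum(mass_flow(*s) * s[1] for s in samples)       # kg
    print(f"totalized mass flow: {total_mass:.6f} kg")
    ```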

  18. G-Guidance Interface Design for Small Body Mission Simulation

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Carson, John; Phan, Linh

    2008-01-01

    The G-Guidance software implements a guidance and control (G and C) algorithm for small-body, autonomous proximity operations, developed under the Small Body GN and C task at JPL. The software is written in Matlab and interfaces with G-OPT, a JPL-developed optimization package written in C that provides G-Guidance with guaranteed convergence to a solution in a finite computation time with a prescribed accuracy. The resulting program is computationally efficient and is a prototype of an onboard, real-time algorithm for autonomous guidance and control. Two thruster firing schemes are available in G-Guidance, allowing tailoring of the software for specific mission maneuvers. For example, descent, landing, or rendezvous benefit from a thruster firing at the maneuver termination to mitigate velocity errors. Conversely, ascent or separation maneuvers benefit from an immediate firing to avoid potential drift toward a second body. The guidance portion of this software explicitly enforces user-defined control constraints and thruster silence times while minimizing total fuel usage. This program is currently specialized to small-body proximity operations, but the underlying method can be generalized to other applications.
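    The kind of problem the guidance step solves, minimizing total fuel subject to dynamics, control bounds, and thruster silence times, can be written as a small convex program. The sketch below is a minimal stand-in under stated assumptions (a double-integrator model, an inf-norm thrust bound, and an arbitrary silence window, solved with cvxpy), not the G-Guidance/G-OPT formulation.

    ```python
    import cvxpy as cp
    import numpy as np

    # Toy minimum-fuel proximity maneuver with control constraints and a silence window.
    dt, N = 1.0, 30
    A = np.block([[np.eye(3), dt * np.eye(3)],
                  [np.zeros((3, 3)), np.eye(3)]])                 # position/velocity dynamics
    B = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])
    x0 = np.array([30.0, 15.0, -10.0, 0.0, 0.0, 0.0])             # hypothetical initial state
    u_max = 0.5                                                   # hypothetical thrust bound
    silence = range(10, 15)                                       # no firings on these steps

    x = cp.Variable((N + 1, 6))
    u = cp.Variable((N, 3))
    constraints = [x[0] == x0, x[N] == np.zeros(6)]               # arrive at the target, at rest
    for k in range(N):
        constraints += [x[k + 1] == A @ x[k] + B @ u[k],          # dynamics
                        cp.norm(u[k], "inf") <= u_max]            # per-axis control constraint
    constraints += [u[k] == 0 for k in silence]                   # thruster silence times
    fuel = cp.sum(cp.norm(u, 2, axis=1))                          # total fuel proxy
    cp.Problem(cp.Minimize(fuel), constraints).solve()
    print("minimum fuel proxy:", fuel.value)
    ```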

  19. Smart Classroom: Bringing Pervasive Computing into Distance Learning

    NASA Astrophysics Data System (ADS)

    Shi, Yuanchun; Qin, Weijun; Suo, Yue; Xiao, Xin

    In recent years, distance learning has increasingly become one of the most important applications on the internet and is being discussed and studied by various universities, institutes and companies. The Web/Internet provides relatively easy ways to publish hyper-linked multimedia content for a wider audience. Yet, we find that most courseware is simply shifted from textbook to HTML files. However, in most cases the teacher's live instruction is very important for catching the attention and interest of the students. That's why the Real-Time Interactive Virtual Classroom (RTIVC) always plays an indispensable role in distance learning, where teachers and students located in different places can take part in the class synchronously through certain multimedia communication systems and obtain real-time and media-rich interactions using Pervasive Computing technologies [1]. The Classroom 2000 project [2] at GIT has been devoted to the automated capturing of the classroom experience. Likewise, the Smart Classroom project [3] at our institute is focused on Tele-education. Most currently deployed real-time Tele-education systems are desktop-based, in which the teacher's experience is totally different from teaching in a real classroom.

  20. Fast reconstruction of optical properties for complex segmentations in near infrared imaging

    NASA Astrophysics Data System (ADS)

    Jiang, Jingjing; Wolf, Martin; Sánchez Majos, Salvador

    2017-04-01

    The intrinsic ill-posed nature of the inverse problem in near infrared imaging makes the reconstruction of fine details of objects deeply embedded in turbid media challenging, even for the large amounts of data provided by time-resolved cameras. In addition, most reconstruction algorithms for this type of measurement are only suitable for highly symmetric geometries and rely on a linear approximation to the diffusion equation, since a numerical solution of the fully non-linear problem is computationally too expensive. In this paper, we show that a problem of practical interest can be successfully addressed by making efficient use of the totality of the information supplied by time-resolved cameras. We set aside the goal of achieving high spatial resolution for deep structures and focus on the reconstruction of complex arrangements of large regions. We show numerical results based on a combined approach of wavelength-normalized data and prior geometrical information, defining a fully parallelizable problem in arbitrary geometries for time-resolved measurements. Fast reconstructions are obtained using a diffusion approximation and Monte-Carlo simulations, parallelized on a multicore computer and a GPU, respectively.
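    One way to picture how prior geometrical information tames the ill-posed problem is to let a segmentation collapse the voxel-level sensitivity matrix into one unknown per region, leaving a small, well-conditioned least-squares fit. The sketch below illustrates only that generic idea (all sizes, names, and the linear forward model are assumptions, not the authors' algorithm).

    ```python
    import numpy as np

    # Toy region-wise linear reconstruction using a prior segmentation.
    rng = np.random.default_rng(0)
    n_meas, n_vox, n_regions = 200, 5000, 4           # hypothetical problem size
    J = rng.normal(size=(n_meas, n_vox))              # voxel sensitivity (Jacobian) matrix
    labels = rng.integers(0, n_regions, size=n_vox)   # prior segmentation: region id per voxel

    # Aggregate voxel columns into one column per region.
    R = np.zeros((n_vox, n_regions))
    R[np.arange(n_vox), labels] = 1.0
    J_region = J @ R                                  # (n_meas, n_regions)

    # Synthetic measurement residuals and the region-wise least-squares update.
    true_delta_mua = np.array([0.01, 0.0, 0.02, 0.005])
    y = J_region @ true_delta_mua + 1e-3 * rng.normal(size=n_meas)
    delta_mua, *_ = np.linalg.lstsq(J_region, y, rcond=None)
    print("recovered per-region absorption updates:", delta_mua)
    ```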
