Sample records for longer computation time

  1. Estimated activity patterns in British 45 year olds: cross-sectional findings from the 1958 British birth cohort.

    PubMed

    Parsons, T J; Thomas, C; Power, C

    2009-08-01

    To investigate patterns of, and associations between, physical activity at work and in leisure time, television viewing and computer use. 4531 men and 4594 women with complete plausible data, age 44-45 years, participating in the 1958 British birth cohort study. Physical activity, television viewing and computer use (hours/week) were estimated using a self-complete questionnaire and intensity (MET hours/week) derived for physical activity. Relationships were investigated using linear regression and χ² tests. From a target sample of 11,971, 9223 provided information on physical activity, of whom 75% and 47% provided complete and plausible activity data on work and leisure-time activity, respectively. Men and women spent a median of 40.2 and 34.2 h/week, respectively, in work activity, and 8.3 and 5.8 h/week in leisure activity. Half of all participants watched television for ≥2 h/day, and half used a computer for <1 h/day. Longer work hours were not associated with a shorter duration of leisure activity, but were associated with a shorter duration of computer use (men only). In men, higher work MET hours were associated with higher leisure-time MET hours, and shorter durations of television viewing and computer use. Watching more television was related to fewer hours or MET hours of leisure activity, as was longer computer use in men. Longer computer use was related to more hours (or MET hours) in leisure activities in women. Physical activity levels at work and in leisure time in mid-adulthood are low. Television viewing (and computer use in men) may compete with leisure activity for time, whereas longer duration of work hours is less influential. To change active and sedentary behaviours, better understanding of barriers and motivators is needed.

  2. Liminal Spaces and Learning Computing

    ERIC Educational Resources Information Center

    McCartney, Robert; Boustedt, Jonas; Eckerdal, Anna; Mostrom, Jan Erik; Sanders, Kate; Thomas, Lynda; Zander, Carol

    2009-01-01

    "Threshold concepts" are concepts that, among other things, transform the way a student looks at a discipline. Although the term "threshold" might suggest that the transformation occurs at a specific point in time, an "aha" moment, it seems more common (at least in computing) that a longer time period is required.…

  3. Predictors for cecal insertion time: the impact of abdominal visceral fat measured by computed tomography.

    PubMed

    Nagata, Naoyoshi; Sakamoto, Kayo; Arai, Tomohiro; Niikura, Ryota; Shimbo, Takuro; Shinozaki, Masafumi; Noda, Mitsuhiko; Uemura, Naomi

    2014-10-01

    Several factors affect the risk for longer cecal insertion time. The aim of this study was to identify the predictors of longer insertion time and to evaluate the effect of visceral fat measured by CT. This is a retrospective observational study. Outpatients for colorectal cancer screening who underwent colonoscopies and CT were enrolled. Computed tomography was performed in individuals who requested cancer screening and in those with GI bleeding. Information on obesity indices (BMI, visceral adipose tissue, and subcutaneous adipose tissue area), constipation score, history of abdominal surgery, poor preparation, fellow involvement, diverticulosis, patient discomfort, and the amount of sedation used was collected. The cecal insertion rate was 95.2% (899/944), and 899 patients were analyzed. Multiple regression analysis showed that female sex, lower BMI, lower visceral adipose tissue area, lower subcutaneous adipose tissue area, higher constipation score, history of surgery, poor bowel preparation, and fellow involvement were independently associated with longer insertion time. When obesity indices were considered simultaneously, smaller subcutaneous adipose tissue area (p = 0.038), but not lower BMI (p = 0.802) or smaller visceral adipose tissue area (p = 0.856), was associated with longer insertion time; the other aforementioned factors remained associated with longer insertion time. In the subanalysis of normal-weight patients (BMI <25 kg/m²), a smaller subcutaneous adipose tissue area (p = 0.002), but not a lower BMI (p = 0.782), was independently associated with a longer insertion time. Longer insertion time had a positive correlation with a higher patient discomfort score (ρ = 0.51, p < 0.001) and a greater amount of midazolam use (ρ = 0.32, p < 0.001). This single-center retrospective study includes a potential selection bias. In addition to BMI and intra-abdominal fat, female sex, constipation, history of abdominal surgery, poor preparation, and fellow involvement were predictors of longer cecal insertion time. Among the obesity indices, high subcutaneous fat accumulation was the best predictive factor for easier passage of the colonoscope, even when body weight was normal.

  4. The impact of e-prescribing on prescriber and staff time in ambulatory care clinics: a time motion study.

    PubMed

    Hollingworth, William; Devine, Emily Beth; Hansen, Ryan N; Lawless, Nathan M; Comstock, Bryan A; Wilson-Norton, Jennifer L; Tharp, Kathleen L; Sullivan, Sean D

    2007-01-01

    Electronic prescribing has improved the quality and safety of care. One barrier preventing widespread adoption is the potential detrimental impact on workflow. We used time-motion techniques to compare prescribing times at three ambulatory care sites that used paper-based prescribing, desktop, or laptop e-prescribing. An observer timed all prescriber (n = 27) and staff (n = 42) tasks performed during a 4-hour period. At the sites with optional e-prescribing, >75% of prescription-related events were performed electronically. Prescribers at e-prescribing sites spent less time writing, but time savings were offset by increased computer tasks. After adjusting for site, prescriber and prescription type, e-prescribing tasks took marginally longer than handwritten prescriptions (12.0 seconds; CI -1.6 to 25.6). Nursing staff at the e-prescribing sites spent longer on computer tasks (5.4 minutes/hour; CI 0.0 to 10.7). E-prescribing was not associated with an increase in combined computer and writing time for prescribers. If carefully implemented, e-prescribing will not greatly disrupt workflow.

  5. Theory and computation of optimal low- and medium-thrust transfers

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.

    1994-01-01

    This report presents two numerical methods considered for the computation of fuel-optimal, low-thrust orbit transfers in large numbers of burns. The origins of these methods are observations made with the extremal solutions of transfers in small numbers of burns; there seems to exist a trend such that the longer the time allowed to perform an optimal transfer the less fuel that is used. These longer transfers are obviously of interest since they require a motor of low thrust; however, we also find a trend that the longer the time allowed to perform the optimal transfer the more burns are required to satisfy optimality. Unfortunately, this usually increases the difficulty of computation. Both of the methods described use small-numbered burn solutions to determine solutions in large numbers of burns. One method is a homotopy method that corrects for problems that arise when a solution requires a new burn or coast arc for optimality. The other method is to simply patch together long transfers from smaller ones. An orbit correction problem is solved to develop this method. This method may also lead to a good guidance law for transfer orbits with long transfer times.

  6. Adolescents' technology and face-to-face time use predict objective sleep outcomes.

    PubMed

    Tavernier, Royette; Heissel, Jennifer A; Sladek, Michael R; Grant, Kathryn E; Adam, Emma K

    2017-08-01

    The present study examined both within- and between-person associations between adolescents' time use (technology-based activities and face-to-face interactions with friends and family) and sleep behaviors. We also assessed whether age moderated associations between adolescents' time use with friends and family and sleep. Adolescents wore an actigraph monitor and completed brief evening surveys daily for 3 consecutive days. Adolescents (N=71; mean age=14.50 years old, SD=1.84; 43.7% female) were recruited from 3 public high schools in the Midwest. We assessed 8 technology-based activities (eg, texting, working on a computer), as well as time spent engaged in face-to-face interactions with friends and family, via questions on adolescents' evening surveys. Actigraph monitors assessed 3 sleep behaviors: sleep latency, sleep hours, and sleep efficiency. Hierarchical linear models indicated that texting and working on the computer were associated with shorter sleep, whereas time spent talking on the phone predicted longer sleep. Time spent with friends predicted shorter sleep latencies, while family time predicted longer sleep latencies. Age moderated the association between time spent with friends and sleep efficiency, as well as between family time and sleep efficiency. Specifically, longer time spent interacting with friends was associated with higher sleep efficiency but only among younger adolescents. Furthermore, longer family time was associated with higher sleep efficiency but only for older adolescents. Findings are discussed in terms of the importance of regulating adolescents' technology use and improving opportunities for face-to-face interactions with friends, particularly for younger adolescents. Copyright © 2017 National Sleep Foundation. Published by Elsevier Inc. All rights reserved.

  7. Blading Design for Axial Turbomachines

    DTIC Science & Technology

    1989-05-01

    three-dimensional, viscous computation systems appear to have a long development period ahead, in which fluid shear stress modeling and computation time ... and n directions and T is the shear stress. As a consequence the solution time is longer than for integral methods, dependent largely on the accuracy of ... distributions over airfoils is an adaptation of thin plate deflection theory from stress analysis. At the same time, it minimizes designer effort

  8. Daily computer usage correlated with undergraduate students' musculoskeletal symptoms.

    PubMed

    Chang, Che-Hsu Joe; Amick, Benjamin C; Menendez, Cammie Chaumont; Katz, Jeffrey N; Johnson, Peter W; Robertson, Michelle; Dennerlein, Jack Tigh

    2007-06-01

    A pilot prospective study was performed to examine the relationships between daily computer usage time and musculoskeletal symptoms on undergraduate students. For three separate 1-week study periods distributed over a semester, 27 students reported body part-specific musculoskeletal symptoms three to five times daily. Daily computer usage time for the 24-hr period preceding each symptom report was calculated from computer input device activities measured directly by software loaded on each participant's primary computer. General Estimating Equation models tested the relationships between daily computer usage and symptom reporting. Daily computer usage longer than 3 hr was significantly associated with an odds ratio 1.50 (1.01-2.25) of reporting symptoms. Odds of reporting symptoms also increased with quartiles of daily exposure. These data suggest a potential dose-response relationship between daily computer usage time and musculoskeletal symptoms.

  9. An investigation of several numerical procedures for time-asymptotic compressible Navier-Stokes solutions

    NASA Technical Reports Server (NTRS)

    Rudy, D. H.; Morris, D. J.; Blanchard, D. K.; Cooke, C. H.; Rubin, S. G.

    1975-01-01

    The status of an investigation of four numerical techniques for the time-dependent compressible Navier-Stokes equations is presented. Results for free shear layer calculations in the Reynolds number range from 1000 to 81000 indicate that a sequential alternating-direction implicit (ADI) finite-difference procedure requires longer computing times to reach steady state than a low-storage hopscotch finite-difference procedure. A finite-element method with cubic approximating functions was found to require excessive computer storage and computation times. A fourth method, an alternating-direction cubic spline technique which is still being tested, is also described.
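
    As a rough illustration of the hopscotch idea referred to above, the sketch below applies the odd-even hopscotch scheme to a 1D heat equation with fixed ends: an explicit pass over half of the interior points, followed by a locally implicit pass over the rest, which remains explicit because both neighbours are already at the new time level. This is a generic, assumed example (the mesh ratio r and grid size are arbitrary), not the Navier-Stokes code evaluated in the report.

      import numpy as np

      def hopscotch_step(u, r, n):
          """One odd-even hopscotch step for u_t = u_xx with r = dt/dx^2 and fixed ends.
          Pass 1: explicit FTCS update at interior points with (i + n) even.
          Pass 2: backward-Euler update at the remaining points; it is explicit here
          because both neighbours have already been advanced in pass 1."""
          new = u.copy()
          idx = np.arange(1, len(u) - 1)
          first = idx[(idx + n) % 2 == 0]
          second = idx[(idx + n) % 2 == 1]
          new[first] = u[first] + r * (u[first + 1] - 2 * u[first] + u[first - 1])
          new[second] = (u[second] + r * (new[second + 1] + new[second - 1])) / (1 + 2 * r)
          return new

      # usage: relax an initial spike toward steady state with a mesh ratio
      # far above the explicit FTCS stability limit of 0.5
      u = np.zeros(101)
      u[50] = 1.0
      r = 2.0
      for n in range(2000):
          u = hopscotch_step(u, r, n)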

  10. The influence of commenting validity, placement, and style on perceptions of computer code trustworthiness: A heuristic-systematic processing approach.

    PubMed

    Alarcon, Gene M; Gamble, Rose F; Ryan, Tyler J; Walter, Charles; Jessup, Sarah A; Wood, David W; Capiola, August

    2018-07-01

    Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Symplectic molecular dynamics simulations on specially designed parallel computers.

    PubMed

    Borstnik, Urban; Janezic, Dusanka

    2005-01-01

    We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches means that less time is required and fewer steps are needed and so enables fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
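
    A minimal one-particle sketch of the split-integration idea described above: the stiff harmonic motion is propagated analytically (a rotation in phase space), while a slow anharmonic force is applied as a numerical kick, allowing a time step larger than an explicit integrator would tolerate. The frequency, the quartic perturbation, and the step size are illustrative assumptions; this is not the SISM implementation of the paper.

      import numpy as np

      def harmonic_propagate(x, v, omega, dt):
          """Exact (analytic) evolution under the harmonic part: rotation in phase space."""
          c, s = np.cos(omega * dt), np.sin(omega * dt)
          return c * x + (s / omega) * v, -omega * s * x + c * v

      def split_step(x, v, omega, soft_force, dt):
          """Analytic harmonic half-steps around a numerical kick from the slow force."""
          x, v = harmonic_propagate(x, v, omega, dt / 2)
          v += dt * soft_force(x)
          x, v = harmonic_propagate(x, v, omega, dt / 2)
          return x, v

      # toy usage: stiff harmonic 'bond' (omega) plus a weak quartic perturbation
      omega, eps = 50.0, 0.1
      soft = lambda x: -4.0 * eps * x**3
      x, v = 1.0, 0.0
      dt = 0.05   # above the 2/omega limit of an explicit integrator, yet the stiff part stays exact
      for _ in range(1000):
          x, v = split_step(x, v, omega, soft, dt)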

  12. Boundary conditions for simulating large SAW devices using ANSYS.

    PubMed

    Peng, Dasong; Yu, Fengqi; Hu, Jian; Li, Peng

    2010-08-01

    In this report, we propose improved substrate left and right boundary conditions for simulating SAW devices using ANSYS. Compared with the previous methods, the proposed method can greatly reduce computation time. Furthermore, the longer the distance from the first reflector to the last one, the more computation time can be reduced. To verify the proposed method, a design example is presented with device center frequency 971.14 MHz.

  13. Efficient computation of the Grünwald-Letnikov fractional diffusion derivative using adaptive time step memory

    NASA Astrophysics Data System (ADS)

    MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.

    2015-09-01

    Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
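
    A minimal sketch of the Grünwald-Letnikov sum and of an adaptive-memory variant in the spirit described above: recent history is used point by point, while older history is sampled at progressively longer strides, with the weights of skipped points lumped onto the sampled neighbour. The stride-doubling rule and the test function are assumptions for illustration, not the exact weighting scheme of the paper.

      import numpy as np

      def gl_weights(alpha, n):
          """Grünwald-Letnikov weights w_j = (-1)^j * C(alpha, j), via recurrence."""
          w = np.empty(n + 1)
          w[0] = 1.0
          for j in range(1, n + 1):
              w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
          return w

      def gl_derivative_full(f_hist, alpha, h):
          """Full-memory GL derivative at the latest time: h^-alpha * sum_j w_j f(t_n - j h)."""
          n = len(f_hist) - 1
          w = gl_weights(alpha, n)
          return h ** (-alpha) * np.dot(w, f_hist[::-1])

      def gl_derivative_adaptive(f_hist, alpha, h, base=10):
          """Adaptive-memory sketch: lump the weights of each block of old history
          onto one sampled point, with block length doubling further back in time."""
          n = len(f_hist) - 1
          w = gl_weights(alpha, n)
          total, j, stride = 0.0, 0, 1
          while j <= n:
              block_end = min(j + stride, n + 1)
              total += w[j:block_end].sum() * f_hist[n - j]
              j = block_end
              if j >= base * stride:
                  stride *= 2
          return h ** (-alpha) * total

      # usage: half-order derivative of f(t) = t^2 evaluated at t = 1
      h, alpha = 1e-3, 0.5
      t = np.arange(0.0, 1.0 + h, h)
      f = t ** 2
      print(gl_derivative_full(f, alpha, h), gl_derivative_adaptive(f, alpha, h))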

  14. Advantages of Parallel Processing and the Effects of Communications Time

    NASA Technical Reports Server (NTRS)

    Eddy, Wesley M.; Allman, Mark

    2000-01-01

    Many computing tasks involve heavy mathematical calculations, or analyzing large amounts of data. These operations can take a long time to complete using only one computer. Networks such as the Internet provide many computers with the ability to communicate with each other. Parallel or distributed computing takes advantage of these networked computers by arranging them to work together on a problem, thereby reducing the time needed to obtain the solution. The drawback to using a network of computers to solve a problem is the time wasted in communicating between the various hosts. The application of distributed computing techniques to a space environment or to use over a satellite network would therefore be limited by the amount of time needed to send data across the network, which would typically take much longer than on a terrestrial network. This experiment shows how much faster a large job can be performed by adding more computers to the task, what role communications time plays in the total execution time, and the impact a long-delay network has on a distributed computing system.
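
    The trade-off described above can be made concrete with a toy timing model: a fixed serial fraction, a perfectly divisible parallel part, and a communication cost that grows with the number of hosts. All numbers (serial fraction, per-host communication cost) are hypothetical; the point is only that a long-delay network shifts the optimal host count downward and caps the achievable speedup.

      def run_time(work_seconds, n_hosts, per_host_comm_seconds, serial_fraction=0.05):
          """Toy model of distributed run time: serial part + divided parallel part
          + a communication cost proportional to the number of hosts."""
          serial = serial_fraction * work_seconds
          parallel = (1.0 - serial_fraction) * work_seconds / n_hosts
          comm = per_host_comm_seconds * n_hosts
          return serial + parallel + comm

      work = 3600.0  # one hour of computation on a single machine
      for comm in (0.5, 30.0):  # LAN-like vs satellite-like per-host communication cost
          best = min(range(1, 65), key=lambda n: run_time(work, n, comm))
          print(f"comm={comm:5.1f}s  best host count={best:2d}  "
                f"speedup={work / run_time(work, best, comm):4.1f}x")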

  15. Qubits and quantum Hamiltonian computing performances for operating a digital Boolean 1/2-adder

    NASA Astrophysics Data System (ADS)

    Dridi, Ghassen; Faizy Namarvar, Omid; Joachim, Christian

    2018-04-01

    Quantum Boolean (1 + 1) digits 1/2-adders are designed with 3 qubits for the quantum computing (Qubits) and 4 quantum states for the quantum Hamiltonian computing (QHC) approaches. Detailed analytical solutions are provided to analyse the time operation of those different 1/2-adder gates. QHC is more robust to noise than Qubits and requires about the same amount of energy for running its 1/2-adder logical operations. QHC is faster in time than Qubits but its logical output measurement takes longer.

  16. Dynamic-Data Driven Modeling of Uncertainties and 3D Effects of Porous Shape Memory Alloys

    DTIC Science & Technology

    2014-02-03

    takes longer since cooling is required. In fact, five to ten times longer is common. Porous SMAs using an appropriately cold liquid is one of the ... deploying solar panels, space station component joining, vehicular docking, and numerous Mars rover components. On airplanes or drones, jet engine ... Presho, G. Li. Generalized multiscale finite element methods. Nonlinear elliptic equations, Communication in Computational Physics, 15 (2014), pp

  17. Direct Measurements of Smartphone Screen-Time: Relationships with Demographics and Sleep.

    PubMed

    Christensen, Matthew A; Bettencourt, Laura; Kaye, Leanne; Moturu, Sai T; Nguyen, Kaylin T; Olgin, Jeffrey E; Pletcher, Mark J; Marcus, Gregory M

    2016-01-01

    Smartphones are increasingly integrated into everyday life, but frequency of use has not yet been objectively measured and compared to demographics, health information, and in particular, sleep quality. The aim of this study was to characterize smartphone use by measuring screen-time directly, determine factors that are associated with increased screen-time, and to test the hypothesis that increased screen-time is associated with poor sleep. We performed a cross-sectional analysis in a subset of 653 participants enrolled in the Health eHeart Study, an internet-based longitudinal cohort study open to any interested adult (≥ 18 years). Smartphone screen-time (the number of minutes in each hour the screen was on) was measured continuously via smartphone application. For each participant, total and average screen-time were computed over 30-day windows. Average screen-time specifically during self-reported bedtime hours and sleeping period was also computed. Demographics, medical information, and sleep habits (Pittsburgh Sleep Quality Index-PSQI) were obtained by survey. Linear regression was used to obtain effect estimates. Total screen-time over 30 days was a median 38.4 hours (IQR 21.4 to 61.3) and average screen-time over 30 days was a median 3.7 minutes per hour (IQR 2.2 to 5.5). Younger age, self-reported race/ethnicity of Black and "Other" were associated with longer average screen-time after adjustment for potential confounders. Longer average screen-time was associated with shorter sleep duration and worse sleep-efficiency. Longer average screen-times during bedtime and the sleeping period were associated with poor sleep quality, decreased sleep efficiency, and longer sleep onset latency. These findings on actual smartphone screen-time build upon prior work based on self-report and confirm that adults spend a substantial amount of time using their smartphones. Screen-time differs across age and race, but is similar across socio-economic strata suggesting that cultural factors may drive smartphone use. Screen-time is associated with poor sleep. These findings cannot support conclusions on causation. Effect-cause remains a possibility: poor sleep may lead to increased screen-time. However, exposure to smartphone screens, particularly around bedtime, may negatively impact sleep.

  18. [Sedentary behaviour of 13-year-olds and its association with selected health behaviours, parenting practices and body mass].

    PubMed

    Jodkowska, Maria; Tabak, Izabela; Oblacińska, Anna; Stalmach, Magdalena

    2013-01-01

    1. To estimate the time spent in sedentary behaviour (watching TV, using the computer, doing homework). 2. To assess the link between the total time spent watching TV, using the computer and doing homework and dietary habits, physical activity, parental practices and body mass. A cross-sectional study was conducted in Poland in 2008 among 13-year-olds (n=600). They self-reported their time spent on TV viewing, computer use and homework. Their dietary behaviours, physical activity (MVPA) and parenting practices were also self-reported. Height and weight were measured by school nurses. Descriptive statistics and correlation were used in this analysis. The mean time spent watching television on school days was 2.3 hours for girls and 2.2 hours for boys. Boys spent significantly more time using the computer than girls (1.8 vs 1.5 hours), while girls spent longer doing homework (1.7 vs 1.3 hours). Mean screen time was about 4 hours on school days and about 6 hours at weekends, and was significantly longer for boys on weekdays. Screen time was positively associated with intake of sweets, chips, soft drinks and "fast food" and with eating meals while watching TV, and negatively associated with regularity of meals and parental supervision. There was no correlation of screen time with physical activity or body mass. Sedentary behaviours and physical activity are not competing behaviours in Polish teenagers, but their association with unhealthy dietary patterns may lead to the development of obesity. Good parental practices, including supervision by both mother and father, seem to be crucial for limiting screen time in children. Parents should become aware that monitoring their children's lifestyle is a crucial element of health education in the prevention of lifestyle (civilization) diseases. This is a task for both healthcare workers and educational staff.

  19. Prognosis of canine patients with nasal tumors according to modified clinical stages based on computed tomography: a retrospective study.

    PubMed

    Kondo, Yumi; Matsunaga, Satoru; Mochizuki, Manabu; Kadosawa, Tsuyoshi; Nakagawa, Takayuki; Nishimura, Ryohei; Sasaki, Nobuo

    2008-03-01

    To evaluate the efficacy of clinical staging based on computed tomography (CT) imaging over the World Health Organization (WHO) staging system based on radiography for nasal tumors in dogs, a retrospective study was conducted. This study used 112 dogs that had nasal tumors; they had undergone radiography and CT and had been histologically confirmed as having nasal tumors. Among 112 dogs, 85 (75.9%) were diagnosed as adenocarcinoma. Then they were analyzed for survival time according to each staging system. More than 70% of the patients with adenocarcinoma were classified as having WHO stage III. The patients classified under WHO stage II tended to survive longer than those classified under WHO stage III. Dogs classified under WHO stage III were further grouped into CT stages III and IV, and CT stage III patients had a significantly longer survival time than CT stage IV patients. In addition, patients treated with a combination of surgery and radiation had a significantly longer survival time than the patients who did not receive any treatment in CT stage III. On the other hand, different treatment modalities did not show a significant difference in the survival time of CT stage IV dogs. The results suggest that WHO stage III dogs may have various levels of tumor progression, indicating that the CT staging system may be more accurate than the WHO staging system.

  20. Agreement processing and attraction errors in aging: evidence from subject-verb agreement in German.

    PubMed

    Reifegerste, Jana; Hauer, Franziska; Felser, Claudia

    2017-11-01

    Effects of aging on lexical processing are well attested, but the picture is less clear for grammatical processing. Where age differences emerge, these are usually ascribed to working-memory (WM) decline. Previous studies on the influence of WM on agreement computation have yielded inconclusive results, and work on aging and subject-verb agreement processing is lacking. In two experiments (Experiment 1: timed grammaticality judgment, Experiment 2: self-paced reading + WM test), we investigated older (OA) and younger (YA) adults' susceptibility to agreement attraction errors. We found longer reading latencies and judgment reaction times (RTs) for OAs. Further, OAs, particularly those with low WM scores, were more accepting of sentences with attraction errors than YAs. OAs showed longer reading latencies for ungrammatical sentences, again modulated by WM, than YAs. Our results indicate that OAs have greater difficulty blocking intervening nouns from interfering with the computation of agreement dependencies. WM can modulate this effect.

  1. Non-equilibrium calculations of atmospheric processes initiated by electron impact.

    NASA Astrophysics Data System (ADS)

    Campbell, L.; Brunger, M. J.

    2007-05-01

    Electron impact in the atmosphere produces ionisation, dissociation, electronic excitation and vibrational excitation of atoms and molecules. The products can then take part in chemical reactions, recombination with electrons, or radiative or collisional deactivation. While most such processes are fast, some longer-lived species do not reach equilibrium. The electron source (photoelectrons or auroral electrons) also varies over time and longer-lived species can move substantially in altitude by molecular, ambipolar or eddy diffusion. Hence non-equilibrium calculations are required in some circumstances. Such time-step calculations need to have sufficiently short steps so that the fastest processes are still calculated correctly, but this can lead to computation times that are too large. Hence techniques to allow for longer time steps by incorporating equilibrium calculations are described. Examples are given for results of atmospheric non-equilibrium calculations, including the populations of the vibrational levels of ground state N2, the electron density and its dependence on vibrationally excited N2, predictions of nitric oxide density, and detailed processes during short duration auroral events.

  2. Statistical aspects of solar flares

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    1987-01-01

    A survey of the statistical properties of 850 H alpha solar flares during 1975 is presented. Comparison of the results found here with those reported elsewhere for different epochs is accomplished. Distributions of rise time, decay time, and duration are given, as are the mean, mode, median, and 90th percentile values. Proportions by selected groupings are also determined. For flares in general, mean values for rise time, decay time, and duration are 5.2 ± 0.4 min, and 18.1 ± 1.1 min, respectively. Subflares, accounting for nearly 90 percent of the flares, had mean values lower than those found for flares of H alpha importance greater than 1, and the differences are statistically significant. Likewise, flares of bright and normal relative brightness have mean values of decay time and duration that are significantly longer than those computed for faint flares, and mass-motion related flares are significantly longer than non-mass-motion related flares. Seventy-three percent of the mass-motion related flares are categorized as being a two-ribbon flare and/or being accompanied by a high-speed dark filament. Slow rise time flares (rise time greater than 5 min) have a mean value for duration that is significantly longer than that computed for fast rise time flares, and long-lived duration flares (duration greater than 18 min) have a mean value for rise time that is significantly longer than that computed for short-lived duration flares, suggesting a positive linear relationship between rise time and duration for flares. Monthly occurrence rates for flares in general and by group are found to be linearly related in a positive sense to monthly sunspot number. Statistical testing reveals the association between sunspot number and numbers of flares to be significant at the 95 percent level of confidence, and the t statistic for slope is significant at greater than 99 percent level of confidence. Dependent upon the specific fit, between 58 percent and 94 percent of the variation can be accounted for with the linear fits. A statistically significant Northern Hemisphere flare excess (P less than 1 percent) was found, as was a Western Hemisphere excess (P approx 3 percent). Subflares were more prolific within 45 deg of central meridian (P less than 1 percent), while flares of H alpha importance ≥1 were more prolific near the limbs (greater than 45 deg from central meridian; P approx 2 percent). Two-ribbon flares were more frequent within 45 deg of central meridian (P less than 1 percent). Slow rise time flares occurred more frequently in the western hemisphere (P approx 2 percent), as did short-lived duration flares (P approx 9 percent), but fast rise time flares were not preferentially distributed (in terms of east-west or limb-disk). Long-lived duration flares occurred more often within 45 deg of central meridian (P approx 7 percent). Mean durations for subflares and flares of H alpha importance ≥1, found within 45 deg of central meridian, are 14 percent and 70 percent, respectively, longer than those found for flares closer to the limb. As compared to flares occurring near cycle maximum, the flares of 1975 (near solar minimum) have mean values of rise time, decay time, and duration that are significantly shorter. A flare near solar maximum, on average, is about 1.6 times longer than one occurring near solar minimum.

  3. 20 CFR 229.42 - When a child can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...

  4. 20 CFR 229.42 - When a child can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...

  5. 20 CFR 229.42 - When a child can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...

  6. 20 CFR 229.42 - When a child can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...

  7. Resin Flow Behavior Simulation of Grooved Foam Sandwich Composites with the Vacuum Assisted Resin Infusion (VARI) Molding Process

    PubMed Central

    Zhao, Chenhui; Zhang, Guangcheng; Wu, Yibo

    2012-01-01

    The resin flow behavior in the vacuum assisted resin infusion (VARI) molding process of foam sandwich composites was studied by both flow visualization experiments and computer simulation. Both experimental and simulation results show that the distribution medium (DM) leads to a shorter mold filling time in grooved foam sandwich composites via the VARI process, and the mold filling time decreases linearly as the DM/preform ratio increases. The pattern of the resin sources has a significant influence on the resin filling time. The filling time with a center source is shorter than with an edge source, and a point source results in a longer filling time than a linear source. Short edge/center sources need a longer time to fill the mould than long edge/center sources.

  8. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.

    1985-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
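
    A Monte Carlo sketch of the contention effect described above: one access is attempted per CPU tick, each access reserves its bank for a fixed number of ticks, and an access stalls while its target bank is still reserved. The issue model and the numbers are simplifying assumptions, not the Markov chain model of the report.

      import random

      def simulate_bank_contention(n_banks, busy_ticks, n_accesses, stride=1, random_access=False):
          """Return the effective access rate (accesses per tick) when each access
          reserves its bank for busy_ticks and must wait until the bank is free."""
          free_at = [0] * n_banks        # tick at which each bank becomes free again
          tick = 0
          for i in range(n_accesses):
              bank = random.randrange(n_banks) if random_access else (i * stride) % n_banks
              tick = max(tick + 1, free_at[bank])   # stall until the bank is free
              free_at[bank] = tick + busy_ticks
          return n_accesses / tick

      # usage: slow memory chips (long reservation time) vs increasingly many banks
      for banks in (16, 64, 256):
          rate = simulate_bank_contention(banks, busy_ticks=40, n_accesses=100_000, random_access=True)
          print(banks, round(rate, 3))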

  9. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1987-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.

  10. Screen Time, How Much Is Too Much? The Social and Emotional Costs of Technology on the Adolescent Brain

    ERIC Educational Resources Information Center

    DeWeese, Katherine Lynn

    2014-01-01

    Screen time no longer means just the amount of time one spends in front of the television. Now it is an aggregate amount of time spent on smartphones, computers as well as multitasking with different devices. How much are the glowing rectangles taking away from adolescent social and emotional health? How is it changing how students learn and how…

  11. Influence of Planning Time and First-Move Strategy on Tower of Hanoi Problem-Solving Performance of Mentally Retarded Young Adults and Nonretarded Children.

    ERIC Educational Resources Information Center

    Spitz, Herman H.; And Others

    1985-01-01

    In two experiments using a computer-interfaced problem, planning time of 50 retarded young adults was as long as or longer than that of higher performing nonretarded children. In neither group was there a reliable correlation between planning time and performance. There were group differences in preferred strategies, possibly associated with…

  12. Direct Measurements of Smartphone Screen-Time: Relationships with Demographics and Sleep

    PubMed Central

    Christensen, Matthew A.; Bettencourt, Laura; Kaye, Leanne; Moturu, Sai T.; Nguyen, Kaylin T.; Olgin, Jeffrey E.; Pletcher, Mark J.; Marcus, Gregory M.

    2016-01-01

    Background: Smartphones are increasingly integrated into everyday life, but frequency of use has not yet been objectively measured and compared to demographics, health information, and in particular, sleep quality. Aims: The aim of this study was to characterize smartphone use by measuring screen-time directly, determine factors that are associated with increased screen-time, and to test the hypothesis that increased screen-time is associated with poor sleep. Methods: We performed a cross-sectional analysis in a subset of 653 participants enrolled in the Health eHeart Study, an internet-based longitudinal cohort study open to any interested adult (≥ 18 years). Smartphone screen-time (the number of minutes in each hour the screen was on) was measured continuously via smartphone application. For each participant, total and average screen-time were computed over 30-day windows. Average screen-time specifically during self-reported bedtime hours and sleeping period was also computed. Demographics, medical information, and sleep habits (Pittsburgh Sleep Quality Index–PSQI) were obtained by survey. Linear regression was used to obtain effect estimates. Results: Total screen-time over 30 days was a median 38.4 hours (IQR 21.4 to 61.3) and average screen-time over 30 days was a median 3.7 minutes per hour (IQR 2.2 to 5.5). Younger age, self-reported race/ethnicity of Black and "Other" were associated with longer average screen-time after adjustment for potential confounders. Longer average screen-time was associated with shorter sleep duration and worse sleep-efficiency. Longer average screen-times during bedtime and the sleeping period were associated with poor sleep quality, decreased sleep efficiency, and longer sleep onset latency. Conclusions: These findings on actual smartphone screen-time build upon prior work based on self-report and confirm that adults spend a substantial amount of time using their smartphones. Screen-time differs across age and race, but is similar across socio-economic strata suggesting that cultural factors may drive smartphone use. Screen-time is associated with poor sleep. These findings cannot support conclusions on causation. Effect-cause remains a possibility: poor sleep may lead to increased screen-time. However, exposure to smartphone screens, particularly around bedtime, may negatively impact sleep. PMID:27829040

  13. Accelerating the spin-up of the coupled carbon and nitrogen cycle model in CLM4

    DOE PAGES

    Fang, Yilin; Liu, Chongxuan; Leung, Lai-Yung R.

    2015-03-24

    The commonly adopted biogeochemistry spin-up process in an Earth system model (ESM) is to run the model for hundreds to thousands of years subject to periodic atmospheric forcing to reach dynamic steady state of the carbon–nitrogen (CN) models. A variety of approaches have been proposed to reduce the computation time of the spin-up process. Significant improvement in computational efficiency has been made recently. However, a long simulation time is still required to reach the common convergence criteria of the coupled carbon–nitrogen model. A gradient projection method was proposed and used to further reduce the computation time after examining the trend of the dominant carbon pools. The Community Land Model version 4 (CLM4) with a carbon and nitrogen component was used in this study. From point-scale simulations, we found that the method can reduce the computation time by 20–69% compared to one of the fastest approaches in the literature. We also found that the cyclic stability of total carbon for some cases differs from that of the periodic atmospheric forcing, and some cases even showed instability. Close examination showed that one case has a carbon periodicity much longer than that of the atmospheric forcing due to the annual fire disturbance that is longer than half a year. The rest was caused by the instability of water table calculation in the hydrology model of CLM4. The instability issue is resolved after we replaced the hydrology scheme in CLM4 with a flow model for variably saturated porous media.

  14. Time and learning efficiency in Internet-based learning: a systematic review and meta-analysis.

    PubMed

    Cook, David A; Levinson, Anthony J; Garside, Sarah

    2010-12-01

    Authors have claimed that Internet-based instruction promotes greater learning efficiency than non-computer methods. The aim was to determine, through a systematic synthesis of evidence in health professions education, how Internet-based instruction compares with non-computer instruction in time spent learning, and what features of Internet-based instruction are associated with improved learning efficiency. We searched databases including MEDLINE, CINAHL, EMBASE, and ERIC from 1990 through November 2008. We included all studies quantifying learning time for Internet-based instruction for health professionals, compared with other instruction. Reviewers worked independently, in duplicate, to abstract information on interventions, outcomes, and study design. We identified 20 eligible studies. Random-effects meta-analysis of 8 studies comparing Internet-based with non-Internet instruction (positive numbers indicating Internet longer) revealed a pooled effect size (ES) for time of -0.10 (p = 0.63). Among comparisons of two Internet-based interventions, providing feedback adds time (ES 0.67, p = 0.003, two studies), and greater interactivity generally takes longer (ES 0.25, p = 0.089, five studies). One study demonstrated that adapting to learner prior knowledge saves time without significantly affecting knowledge scores. Other studies revealed that audio narration, video clips, interactive models, and animations increase learning time but also facilitate higher knowledge and/or satisfaction. Across all studies, time correlated positively with knowledge outcomes (r = 0.53, p = 0.021). On average, Internet-based instruction and non-computer instruction require similar time. Instructional strategies to enhance feedback and interactivity typically prolong learning time, but in many cases also enhance learning outcomes. Isolated examples suggest potential for improving efficiency in Internet-based instruction.
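
    For readers unfamiliar with pooling effect sizes, the sketch below computes a standard DerSimonian-Laird random-effects pooled estimate; the review's exact meta-analytic procedure is not detailed in this record, and the study values in the example are hypothetical.

      import numpy as np

      def random_effects_pooled(es, se):
          """DerSimonian-Laird random-effects pooled effect size and its standard error."""
          es, se = np.asarray(es, float), np.asarray(se, float)
          w = 1.0 / se**2                          # fixed-effect weights
          es_fe = np.sum(w * es) / np.sum(w)
          q = np.sum(w * (es - es_fe) ** 2)        # Cochran's Q
          c = np.sum(w) - np.sum(w**2) / np.sum(w)
          tau2 = max(0.0, (q - (len(es) - 1)) / c) # between-study variance
          w_re = 1.0 / (se**2 + tau2)
          pooled = np.sum(w_re * es) / np.sum(w_re)
          return pooled, np.sqrt(1.0 / np.sum(w_re))

      # hypothetical example: three studies of learning time (positive = Internet longer)
      print(random_effects_pooled([0.2, -0.4, 0.1], [0.15, 0.25, 0.2]))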

  15. Implementation of an Audio Computer-Assisted Self-Interview (ACASI) System in a General Medicine Clinic

    PubMed Central

    Deamant, C.; Smith, J.; Garcia, D.; Angulo, F.

    2015-01-01

    Background: Routine implementation of instruments to capture patient-reported outcomes could guide clinical practice and facilitate health services research. Audio interviews facilitate self-interviews across literacy levels. Objectives: To evaluate time burden for patients, and factors associated with response times for an audio computer-assisted self interview (ACASI) system integrated into the clinical workflow. Methods: We developed an ACASI system, integrated with a research data warehouse. Instruments for symptom burden, self-reported health, depression screening, tobacco use, and patient satisfaction were administered through touch-screen monitors in the general medicine clinic at the Cook County Health & Hospitals System during April 8, 2011-July 27, 2012. We performed a cross-sectional study to evaluate the mean time burden per item and for each module of instruments; we evaluated factors associated with longer response latency. Results: Among 1,670 interviews, the mean per-question response time was 18.4 [SD, 6.1] seconds. By multivariable analysis, age was most strongly associated with prolonged response time and increased per decade compared to < 50 years as follows (additional seconds per question; 95% CI): 50–59 years (1.4; 0.7 to 2.1 seconds); 60–69 (3.4; 2.6 to 4.1); 70–79 (5.1; 4.0 to 6.1); and 80–89 (5.5; 4.1 to 7.0). Response times also were longer for Spanish language (3.9; 2.9 to 4.9); no home computer use (3.3; 2.8 to 3.9); and, low mental self-reported health (0.6; 0.0 to 1.1). However, most interviews were completed within 10 minutes. Conclusions: An ACASI software system can be included in a patient visit and adds minimal time burden. The burden was greatest for older patients, interviews in Spanish, and for those with less computer exposure. A patient’s self-reported health had minimal impact on response times. PMID:25848420

  16. Analysis of oil-pipeline distribution of multiple products subject to delivery time-windows

    NASA Astrophysics Data System (ADS)

    Jittamai, Phongchai

    This dissertation defines the operational problems of, and develops solution methodologies for, a distribution of multiple products into oil pipeline subject to delivery time-windows constraints. A multiple-product oil pipeline is a pipeline system composing of pipes, pumps, valves and storage facilities used to transport different types of liquids. Typically, products delivered by pipelines are petroleum of different grades moving either from production facilities to refineries or from refineries to distributors. Time-windows, which are generally used in logistics and scheduling areas, are incorporated in this study. The distribution of multiple products into oil pipeline subject to delivery time-windows is modeled as multicommodity network flow structure and mathematically formulated. The main focus of this dissertation is the investigation of operating issues and problem complexity of single-source pipeline problems and also providing solution methodology to compute input schedule that yields minimum total time violation from due delivery time-windows. The problem is proved to be NP-complete. The heuristic approach, a reversed-flow algorithm, is developed based on pipeline flow reversibility to compute input schedule for the pipeline problem. This algorithm is implemented in no longer than O(T·E) time. This dissertation also extends the study to examine some operating attributes and problem complexity of multiple-source pipelines. The multiple-source pipeline problem is also NP-complete. A heuristic algorithm modified from the one used in single-source pipeline problems is introduced. This algorithm can also be implemented in no longer than O(T·E) time. Computational results are presented for both methodologies on randomly generated problem sets. The computational experience indicates that reversed-flow algorithms provide good solutions in comparison with the optimal solutions. Only 25% of the problems tested were more than 30% greater than optimal values and approximately 40% of the tested problems were solved optimally by the algorithms.

  17. Implementation of an audio computer-assisted self-interview (ACASI) system in a general medicine clinic: patient response burden.

    PubMed

    Trick, W E; Deamant, C; Smith, J; Garcia, D; Angulo, F

    2015-01-01

    Routine implementation of instruments to capture patient-reported outcomes could guide clinical practice and facilitate health services research. Audio interviews facilitate self-interviews across literacy levels. To evaluate time burden for patients, and factors associated with response times for an audio computer-assisted self interview (ACASI) system integrated into the clinical workflow. We developed an ACASI system, integrated with a research data warehouse. Instruments for symptom burden, self-reported health, depression screening, tobacco use, and patient satisfaction were administered through touch-screen monitors in the general medicine clinic at the Cook County Health & Hospitals System during April 8, 2011-July 27, 2012. We performed a cross-sectional study to evaluate the mean time burden per item and for each module of instruments; we evaluated factors associated with longer response latency. Among 1,670 interviews, the mean per-question response time was 18.4 [SD, 6.1] seconds. By multivariable analysis, age was most strongly associated with prolonged response time and increased per decade compared to < 50 years as follows (additional seconds per question; 95% CI): 50-59 years (1.4; 0.7 to 2.1 seconds); 60-69 (3.4; 2.6 to 4.1); 70-79 (5.1; 4.0 to 6.1); and 80-89 (5.5; 4.1 to 7.0). Response times also were longer for Spanish language (3.9; 2.9 to 4.9); no home computer use (3.3; 2.8 to 3.9); and, low mental self-reported health (0.6; 0.0 to 1.1). However, most interviews were completed within 10 minutes. An ACASI software system can be included in a patient visit and adds minimal time burden. The burden was greatest for older patients, interviews in Spanish, and for those with less computer exposure. A patient's self-reported health had minimal impact on response times.

  18. Computation and projection of spiral wave trajectories during atrial fibrillation: a computational study.

    PubMed

    Pashaei, Ali; Bayer, Jason; Meillet, Valentin; Dubois, Rémi; Vigmond, Edward

    2015-03-01

    To show how atrial fibrillation rotor activity on the heart surface manifests as phase on the torso, fibrillation was induced on a geometrically accurate computer model of the human atria. The Hilbert transform, time embedding, and filament detection were compared. Electrical activity on the epicardium was used to compute potentials on different surfaces from the atria to the torso. The Hilbert transform produces erroneous phase when pacing for longer than the action potential duration. The number of phase singularities, frequency content, and the dominant frequency decreased with distance from the heart, except for the convex hull. Copyright © 2015 Elsevier Inc. All rights reserved.
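
    The basic phase computation referred to above can be sketched with the standard analytic-signal construction; this shows only the Hilbert-transform step on a zero-mean signal, not the paper's full pipeline or its time-embedding and filament-detection alternatives.

      import numpy as np
      from scipy.signal import hilbert

      def instantaneous_phase(v):
          """Instantaneous phase of an activation signal via the analytic signal.
          The mean is removed first, since the construction assumes a zero-mean signal."""
          v = np.asarray(v, float)
          return np.angle(hilbert(v - v.mean()))

      # usage: a 5 Hz 'paced' signal sampled at 1 kHz; the phase wraps through -pi..pi
      # once per cycle, and phase singularities can then be located as points around
      # which the phase winds by 2*pi
      fs, f = 1000.0, 5.0
      t = np.arange(0.0, 2.0, 1.0 / fs)
      phase = instantaneous_phase(np.cos(2 * np.pi * f * t))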

  19. Turbulent transport measurements with a laser Doppler velocimeter.

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Angus, J. C.; Dunning, J. W., Jr.

    1972-01-01

    The power spectrum of phototube current from a laser Doppler velocimeter operating in the heterodyne mode has been computed. The spectral width and shape predicted by the theory are in agreement with experiment. For normal operating parameters the time-average spectrum contains information only for times shorter than the Lagrangian-integral time scale of the turbulence. To examine the long-time behavior, one must use either extremely small scattering angles, much-longer-wavelength radiation, or a different mode of signal analysis, e.g., FM detection.

  20. A discrete classical space-time could require 6 extra-dimensions

    NASA Astrophysics Data System (ADS)

    Guillemant, Philippe; Medale, Marc; Abid, Cherifa

    2018-01-01

    We consider a discrete space-time in which conservation laws are computed in such a way that the density of information is kept bounded. We use a 2D billiard as a toy model to compute the uncertainty propagation in ball positions after every shock and the corresponding loss of phase information. Our main result is the computation of a critical time step above which billiard calculations are no longer deterministic, meaning that a multiverse of distinct billiard histories begins to appear, caused by the lack of information. Then, we highlight unexpected properties of this critical time step and the subsequent exponential evolution of the number of histories with time, to observe that after certain duration all billiard states could become possible final states, independent of initial conditions. We conclude that if our space-time is really a discrete one, one would need to introduce extra-dimensions in order to provide supplementary constraints that specify which history should be played.
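
    A toy version of the critical-time-step argument: if positional uncertainty is amplified by a roughly constant factor at each shock, the shock count at which it exceeds the ball size marks the point where the next collision partner is no longer determined and histories begin to branch. The amplification factor and initial uncertainty below are hypothetical, not values from the paper.

      def shocks_until_branching(initial_uncertainty, amplification_per_shock, ball_radius):
          """Count shocks until the propagated position uncertainty exceeds the ball
          radius, after which the outcome of the next collision is undetermined."""
          n, u = 0, initial_uncertainty
          while u < ball_radius:
              u *= amplification_per_shock
              n += 1
          return n

      # hypothetical numbers: ~1e-15 relative uncertainty (double precision),
      # amplified ~5x per shock on convex obstacles, 1 cm balls on a 1 m table
      print(shocks_until_branching(1e-15, 5.0, 1e-2))   # about 19 shocks with these numbers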

  1. Computational Fluid Dynamics Modeling of Supersonic Coherent Jets for Electric Arc Furnace Steelmaking Process

    NASA Astrophysics Data System (ADS)

    Alam, Morshed; Naser, Jamal; Brooks, Geoffrey; Fontana, Andrea

    2010-12-01

    Supersonic coherent gas jets are now used widely in electric arc furnace steelmaking and many other industrial applications to increase the gas-liquid mixing, reaction rates, and energy efficiency of the process. However, there has been limited research on the basic physics of supersonic coherent jets. In the present study, computational fluid dynamics (CFD) simulation of the supersonic jet with and without a shrouding flame at room ambient temperature was carried out and validated against experimental data. The numerical results show that the potential core length of the supersonic oxygen and nitrogen jet with shrouding flame is more than four times and three times longer, respectively, than that without flame shrouding, which is in good agreement with the experimental data. The spreading rate of the supersonic jet decreased dramatically with the use of the shrouding flame compared with a conventional supersonic jet. The present CFD model was used to investigate the characteristics of the supersonic coherent oxygen jet at steelmaking conditions of around 1700 K (1427 °C). The potential core length of the supersonic coherent oxygen jet at steelmaking conditions was 1.4 times longer than that at room ambient temperature.

  2. Accuracy and time requirements of a bar-code inventory system for medical supplies.

    PubMed

    Hanson, L B; Weinswig, M H; De Muth, J E

    1988-02-01

    The effects of implementing a bar-code system for issuing medical supplies to nursing units at a university teaching hospital were evaluated. Data on the time required to issue medical supplies to three nursing units at a 480-bed, tertiary-care teaching hospital were collected (1) before the bar-code system was implemented (i.e., when the manual system was in use), (2) one month after implementation, and (3) four months after implementation. At the same times, the accuracy of the central supply perpetual inventory was monitored using 15 selected items. One-way analysis of variance tests were done to determine any significant differences between the bar-code and manual systems. Using the bar-code system took longer than using the manual system because of a significant difference in the time required for order entry into the computer. Multiple-use requirements of the central supply computer system made entering bar-code data a much slower process. There was, however, a significant improvement in the accuracy of the perpetual inventory. Using the bar-code system for issuing medical supplies to the nursing units takes longer than using the manual system. However, the accuracy of the perpetual inventory was significantly improved with the implementation of the bar-code system.

  3. Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer.

    PubMed

    Ashtiani, Matin N; Kheradpisheh, Saeed R; Masquelier, Timothée; Ganjtabesh, Mohammad

    2017-01-01

    The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., superordinate, basic, and subordinate levels. One important question is to identify the "entry" level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over the two others. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects in each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that the different spatial frequency information had different effects on object categorization in each level. In the absence of high frequency information, subordinate and basic level categorization are performed less accurately, while the superordinate level is performed well. This means that low frequency information is sufficient for the superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid the ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level over the basic (resp. subordinate) level is mainly due to the computational constraints (the visual system processes higher spatial frequencies more slowly, and categorization in finer levels depends more on these higher spatial frequencies).
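
    The bandpass manipulation described above can be sketched as a Fourier-domain filter; this is an illustrative reconstruction (not the authors' code), and the cutoff frequencies in cycles/image are assumptions.

    ```python
    import numpy as np

    def bandpass(image, low_cpi, high_cpi):
        """Keep only spatial frequencies between low_cpi and high_cpi (cycles/image)."""
        f = np.fft.fftshift(np.fft.fft2(image))
        h, w = image.shape
        yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
        radius = np.hypot(yy, xx)                    # radial frequency in cycles/image
        mask = (radius >= low_cpi) & (radius <= high_cpi)
        return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

    img = np.random.rand(256, 256)                   # stand-in for a stimulus image
    low_sf_only = bandpass(img, 0, 8)                # coarse content: superordinate cues
    high_sf_only = bandpass(img, 32, 128)            # fine content: subordinate cues
    ```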

  4. Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer

    PubMed Central

    Ashtiani, Matin N.; Kheradpisheh, Saeed R.; Masquelier, Timothée; Ganjtabesh, Mohammad

    2017-01-01

    The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., superordinate, basic, and subordinate levels. One important question is to identify the “entry” level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over the two others. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects in each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that the different spatial frequency information had different effects on object categorization in each level. In the absence of high frequency information, subordinate and basic level categorization are performed less accurately, while the superordinate level is performed well. This means that low frequency information is sufficient for the superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid the ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level over the basic (resp. subordinate) level is mainly due to the computational constraints (the visual system processes higher spatial frequencies more slowly, and categorization in finer levels depends more on these higher spatial frequencies). PMID:28790954

  5. An XML-based method for astronomy software designing

    NASA Astrophysics Data System (ADS)

    Liao, Mingxue; Aili, Yusupu; Zhang, Jin

    An XML-based method for the standardization of software design is introduced, analyzed, and successfully applied to renovating the hardware and software of the digital clock at Urumqi Astronomical Station. The basic strategy for eliciting time information from the new FT206 digital clock in the antenna control program is introduced. With FT206, the need to compute with sophisticated formulas how many centuries have passed since a certain day is eliminated, and it is no longer necessary to set the correct UT time on the computer controlling the antenna, because the year, month, and day are all deduced from the Julian day held in FT206 rather than from the computer clock. With an XML-based method and standard for software design, various existing design methods are unified and communication and collaboration between developers are facilitated, making an Internet-based mode of software development possible. The trend of development of the XML-based design method is predicted.
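
    As a hedged illustration of the kind of conversion the clock makes unnecessary (this is the standard Fliegel-Van Flandern algorithm, not the FT206 firmware or the station's actual code): recovering the Gregorian year, month, and day from an integer Julian day number requires only integer arithmetic.

    ```python
    def jd_to_gregorian(jd):
        """Convert an integer Julian day number (at noon UT) to (year, month, day)."""
        l = jd + 68569
        n = (4 * l) // 146097
        l = l - (146097 * n + 3) // 4
        i = (4000 * (l + 1)) // 1461001
        l = l - (1461 * i) // 4 + 31
        j = (80 * l) // 2447
        day = l - (2447 * j) // 80
        l = j // 11
        month = j + 2 - 12 * l
        year = 100 * (n - 49) + i + l
        return year, month, day

    print(jd_to_gregorian(2451545))   # (2000, 1, 1)
    ```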

  6. Cycle-averaged dynamics of a periodically driven, closed-loop circulation model

    NASA Technical Reports Server (NTRS)

    Heldt, T.; Chang, J. L.; Chen, J. J. S.; Verghese, G. C.; Mark, R. G.

    2005-01-01

    Time-varying elastance models have been used extensively in the past to simulate the pulsatile nature of cardiovascular waveforms. Frequently, however, one is interested in dynamics that occur over longer time scales, in which case a detailed simulation of each cardiac contraction becomes computationally burdensome. In this paper, we apply circuit-averaging techniques to a periodically driven, closed-loop, three-compartment recirculation model. The resultant cycle-averaged model is linear and time invariant, and greatly reduces the computational burden. It is also amenable to systematic order reduction methods that lead to further efficiencies. Despite its simplicity, the averaged model captures the dynamics relevant to the representation of a range of cardiovascular reflex mechanisms. © 2004 Elsevier Ltd. All rights reserved.

  7. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1990-01-01

    A ground-based adaptive optics imaging telescope system attempts to improve image quality by detecting and correcting for atmospherically induced wavefront aberrations. The required control computations during each cycle take a finite amount of time. Longer time delays result in larger residual wavefront error variance because the atmosphere continues to change during that time. Since the computations must therefore be completed quickly, an optical processor may be well suited for this task. This paper presents a study of the accuracy requirements for a general optical processor that would make it competitive with, or superior to, a conventional digital computer for the adaptive optics application. An optimization of the adaptive optics correction algorithm with respect to the optical processor's degree of accuracy is also briefly discussed.

  8. Ionization of polarized 3He+ ions in EBIS trap with slanted electrostatic mirror.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pikin, A.; Zelenski, A.; Kponou, A.

    2007-09-10

    Methods of producing the nuclear polarized {sup 3}He{sup +} ions and their ionization to {sup 3}He{sup ++} in the ion trap of the Electron Beam Ion Source (EBIS) are discussed. Computer simulations show that injection and accumulation of {sup 3}He{sup +} ions in the EBIS trap with a slanted electrostatic mirror can be very effective for injection times longer than the ion traversal time through the trap.

  9. Ionization of polarized {sup 3}He{sup +} ions in EBIS trap with slanted electrostatic mirror

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pikin, A.; Zelenski, A.; Kponou, A.

    2008-02-06

    Methods of producing the nuclear polarized {sup 3}He{sup +} ions and their ionization to {sup 3}He{sup ++} in the ion trap of the Electron Beam Ion Source (EBIS) are discussed. Computer simulations show that injection and accumulation of {sup 3}He{sup +} ions in the EBIS trap with a slanted electrostatic mirror can be very effective for injection times longer than the ion traversal time through the trap.

  10. Relationships between menopausal and mood symptoms and EEG sleep measures in a multi-ethnic sample of middle-aged women: the SWAN sleep study.

    PubMed

    Kravitz, Howard M; Avery, Elizabeth; Sowers, Maryfran; Bromberger, Joyce T; Owens, Jane F; Matthews, Karen A; Hall, Martica; Zheng, Huiyong; Gold, Ellen B; Buysse, Daniel J

    2011-09-01

    Examine associations of vasomotor and mood symptoms with visually scored and computer-generated measures of EEG sleep. Cross-sectional analysis. Community-based in-home polysomnography (PSG). 343 African American, Caucasian, and Chinese women; ages 48-58 years; pre-, peri- or post-menopausal; participating in the Study of Women's Health Across the Nation Sleep Study (SWAN Sleep Study). None. Measures included PSG-assessed sleep duration, continuity, and architecture, delta sleep ratio (DSR) computed from automated counts of delta wave activity, daily diary-assessed vasomotor symptoms (VMS), questionnaires to collect mood (depression, anxiety) symptoms, medication, and lifestyle information, and menopausal status using bleeding criteria. Sleep outcomes were modeled using linear regression. Nocturnal VMS were associated with longer sleep time. Higher anxiety symptom scores were associated with longer sleep latency and lower sleep efficiency, but only in women reporting nocturnal VMS. Contrary to expectations, VMS and mood symptoms were unrelated to either DSR or REM latency. Vasomotor symptoms moderated associations of anxiety with EEG sleep measures of sleep latency and sleep efficiency and were associated with longer sleep duration in this multi-ethnic sample of midlife women.

  11. An efficient method for the computation of Legendre moments.

    PubMed

    Yap, Pew-Thian; Paramesran, Raveendran

    2005-12-01

    Legendre moments are continuous moments; hence, when they are applied to discrete-space images, numerical approximation is involved and error occurs. This paper proposes a method to compute the exact values of the moments by mathematically integrating the Legendre polynomials over the corresponding intervals of the image pixels. Experimental results show that the values obtained match those calculated theoretically, and the images reconstructed from these moments have lower error than those of the conventional methods for the same order. Although the same set of exact Legendre moments can be obtained indirectly from the set of geometric moments, the computation time taken is much longer than with the proposed method.
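
    A minimal sketch of the idea (my own implementation, not the authors'): integrate each Legendre polynomial exactly over the [-1, 1] interval spanned by every pixel, using the identity d/dx [P_{p+1} - P_{p-1}] = (2p+1) P_p, instead of sampling the polynomial at pixel centres.

    ```python
    import numpy as np
    from scipy.special import eval_legendre

    def legendre_antiderivative(p, x):
        """Antiderivative of the Legendre polynomial P_p evaluated at x."""
        if p == 0:
            return x
        return (eval_legendre(p + 1, x) - eval_legendre(p - 1, x)) / (2 * p + 1)

    def exact_legendre_moment(image, p, q):
        """Exact Legendre moment lambda_{pq} of an N x N image mapped onto [-1, 1]^2."""
        n = image.shape[0]
        edges = np.linspace(-1.0, 1.0, n + 1)             # pixel boundaries
        ix = legendre_antiderivative(p, edges[1:]) - legendre_antiderivative(p, edges[:-1])
        iy = legendre_antiderivative(q, edges[1:]) - legendre_antiderivative(q, edges[:-1])
        norm = (2 * p + 1) * (2 * q + 1) / 4.0
        return norm * np.einsum("ij,i,j->", image, iy, ix)   # rows = y, columns = x

    img = np.random.rand(64, 64)
    print(exact_legendre_moment(img, 2, 3))
    ```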

  12. Designing Interaction for Next Generation Personal Computing

    NASA Astrophysics Data System (ADS)

    de Michelis, Giorgio; Loregian, Marco; Moderini, Claudio; Marti, Patrizia; Colombo, Cesare; Bannon, Liam; Storni, Cristiano; Susani, Marco

    Over two decades of research in the field of Interaction Design and Computer Supported Cooperative Work convinced us that the current design of workstations no longer fits users’ needs. It is time to design new personal computers based on metaphors alternative to the desktop one. With this SIG, we are seeking to involve international HCI professionals in the challenges of designing products that are radically new and tackling the many different issues of modern knowledge workers. We would like to engage a wider cross-section of the community: our focus will be on issues of development and participation and the impact of different values in our work.

  13. Computations of Flow over a Hump Model Using Higher Order Method with Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Balakumar, P.

    2005-01-01

    Turbulent separated flow over a two-dimensional hump is computed by solving the RANS equations with the k-omega (SST) turbulence model for the baseline, steady suction, and oscillatory blowing/suction flow control cases. The flow equations and the turbulence model equations are solved using a fifth-order accurate weighted essentially nonoscillatory (WENO) scheme for space discretization and a third-order, total variation diminishing (TVD) Runge-Kutta scheme for time integration. Qualitatively, the computed pressure distributions exhibit the same behavior as those observed in the experiments. The computed separation regions are much longer than those observed experimentally. However, the percentage reduction in the separation region in the steady suction case is closer to what was measured in the experiment. The computations did not predict the expected reduction in the separation length in the oscillatory case. The predicted turbulent quantities are two to three times smaller than the measured values, pointing toward deficiencies in existing turbulence models when they are applied to strongly separated steady and unsteady flows.
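
    For reference, the third-order TVD Runge-Kutta stepping mentioned above is the standard Shu-Osher form; the sketch below is illustrative only, with a simple upwind difference standing in for the WENO-discretised right-hand side.

    ```python
    import numpy as np

    def tvd_rk3_step(u, dt, L):
        """Advance u by one time step dt with the 3rd-order TVD Runge-Kutta scheme."""
        u1 = u + dt * L(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
        return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

    # Example: linear advection du/dt = -du/dx on a periodic grid.
    nx, dx, dt = 200, 1.0 / 200, 0.4 / 200
    u = np.exp(-200 * (np.linspace(0, 1, nx, endpoint=False) - 0.5) ** 2)
    L = lambda v: -(v - np.roll(v, 1)) / dx        # first-order upwind stand-in
    for _ in range(100):
        u = tvd_rk3_step(u, dt, L)
    ```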

  14. CT fluoroscopy-guided preoperative short hook wire placement for small pulmonary lesions: evaluation of safety and identification of risk factors for pneumothorax.

    PubMed

    Iguchi, Toshihiro; Hiraki, Takao; Gobara, Hideo; Fujiwara, Hiroyasu; Matsui, Yusuke; Miyoshi, Shinichiro; Kanazawa, Susumu

    2016-01-01

    To retrospectively evaluate the safety of computed tomography (CT) fluoroscopy-guided short hook wire placement for video-assisted thoracoscopic surgery and the risk factors for pneumothorax associated with this procedure. We analyzed 267 short hook wire placements for 267 pulmonary lesions (mean diameter, 9.9 mm). Multiple variables related to the patients, lesions, and procedures were assessed to determine the risk factors for pneumothorax. Complications (219 grade 1 and 4 grade 2 adverse events) occurred in 196 procedures. No grade 3 or above adverse events were observed. Univariate analysis revealed that increased vital capacity (odds ratio [OR], 1.518; P = 0.021), lower lobe lesion (OR, 2.343; P = 0.001), solid lesion (OR, 1.845; P = 0.014), prone positioning (OR, 1.793; P = 0.021), transfissural approach (OR, 11.941; P = 0.017), and longer procedure time (OR, 1.036; P = 0.038) were significant predictors of pneumothorax. Multivariate analysis revealed only the transfissural approach (OR, 12.171; P = 0.018) and a longer procedure time (OR, 1.048; P = 0.012) as significant independent predictors. Complications related to CT fluoroscopy-guided preoperative short hook wire placement often occurred, but all complications were minor. A transfissural approach and longer procedure time were significant independent predictors of pneumothorax. Complications related to CT fluoroscopy-guided preoperative short hook wire placement often occur. Complications are usually minor and asymptomatic. A transfissural approach and longer procedure time are significant independent predictors of pneumothorax.

  15. Stroke Severity Affects Timing: Time From Stroke Code Activation to Initial Imaging is Longer in Patients With Milder Strokes.

    PubMed

    Kwei, Kimberly T; Liang, John; Wilson, Natalie; Tuhrim, Stanley; Dhamoon, Mandip

    2018-05-01

    Optimizing the time it takes to get a potential stroke patient to imaging is essential in a rapid stroke response. At our hospital, door-to-imaging time comprises two time periods: the time before a stroke is recognized, followed by the period after the stroke code is called during which the stroke team assesses and brings the patient to the computed tomography scanner. To control for delays due to triage, we isolated the time period after a potential stroke has been recognized, as few studies have examined the biases of stroke code responders. This "code-to-imaging time" (CIT) encompassed the time from stroke code activation to initial imaging, and we hypothesized that perception of stroke severity would affect how quickly stroke code responders act. In consecutively admitted ischemic stroke patients at The Mount Sinai Hospital emergency department, we tested associations between National Institutes of Health Stroke Scale scores (NIHSS), continuously and at different cutoffs, and CIT using spline regression, t tests for univariate analysis, and multivariable linear regression adjusting for age, sex, and race/ethnicity. In our study population, mean CIT was 26 minutes, and mean presentation NIHSS was 8. In univariate and multivariate analyses comparing CIT between mild and severe strokes, stroke scale scores <4 were associated with longer response times. Milder strokes are associated with a longer CIT, with a threshold effect at an NIHSS of 4.

  16. An Object-Oriented Approach to the Development of Computer-Assisted Instructional Material Using Hypertext

    DTIC Science & Technology

    1988-12-01

    reading on computers for more than 10 or 15 minutes. If it takes any longer I would rather have a piece of paper in front of me. It did provide an outline...advisor, Capt. David Umphress. I thank Dave also for the moral support he provided by agreeing to advise this thesis, and by providing timely...2.17 information using... text, graphics, video, music, voice, and animation" (Williams, 1987:109; Conklin, 1987a:32). Even so, HyperCard has been very

  17. Explicit and implicit calculations of turbulent cavity flows with and without yaw angle

    NASA Astrophysics Data System (ADS)

    Yen, Guan-Wei

    1989-08-01

    Computations were performed to simulate turbulent supersonic flows past three-dimensional deep cavities with and without yaw. Simulations of these self-sustained oscillatory flows were generated through time accurate solutions of the Reynolds averaged complete Navier-Stokes equations using two different schemes: (1) MacCormack, finite-difference; and (2) implicit, upwind, finite-volume schemes. The second scheme, which is approximately 30 percent faster, is found to produce better time accurate results. The Reynolds stresses were modeled, using the Baldwin-Lomax algebraic turbulence model with certain modifications. The computational results include instantaneous and time averaged flow properties everywhere in the computational domain. Time series analyses were performed for the instantaneous pressure values on the cavity floor. The time averaged computational results show good agreement with the experimental data along the cavity floor and walls. When the yaw angle is nonzero, there is no longer a single length scale (length-to-depth ratio) for the flow, as is the case for zero yaw angle flow. The dominant directions and inclinations of the vortices are dramatically different for this nonsymmetric flow. The vortex shedding from the cavity into the mainstream flow is captured computationally. This phenomenon, which is due to the oscillation of the shear layer, is confirmed by the solutions of both schemes.

  18. Explicit and implicit calculations of turbulent cavity flows with and without yaw angle. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Yen, Guan-Wei

    1989-01-01

    Computations were performed to simulate turbulent supersonic flows past three-dimensional deep cavities with and without yaw. Simulations of these self-sustained oscillatory flows were generated through time accurate solutions of the Reynolds averaged complete Navier-Stokes equations using two different schemes: (1) MacCormack, finite-difference; and (2) implicit, upwind, finite-volume schemes. The second scheme, which is approximately 30 percent faster, is found to produce better time accurate results. The Reynolds stresses were modeled, using the Baldwin-Lomax algebraic turbulence model with certain modifications. The computational results include instantaneous and time averaged flow properties everywhere in the computational domain. Time series analyses were performed for the instantaneous pressure values on the cavity floor. The time averaged computational results show good agreement with the experimental data along the cavity floor and walls. When the yaw angle is nonzero, there is no longer a single length scale (length-to-depth ratio) for the flow, as is the case for zero yaw angle flow. The dominant directions and inclinations of the vortices are dramatically different for this nonsymmetric flow. The vortex shedding from the cavity into the mainstream flow is captured computationally. This phenomenon, which is due to the oscillation of the shear layer, is confirmed by the solutions of both schemes.

  19. Strategy for reflector pattern calculation - Let the computer do the work

    NASA Technical Reports Server (NTRS)

    Lam, P. T.; Lee, S.-W.; Hung, C. C.; Acosta, R.

    1986-01-01

    Using high frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector geometry dependent. Hence, different analyses/computer software development must be carried out for different reflector shapes/boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) by a brute force FFT described in this note. Furthermore, there is virtually no restriction on the reflector geometry by using the brute force FFT.
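
    The "brute force FFT" idea can be sketched as follows (an illustration of the general technique, not the note's own code): sample the aperture field on a uniform grid, zero-pad, and let a 2D FFT evaluate the Fourier-type radiation integral directly; the aperture taper and dimensions below are assumptions.

    ```python
    import numpy as np

    n_ap, n_fft, wavelength = 64, 512, 1.0
    dx = 0.5 * wavelength                           # sample spacing over the aperture

    # Circular aperture with a simple cosine taper as a stand-in for the reflector field
    x = (np.arange(n_ap) - n_ap / 2) * dx
    X, Y = np.meshgrid(x, x)
    r = np.hypot(X, Y)
    radius = x.max()
    aperture = np.where(r <= radius, np.cos(np.pi * r / (2 * radius)), 0.0)

    # Zero-pad and FFT; the FFT grid maps onto direction cosines (u, v)
    pattern = np.fft.fftshift(np.fft.fft2(aperture, s=(n_fft, n_fft)))
    u = np.fft.fftshift(np.fft.fftfreq(n_fft, d=dx)) * wavelength   # u = sin(theta)cos(phi)
    power_db = 20 * np.log10(np.abs(pattern) / np.abs(pattern).max() + 1e-12)
    ```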

  20. Strategy for reflector pattern calculation: Let the computer do the work

    NASA Technical Reports Server (NTRS)

    Lam, P. T.; Lee, S. W.; Hung, C. C.; Acosta, R.

    1985-01-01

    Using high frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector geometry dependent. Hence, different analyses/computer software development must be carried out for different reflector shapes/boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) by a brute force FFT described in this note. Furthermore, there is virtually no restriction on the reflector geometry by using the brute force FFT.

  1. Computer model to simulate testing at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.

    1995-01-01

    A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.

  2. Gigaflop (billion floating point operations per second) performance for computational electromagnetics

    NASA Technical Reports Server (NTRS)

    Shankar, V.; Rowell, C.; Hall, W. F.; Mohammadian, A. H.; Schuh, M.; Taylor, K.

    1992-01-01

    Accurate and rapid evaluation of radar signature for alternative aircraft/store configurations would be of substantial benefit in the evolution of integrated designs that meet radar cross-section (RCS) requirements across the threat spectrum. Finite-volume time domain methods offer the possibility of modeling the whole aircraft, including penetrable regions and stores, at longer wavelengths on today's gigaflop supercomputers and at typical airborne radar wavelengths on the teraflop computers of tomorrow. A structured-grid finite-volume time domain computational fluid dynamics (CFD)-based RCS code has been developed at the Rockwell Science Center, and this code incorporates modeling techniques for general radar absorbing materials and structures. Using this work as a base, the goal of the CFD-based CEM effort is to define, implement and evaluate various code development issues suitable for rapid prototype signature prediction.

  3. Evaluation of user input methods for manipulating a tablet personal computer in sterile techniques.

    PubMed

    Yamada, Akira; Komatsu, Daisuke; Suzuki, Takeshi; Kurozumi, Masahiro; Fujinaga, Yasunari; Ueda, Kazuhiko; Kadoya, Masumi

    2017-02-01

    To determine a quick and accurate user input method for manipulating tablet personal computers (PCs) in sterile techniques. We evaluated three different manipulation methods: (1) computer mouse and sterile system drape, (2) fingers and sterile system drape, and (3) digitizer stylus and sterile ultrasound probe cover with a pinhole, in terms of central processing unit (CPU) performance, manipulation performance, and contactlessness. A significant decrease in CPU score ([Formula: see text]) and an increase in CPU temperature ([Formula: see text]) were observed when a system drape was used. The respective mean times taken to select a target image from an image series (ST) and the mean times for measuring points on an image (MT) were [Formula: see text] and [Formula: see text] s for the computer mouse method, [Formula: see text] and [Formula: see text] s for the finger method, and [Formula: see text] and [Formula: see text] s for the digitizer stylus method, respectively. The ST for the finger method was significantly longer than for the digitizer stylus method ([Formula: see text]). The MT for the computer mouse method was significantly longer than for the digitizer stylus method ([Formula: see text]). The mean success rate for measuring points on an image was significantly lower for the finger method than for the other methods when the diameter of the target was 8 mm or smaller. No significant difference in the adenosine triphosphate amount at the surface of the tablet PC was observed before, during, or after manipulation via the digitizer stylus method while wearing starch-powdered sterile gloves ([Formula: see text]). Quick and accurate manipulation of tablet PCs in sterile techniques without CPU load is feasible using a digitizer stylus and sterile ultrasound probe cover with a pinhole.

  4. Turbulent transport measurements with a laser Doppler velocimeter

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Angus, J. C.; Dunning, J. W., Jr.

    1972-01-01

    The power spectrum of phototube current from a laser Doppler velocimeter operating in the heterodyne mode has been computed. The spectrum is obtained in terms of the space time correlation function of the fluid. The spectral width and shape predicted by the theory are in agreement with experiment. For normal operating parameters the time average spectrum contains information only for times shorter than the Lagrangian integral time scale of the turbulence. To examine the long time behavior, one must use either extremely small scattering angles, much longer wavelength radiation or a different mode of signal analysis, e.g., FM detection.

  5. Comparing neuronal spike trains with inhomogeneous Poisson distribution: evaluation procedure and experimental application in cases of cyclic activity.

    PubMed

    Fiore, Lorenzo; Lorenzetti, Walter; Ratti, Giovannino

    2005-11-30

    A procedure is proposed to compare single-unit spiking activity elicited in repetitive cycles with an inhomogeneous Poisson process (IPP). Each spike sequence in a cycle is discretized and represented as a point process on a circle. The interspike interval probability density predicted for an IPP is computed on the basis of the experimental firing probability density; differences from the experimental interval distribution are assessed. This procedure was applied to spike trains that were repetitively induced by opening-closing movements of the distal article of a lobster leg. As expected, the density of short interspike intervals, less than 20-40 ms in length, was found to lie greatly below the level predicted for an IPP, reflecting the occurrence of the refractory period. Conversely, longer intervals, ranging from 20-40 to 100-120 ms, were markedly more abundant than expected; this provided evidence for a time window of increased tendency to fire again after a spike. Less consistently, a weak depression of spike generation was observed for longer intervals. A Monte Carlo procedure, implemented for comparison, produced quite similar results, but was slightly less precise and more demanding in terms of computation time.
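
    A rough sketch of the comparison described above (my own construction, with an assumed rate profile rather than the measured one): generate surrogate spike trains from an inhomogeneous Poisson process with a cyclic rate by the thinning method, then compare the surrogate inter-spike-interval distribution with the recorded one.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    cycle = 1.0                                         # cycle duration in seconds

    def rate(t):
        """Assumed cyclic firing-rate profile (spikes/s); stands in for the measured one."""
        return 20.0 + 15.0 * np.sin(2 * np.pi * t / cycle)

    def ipp_spikes(duration, rate_fn, rate_max):
        """Inhomogeneous Poisson spike times on [0, duration) by thinning."""
        t, spikes = 0.0, []
        while True:
            t += rng.exponential(1.0 / rate_max)        # candidate from homogeneous process
            if t >= duration:
                return np.array(spikes)
            if rng.random() < rate_fn(t) / rate_max:    # accept with probability rate/rate_max
                spikes.append(t)

    surrogate_isis = np.concatenate(
        [np.diff(ipp_spikes(cycle, rate, 35.0)) for _ in range(500)])
    # np.histogram(surrogate_isis, ...) would then be compared bin-by-bin with the
    # histogram of the recorded inter-spike intervals.
    ```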

  6. Cloud Compute for Global Climate Station Summaries

    NASA Astrophysics Data System (ADS)

    Baldwin, R.; May, B.; Cogbill, P.

    2017-12-01

    Global Climate Station Summaries are simple indicators of observational normals which include climatic data summarizations and frequency distributions. These typically are statistical analyses of station data over 5-, 10-, 20-, 30-year or longer time periods. The summaries are computed from the global surface hourly dataset. This dataset, totaling over 500 gigabytes, comprises 40 different types of weather observations from 20,000 stations worldwide. NCEI and the U.S. Navy developed these value-added products in the form of hourly summaries from many of these observations. Enabling this compute functionality in the cloud is the focus of the project. An overview of the approach and the challenges associated with transitioning the application to the cloud will be presented.

  7. Real-time data acquisition and alerts may reduce reaction time and improve perfusionist performance during cardiopulmonary bypass.

    PubMed

    Beck, J R; Fung, K; Lopez, H; Mongero, L B; Argenziano, M

    2015-01-01

    Delayed perfusionist identification of and reaction to abnormal clinical situations has been reported to contribute to increased mortality and morbidity. The use of automated data acquisition and compliance safety alerts has been widely accepted in many industries and its use may improve operator performance. A study was conducted to evaluate the reaction time of perfusionists with and without the use of compliance alerts. A compliance alert is a computer-generated pop-up banner on a pump-mounted computer screen to notify the user of clinical parameters outside of a predetermined range. A proctor monitored and recorded the time from an alert until the perfusionist recognized the parameter was outside the desired range. Group 1 included 10 cases utilizing compliance alerts. Group 2 included 10 cases with the primary perfusionist blinded to the compliance alerts. In Group 1, 97 compliance alerts were identified and, in Group 2, 86 alerts were identified. The average reaction time in the group using compliance alerts was 3.6 seconds. The average reaction time in the group not using the alerts was nearly ten times longer than in the group using computer-assisted, real-time data feedback. Some believe that real-time computer data acquisition and feedback improves perfusionist performance and may allow clinicians to identify and rectify potentially dangerous situations. © The Author(s) 2014.

  8. Quantitative colorectal cancer perfusion measurement using dynamic contrast-enhanced multidetector-row computed tomography: effect of acquisition time and implications for protocols.

    PubMed

    Goh, Vicky; Halligan, Steve; Hugill, Jo-Ann; Gartner, Louise; Bartram, Clive I

    2005-01-01

    To determine the effect of acquisition time on quantitative colorectal cancer perfusion measurement. Dynamic contrast-enhanced computed tomography (CT) was performed prospectively in 10 patients with histologically proven colorectal cancer using 4-detector row CT (Lightspeed Plus; GE Healthcare Technologies, Waukesha, WI). Tumor blood flow, blood volume, mean transit time, and permeability were assessed for 3 acquisition times (45, 65, and 130 seconds). Mean values for all 4 perfusion parameters for each acquisition time were compared using the paired t test. Significant differences in permeability values were noted between acquisitions of 45 seconds and 65 and 130 seconds, respectively (P=0.02, P=0.007). There was no significant difference for values of blood volume, blood flow, and mean transit time between any of the acquisition times. Scan acquisitions of 45 seconds are too short for reliable permeability measurement in the abdomen. Longer acquisition times are required.

  9. GRAMPS: An Automated Ambulatory Geriatric Record

    PubMed Central

    Hammond, Kenric W.; King, Carol A.; Date, Vishvanath V.; Prather, Robert J.; Loo, Lawrence; Siddiqui, Khwaja

    1988-01-01

    GRAMPS (Geriatric Record and Multidisciplinary Planning System) is an interactive MUMPS system developed for VA outpatient use. It allows physicians to effectively document care in problem-oriented format with structured narrative and free text, eliminating handwritten input. We evaluated the system in a one-year controlled cohort study. When the computer was used, appointment times averaged 8.2 minutes longer (32.6 vs. 24.4 minutes) compared to control visits with the same physicians. Computer use was associated with better quality of care as measured in the management of a common problem, hypertension, as well as decreased overall costs of care. When a faster computer was installed, data entry times improved, suggesting that slower processing had accounted for a substantial portion of the observed difference in appointment lengths. The GRAMPS system was well-accepted by providers. The modular design used in GRAMPS has been extended to medical-care applications in Nursing and Mental Health.

  10. Quantum Computation Using Optically Coupled Quantum Dot Arrays

    NASA Technical Reports Server (NTRS)

    Pradhan, Prabhakar; Anantram, M. P.; Wang, K. L.; Roychowhury, V. P.; Saini, Subhash (Technical Monitor)

    1998-01-01

    A solid state model for quantum computation has potential advantages in terms of the ease of fabrication, characterization, and integration. The fundamental requirements for a quantum computer involve the realization of basic processing units (qubits), and a scheme for controlled switching and coupling among the qubits, which enables one to perform controlled operations on qubits. We propose a model for quantum computation based on optically coupled quantum dot arrays, which is computationally similar to the atomic model proposed by Cirac and Zoller. In this model, each qubit comprises two coupled quantum dots, and an array of these basic units is placed in an optical cavity. Switching among the states of the individual units is done by controlled laser pulses via near field interaction using the NSOM technology. Controlled rotations involving two or more qubits are performed via a common cavity-mode photon. We have calculated critical times, including the spontaneous emission and switching times, and show that they are comparable to the best times projected for other proposed models of quantum computation. We have also shown the feasibility of accessing individual quantum dots using the NSOM technology by calculating the photon density at the tip, and estimating the power necessary to perform the basic controlled operations. We are currently in the process of estimating the decoherence times for this system; however, we have formulated initial arguments which seem to indicate that the decoherence times will be comparable to, if not longer than, those of many other proposed models.

  11. Sensitivity of chemistry-transport model simulations to the duration of chemical and transport operators: a case study with GEOS-Chem v10-01

    NASA Astrophysics Data System (ADS)

    Philip, Sajeev; Martin, Randall V.; Keller, Christoph A.

    2016-05-01

    Chemistry-transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemistry-transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to operator duration. Subsequently, we compare the species simulated with operator durations from 10 to 60 min as typically used by global chemistry-transport models, and identify the operator durations that optimize both computational expense and simulation accuracy. We find that longer continuous transport operator duration increases concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production with longer transport operator duration. Longer chemical operator duration decreases sulfate and ammonium but increases nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by up to a factor of 5 from fine (5 min) to coarse (60 min) operator duration. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, secondary inorganic aerosols, ozone and carbon monoxide with a finer temporal or spatial resolution taken as "truth". Relative simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) operator duration. Chemical operator duration twice that of the transport operator duration offers more simulation accuracy per unit computation. However, the relative simulation error from coarser spatial resolution generally exceeds that from longer operator duration; e.g., degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different operator durations in offline chemistry-transport models. We encourage chemistry-transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.

  12. Sensitivity of chemical transport model simulations to the duration of chemical and transport operators: a case study with GEOS-Chem v10-01

    NASA Astrophysics Data System (ADS)

    Philip, S.; Martin, R. V.; Keller, C. A.

    2015-11-01

    Chemical transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemical transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to temporal resolution. Subsequently, we compare the tracers simulated with operator durations from 10 to 60 min as typically used by global chemical transport models, and identify the timesteps that optimize both computational expense and simulation accuracy. We found that longer transport timesteps increase concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production at longer transport timesteps. Longer chemical timesteps decrease sulfate and ammonium but increase nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by an order of magnitude from fine (5 min) to coarse (60 min) temporal resolution. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, ozone, carbon monoxide and secondary inorganic aerosols with a finer temporal or spatial resolution taken as truth. Simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) temporal resolution. Chemical timesteps twice that of the transport timestep offer more simulation accuracy per unit computation. However, simulation error from coarser spatial resolution generally exceeds that from longer timesteps; e.g. degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different temporal resolutions in offline chemical transport models. We encourage the chemical transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.

  13. Algorithms for searching Fast radio bursts and pulsars in tight binary systems.

    NASA Astrophysics Data System (ADS)

    Zackay, Barak

    2017-01-01

    Fast radio bursts (FRBs) are exciting, recently discovered astrophysical transients whose origins are unknown. Currently, these bursts are believed to come from cosmological distances, allowing us to probe the electron content on cosmological length scales. Even though their precise localization is crucial for determining their origin, radio interferometers have not been extensively employed in searching for them due to computational limitations. I will briefly present the Fast Dispersion Measure Transform (FDMT) algorithm, which reduces the operation count of blind incoherent dedispersion by 2-3 orders of magnitude. In addition, FDMT makes it possible to probe the unexplored domain of sub-microsecond astrophysical pulses. Pulsars in tight binary systems are among the most important astrophysical objects, as they provide our best tests of general relativity in the strong-field regime. I will provide a preview of a novel algorithm that enables the detection of pulsars in short binary systems using observation times longer than an orbital period. Current pulsar search programs limit their searches to integration times shorter than a few percent of the orbital period. Until now, searching for pulsars in binary systems using observation times longer than an orbital period was considered impossible, as one has to blindly enumerate all options for the Keplerian parameters, the pulsar rotation period, and the unknown DM. Using the current state-of-the-art pulsar search techniques and all the computers on Earth, such an enumeration would take longer than a Hubble time. I will demonstrate that, using the new algorithm, it is possible to conduct such an enumeration on a laptop using real data of the double pulsar PSR J0737-3039. Among the other applications of this algorithm are: (1) searching for all pulsars at all sky positions in gamma-ray observations of the Fermi LAT satellite; (2) blind searching for continuous gravitational wave sources emitted by pulsars with non-axisymmetric matter distributions. Previous attempts to conduct all of the above searches involved substantial sensitivity compromises.

  14. Fast multigrid-based computation of the induced electric field for transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Laakso, Ilkka; Hirata, Akimasa

    2012-12-01

    In transcranial magnetic stimulation (TMS), the distribution of the induced electric field, and the affected brain areas, depends on the position of the stimulation coil and the individual geometry of the head and brain. The distribution of the induced electric field in realistic anatomies can be modelled using computational methods. However, existing computational methods for accurately determining the induced electric field in realistic anatomical models have suffered from long computation times, typically in the range of tens of minutes or longer. This paper presents a matrix-free implementation of the finite-element method with a geometric multigrid method that can potentially reduce the computation time to several seconds or less even when using an ordinary computer. The performance of the method is studied by computing the induced electric field in two anatomically realistic models. An idealized two-loop coil is used as the stimulating coil. Multiple computational grid resolutions ranging from 2 to 0.25 mm are used. The results show that, for macroscopic modelling of the electric field in an anatomically realistic model, computational grid resolutions of 1 mm or 2 mm appear to provide good numerical accuracy compared to higher resolutions. The multigrid iteration typically converges in less than ten iterations independent of the grid resolution. Even without parallelization, each iteration takes about 1.0 s or 0.1 s for the 1 and 2 mm resolutions, respectively. This suggests that calculating the electric field with sufficient accuracy in real time is feasible.
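
    The record above reports the multigrid iteration converging in fewer than ten cycles. As a rough illustration of how a geometric multigrid V-cycle with a matrix-free operator works, the sketch below solves a 1D Poisson problem; it is a simplified stand-in under my own assumptions (grid sizes, smoothing parameters), not the authors' 3D finite-element solver.

    ```python
    import numpy as np

    def apply_A(u, h):
        """Matrix-free application of the 1D Laplacian (-u'') on interior points."""
        up = np.concatenate(([0.0], u, [0.0]))          # zero Dirichlet boundaries
        return (2 * up[1:-1] - up[:-2] - up[2:]) / h**2

    def jacobi(u, f, h, sweeps=3, omega=2.0 / 3.0):
        for _ in range(sweeps):
            u = u + omega * (f - apply_A(u, h)) * h**2 / 2.0
        return u

    def v_cycle(u, f, h):
        n = u.size
        if n < 3:                                       # coarsest level: solve by heavy smoothing
            return jacobi(u, f, h, sweeps=50)
        u = jacobi(u, f, h)                             # pre-smoothing
        r = f - apply_A(u, h)
        rc = 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]   # full-weighting restriction
        ec = v_cycle(np.zeros_like(rc), rc, 2 * h)      # coarse-grid correction
        e = np.zeros(n)
        e[1::2] = ec                                    # prolongation: inject at coarse nodes
        e[0::2] = 0.5 * (np.concatenate(([0.0], ec)) + np.concatenate((ec, [0.0])))
        return jacobi(u + e, f, h)                      # post-smoothing

    n = 2**7 - 1                                        # interior points; h = 1/(n+1)
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)                    # exact solution is sin(pi x)
    u = np.zeros(n)
    for _ in range(8):
        u = v_cycle(u, f, h)
    print(np.max(np.abs(u - np.sin(np.pi * x))))        # error is down at the discretisation level
    ```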

  15. Do you know where your fingers have been? Explicit knowledge of the spatial layout of the keyboard in skilled typists.

    PubMed

    Liu, Xianyun; Crump, Matthew J C; Logan, Gordon D

    2010-06-01

    Two experiments evaluated skilled typists' ability to report knowledge about the layout of keys on a standard keyboard. In Experiment 1, subjects judged the relative direction of letters on the computer keyboard. One group of subjects was asked to imagine the keyboard, one group was allowed to look at the keyboard, and one group was asked to type the letter pair before judging relative direction. The imagine group had larger angular error and longer response time than both the look and touch groups. In Experiment 2, subjects placed one key relative to another. Again, the imagine group had larger angular error, larger distance error, and longer response time than the other groups. The two experiments suggest that skilled typists have poor explicit knowledge of key locations. The results are interpreted in terms of a model with two hierarchical parts in the system controlling typewriting.

  16. Tracking of large-scale structures in turbulent channel with direct numerical simulation of low Prandtl number passive scalar

    NASA Astrophysics Data System (ADS)

    Tiselj, Iztok

    2014-12-01

    Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ˜2300 wall units long and ˜750 wall units wide; size taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flows was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) that are within 1%-2%. Similar agreement is observed for Pr = 1 temperature fields and can be observed also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations of standard and large computational domains at Pr = 0.01 show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large-scales. These large thermal structures represent some kind of an echo of the large scale velocity structures: the highest temperature-velocity correlations are not observed between the instantaneous temperatures and instantaneous streamwise velocities, but between the instantaneous temperatures and velocities averaged over certain time interval.

  17. Acceleration of FDTD mode solver by high-performance computing techniques.

    PubMed

    Han, Lin; Xi, Yanping; Huang, Wei-Ping

    2010-06-21

    A two-dimensional (2D) compact finite-difference time-domain (FDTD) mode solver is developed based on wave equation formalism in combination with the matrix pencil method (MPM). The method is validated for calculation of both real guided and complex leaky modes of typical optical waveguides against the benchmark finite-difference (FD) eigenmode solver. By taking advantage of the inherent parallel nature of the FDTD algorithm, the mode solver is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). It is demonstrated that the high-performance computing technique leads to significant acceleration of the FDTD mode solver, with more than 30 times improvement in computational efficiency in comparison with the conventional FDTD mode solver running on the CPU of a standard desktop computer. The computational efficiency of the accelerated FDTD method is of the same order of magnitude as that of the standard finite-difference eigenmode solver, yet it requires much less memory (e.g., less than 10%). Therefore, the new method may serve as an efficient, accurate and robust tool for mode calculation of optical waveguides even when the conventional eigenvalue mode solvers are no longer applicable due to memory limitations.

  18. Prolonged Screen Viewing Times and Sociodemographic Factors among Pregnant Women: A Cross-Sectional Survey in China

    PubMed Central

    Liu, Dengyuan; Rao, Yunshuang; Zeng, Huan; Zhang, Fan; Wang, Lu; Xie, Yaojie; Sharma, Manoj; Zhao, Yong

    2018-01-01

    Objectives: This study aimed to assess the prevalence of prolonged television, computer, and mobile phone viewing times and to examine related sociodemographic factors among Chinese pregnant women. Methods: In this study, a cross-sectional survey was implemented among 2400 Chinese pregnant women in 16 hospitals in 5 provinces from June to August 2015, with a response rate of 97.76%. We excluded women with serious complications and cognitive disorders. The women were asked about their television, computer, and mobile phone viewing during pregnancy. Prolonged television watching or computer viewing was defined as spending more than two hours on television or computer viewing per day. Prolonged mobile phone viewing was defined as more than one hour of mobile phone viewing per day. Results: Among 2345 pregnant women, about 25.1% reported prolonged television viewing, 20.6% reported prolonged computer viewing, and 62.6% reported prolonged mobile phone viewing. Pregnant women with long mobile phone viewing times were likely to have long TV (Estimate = 0.080, Standard Error (SE) = 0.016, p < 0.001) and computer viewing times (Estimate = 0.053, SE = 0.022, p = 0.015). Pregnant women with long TV (Estimate = 0.134, SE = 0.027, p < 0.001) and long computer viewing times (Estimate = 0.049, SE = 0.020, p = 0.015) were likely to have long mobile phone viewing times. Pregnant women with long TV viewing times were less likely to have long computer viewing times (Estimate = −0.032, SE = 0.015, p = 0.035), and pregnant women with long computer viewing times were less likely to have long TV viewing times (Estimate = −0.059, SE = 0.028, p = 0.035). Pregnant women in their second pregnancy were less likely to have prolonged computer viewing times than those in their first pregnancy (Odds Ratio (OR) 0.56, 95% Confidence Interval (CI) 0.42–0.74). Pregnant women in their second pregnancy were more likely to have prolonged mobile phone viewing times than those in their first pregnancy (OR 1.25, 95% CI 1.01–1.55). Conclusions: Prolonged TV, computer, and mobile phone viewing times were common among pregnant women in their first and second pregnancies. This study preliminarily explored the relationship between sociodemographic factors and prolonged screen time to provide some indication for future interventions related to decreasing screen-viewing times during pregnancy in China. PMID:29495439

  19. Mobile Cloud Computing with SOAP and REST Web Services

    NASA Astrophysics Data System (ADS)

    Ali, Mushtaq; Fadli Zolkipli, Mohamad; Mohamad Zain, Jasni; Anwar, Shahid

    2018-05-01

    Mobile computing in conjunction with mobile web services offers a strong approach by which the limitations of mobile devices may be tackled. Mobile web services are based on two technologies, SOAP and REST, which work with existing protocols to develop Web services. Both approaches have their own distinct features, yet keeping the constrained resources of mobile devices in mind, the better of the two is considered to be the one that minimizes the computation and transmission overhead of offloading. Transferring load from the mobile device to remote servers for execution is called computational offloading. There are numerous approaches to implementing computational offloading as a viable solution for easing the resource constraints of mobile devices, yet a dynamic method of computational offloading is always required for a smooth and simple migration of complex tasks. The intention of this work is to present a distinctive approach that does not engage the mobile resources for long periods. The concept of web services is utilized in our work to delegate computationally intensive tasks for remote execution. We tested both the SOAP and the REST web services approach for mobile computing. Two parameters were considered in our lab experiments: execution time and energy consumption. The results show that RESTful web services execution is far better than executing the same application through SOAP web services, in terms of execution time and energy consumption. In experiments with the developed prototype matrix multiplication app, REST execution time is about 200% better than the SOAP approach. In the case of energy consumption, REST execution is about 250% better than the SOAP approach.
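
    A hypothetical sketch of the REST-style offloading pattern described above; the endpoint URL and JSON schema are inventions for illustration only, not the paper's actual service.

    ```python
    import requests

    def offload_matmul(a, b, endpoint="http://example.com/api/matmul"):
        """Send two matrices (nested lists) to a remote REST service and return the product."""
        # Hypothetical endpoint and payload format; a real deployment would define its own API.
        response = requests.post(endpoint, json={"a": a, "b": b}, timeout=30)
        response.raise_for_status()
        return response.json()["result"]

    # product = offload_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
    ```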

  20. Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wada, Takao

    2014-07-01

    A particle motion considering thermophoretic force is simulated by using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The problem in thermophoresis simulation is the computation time, which is proportional to the collision frequency. Note that the time step interval becomes very small for simulations considering the motion of such large particles. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor is the number of molecules colliding with the particle in one collision event. A large time step interval is made possible by the collision weight factor; for a particle size of 1 μm it is about a million times longer than the conventional time step interval of the DSMC method, so the computation time becomes about one-millionth. We simulate the graphite particle motion considering thermophoretic force by DSMC-Neutrals (Particle-PLUS neutral module) with the above collision weight factor, where DSMC-Neutrals is commercial software adopting the DSMC method. The size and the shape of the particle are 1 μm and a sphere, respectively. Particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results. Note that Gallis' analytical result for the continuum limit is the same as Waldmann's result.
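
    A schematic, heavily simplified illustration of the collision-weight-factor idea (my own 1D toy, not the paper's model): each simulated collision event transfers the momentum of W molecules at once, so the number of events, and hence the run time, drops by roughly the same factor.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def particle_momentum_kick(n_events, weight, m_molecule, v_thermal, v_particle):
        """Accumulated momentum transferred to a heavy particle over n_events collision
        events, each representing `weight` specular molecule impacts (1D toy model)."""
        dp = 0.0
        for _ in range(n_events):
            v_mol = rng.normal(0.0, v_thermal)          # incoming molecular velocity
            dp += weight * 2.0 * m_molecule * (v_mol - v_particle)   # specular reflection
        return dp

    # With weight = 1e6 a single event stands in for ~a million real impacts, so far
    # fewer events (and time steps) are needed to cover the same physical time.
    print(particle_momentum_kick(1000, 1e6, 6.6e-26, 400.0, 0.0))
    ```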

  1. Computationally-Efficient Minimum-Time Aircraft Routes in the Presence of Winds

    NASA Technical Reports Server (NTRS)

    Jardin, Matthew R.

    2004-01-01

    A computationally efficient algorithm for minimizing the flight time of an aircraft in a variable wind field has been invented. The algorithm, referred to as Neighboring Optimal Wind Routing (NOWR), is based upon neighboring-optimal-control (NOC) concepts and achieves minimum-time paths by adjusting aircraft heading according to wind conditions at an arbitrary number of wind measurement points along the flight route. The NOWR algorithm may either be used in a fast-time mode to compute minimum-time routes prior to flight, or may be used in a feedback mode to adjust aircraft heading in real-time. By traveling minimum-time routes instead of great-circle (direct) routes, flights across the United States can save an average of about 7 minutes, and as much as one hour of flight time during periods of strong jet-stream winds. The neighboring optimal routes computed via the NOWR technique have been shown to be within 1.5 percent of the absolute minimum-time routes for flights across the continental United States. On a typical 450-MHz Sun Ultra workstation, the NOWR algorithm produces complete minimum-time routes in less than 40 milliseconds. This corresponds to a rate of 25 optimal routes per second. The closest comparable optimization technique runs approximately 10 times slower. Airlines currently use various trial-and-error search techniques to determine which of a set of commonly traveled routes will minimize flight time. These algorithms are too computationally expensive for use in real-time systems, or in systems where many optimal routes need to be computed in a short amount of time. Instead of operating in real-time, airlines will typically plan a trajectory several hours in advance using wind forecasts. If winds change significantly from forecasts, the resulting flights will no longer be minimum-time. The need for a computationally efficient wind-optimal routing algorithm is even greater in the case of new air-traffic-control automation concepts. For air-traffic-control automation, thousands of wind-optimal routes may need to be computed and checked for conflicts in just a few minutes. These factors motivated the need for a more efficient wind-optimal routing algorithm.

  2. Computer versus paper--does it make any difference in test performance?

    PubMed

    Karay, Yassin; Schauber, Stefan K; Stosch, Christoph; Schüttpelz-Brauns, Katrin

    2015-01-01

    CONSTRUCT: In this study, we examine the differences in test performance between the paper-based and the computer-based version of the Berlin formative Progress Test. In this context it is the first study that allows controlling for students' prior performance. Computer-based tests make possible a more efficient examination procedure for test administration and review. Although university staff will benefit greatly from computer-based tests, the question arises whether computer-based tests influence students' test performance. A total of 266 German students from the 9th and 10th semester of medicine (comparable with the 4th-year North American medical school schedule) participated in the study (paper = 132, computer = 134). The allocation of the test format was conducted as a randomized matched-pair design in which students were first sorted according to their prior test results. The organizational procedure, the examination conditions, the room and seating arrangements, as well as the order of questions and answers, were identical in both groups. The sociodemographic variables and pretest scores of both groups were comparable. The test results from the paper and computer versions did not differ. The groups remained within the allotted time, but students using the computer version (particularly the high performers) needed significantly less time to complete the test. In addition, we found significant differences in guessing behavior: low performers using the computer version guessed significantly more than low-performing students using the paper-pencil version. Participants in computer-based tests are not at a disadvantage in terms of their test results. The computer-based test required less processing time. The longer processing time with the paper-pencil version might be due to the time needed to write the answer down and to check that it was transferred correctly. It is still not known why students using the computer version (particularly low-performing students) guess at a higher rate. Further studies are necessary to understand this finding.

  3. 3-D Voxel FEM Simulation of Seismic Wave Propagation in a Land-Sea Structure with Topography

    NASA Astrophysics Data System (ADS)

    Ikegami, Y.; Koketsu, K.

    2003-12-01

    We have already developed the voxel FEM (finite element method) code to simulate seismic wave propagation in a land structure with surface topography (Koketsu, Fujiwara and Ikegami, 2003). Although the conventional FEM often requires much larger memory, longer computation time and far more complicated mesh generation than the Finite Difference Method (FDM), this code consumes a similar amount of memory to FDM and spends only 1.4 times longer computation time thanks to the simplicity of voxels (hexahedron elements). The voxel FEM was successfully applied to inland earthquakes, but most earthquakes in a subduction zone occur beneath a sea, so that a simulation in a land-sea structure is essential for waveform modeling and strong motion prediction there. We now introduce a domain of fluid elements into the model and formulate displacements in the elements using the Lagrange method. Sea-bottom motions are simulated for the simple land-sea models of Okamoto and Takenaka (1999). The simulation results agree well with their reflectivity and FDM seismograms. In order to enhance numerical stability, not only a variable mesh but also an adaptive time step is introduced. We can now choose the optimal time steps everywhere in the model based on the Courant condition. This doubly variable formulation may result in inefficient parallel computing. The wave velocity in a shallow part is lower than that in a deeper part. Therefore, if the model is divided into horizontal slices and they are assigned to CPUs, a shallow slice will consist of only small elements. This can cause unbalanced loads on the CPUs. Accordingly, the model is divided into vertical slices in this study. They also reduce inter-processor communication, because a vertical cross section is usually smaller than a horizontal one. In addition, we will consider a higher-order FEM formulation compatible with the fourth-order FDM. We will also present numerical examples to demonstrate the effects of a sea and surface topography on seismic waves and ground motions.
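
    A minimal sketch of choosing a local time step from the Courant condition, as used for the adaptive time stepping described above; the element sizes, wave speeds, and safety factor below are illustrative assumptions, not values from the study:

```python
import numpy as np

def local_time_steps(element_size, wave_speed, safety=0.5):
    """Courant-limited time step per element: dt <= safety * dx / v."""
    return safety * element_size / wave_speed

# Assumed voxel sizes (m) and wave speeds (m/s) for shallow and deep layers.
dx = np.array([50.0, 50.0, 100.0, 200.0])
vp = np.array([1500.0, 2500.0, 4000.0, 6000.0])

dt_local = local_time_steps(dx, vp)
print("per-element steps:", dt_local)              # larger where dx/v is larger
print("uniform-step alternative:", dt_local.min()) # what a single global step would force
```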

  4. Profile modification computations for LHCD experiments on PBX-M using the TSC/LSC model

    NASA Astrophysics Data System (ADS)

    Kaita, R.; Ignat, D. W.; Jardin, S. C.; Okabayashi, M.; Sun, Y. C.

    1996-02-01

    The TSC-LSC computational model of the dynamics of lower hybrid current drive has been exercised extensively in comparison with data from a Princeton Beta Experiment-Modification (PBX-M) discharge where the measured q(0) attained values slightly above unity. Several significant, but plausible, assumptions had to be introduced to keep the computation from behaving pathologically over time, producing singular profiles of plasma current density and q. Addition of a heuristic current diffusion estimate, or more exactly, a smoothing of the rf-driven current with a diffusion-like equation, greatly improved the behavior of the computation, and brought theory and measurement into reasonable agreement. The model was then extended to longer pulse lengths and higher powers to investigate performance to be expected in future PBX-M current profile modification experiments.
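
    The "diffusion-like smoothing" of the rf-driven current mentioned above can be pictured with a minimal sketch: an explicit diffusion step applied to a 1-D current-density profile. The grid, diffusivity, and boundary treatment are illustrative assumptions and do not reproduce the TSC/LSC implementation.

```python
import numpy as np

def smooth_profile(j, dx, diffusivity, dt, steps):
    """Apply `steps` explicit diffusion updates j_t = D * j_xx to a 1-D profile."""
    j = j.copy()
    for _ in range(steps):
        lap = (np.roll(j, -1) - 2.0 * j + np.roll(j, 1)) / dx**2
        lap[0] = lap[-1] = 0.0            # keep the end points fixed
        j = j + dt * diffusivity * lap
    return j

# A sharply peaked rf-driven current profile (arbitrary units).
x = np.linspace(0.0, 1.0, 101)
j_rf = np.exp(-((x - 0.7) / 0.03) ** 2)

dx = x[1] - x[0]
dt = 0.4 * dx**2 / 1.0                    # respect the explicit stability limit
j_smooth = smooth_profile(j_rf, dx, diffusivity=1.0, dt=dt, steps=200)
print(j_smooth.max(), "<", j_rf.max())    # peak reduced, profile broadened
```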

  5. Resampling to accelerate cross-correlation searches for continuous gravitational waves from binary systems

    NASA Astrophysics Data System (ADS)

    Meadors, Grant David; Krishnan, Badri; Papa, Maria Alessandra; Whelan, John T.; Zhang, Yuanhao

    2018-02-01

    Continuous-wave (CW) gravitational waves (GWs) call for computationally-intensive methods. Low signal-to-noise ratio signals need templated searches with long coherent integration times and thus fine parameter-space resolution. Longer integration increases sensitivity. Low-mass x-ray binaries (LMXBs) such as Scorpius X-1 (Sco X-1) may emit accretion-driven CWs at strains reachable by current ground-based observatories. Binary orbital parameters induce phase modulation. This paper describes how resampling corrects binary and detector motion, yielding source-frame time series used for cross-correlation. Compared to the previous, detector-frame, templated cross-correlation method, used for Sco X-1 on data from the first Advanced LIGO observing run (O1), resampling is about 20 × faster in the costliest, most-sensitive frequency bands. Speed-up factors depend on integration time and search setup. The speed could be reinvested into longer integration with a forecast sensitivity gain, 20 to 125 Hz median, of approximately 51%, or from 20 to 250 Hz, 11%, given the same per-band cost and setup. This paper's timing model enables future setup optimization. Resampling scales well with longer integration, and at 10 × unoptimized cost could reach respectively 2.83 × and 2.75 × median sensitivities, limited by spin-wandering. Then an O1 search could yield a marginalized-polarization upper limit reaching torque-balance at 100 Hz. Frequencies from 40 to 140 Hz might be probed in equal observing time with 2 × improved detectors.
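
    A minimal sketch of the resampling idea described above: detector-frame samples are interpolated onto times that are uniform in the source frame, so that the orbital and detector-motion phase modulation is removed before cross-correlation. The delay model below is a toy single-sinusoid orbit, purely illustrative.

```python
import numpy as np

def resample_to_source_frame(t_det, x_det, delay):
    """Interpolate detector-frame data x(t) onto uniform source-frame times.
    `delay(t)` maps detector time to the extra orbital/light-travel delay."""
    t_src_of_det = t_det + delay(t_det)          # source-frame arrival times
    t_src_uniform = np.linspace(t_src_of_det[0], t_src_of_det[-1], t_det.size)
    return t_src_uniform, np.interp(t_src_uniform, t_src_of_det, x_det)

# Toy example: a monochromatic source phase-modulated by a circular orbit.
fs, f0 = 256.0, 50.0                                       # sample rate and source frequency (Hz)
t = np.arange(0.0, 64.0, 1.0 / fs)
orbit = lambda tt: 1.5e-3 * np.sin(2 * np.pi * tt / 7.0)   # delay in seconds (assumed)
x = np.sin(2 * np.pi * f0 * (t + orbit(t)))                # modulated detector-frame signal

t_src, x_src = resample_to_source_frame(t, x, orbit)
# x_src is now (approximately) a pure sinusoid at f0 in the source frame.
```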

  6. Posttuberculosis tracheobronchial stenosis: use of CT to optimize the time of silicone stent removal.

    PubMed

    Verma, Akash; Park, Hye Yun; Lim, So Yeon; Um, Sang-Won; Koh, Won-Jung; Suh, Gee Young; Chung, Man Pyo; Kwon, O Jung; Kim, Hojoong

    2012-05-01

    To evaluate whether air pockets (tracheobronchial air columns in the space between the outer surface of the stent and the adjacent airway wall) discernible at computed tomography (CT) can help optimize the time of stent removal in patients with posttuberculosis tracheobronchial stenosis (PTTS). The study was approved by the institutional review board, and informed consent was obtained from all patients. Data from 41 patients (five men, 36 women) with a median age of 39 years (range, 21-64 years) who underwent silicone stent placement owing to PTTS, followed by CT and stent removal 6-12 months after clinical stabilization, were investigated retrospectively. Two radiologists determined whether the extent of air pockets on CT scans was associated with clinical success, which was defined as maintenance of a prosthesis-free airway for more than 2 years after stent removal. Radiologic features were compared for outcome by using a Wilcoxon two-sample test or Fisher exact test. Stents were removed successfully in 31 patients (76%). Air pockets longer than 1 cm or longer than 2 cm were associated with successful stent removal (P = .04 and P = .006, respectively). The sensitivity and specificity of air pocket length in the prediction of successful stent removal were 84% and 50%, respectively, for air pockets longer than 1 cm and 68% and 70% for air pockets longer than 2 cm. The extent of air pockets at chest CT shows correlation with the success of stent removal, indicates regression of stenosis, and may help guide the optimal time for stent removal.

  7. A New Minimum Trees-Based Approach for Shape Matching with Improved Time Computing: Application to Graphical Symbols Recognition

    NASA Astrophysics Data System (ADS)

    Franco, Patrick; Ogier, Jean-Marc; Loonis, Pierre; Mullot, Rémy

    Recently we have developed a model for shape description and matching. Based on minimum spanning tree construction and specific stages such as the mixture, it seems to have many desirable properties. Recognition invariance with respect to shifted, rotated, and noisy shapes was checked through medium-scale tests on the GREC symbol reference database. Even if extracting the topology of a shape by mapping the shortest path connecting all the pixels is powerful, the construction of the graph entails an expensive algorithmic cost. In this article we discuss ways to reduce the computation time. An alternative solution based on image compression concepts is provided and evaluated. The model no longer operates in the image space but in a compact space, namely the discrete cosine space. The use of the block discrete cosine transform is discussed and justified. The experimental results obtained on the GREC2003 database show that the proposed method offers good discrimination power and real robustness to noise with an acceptable computation time.
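
    A minimal sketch of working in the block discrete cosine space rather than the image space, as the abstract describes; the block size, normalization, and number of retained coefficients are illustrative choices, not the authors' settings.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_features(image, block=8, keep=4):
    """Map a grayscale image to a compact descriptor: for each block x block
    tile, keep only the top-left `keep` x `keep` low-frequency DCT coefficients."""
    h, w = image.shape
    h, w = h - h % block, w - w % block          # crop to a whole number of tiles
    feats = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = image[i:i + block, j:j + block]
            coeffs = dctn(tile, norm="ortho")
            feats.append(coeffs[:keep, :keep].ravel())
    return np.concatenate(feats)

rng = np.random.default_rng(0)
symbol = rng.random((64, 64))                    # stand-in for a symbol image
descriptor = block_dct_features(symbol)
print(descriptor.shape)                          # 64 tiles * 16 coefficients = (1024,)
```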

  8. Duration of untreated illness as a predictor of treatment response and clinical course in generalized anxiety disorder.

    PubMed

    Altamura, A C; Dell'osso, Bernardo; D'Urso, Nazario; Russo, Michela; Fumagalli, Sara; Mundo, Emanuela

    2008-05-01

    The aim of the present study was to investigate the impact of the duration of untreated illness (DUI)-defined as the time elapsing between the onset of generalized anxiety disorder (GAD) and the first adequate pharmacologic treatment-on treatment response and clinical course in a sample of subjects with GAD. One hundred patients with GAD, diagnosed according to Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition-Text Revision criteria, were enrolled and their main demographic and clinical features collected. Patients were then treated with selective serotonin reuptake inhibitors or venlafaxine for 8 weeks in open-label conditions. Treatment response and other clinical variables were analyzed after dividing the sample into two groups according to DUI (shorter versus longer than 12 months). When the DUI was computed with respect to the first antidepressant treatment (DUI-AD), a higher improvement (Clinical Global Impressions-Severity of Illness scale) after the pharmacologic treatment was found in the group with a shorter DUI (analysis of variance with repeated measures: time effect F=654.975, P<.001; group effect: F=4.369, P=.039). When computed with respect to the first treatment with benzodiazepines (DUI-BDZ), the two groups did not show any significant difference in treatment response (time effect: F=652.183, P<.001; group effect: F=0.009, P=.924). In addition, patients with a longer DUI (DUI-BDZ or DUI-AD) showed an earlier age at onset, a longer duration of illness and a higher rate of comorbid psychiatric disorders with onset later than GAD. Results from this preliminary study seem to suggest that a shorter DUI-AD may determine a better response to pharmacologic treatment in patients with GAD, and that a longer DUI (DUI-BDZ and DUI-AD) may be associated with a worse clinical course. Further investigation on the relationship between DUI and GAD is needed.

  9. Computer work and self-reported variables on anthropometrics, computer usage, work ability, productivity, pain, and physical activity.

    PubMed

    Madeleine, Pascal; Vangsgaard, Steffen; Hviid Andersen, Johan; Ge, Hong-You; Arendt-Nielsen, Lars

    2013-08-01

    Computer users often report musculoskeletal complaints and pain in the upper extremities and the neck-shoulder region. However, recent epidemiological studies do not report a relationship between the extent of computer use and work-related musculoskeletal disorders (WMSD). The aim of this study was to conduct an explorative analysis on short and long-term pain complaints and work-related variables in a cohort of Danish computer users. A structured web-based questionnaire including questions related to musculoskeletal pain, anthropometrics, work-related variables, work ability, productivity, health-related parameters, lifestyle variables as well as physical activity during leisure time was designed. Six hundred and ninety office workers completed the questionnaire responding to an announcement posted in a union magazine. The questionnaire outcomes, i.e., pain intensity, duration and locations as well as anthropometrics, work-related variables, work ability, productivity, and level of physical activity, were stratified by gender and correlations were obtained. Women reported higher pain intensity, longer pain duration as well as more locations with pain than men (P < 0.05). In parallel, women scored poorer work ability and ability to fulfil the requirements on productivity than men (P < 0.05). Strong positive correlations were found between pain intensity and pain duration for the forearm, elbow, neck and shoulder (P < 0.001). Moderate negative correlations were seen between pain intensity and work ability/productivity (P < 0.001). The present results provide new key information on pain characteristics in office workers. The differences in pain characteristics, i.e., higher intensity, longer duration and more pain locations as well as poorer work ability reported by women workers relate to their higher risk of contracting WMSD. Overall, this investigation confirmed the complex interplay between anthropometrics, work ability, productivity, and pain perception among computer users.

  10. RenderMan design principles

    NASA Technical Reports Server (NTRS)

    Apodaca, Tony; Porter, Tom

    1989-01-01

    The two worlds of interactive graphics and realistic graphics have remained separate. Fast graphics hardware runs simple algorithms and generates simple looking images. Photorealistic image synthesis software runs slowly on large expensive computers. The time has come for these two branches of computer graphics to merge. The speed and expense of graphics hardware is no longer the barrier to the wide acceptance of photorealism. There is every reason to believe that high quality image synthesis will become a standard capability of every graphics machine, from superworkstation to personal computer. The significant barrier has been the lack of a common language, an agreed-upon set of terms and conditions, for 3-D modeling systems to talk to 3-D rendering systems for computing an accurate rendition of that scene. Pixar has introduced RenderMan to serve as that common language. RenderMan, specifically the extensibility it offers in shading calculations, is discussed.

  11. Skin hydration analysis by experiment and computer simulations and its implications for diapered skin.

    PubMed

    Saadatmand, M; Stone, K J; Vega, V N; Felter, S; Ventura, S; Kasting, G; Jaworska, J

    2017-11-01

    Experimental work on skin hydration is technologically challenging, and mostly limited to observations where environmental conditions are constant. In some cases, like diapered baby skin, such work is practically unfeasible, yet it is important to understand potential effects of diapering on skin condition. To overcome this challenge, in part, we developed a computer simulation model of reversible transient skin hydration effects. The skin hydration model by Li et al. (Chem Eng Sci, 138, 2015, 164) was further developed to simulate transient exposure conditions where relative humidity (RH), wind velocity, air, and skin temperature can be any function of time. Computer simulations of evaporative water loss (EWL) decay after different occlusion times were compared with experimental data to calibrate the model. Next, we used the model to investigate EWL and SC thickness in different diapering scenarios. Key results from the experimental work were: (1) for occlusions by RH=100% and free water longer than 30 minutes, the absorbed amount of water is almost the same; (2) longer occlusion times result in higher water absorption by the SC. The EWL decay and skin water content predictions were in agreement with experimental data. Simulations also revealed that skin under occlusion hydrates mainly because the outflux is blocked, not because it absorbs water from the environment. Further, simulations demonstrated that hydration level is sensitive to time, RH and/or free water on skin. In simulated diapering scenarios, skin maintained hydration content very close to the baseline conditions without a diaper for the entire duration of a 24-hour period. Different diapers/diaper technologies are known to have different profiles in terms of their ability to provide wetness protection, which can result in consumer-noticeable differences in wetness. Simulation results based on published literature using data from a number of different diapers suggest that diapered skin hydrates within ranges considered reversible. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. An approach to collective behavior in cell cultures: modeling and analysis of ECIS data

    NASA Astrophysics Data System (ADS)

    Rabson, David; Lafalce, Evan; Lovelady, Douglas; Lo, Chun-Min

    2011-03-01

    We review recent results in which statistical measures of noise in ECIS data distinguished healthy cell cultures from cancerous or poisoned ones: after subtracting the ``signal,'' the 1/f^α noise in the healthy cultures shows longer short-time and long-time correlations. We discuss application of an artificial neural network to detect the cancer signal, and we demonstrate a computational model of cell-cell communication that produces signals similar to those of the experimental data. The simulation is based on the q-state Potts model with inspiration from the Bak-Tang-Wiesenfeld sand-pile model. We view the level of organization larger than cells but smaller than organs or tissues as a kind of ``mesoscopic'' biological physics, in which few-body interactions dominate, and the experiments and computational model as ways of exploring this regime.

  13. Bypassing the Kohn-Sham equations with machine learning.

    PubMed

    Brockherde, Felix; Vogt, Leslie; Li, Li; Tuckerman, Mark E; Burke, Kieron; Müller, Klaus-Robert

    2017-10-11

    Last year, at least 30,000 scientific papers used the Kohn-Sham scheme of density functional theory to solve electronic structure problems in a wide variety of scientific fields. Machine learning holds the promise of learning the energy functional via examples, bypassing the need to solve the Kohn-Sham equations. This should yield substantial savings in computer time, allowing larger systems and/or longer time-scales to be tackled, but attempts to machine-learn this functional have been limited by the need to find its derivative. The present work overcomes this difficulty by directly learning the density-potential and energy-density maps for test systems and various molecules. We perform the first molecular dynamics simulation with a machine-learned density functional on malonaldehyde and are able to capture the intramolecular proton transfer process. Learning density models now allows the construction of accurate density functionals for realistic molecular systems. Machine learning allows electronic structure calculations to access larger system sizes and, in dynamical simulations, longer time scales. Here, the authors perform such a simulation using a machine-learned density functional that avoids direct solution of the Kohn-Sham equations.

  14. Predictors of screen viewing time in young Singaporean children: the GUSTO cohort.

    PubMed

    Bernard, Jonathan Y; Padmapriya, Natarajan; Chen, Bozhi; Cai, Shirong; Tan, Kok Hian; Yap, Fabian; Shek, Lynette; Chong, Yap-Seng; Gluckman, Peter D; Godfrey, Keith M; Kramer, Michael S; Saw, Seang Mei; Müller-Riemenschneider, Falk

    2017-09-05

    Higher screen viewing time (SVT) in childhood has been associated with adverse health outcomes, but the predictors of SVT in early childhood are poorly understood. We examined the sociodemographic and behavioral predictors of total and device-specific SVT in a Singaporean cohort. At ages 2 and 3 years, SVT of 910 children was reported by their parents. Interviewer-administered questionnaires assessed SVT on weekdays and weekends for television, computer, and hand-held devices. Multivariable linear mixed-effect models were used to examine the associations of total and device-specific SVT at ages 2 and 3 with predictors, including children's sex, ethnicity, birth order, family income, and parental age, education, BMI, and television viewing time. At age 2, children's total SVT averaged 2.4 ± 2.2 (mean ± SD) hours/day, including 1.6 ± 1.6 and 0.7 ± 1.0 h/day for television and hand-held devices, respectively. At age 3, hand-held device SVT was 0.3 (95% CI: 0.2, 0.4) hours/day higher, while no increases were observed for other devices. SVT tracked moderately from 2 to 3 years (r = 0.49, p < 0.0001). Compared to Chinese children, Malay and Indian children spent 1.04 (0.66, 1.41) and 0.54 (0.15, 0.94) more hours/day watching screens, respectively. Other predictors of longer SVT were younger maternal age, lower maternal education, and longer parental television time. In our cohort, the main predictors of longer children's SVT were Malay and Indian ethnicity, younger maternal age, lower education and longer parental television viewing time. Our study may help target populations for future interventions in Asia, but also in other technology-centered societies. This ongoing study was first registered on July 1, 2010, as NCT01174875 (retrospectively registered).

  15. Computer printing and filing of microbiology reports. 2. Evaluation and comparison with a manual system, and comparison of two manual systems.

    PubMed Central

    Goodwin, C S

    1976-01-01

    A manual system of microbiology reporting with a National Cash Register (NCR) form with printed names of bacteria and antibiotics required less time to compose reports than a previous manual system that involved rubber stamps and handwriting on plain report sheets. The NCR report cost 10-28 pence and, compared with a computer system, it had the advantages of simplicity and familiarity, and reports were not delayed by machine breakdown, operator error, or data being incorrectly submitted. A computer reporting system for microbiology resulted in more accurate reports costing 17-97 pence each, faster and more accurate filing and recall of reports, and a greater range of analyses of reports that was valued particularly by the control-of-infection staff. Composition of computer-readable reports by technicians on Port-a-punch cards took longer than composing NCR reports. Enquiries for past results were more quickly answered from computer printouts of reports and a day book in alphabetical order. PMID:939810

  16. Protected quantum computing: interleaving gate operations with dynamical decoupling sequences.

    PubMed

    Zhang, Jingfu; Souza, Alexandre M; Brandao, Frederico Dias; Suter, Dieter

    2014-02-07

    Implementing precise operations on quantum systems is one of the biggest challenges for building quantum devices in a noisy environment. Dynamical decoupling attenuates the destructive effect of the environmental noise, but so far, it has been used primarily in the context of quantum memories. Here, we experimentally demonstrate a general scheme for combining dynamical decoupling with quantum logical gate operations using the example of an electron-spin qubit of a single nitrogen-vacancy center in diamond. We achieve process fidelities >98% for gate times that are 2 orders of magnitude longer than the unprotected dephasing time T2.

  17. A general mass term for bigravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cusin, Giulia; Durrer, Ruth; Guarato, Pietro

    2016-04-01

    We introduce a new formalism to study perturbations of Hassan-Rosen bigravity theory, around general backgrounds for the two dynamical metrics. In particular, we derive the general expression for the mass term of the perturbations and we explicitly compute it for cosmological settings. We study tensor perturbations in a specific branch of bigravity using this formalism. We show that the tensor sector is affected by a late-time instability, which sets in when the mass matrix is no longer positive definite.

  18. Petascale self-consistent electromagnetic computations using scalable and accurate algorithms for complex structures

    NASA Astrophysics Data System (ADS)

    Cary, John R.; Abell, D.; Amundson, J.; Bruhwiler, D. L.; Busby, R.; Carlsson, J. A.; Dimitrov, D. A.; Kashdan, E.; Messmer, P.; Nieter, C.; Smithe, D. N.; Spentzouris, P.; Stoltz, P.; Trines, R. M.; Wang, H.; Werner, G. R.

    2006-09-01

    As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. But consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time-step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL7 Framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly.

  19. Temporal integration at consecutive processing stages in the auditory pathway of the grasshopper.

    PubMed

    Wirtssohn, Sarah; Ronacher, Bernhard

    2015-04-01

    Temporal integration in the auditory system of locusts was quantified by presenting single clicks and click pairs while performing intracellular recordings. Auditory neurons were studied at three processing stages, which form a feed-forward network in the metathoracic ganglion. Receptor neurons and most first-order interneurons ("local neurons") encode the signal envelope, while second-order interneurons ("ascending neurons") tend to extract more complex, behaviorally relevant sound features. In different neuron types of the auditory pathway we found three response types: no significant temporal integration (some ascending neurons), leaky energy integration (receptor neurons and some local neurons), and facilitatory processes (some local and ascending neurons). The receptor neurons integrated input over very short time windows (<2 ms). Temporal integration on longer time scales was found at subsequent processing stages, indicative of within-neuron computations and network activity. These different strategies, realized at separate processing stages and in parallel neuronal pathways within one processing stage, could enable the grasshopper's auditory system to evaluate longer time windows and thus to implement temporal filters, while at the same time maintaining a high temporal resolution. Copyright © 2015 the American Physiological Society.

  20. Improved Algorithms Speed It Up for Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hazi, A

    2005-09-20

    Huge computers, huge codes, complex problems to solve. The longer it takes to run a code, the more it costs. One way to speed things up and save time and money is through hardware improvements--faster processors, different system designs, bigger computers. But another side of supercomputing can reap savings in time and speed: software improvements to make codes--particularly the mathematical algorithms that form them--run faster and more efficiently. Speed up math? Is that really possible? According to Livermore physicist Eugene Brooks, the answer is a resounding yes. ''Sure, you get great speed-ups by improving hardware,'' says Brooks, the deputy leader for Computational Physics in N Division, which is part of Livermore's Physics and Advanced Technologies (PAT) Directorate. ''But the real bonus comes on the software side, where improvements in software can lead to orders of magnitude improvement in run times.'' Brooks knows whereof he speaks. Working with Laboratory physicist Abraham Szoeke and others, he has been instrumental in devising ways to shrink the running time of what has, historically, been a tough computational nut to crack: radiation transport codes based on the statistical or Monte Carlo method of calculation. And Brooks is not the only one. Others around the Laboratory, including physicists Andrew Williamson, Randolph Hood, and Jeff Grossman, have come up with innovative ways to speed up Monte Carlo calculations using pure mathematics.

  1. Approximate, computationally efficient online learning in Bayesian spiking neurons.

    PubMed

    Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André

    2014-03-01

    Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.

  2. Computer-assisted Behavioral Therapy and Contingency Management for Cannabis Use Disorder

    PubMed Central

    Budney, Alan J.; Stanger, Catherine; Tilford, J. Mick; Scherer, Emily; Brown, Pamela C.; Li, Zhongze; Li, Zhigang; Walker, Denise

    2015-01-01

    Computer-assisted behavioral treatments hold promise for enhancing access to and reducing costs of treatments for substance use disorders. This study assessed the efficacy of a computer-assisted version of an efficacious, multicomponent treatment for cannabis use disorders (CUD), i.e., motivational enhancement therapy, cognitive-behavioral therapy, and abstinence-based contingency-management (MET/CBT/CM). An initial cost comparison was also performed. Seventy-five adult participants, 59% African Americans, seeking treatment for CUD received either MET only (BRIEF), therapist-delivered MET/CBT/CM (THERAPIST), or computer-delivered MET/CBT/CM (COMPUTER). During treatment, the THERAPIST and COMPUTER conditions engendered longer durations of continuous cannabis abstinence than BRIEF (p < .05), but did not differ from each other. Abstinence rates and reduction in days of use over time were maintained in COMPUTER at least as well as in THERAPIST. COMPUTER averaged approximately $130 (p < .05) less per case than THERAPIST in therapist costs, which offset most of the costs of CM. Results add to promising findings that illustrate potential for computer-assisted delivery methods to enhance access to evidence-based care, reduce costs, and possibly improve outcomes. The observed maintenance effects and the cost findings require replication in larger clinical trials. PMID:25938629

  3. A Web of Resources for Introductory Computer Science.

    ERIC Educational Resources Information Center

    Rebelsky, Samuel A.

    As the field of Computer Science has grown, the syllabus of the introductory Computer Science course has changed significantly. No longer is it a simple introduction to programming or a tutorial on computer concepts and applications. Rather, it has become a survey of the field of Computer Science, touching on a wide variety of topics from digital…

  4. Sitting Time, Physical Activity and Sleep by Work Type and Pattern—The Australian Longitudinal Study on Women’s Health

    PubMed Central

    Clark, Bronwyn K.; Kolbe-Alexander, Tracy L.; Duncan, Mitch J.; Brown, Wendy

    2017-01-01

    Data from the Australian Longitudinal Study on Women’s Health were used to examine how work was associated with time spent sleeping, sitting and in physical activity (PA), in working women. Young (31–36 years; 2009) and mid-aged (59–64 years; 2010) women reported sleep (categorised as shorter ≤6 h/day and longer ≥8 h/day) and sitting time (work, transport, television, non-work computer, and other; summed for total sitting time) on the most recent work and non-work day; and moderate and vigorous PA (categorised as meeting/not meeting guidelines) in the previous week. Participants reported occupation (manager/professional; clerical/sales; trades/transport/labourer), work hours (part-time; full-time) and work pattern (shift/night; not shift/night). The odds of shorter sleep on work days was higher in both cohorts for women who worked shift or night hours. Longer sitting time on work days, made up primarily of sitting for work, was found for managers/professionals, clerical/sales and full-time workers. In the young cohort, clerical/sales workers and in the mid-aged cohort, full-time workers were less likely to meet PA guidelines. These results suggest multiple behaviour interventions tailored to work patterns and occupational category may be useful to improve the sleep, sitting and activity of working women. PMID:28287446

  5. Sitting Time, Physical Activity and Sleep by Work Type and Pattern-The Australian Longitudinal Study on Women's Health.

    PubMed

    Clark, Bronwyn K; Kolbe-Alexander, Tracy L; Duncan, Mitch J; Brown, Wendy

    2017-03-10

    Data from the Australian Longitudinal Study on Women's Health were used to examine how work was associated with time spent sleeping, sitting and in physical activity (PA), in working women. Young (31-36 years; 2009) and mid-aged (59-64 years; 2010) women reported sleep (categorised as shorter ≤6 h/day and longer ≥8 h/day) and sitting time (work, transport, television, non-work computer, and other; summed for total sitting time) on the most recent work and non-work day; and moderate and vigorous PA (categorised as meeting/not meeting guidelines) in the previous week. Participants reported occupation (manager/professional; clerical/sales; trades/transport/labourer), work hours (part-time; full-time) and work pattern (shift/night; not shift/night). The odds of shorter sleep on work days was higher in both cohorts for women who worked shift or night hours. Longer sitting time on work days, made up primarily of sitting for work, was found for managers/professionals, clerical/sales and full-time workers. In the young cohort, clerical/sales workers and in the mid-aged cohort, full-time workers were less likely to meet PA guidelines. These results suggest multiple behaviour interventions tailored to work patterns and occupational category may be useful to improve the sleep, sitting and activity of working women.

  6. The nonequilibrium quantum many-body problem as a paradigm for extreme data science

    NASA Astrophysics Data System (ADS)

    Freericks, J. K.; Nikolić, B. K.; Frieder, O.

    2014-12-01

    Generating big data pervades much of physics. But some problems, which we call extreme data problems, are too large to be treated within big data science. The nonequilibrium quantum many-body problem on a lattice is just such a problem, where the Hilbert space grows exponentially with system size and rapidly becomes too large to fit on any computer (and can be effectively thought of as an infinite-sized data set). Nevertheless, much progress has been made with computational methods on this problem, which serve as a paradigm for how one can approach and attack extreme data problems. In addition, viewing these physics problems from a computer-science perspective leads to new approaches that can be tried to solve more accurately and for longer times. We review a number of these different ideas here.

  7. Transient Heat Conduction Simulation around Microprocessor Die

    NASA Astrophysics Data System (ADS)

    Nishi, Koji

    This paper explains the fundamental formulas for calculating the power consumption of CMOS (Complementary Metal-Oxide-Semiconductor) devices and their voltage and temperature dependency, and then introduces an equation for estimating the power consumption of a microprocessor for notebook PCs (Personal Computers). The equation is applied to a heat conduction simulation with a simplified thermal model and evaluated with sub-millisecond time steps. In addition, the microprocessor has two major heat conduction paths: one from the top of the silicon die via the thermal solution, and the other from the package substrate and pins via the PGA (Pin Grid Array) socket. Even though the former path dominates heat conduction, the latter path - from the package substrate and pins - plays an important role in transient heat conduction behavior. This paper therefore focuses on the path from the package substrate and pins and investigates a more accurate method of estimating the heat conduction paths of the microprocessor. Also, the expression of heatsink fan cooling performance is one of the key points for obtaining results with practical accuracy, while a finer expression requires more computation resources and thus longer computation time. The paper then discusses an expression that minimizes the computation workload while keeping practical accuracy.
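
    For context, a standard textbook decomposition of CMOS power that estimates like the one described above typically build on (not necessarily the exact equation used in the paper) is:

```latex
P_{\text{total}} \;=\; \underbrace{\alpha\, C_{\text{eff}}\, V_{dd}^{2}\, f}_{\text{dynamic (switching)}}
\;+\; \underbrace{I_{\text{leak}}(T, V_{dd})\, V_{dd}}_{\text{static (leakage)}}
```

    where α is the switching activity factor, C_eff the effective switched capacitance, V_dd the supply voltage, f the clock frequency, and I_leak the leakage current, whose dependence on temperature T and V_dd gives the voltage and temperature sensitivity mentioned in the abstract.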

  8. Extreme-Scale Stochastic Particle Tracing for Uncertain Unsteady Flow Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Hanqi; He, Wenbin; Seo, Sangmin

    2016-11-13

    We present an efficient and scalable solution to estimate uncertain transport behaviors using stochastic flow maps (SFMs) for visualizing and analyzing uncertain unsteady flows. SFM computation is extremely expensive because it requires many Monte Carlo runs to trace densely seeded particles in the flow. We alleviate the computational cost by decoupling the time dependencies in SFMs so that we can process adjacent time steps independently and then compose them together for longer time periods. Adaptive refinement is also used to reduce the number of runs for each location. We then parallelize over tasks—packets of particles in our design—to achieve high efficiency in MPI/thread hybrid programming. Such a task model also enables CPU/GPU coprocessing. We show the scalability on two supercomputers, Mira (up to 1M Blue Gene/Q cores) and Titan (up to 128K Opteron cores and 8K GPUs), that can trace billions of particles in seconds.
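
    A minimal sketch of the decoupling-and-composition idea described above, in one spatial dimension: flow maps for adjacent time intervals are stored on a grid and composed by interpolation to cover a longer period. The grid, velocity field, and step sizes are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def flow_map(grid, velocity, t0, t1, dt=1e-3):
    """Forward-Euler advection of particles seeded at `grid` from t0 to t1 (1-D)."""
    x = grid.copy()
    t = t0
    while t < t1:
        x = x + dt * velocity(x, t)
        t += dt
    return x

def compose(grid, fmap_ab, fmap_bc):
    """Compose two gridded flow maps: F_{a->c}(x) ~ F_{b->c}(F_{a->b}(x))."""
    return np.interp(fmap_ab, grid, fmap_bc)

grid = np.linspace(0.0, 1.0, 201)
vel = lambda x, t: 0.2 * np.sin(np.pi * x) * np.cos(t)   # toy unsteady velocity

f01 = flow_map(grid, vel, 0.0, 1.0)      # interval [0, 1], computed independently
f12 = flow_map(grid, vel, 1.0, 2.0)      # interval [1, 2], computed independently
f02 = compose(grid, f01, f12)            # longer period assembled by composition
f02_direct = flow_map(grid, vel, 0.0, 2.0)
print(np.max(np.abs(f02 - f02_direct)))  # small composition/interpolation error
```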

  9. Efficient Delaunay Tessellation through K-D Tree Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Peterka, Tom

    Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using k-d tree compared with regular grid decomposition. Moreover, in the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
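
    A minimal sketch of the load-balancing idea: points are recursively split at the median along alternating axes so each process receives an equal share. The median rule here is just one plausible split-point strategy (the abstract says two strategies were compared); the data and depth are illustrative assumptions.

```python
import numpy as np

def kd_partition(points, depth, axis=0):
    """Recursively split `points` at the median along alternating axes,
    returning 2**depth equal-sized blocks (e.g., one per process)."""
    if depth == 0:
        return [points]
    order = np.argsort(points[:, axis])
    points = points[order]
    mid = len(points) // 2
    nxt = (axis + 1) % points.shape[1]
    return (kd_partition(points[:mid], depth - 1, nxt)
            + kd_partition(points[mid:], depth - 1, nxt))

rng = np.random.default_rng(1)
# Deliberately unbalanced input: a dense cluster plus a sparse background.
pts = np.vstack([rng.normal(0.2, 0.02, size=(9000, 3)),
                 rng.uniform(0.0, 1.0, size=(1000, 3))])

blocks = kd_partition(pts, depth=3)          # 8 blocks of points
print([len(b) for b in blocks])              # block sizes are balanced
```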

  10. FPGA-Based High-Performance Embedded Systems for Adaptive Edge Computing in Cyber-Physical Systems: The ARTICo³ Framework.

    PubMed

    Rodríguez, Alfonso; Valverde, Juan; Portilla, Jorge; Otero, Andrés; Riesgo, Teresa; de la Torre, Eduardo

    2018-06-08

    Cyber-Physical Systems are experiencing a paradigm shift in which processing has been relocated to the distributed sensing layer and is no longer performed in a centralized manner. This approach, usually referred to as Edge Computing, demands the use of hardware platforms that are able to manage the steadily increasing requirements in computing performance, while keeping energy efficiency and the adaptability imposed by the interaction with the physical world. In this context, SRAM-based FPGAs and their inherent run-time reconfigurability, when coupled with smart power management strategies, are a suitable solution. However, they usually fail in user accessibility and ease of development. In this paper, an integrated framework to develop FPGA-based high-performance embedded systems for Edge Computing in Cyber-Physical Systems is presented. This framework provides a hardware-based processing architecture, an automated toolchain, and a runtime to transparently generate and manage reconfigurable systems from high-level system descriptions without additional user intervention. Moreover, it provides users with support for dynamically adapting the available computing resources to switch the working point of the architecture in a solution space defined by computing performance, energy consumption and fault tolerance. Results show that it is indeed possible to explore this solution space at run time and prove that the proposed framework is a competitive alternative to software-based edge computing platforms, being able to provide not only faster solutions, but also higher energy efficiency for computing-intensive algorithms with significant levels of data-level parallelism.

  11. Bulk viscosity of the Lennard-Jones fluid for a wide range of states computed by equilibrium molecular dynamics

    NASA Astrophysics Data System (ADS)

    Hoheisel, C.; Vogelsang, R.; Schoen, M.

    1987-12-01

    Accurate data for the bulk viscosity ηv have been obtained by molecular dynamics calculations. Many thermodynamic states of the Lennard-Jones fluid were considered. The Green-Kubo integrand of ηv is analyzed in terms of partial correlation functions constituting the total one. These partial functions behave rather differently from those found for the shear viscosity or the thermal conductivity. Generally the total autocorrelation function of ηv shows a steeper initial decay and a more pronounced long time form than those of the shear viscosity or the thermal conductivity. For states near transition to solid phases, like the pseudotriple point of argon, the Green-Kubo integrand of ηv has a significantly longer ranged time behavior than that of the shear viscosity. Hence, for the latter states, a systematic error is expected for ηv using equilibrium molecular dynamics for its computation.
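
    For reference, the Green-Kubo relation for the bulk viscosity whose integrand the abstract analyzes can be written in the standard form below (the symbol conventions are generic, not necessarily the paper's):

```latex
\eta_v \;=\; \frac{V}{k_B T}\int_0^{\infty}\big\langle\, \delta\mathcal{P}(0)\,\delta\mathcal{P}(t)\,\big\rangle\, dt,
\qquad \delta\mathcal{P}(t) \;=\; \mathcal{P}(t)-\langle \mathcal{P}\rangle,
```

    where V is the system volume, T the temperature, k_B Boltzmann's constant, and P(t) the instantaneous pressure; the angle brackets denote an equilibrium ensemble average.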

  12. Multicriteria meta-heuristics for AGV dispatching control based on computational intelligence.

    PubMed

    Naso, David; Turchiano, Biagio

    2005-04-01

    In many manufacturing environments, automated guided vehicles are used to move the processed materials between various pickup and delivery points. The assignment of vehicles to unit loads is a complex problem that is often solved in real-time with simple dispatching rules. This paper proposes an automated guided vehicles dispatching approach based on computational intelligence. We adopt a fuzzy multicriteria decision strategy to simultaneously take into account multiple aspects in every dispatching decision. Since the typical short-term view of dispatching rules is one of the main limitations of such real-time assignment heuristics, we also incorporate in the multicriteria algorithm a specific heuristic rule that takes into account the empty-vehicle travel on a longer time-horizon. Moreover, we also adopt a genetic algorithm to tune the weights associated with each decision criterion in the global decision algorithm. The proposed approach is validated by means of a comparison with other dispatching rules, and with other recently proposed multicriteria dispatching strategies also based on computational intelligence. The analysis of the results obtained by the proposed dispatching approach in both nominal and perturbed operating conditions (congestions, faults) confirms its effectiveness.

  13. Transient response to three-phase faults on a wind turbine generator. Ph.D. Thesis - Toledo Univ.

    NASA Technical Reports Server (NTRS)

    Gilbert, L. J.

    1978-01-01

    In order to obtain a measure of its responses to short circuits a large horizontal axis wind turbine generator was modeled and its performance was simulated on a digital computer. Simulation of short circuit faults on the synchronous alternator of a wind turbine generator, without resort to the classical assumptions generally made for that analysis, indicates that maximum clearing times for the system tied to an infinite bus are longer than the typical clearing times for equivalent capacity conventional machines. Also, maximum clearing times are independent of tower shadow and wind shear. Variation of circuit conditions produce the modifications in the transient response predicted by analysis.

  14. Computer use at work is associated with self-reported depressive and anxiety disorder.

    PubMed

    Kim, Taeshik; Kang, Mo-Yeol; Yoo, Min-Sang; Lee, Dongwook; Hong, Yun-Chul

    2016-01-01

    With the development of technology, extensive use of computers in the workplace is prevalent and increases efficiency. However, computer users are facing new harmful working conditions with high workloads and longer hours. This study aimed to investigate the association between computer use at work and self-reported depressive and anxiety disorder (DAD) in a nationally representative sample of South Korean workers. This cross-sectional study was based on the third Korean Working Conditions Survey (2011), and 48,850 workers were analyzed. Information about computer use and DAD was obtained from a self-administered questionnaire. We investigated the relation between computer use at work and DAD using logistic regression. The 12-month prevalence of DAD in computer-using workers was 1.46 %. After adjustment for socio-demographic factors, the odds ratio for DAD was higher in workers using computers more than 75 % of their workday (OR 1.69, 95 % CI 1.30-2.20) than in workers using computers less than 50 % of their shift. After stratifying by working hours, computer use for over 75 % of the work time was significantly associated with increased odds of DAD in 20-39, 41-50, 51-60, and over 60 working hours per week. After stratifying by occupation, education, and job status, computer use for more than 75 % of the work time was related with higher odds of DAD in sales and service workers, those with high school and college education, and those who were self-employed and employers. A high proportion of computer use at work may be associated with depressive and anxiety disorder. This finding suggests the necessity of a work guideline to help the workers suffering from high computer use at work.

  15. Design and analysis considerations for deployment mechanisms in a space environment

    NASA Technical Reports Server (NTRS)

    Vorlicek, P. L.; Gore, J. V.; Plescia, C. T.

    1982-01-01

    On the second flight of the INTELSAT V spacecraft the time required for successful deployment of the north solar array was longer than originally predicted. The south solar array deployed as predicted. As a result of the difference in deployment times a series of experiments was conducted to locate the cause of the difference. Deployment rate sensitivity to hinge friction and temperature levels was investigated. A digital computer simulation of the deployment was created to evaluate the effects of parameter changes on deployment. Hinge design was optimized for nominal solar array deployment time for future INTELSAT V satellites. The nominal deployment times of both solar arrays on the third flight of INTELSAT V confirms the validity of the simulation and design optimization.

  16. Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo

    With the currently available methods of computational fluid dynamics (CFD), the task of simulating full scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the size of calculation cells should be small and the calculation should be transient with small time step size. For full scale systems, these requirements lead to very large meshes and very long calculation times, so that the simulation in practice is difficult. This study investigates the requirements of cell size and the time step size for accurate simulations, and the filtering effects caused by coarser mesh and longer time step. A modeling study of a full scale CFB furnace is presented and the model results are compared with experimental data.

  17. Targeting an efficient target-to-target interval for P300 speller brain–computer interfaces

    PubMed Central

    Sellers, Eric W.; Wang, Xingyu

    2013-01-01

    Longer target-to-target intervals (TTI) produce greater P300 event-related potential amplitude, which can increase brain–computer interface (BCI) classification accuracy and decrease the number of flashes needed for accurate character classification. However, longer TTIs require more time for each trial, which decreases the information transfer rate of the BCI. In this paper, a P300 BCI using a 7 × 12 matrix explored new flash patterns (16-, 18- and 21-flash pattern) with different TTIs to assess the effects of TTI on P300 BCI performance. The new flash patterns were designed to minimize TTI, decrease repetition blindness, and examine the temporal relationship between each flash of a given stimulus by placing a minimum of one (16-flash pattern), two (18-flash pattern), or three (21-flash pattern) non-target flashes between consecutive target flashes. Online results showed that the 16-flash pattern yielded the lowest classification accuracy among the three patterns. The results also showed that the 18-flash pattern provides a significantly higher information transfer rate (ITR) than the 21-flash pattern; both patterns provide high ITR and high accuracy for all subjects. PMID:22350331

  18. Attentional bias in excessive Internet gamers: Experimental investigations using an addiction Stroop and a visual probe.

    PubMed

    Jeromin, Franziska; Nyenhuis, Nele; Barke, Antonia

    2016-03-01

    Background and aims Internet Gaming Disorder is included in the Diagnostic and statistical manual of mental disorders (5 th edition) as a disorder that merits further research. The diagnostic criteria are based on those for Substance Use Disorder and Gambling Disorder. Excessive gamblers and persons with Substance Use Disorder show attentional biases towards stimuli related to their addictions. We investigated whether excessive Internet gamers show a similar attentional bias, by using two established experimental paradigms. Methods We measured reaction times of excessive Internet gamers and non-gamers (N = 51, 23.7 ± 2.7 years) by using an addiction Stroop with computer-related and neutral words, as well as a visual probe with computer-related and neutral pictures. Mixed design analyses of variance with the between-subjects factor group (gamer/non-gamer) and the within-subjects factor stimulus type (computer-related/neutral) were calculated for the reaction times as well as for valence and familiarity ratings of the stimulus material. Results In the addiction Stroop, an interaction for group × word type was found: Only gamers showed longer reaction times to computer-related words compared to neutral words, thus exhibiting an attentional bias. In the visual probe, no differences in reaction time between computer-related and neutral pictures were found in either group, but the gamers were faster overall. Conclusions An attentional bias towards computer-related stimuli was found in excessive Internet gamers, by using an addiction Stroop but not by using a visual probe. A possible explanation for the discrepancy could lie in the fact that the visual probe may have been too easy for the gamers.

  19. Attentional bias in excessive Internet gamers: Experimental investigations using an addiction Stroop and a visual probe

    PubMed Central

    Jeromin, Franziska; Nyenhuis, Nele; Barke, Antonia

    2016-01-01

    Background and aims Internet Gaming Disorder is included in the Diagnostic and statistical manual of mental disorders (5th edition) as a disorder that merits further research. The diagnostic criteria are based on those for Substance Use Disorder and Gambling Disorder. Excessive gamblers and persons with Substance Use Disorder show attentional biases towards stimuli related to their addictions. We investigated whether excessive Internet gamers show a similar attentional bias, by using two established experimental paradigms. Methods We measured reaction times of excessive Internet gamers and non-gamers (N = 51, 23.7 ± 2.7 years) by using an addiction Stroop with computer-related and neutral words, as well as a visual probe with computer-related and neutral pictures. Mixed design analyses of variance with the between-subjects factor group (gamer/non-gamer) and the within-subjects factor stimulus type (computer-related/neutral) were calculated for the reaction times as well as for valence and familiarity ratings of the stimulus material. Results In the addiction Stroop, an interaction for group × word type was found: Only gamers showed longer reaction times to computer-related words compared to neutral words, thus exhibiting an attentional bias. In the visual probe, no differences in reaction time between computer-related and neutral pictures were found in either group, but the gamers were faster overall. Conclusions An attentional bias towards computer-related stimuli was found in excessive Internet gamers, by using an addiction Stroop but not by using a visual probe. A possible explanation for the discrepancy could lie in the fact that the visual probe may have been too easy for the gamers. PMID:28092198

  20. 76 FR 79609 - Federal Acquisition Regulation; Clarification of Standards for Computer Generation of Forms

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-22

    ... Regulation; Clarification of Standards for Computer Generation of Forms AGENCY: Department of Defense (DoD... American National Standards Institute X12, as the valid standard to use for computer-generated forms. FAR... optional forms on their computers. In addition to clarifying that FIPS 161 is no longer in use, public...

  1. Development of and Adherence to a Computer-Based Gamified Environment Designed to Promote Health and Wellbeing in Older People with Mild Cognitive Impairment.

    PubMed

    Scase, Mark; Marandure, Blessing; Hancox, Jennie; Kreiner, Karl; Hanke, Sten; Kropf, Johannes

    2017-01-01

    The older population of Europe is increasing and there has been a corresponding increase in long-term care costs. This project sought to promote active ageing by delivering tasks via a tablet computer to participants aged 65-80 with mild cognitive impairment. An age-appropriate gamified environment was developed through focus groups, and adherence to it was assessed through an intervention. Mixed methods were used in the intervention: the time spent engaging with the applications was recorded, supplemented by participant interviews to gauge adherence. There were two groups of participants: one living in a retirement village and the other living separately across a city. The retirement village participants engaged in more than three times the number of game sessions compared to the other group, possibly because of different social arrangements between the groups. A gamified environment can help older people engage in computer-based applications. However, social community factors influence adherence in a longer-term intervention.

  2. Experimental investigation of stress-inducing properties of system response times.

    PubMed

    Kuhmann, W

    1989-03-01

    System response times are regarded as a major stressor in human-computer interaction. In two earlier studies, short (2 s) and long (8 s) response times were found to have differential effects on physiological, subjective, and performance variables, but the results did not favour either response time. Therefore, in another laboratory study with 48 subjects in four independent groups working at a simulated computer workplace, system response times of 2, 4, 6 and 8 s were introduced in the same error detection task as used before, during 3 training and 5 working trials of 20 min each, and the same physiological, subjective and performance measures were obtained. There were no global effects on physiological variables, possibly due to low work load as a result of missing time pressure, but subjective and performance variables clearly favoured the longer system response times. When task periods and response-time periods were analysed separately, a shift of electrodermal activity from task periods to response-time periods could be observed during the course of trials in the 8 s condition. This did not appear in any other condition, which points to psychophysiological excitement that develops when system response times are too long, thus providing support for the concept of optimal system response times.

  3. Effects of standing on typing task performance and upper limb discomfort, vascular and muscular indicators.

    PubMed

    Fedorowich, Larissa M; Côté, Julie N

    2018-10-01

    Standing is a popular alternative to traditionally seated computer work. However, no studies have described how standing impacts both upper body muscular and vascular outcomes during a computer typing task. Twenty healthy adults completed two 90-min simulated work sessions, seated or standing. Upper limb discomfort, electromyography (EMG) from eight upper body muscles, typing performance and neck/shoulder and forearm blood flow were collected. Results showed significantly less upper body discomfort and higher typing speed during standing. Lower Trapezius EMG amplitude was higher during standing, but this postural difference decreased with time (interaction effect), and its variability was 68% higher during standing compared to sitting. There were no effects on blood flow. Results suggest that standing computer work may engage shoulder girdle stabilizers while reducing discomfort and improving performance. Studies are needed to identify how standing affects more complex computer tasks over longer work bouts in symptomatic workers.

  4. NASA geodynamics program investigations summaries: A supplement to the NASA geodynamics program overview

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The development of a time series of global atmospheric motion and mass fields through April 1984 to compare with changes in length of day and polar motion was investigated. Earth rotation was studied and the following topics are discussed: (1) computation of atmospheric angular momentum through April 1984; (2) comparisons of the computed ψ values with variations in length of day obtained by several groups utilizing B.I.H., lunar laser ranging, VLBI, or Lageos measurements; (3) computation of atmospheric excitation of polar motion using daily fields of atmospheric winds and pressures for a short test period. Daily calculations may be extended over a longer period to examine the forcing of the annual and Chandler wobbles, in addition to higher frequency nutations.

  5. ROSAT in-orbit attitude measurement recovery

    NASA Astrophysics Data System (ADS)

    Kaffer, L.; Boeinghoff, A.; Bruederle, E.; Schrempp, W.; Wullstein, P.

    After about 7 months of nearly perfect Attitude Measurement and Control System (AMCS) functioning, the ROSAT mission was affected by gyro degradations that complicated operations, and after one year the nominal mission could no longer be maintained. The re-establishment of the nominal mission through a redesign of the attitude measurement, using inertial reference generation from the coarse Sun sensor and magnetometer together with a new star acquisition procedure, is described. This success was possible only because sufficient reprogramming provisions were available in the onboard computer. The new software now occupies nearly the complete Random Access Memory (RAM) area and increases the computation time from about 50 msec to 300 msec per 1 sec cycle. This demonstrates that deficiencies of the hardware can be overcome by more intelligent software.

  6. A proposed framework on hybrid feature selection techniques for handling high dimensional educational data

    NASA Astrophysics Data System (ADS)

    Shahiri, Amirah Mohamed; Husain, Wahidah; Rashid, Nur'Aini Abd

    2017-10-01

    Huge amounts of data in educational datasets can make it difficult to produce quality data. Recently, data mining approaches have been increasingly used by educational data mining researchers for analyzing data patterns. However, many research studies have concentrated on selecting suitable learning algorithms instead of performing a feature selection process. As a result, such datasets suffer from high computational complexity and require longer computation times for classification. The main objective of this research is to provide an overview of the feature selection techniques that have been used to identify the most significant features. This research then proposes a framework to improve the quality of students' datasets. The proposed framework uses filter- and wrapper-based techniques to support the prediction process in future studies.
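    A minimal sketch of such a filter-plus-wrapper pipeline, using scikit-learn and a synthetic stand-in for a high-dimensional educational dataset (the scorers, feature counts, and classifier are illustrative choices, not the framework proposed in the paper):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, mutual_info_classif, RFE
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-in for a high-dimensional educational dataset.
    X, y = make_classification(n_samples=300, n_features=100, n_informative=10, random_state=0)

    # Filter stage: keep the 30 features with the highest mutual information.
    filter_stage = SelectKBest(mutual_info_classif, k=30).fit(X, y)
    X_filtered = filter_stage.transform(X)

    # Wrapper stage: recursive feature elimination with a simple classifier.
    wrapper = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10).fit(X_filtered, y)
    X_selected = wrapper.transform(X_filtered)
    print(X_selected.shape)  # (300, 10)
    ```

    The cheap filter stage prunes the bulk of irrelevant features before the more expensive wrapper search runs, which is the usual motivation for hybrid selection on large datasets.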

  7. Computational Participation: Understanding Coding as an Extension of Literacy Instruction

    ERIC Educational Resources Information Center

    Burke, Quinn; O'Byrne, W. Ian; Kafai, Yasmin B.

    2016-01-01

    Understanding the computational concepts on which countless digital applications run offers learners the opportunity to no longer simply read such media but also become more discerning end users and potentially innovative "writers" of new media themselves. To think computationally--to solve problems, to design systems, and to process and…

  8. Clustering of low-valence particles: structure and kinetics.

    PubMed

    Markova, Olga; Alberts, Jonathan; Munro, Edwin; Lenne, Pierre-François

    2014-08-01

    We compute the structure and kinetics of two systems of low-valence particles with three or six freely oriented bonds in two dimensions. The structure of clusters formed by trivalent particles is complex, with loops and holes, while hexavalent particles self-organize into regular and compact structures. We identify the elementary structures that compose the clusters of trivalent particles. At the initial stages of clustering, the clusters of trivalent particles grow with a power-law time dependence. Yet at longer times fusion and fission of clusters equilibrate, and the clusters form a heterogeneous phase with polydisperse sizes. These results emphasize the role of valence in the kinetics and stability of finite-size clusters.

  9. Pushing the frontiers of first-principles based computer simulations of chemical and biological systems.

    PubMed

    Brunk, Elizabeth; Ashari, Negar; Athri, Prashanth; Campomanes, Pablo; de Carvalho, F Franco; Curchod, Basile F E; Diamantis, Polydefkis; Doemer, Manuel; Garrec, Julian; Laktionov, Andrey; Micciarelli, Marco; Neri, Marilisa; Palermo, Giulia; Penfold, Thomas J; Vanni, Stefano; Tavernelli, Ivano; Rothlisberger, Ursula

    2011-01-01

    The Laboratory of Computational Chemistry and Biochemistry is active in the development and application of first-principles based simulations of complex chemical and biochemical phenomena. Here, we review some of our recent efforts in extending these methods to larger systems, longer time scales and increased accuracies. Their versatility is illustrated with a diverse range of applications, ranging from the determination of the gas phase structure of the cyclic decapeptide gramicidin S, to the study of G protein coupled receptors, the interaction of transition metal based anti-cancer agents with protein targets, the mechanism of action of DNA repair enzymes, the role of metal ions in neurodegenerative diseases and the computational design of dye-sensitized solar cells. Many of these projects are done in collaboration with experimental groups from the Institute of Chemical Sciences and Engineering (ISIC) at the EPFL.

  10. Improvement of Thermal Interruption Capability in Self-blast Interrupting Chamber for New 245kV-50kA GCB

    NASA Astrophysics Data System (ADS)

    Shinkai, Takeshi; Koshiduka, Tadashi; Mori, Tadashi; Uchii, Toshiyuki; Tanaka, Tsutomu; Ikeda, Hisatoshi

    Current zero measurements are performed for 245kV-50kA-60Hz short line fault (L90) interruption tests with a self-blast interrupting chamber (double volume system) which has interrupting capability up to 245kV-50kA-50Hz L90. Lower L90 interruption capability is observed for longer arcing times, although a very high pressure rise is obtained; this may be caused by a higher blowing temperature and lower blowing density at longer arcing times. Interruption criteria and an optimization method for the chamber design are discussed to improve the L90 interruption capability. The new chambers are designed at 245kV-50kA-60Hz to improve the gas density in the thermal volume for long arcing times, and 245kV-50kA-60Hz L90 interruptions are performed with the new chamber. The suggested optimization method is an efficient tool for self-blast interrupting chamber design, although further study of computing methods is required to calculate the arc conductance around current zero, as a direct criterion for L90 interruption capability, with higher accuracy.

  11. Visualization of Unsteady Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1997-01-01

    The current compute environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamic (CFD) results is a super-computer class machine. The Massively Parallel Processors (MPP's) such as the 160 node IBM SP2 at NAS and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array and the J90 cluster) provide the required computation bandwidth for CFD calculations of transient problems. If we follow the traditional computational analysis steps for CFD (and we wish to construct an interactive visualizer) we need to be aware of the following: (1) Disk space requirements. A single snapshot must contain at least the values (primitive variables) stored at the appropriate locations within the mesh. For most simple 3D Euler solvers that means 5 floating point words. Navier-Stokes solutions with turbulence models may contain 7 state variables. (2) Disk speed vs. computational speeds. The time required to read the complete solution of a saved time frame from disk is now longer than the compute time for a set number of iterations from an explicit solver. Depending on the hardware and solver, an iteration of an implicit code may also take less time than reading the solution from disk. If one examines the performance improvements of the last decade or two, it is easy to see that relying on disk performance (rather than CPU improvement) may not be the best route to enhancing interactivity. (3) Cluster and parallel machine I/O problems. Disk access time is much worse within current parallel machines and clusters of workstations that are acting in concert to solve a single problem. In this case we are not trying to read the volume of data; rather, the solver is running and writes out the solution, and these traditional network interfaces must be used for the file system. (4) Numerics of particle traces. Most visualization tools can work upon a single snapshot of the data, but some visualization tools for transient problems require dealing with time.
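    The disk-space point above is easy to quantify with a back-of-the-envelope estimate; the mesh size, variable count, and frame count below are assumed values chosen only for illustration.

    ```python
    # Illustrative storage estimate for one saved time frame (assumed values, not from the paper).
    n_cells = 50_000_000      # mesh size
    n_vars = 5                # primitive variables for a 3D Euler solution
    bytes_per_word = 4        # single-precision float

    snapshot_bytes = n_cells * n_vars * bytes_per_word
    print(f"one snapshot: {snapshot_bytes / 1e9:.1f} GB")          # ~1.0 GB
    print(f"1000 frames:  {snapshot_bytes * 1000 / 1e12:.1f} TB")  # ~1.0 TB
    ```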

  12. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the multiplication of a vector by a matrix is reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
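    For readers unfamiliar with the solver family, a minimal Jacobi-preconditioned conjugate gradient in NumPy is sketched below. Unlike the paper's iteration-on-data scheme, it forms the coefficient matrix explicitly; it is meant only to show the structure of the iteration.

    ```python
    import numpy as np

    def pcg(A, b, tol=1e-8, max_iter=1000):
        """Minimal Jacobi-preconditioned conjugate gradient for SPD systems."""
        M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Small symmetric positive definite test system.
    rng = np.random.default_rng(0)
    B = rng.random((50, 50))
    A = B @ B.T + 50 * np.eye(50)
    b = rng.random(50)
    assert np.allclose(A @ pcg(A, b), b, atol=1e-6)
    ```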

  13. Combined orbits and clocks from the IGS 2nd reprocessing

    NASA Astrophysics Data System (ADS)

    Griffiths, J.; Ray, J.

    2016-12-01

    In early 2015, the Analysis Centers (ACs) of the International GNSS Service (IGS) completed their second reanalysis of the full history of globally distributed GPS and GLONASS data collected since 1994. The suite of reprocessed AC solutions includes daily product files containing station positions, Earth rotation parameters, satellite orbits and clocks. This second reprocessing, or repro2, provided the IGS contribution to ITRF2014; it follows the successful first reprocessing, which provided the IGS input for ITRF2008. For this poster, we will discuss the newly combined repro2 GPS orbits and clocks. We also revisit our previous analysis of orbit day-boundary discontinuities with several significant changes and improvements: (1) Orbit discontinuities for the contributing ACs were studied in addition to those for the IGS repro2 combined orbits. (2) Apart from homogeneous reprocessing with updated analysis models, the main difference compared to the IGS Final operational products is that NOAA/NGS inputs were not submitted for the IGS reprocessing, yet they contribute heavily to the operational orbits in recent years. (3) During spring 2016, the ESA modified their orbit model so that it is no longer consistent with the one used for reprocessing. A much longer span of orbits is now available, up to 11.2 years for some individual satellites, which allows a far better resolution of spectral features. (4) The procedure to compute orbit discontinuities has been further refined to account for extrapolation edge effects and improved geopotential fields, and to allow for spectral analysis of a longer time series of jumps. The satellite position time series used are complete enough that linear interpolation is necessary for only sparse gaps. The key results are therefore based on standard FFT power spectra (stacked over the available constellation and lightly smoothed). However, we have also computed Lomb-Scargle periodograms to provide higher frequency resolution of some spectral peaks and to permit tests of the effect of excluding eclipse periods.
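    A stacked FFT power spectrum of daily day-boundary jump series can be computed along the following lines; the even daily sampling, mean removal, and synthetic input are assumptions made for the sketch, not details taken from the poster.

    ```python
    import numpy as np

    def stacked_power_spectrum(jump_series, dt_days=1.0):
        """Average FFT power spectra over satellites (rows of `jump_series`),
        after removing the mean of each series; an illustrative stand-in for a
        stacked-spectrum analysis of day-boundary jumps."""
        _, n_days = jump_series.shape
        freqs = np.fft.rfftfreq(n_days, d=dt_days)      # cycles per day
        spectra = []
        for series in jump_series:
            series = series - series.mean()
            power = np.abs(np.fft.rfft(series)) ** 2 / n_days
            spectra.append(power)
        return freqs, np.mean(spectra, axis=0)

    # Example: 30 satellites, ~11 years of daily jumps (synthetic white noise).
    rng = np.random.default_rng(0)
    freqs, power = stacked_power_spectrum(rng.normal(size=(30, 4096)))
    ```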

  14. Computer work and self-reported variables on anthropometrics, computer usage, work ability, productivity, pain, and physical activity

    PubMed Central

    2013-01-01

    Background Computer users often report musculoskeletal complaints and pain in the upper extremities and the neck-shoulder region. However, recent epidemiological studies do not report a relationship between the extent of computer use and work-related musculoskeletal disorders (WMSD). The aim of this study was to conduct an explorative analysis on short and long-term pain complaints and work-related variables in a cohort of Danish computer users. Methods A structured web-based questionnaire including questions related to musculoskeletal pain, anthropometrics, work-related variables, work ability, productivity, health-related parameters, lifestyle variables as well as physical activity during leisure time was designed. Six hundred and ninety office workers completed the questionnaire responding to an announcement posted in a union magazine. The questionnaire outcomes, i.e., pain intensity, duration and locations as well as anthropometrics, work-related variables, work ability, productivity, and level of physical activity, were stratified by gender and correlations were obtained. Results Women reported higher pain intensity, longer pain duration as well as more locations with pain than men (P < 0.05). In parallel, women scored poorer work ability and ability to fulfil the requirements on productivity than men (P < 0.05). Strong positive correlations were found between pain intensity and pain duration for the forearm, elbow, neck and shoulder (P < 0.001). Moderate negative correlations were seen between pain intensity and work ability/productivity (P < 0.001). Conclusions The present results provide new key information on pain characteristics in office workers. The differences in pain characteristics, i.e., higher intensity, longer duration and more pain locations as well as poorer work ability reported by women workers relate to their higher risk of contracting WMSD. Overall, this investigation confirmed the complex interplay between anthropometrics, work ability, productivity, and pain perception among computer users. PMID:23915209

  15. A Computational Investigation of Sooting Limits of Spherical Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Lecoustre, V. R.; Chao, B. H.; Sunderland, P. B.; Urban, D. L.; Stocker, D. P.; Axelbaum, R. L.

    2007-01-01

    Limiting conditions for soot particle inception in spherical diffusion flames were investigated numerically. The flames were modeled using a one-dimensional, time accurate diffusion flame code with detailed chemistry and transport and an optically thick radiation model. Seventeen normal and inverse flames were considered, covering a wide range of stoichiometric mixture fraction, adiabatic flame temperature, and residence time. These flames were previously observed to reach their sooting limits after 2 s of microgravity. Sooting-limit diffusion flames with residence times longer than 200 ms were found to have temperatures near 1190 K where C/O = 0.6, whereas flames with shorter residence times required increased temperatures. Acetylene was found to be a reasonable surrogate for soot precursor species in these flames, having peak mole fractions of about 0.01.

  16. Eclipse-Free-Time Assessment Tool for IRIS

    NASA Technical Reports Server (NTRS)

    Eagle, David

    2012-01-01

    IRIS_EFT is a scientific simulation that can be used to perform an Eclipse-Free-Time (EFT) assessment of IRIS (Infrared Imaging Surveyor) mission orbits. EFT is defined as those time intervals longer than one day during which the IRIS spacecraft is not in the Earth's shadow. Program IRIS_EFT implements a special perturbation method of orbital motion, numerically integrating Cowell's form of the system of differential equations. Shadow conditions are predicted by embedding this integrator within Brent's method for finding the root of a nonlinear equation. The IRIS_EFT software models the effects of the following types of orbit perturbations on the long-term evolution and shadow characteristics of IRIS mission orbits: (1) non-spherical Earth gravity, (2) atmospheric drag, (3) point-mass gravity of the Sun, and (4) point-mass gravity of the Moon. The objective of this effort was to create an in-house computer program that would perform eclipse-free-time analysis of candidate IRIS spacecraft mission orbits in an accurate and timely fashion. The software is a suite of Fortran subroutines and data files organized as a "computational" engine that is used to accurately predict the long-term orbit evolution of IRIS mission orbits while searching for Earth shadow conditions.
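    The shadow-search structure (a coarse scan to bracket a sign change, refined by a root finder such as Brent's method) can be illustrated with SciPy as below; the shadow function here is a toy placeholder, whereas IRIS_EFT evaluates the actual spacecraft-Sun-Earth geometry from the propagated orbit.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def shadow_function(t):
        """Toy stand-in: positive in sunlight, negative in Earth's shadow.
        A real implementation would propagate the orbit to time t (Cowell's method)
        and test the spacecraft-Sun-Earth geometry."""
        return np.cos(2 * np.pi * t / 96.0) + 0.7   # ~96-minute orbit, arbitrary shape

    def find_shadow_entry(t0, t1):
        """Refine the shadow-entry time within a bracketing interval [t0, t1] minutes."""
        assert shadow_function(t0) > 0 > shadow_function(t1), "interval must bracket a sign change"
        return brentq(shadow_function, t0, t1)

    # Coarse scan to bracket a sign change, then refine with Brent's method.
    times = np.arange(0.0, 200.0, 5.0)
    values = shadow_function(times)
    for a, b, fa, fb in zip(times[:-1], times[1:], values[:-1], values[1:]):
        if fa > 0 > fb:
            print("shadow entry near t =", find_shadow_entry(a, b), "min")
            break
    ```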

  17. A Lightweight Protocol for Secure Video Streaming

    PubMed Central

    Morkevicius, Nerijus; Bagdonas, Kazimieras

    2018-01-01

    The Internet of Things (IoT) introduces many new challenges which cannot be solved using traditional cloud and host computing models. A new architecture known as fog computing is emerging to address these technological and security gaps. Traditional security paradigms focused on providing perimeter-based protections and client/server point to point protocols (e.g., Transport Layer Security (TLS)) are no longer the best choices for addressing new security challenges in fog computing end devices, where energy and computational resources are limited. In this paper, we present a lightweight secure streaming protocol for the fog computing “Fog Node-End Device” layer. This protocol is lightweight, connectionless, supports broadcast and multicast operations, and is able to provide data source authentication, data integrity, and confidentiality. The protocol is based on simple and energy efficient cryptographic methods, such as Hash Message Authentication Codes (HMAC) and symmetrical ciphers, and uses modified User Datagram Protocol (UDP) packets to embed authentication data into streaming data. Data redundancy could be added to improve reliability in lossy networks. The experimental results summarized in this paper confirm that the proposed method efficiently uses energy and computational resources and at the same time provides security properties on par with the Datagram TLS (DTLS) standard. PMID:29757988

  18. A Lightweight Protocol for Secure Video Streaming.

    PubMed

    Venčkauskas, Algimantas; Morkevicius, Nerijus; Bagdonas, Kazimieras; Damaševičius, Robertas; Maskeliūnas, Rytis

    2018-05-14

    The Internet of Things (IoT) introduces many new challenges which cannot be solved using traditional cloud and host computing models. A new architecture known as fog computing is emerging to address these technological and security gaps. Traditional security paradigms focused on providing perimeter-based protections and client/server point to point protocols (e.g., Transport Layer Security (TLS)) are no longer the best choices for addressing new security challenges in fog computing end devices, where energy and computational resources are limited. In this paper, we present a lightweight secure streaming protocol for the fog computing "Fog Node-End Device" layer. This protocol is lightweight, connectionless, supports broadcast and multicast operations, and is able to provide data source authentication, data integrity, and confidentiality. The protocol is based on simple and energy efficient cryptographic methods, such as Hash Message Authentication Codes (HMAC) and symmetrical ciphers, and uses modified User Datagram Protocol (UDP) packets to embed authentication data into streaming data. Data redundancy could be added to improve reliability in lossy networks. The experimental results summarized in this paper confirm that the proposed method efficiently uses energy and computational resources and at the same time provides security properties on par with the Datagram TLS (DTLS) standard.
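    The general idea of appending a truncated HMAC tag to each datagram, so the receiver can verify source authenticity and integrity, can be sketched with Python's standard library as follows. The header layout, key handling, and tag length are illustrative assumptions, not the packet format defined in the paper, which additionally encrypts the payload with a symmetric cipher.

    ```python
    import hashlib
    import hmac
    import struct
    import time

    KEY = b"pre-shared-demo-key"   # in practice derived per session, not hard-coded
    TAG_LEN = 16                   # truncated HMAC-SHA256 tag

    def build_packet(stream_id, seq, payload):
        """Pack a header, the (already encrypted) payload and an HMAC tag into one
        datagram body. Layout is illustrative only."""
        header = struct.pack("!HIQ", stream_id, seq, int(time.time() * 1000))
        tag = hmac.new(KEY, header + payload, hashlib.sha256).digest()[:TAG_LEN]
        return header + payload + tag

    def verify_packet(packet):
        """Recompute the tag and compare in constant time; return payload or None."""
        header, payload, tag = packet[:14], packet[14:-TAG_LEN], packet[-TAG_LEN:]
        expected = hmac.new(KEY, header + payload, hashlib.sha256).digest()[:TAG_LEN]
        return payload if hmac.compare_digest(tag, expected) else None

    pkt = build_packet(stream_id=1, seq=42, payload=b"\x00" * 32)
    assert verify_packet(pkt) == b"\x00" * 32
    ```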

  19. Simulating Microbial Community Patterning Using Biocellion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Seung-Hwa; Kahan, Simon H.; Momeni, Babak

    2014-04-17

    Mathematical modeling and computer simulation are important tools for understanding complex interactions between cells and their biotic and abiotic environment: similarities and differences between modeled and observed behavior provide the basis for hypothesis formation. Momeni et al. [5] investigated pattern formation in communities of yeast strains engaging in different types of ecological interactions, comparing the predictions of mathematical modeling and simulation to actual patterns observed in wet-lab experiments. However, simulations of millions of cells in a three-dimensional community are extremely time-consuming. One simulation run in MATLAB may take a week or longer, inhibiting exploration of the vast space of parameter combinations and assumptions. Improving the speed, scale, and accuracy of such simulations facilitates hypothesis formation and expedites discovery. Biocellion is a high performance software framework for accelerating discrete agent-based simulation of biological systems with millions to trillions of cells. Simulations of comparable scale and accuracy to those taking a week of computer time using MATLAB require just hours using Biocellion on a multicore workstation. Biocellion further accelerates large scale, high resolution simulations using cluster computers by partitioning the work to run on multiple compute nodes. Biocellion targets computational biologists who have mathematical modeling backgrounds and basic C++ programming skills. This chapter describes the necessary steps to adapt Momeni et al.'s original model to the Biocellion framework as a case study.

  20. Television screen time, but not computer use and reading time, is associated with cardio-metabolic biomarkers in a multiethnic Asian population: a cross-sectional study.

    PubMed

    Nang, Ei Ei Khaing; Salim, Agus; Wu, Yi; Tai, E Shyong; Lee, Jeannette; Van Dam, Rob M

    2013-05-30

    Recent evidence shows that sedentary behaviour may be an independent risk factor for cardiovascular diseases, diabetes, cancers and all-cause mortality. However, results are not consistent and different types of sedentary behaviour might have different effects on health. Thus the aim of this study was to evaluate the association between television screen time, computer/reading time and cardio-metabolic biomarkers in a multiethnic urban Asian population. We also sought to understand the potential mediators of this association. The Singapore Prospective Study Program (2004-2007) was a cross-sectional population-based study in a multiethnic population in Singapore. We studied 3305 Singaporean adults of Chinese, Malay and Indian ethnicity who did not have pre-existing diseases and conditions that could affect their physical activity. Multiple linear regression analysis was used to assess the association of television screen time and computer/reading time with cardio-metabolic biomarkers [blood pressure, lipids, glucose, adiponectin, C reactive protein and homeostasis model assessment of insulin resistance (HOMA-IR)]. Path analysis was used to examine the role of mediators of the observed association. Longer television screen time was significantly associated with higher systolic blood pressure, total cholesterol, triglycerides, C reactive protein, HOMA-IR, and lower adiponectin after adjustment for potential socio-demographic and lifestyle confounders. Dietary factors and body mass index, but not physical activity, were potential mediators that explained most of these associations between television screen time and cardio-metabolic biomarkers. The associations of television screen time with triglycerides and HOMA-IR were only partly explained by dietary factors and body mass index. No association was observed between computer/reading time and worse levels of cardio-metabolic biomarkers. In this urban Asian population, television screen time was associated with worse levels of various cardio-metabolic risk factors. This may reflect detrimental effects of television screen time on dietary habits rather than replacement of physical activity.

  1. Interface induced spin-orbit interaction in silicon quantum dots and prospects of scalability

    NASA Astrophysics Data System (ADS)

    Ferdous, Rifat; Wai, Kok; Veldhorst, Menno; Hwang, Jason; Yang, Henry; Klimeck, Gerhard; Dzurak, Andrew; Rahman, Rajib

    A scalable quantum computing architecture requires reproducibility of key qubit properties, such as resonance frequency and coherence time. Randomness in these properties would necessitate individual characterization of each qubit in a quantum computer. Spin qubits hosted in silicon (Si) quantum dots (QDs) are promising as potential building blocks for a large-scale quantum computer because of their longer coherence times. The Stark shift of the electron g-factor in these QDs has been used to selectively address multiple qubits. From atomistic tight-binding studies we investigated the effect of interface non-ideality on the Stark shift of the g-factor in a Si QD. We find that, depending on the location of a monoatomic step at the interface with respect to the dot center, both the sign and the magnitude of the Stark shift change. Thus the presence of interface steps in these devices will cause variability in the electron g-factor and its Stark shift depending on the location of the qubit. This behavior will also cause varying sensitivity to charge noise from one qubit to another, which will randomize the dephasing times T2*. This predicted device-to-device variability has recently been observed experimentally in three qubits fabricated at a Si/SiO2 interface, corroborating the issues discussed.

  2. Coupled thermal/chemical/mechanical modeling of energetic materials in ALE3D

    NASA Technical Reports Server (NTRS)

    Nichols, A. L.; Couch, R.; Maltby, J. D.; McCallen, R. C.; Otero, I.

    1996-01-01

    We must improve our ability to model the response of energetic materials to thermal stimuli and the processes involved in the energetic response. We have developed and used a time step option to efficiently and accurately compute the hours that the energetic material can take to react. Since on these longer time scales materials can be expected to have significant motion, it is even more important to provide high-order advection for all components, including the chemical species. We show an example cook-off problem to illustrate these capabilities.

  3. Bio-ontologies: current trends and future directions

    PubMed Central

    Bodenreider, Olivier; Stevens, Robert

    2006-01-01

    In recent years, as a knowledge-based discipline, bioinformatics has been made more computationally amenable. After its beginnings as a technology advocated by computer scientists to overcome problems of heterogeneity, ontology has been taken up by biologists themselves as a means to consistently annotate features from genotype to phenotype. In medical informatics, artifacts called ontologies have been used for a longer period of time to produce controlled lexicons for coding schemes. In this article, we review the current position in ontologies and how they have become institutionalized within biomedicine. As the field has matured, the much older philosophical aspects of ontology have come into play. With this and the institutionalization of ontology has come greater formality. We review this trend and what benefits it might bring to ontologies and their use within biomedicine. PMID:16899495

  4. A convenient and accurate parallel Input/Output USB device for E-Prime.

    PubMed

    Canto, Rosario; Bufalari, Ilaria; D'Ausilio, Alessandro

    2011-03-01

    Psychological and neurophysiological experiments require the accurate control of timing and synchrony for Input/Output signals. For instance, a typical Event-Related Potential (ERP) study requires an extremely accurate synchronization of stimulus delivery with recordings. This is typically done via computer software such as E-Prime, and fast communications are typically assured by the Parallel Port (PP). However, the PP is an old and disappearing technology that, for example, is no longer available on portable computers. Here we propose a convenient USB device enabling parallel I/O capabilities. We tested this device against the PP on both a desktop and a laptop machine in different stress tests. Our data demonstrate the accuracy of our system, which suggests that it may be a good substitute for the PP with E-Prime.

  5. Understanding Preprocedure Patient Flow in IR.

    PubMed

    Zafar, Abdul Mueed; Suri, Rajeev; Nguyen, Tran Khanh; Petrash, Carson Cope; Fazal, Zanira

    2016-08-01

    To quantify preprocedural patient flow in interventional radiology (IR) and to identify potential contributors to preprocedural delays. An administrative dataset was used to compute time intervals required for various preprocedural patient-flow processes. These time intervals were compared across on-time/delayed cases and inpatient/outpatient cases by Mann-Whitney U test. Spearman ρ was used to assess any correlation of the rank of a procedure on a given day and the procedure duration to the preprocedure time. A linear-regression model of preprocedure time was used to further explore potential contributing factors. Any identified reason(s) for delay were collated. P < .05 was considered statistically significant. Of the total 1,091 cases, 65.8% (n = 718) were delayed. Significantly more outpatient cases started late compared with inpatient cases (81.4% vs 45.0%; P < .001, χ² test). The multivariate linear regression model showed outpatient status, length of delay in arrival, and longer procedure times to be significantly associated with longer preprocedure times. Late arrival of patients (65.9%), unavailability of physicians (18.4%), and unavailability of procedure room (13.0%) were the three most frequently identified reasons for delay. The delay was multifactorial in 29.6% of cases (n = 213). Objective measurement of preprocedural IR patient flow demonstrated considerable waste and highlighted high-yield areas of possible improvement. A data-driven approach may aid efficient delivery of IR care.

  6. Using the computer in the clinical consultation; setting the stage, reviewing, recording, and taking actions: multi-channel video study.

    PubMed

    Kumarapeli, Pushpa; de Lusignan, Simon

    2013-06-01

    Electronic patient record (EPR) systems are widely used. This study explores the context and use of systems to provide insights into improving their use in clinical practice. We used video to observe 163 consultations by 16 clinicians using four EPR brands. We made a visual study of the consultation room and coded interactions between clinician, patient, and computer. Few patients (6.9%, n=12) declined to participate. Patients looked at the computer twice as much (47.6 s vs 20.6 s, p<0.001) when it was within their gaze. A quarter of consultations were interrupted (27.6%, n=45), and in half of these the clinician left the room (12.3%, n=20). The core consultation takes about 87% of the total session time; 5% of time is spent pre-consultation, reading the record and calling the patient in; and 8% of time is spent post-consultation, largely entering notes. Consultations with more than one person and where prescribing took place were longer (adjusted R² = 22.5%, p<0.001). The core consultation can be divided into 61% direct clinician-patient interaction, of which 15% is examination, 25% computer use with no patient involvement, and 14% simultaneous clinician-computer-patient interplay. The proportions of computer use are similar between consultations (mean=40.6%, SD=13.7%). There was more data coding in problem-orientated EPR systems, though clinicians often used vague codes. The EPR system is used for a consistent proportion of the consultation and should be designed to facilitate multi-tasking. Clinicians who want to promote screen sharing should change their consulting room layout.

  7. Automated microdensitometer for digitizing astronomical plates

    NASA Technical Reports Server (NTRS)

    Angilello, J.; Chiang, W. H.; Elmegreen, D. M.; Segmueller, A.

    1984-01-01

    A precision microdensitometer was built under the control of an IBM S/1 time-sharing computer system. The instrument's spatial resolution is better than 20 microns. A raster scan of an area of 10x10 sq mm (500x500 raster points) takes 255 minutes. The reproducibility is excellent and the stability is good over a period of 30 hours, which is significantly longer than the time required for most scans. The intrinsic accuracy of the instrument was tested using Kodak standard filters and found to be better than 3%. Comparative accuracy was tested by measuring astronomical plates of galaxies for which absolute photoelectric photometry data were available. The results showed an accuracy that is excellent for astronomical applications.

  8. Coherent Coupled Qubits for Quantum Annealing

    NASA Astrophysics Data System (ADS)

    Weber, Steven J.; Samach, Gabriel O.; Hover, David; Gustavsson, Simon; Kim, David K.; Melville, Alexander; Rosenberg, Danna; Sears, Adam P.; Yan, Fei; Yoder, Jonilyn L.; Oliver, William D.; Kerman, Andrew J.

    2017-07-01

    Quantum annealing is an optimization technique which potentially leverages quantum tunneling to enhance computational performance. Existing quantum annealers use superconducting flux qubits with short coherence times limited primarily by the use of large persistent currents Ip. Here, we examine an alternative approach using qubits with smaller Ip and longer coherence times. We demonstrate tunable coupling, a basic building block for quantum annealing, between two flux qubits with small (approximately 50-nA) persistent currents. Furthermore, we characterize qubit coherence as a function of coupler setting and investigate the effect of flux noise in the coupler loop on qubit coherence. Our results provide insight into the available design space for next-generation quantum annealers with improved coherence.

  9. Graphical Interface for the Study of Gas-Phase Reaction Kinetics: Cyclopentene Vapor Pyrolysis

    NASA Astrophysics Data System (ADS)

    Marcotte, Ronald E.; Wilson, Lenore D.

    2001-06-01

    The undergraduate laboratory experiment on the pyrolysis of gaseous cyclopentene has been modernized to improve safety, speed, and precision and to better reflect the current practice of physical chemistry. It now utilizes virtual instrument techniques to create a graphical computer interface for the collection and display of experimental data. An electronic pressure gauge has replaced the mercury manometer formerly needed in proximity to the 500 °C pyrolysis oven. Students have much better real-time information available to them and no longer require multiple lab periods to get rate constants and acceptable Arrhenius parameters. The time saved on manual data collection is used to give the students a tour of the computer interfacing hardware and software and a hands-on introduction to gas-phase reagent preparation using a research-grade high-vacuum system. This includes loading the sample, degassing it by the freeze-pump-thaw technique, handling liquid nitrogen and working through the logic necessary for each reconfiguration of the diffusion pump section and the submanifolds.

  10. Example-Based Super-Resolution Fluorescence Microscopy.

    PubMed

    Jia, Shu; Han, Boran; Kutz, J Nathan

    2018-04-23

    Capturing biological dynamics with high spatiotemporal resolution demands the advancement in imaging technologies. Super-resolution fluorescence microscopy offers spatial resolution surpassing the diffraction limit to resolve near-molecular-level details. While various strategies have been reported to improve the temporal resolution of super-resolution imaging, all super-resolution techniques are still fundamentally limited by the trade-off associated with the longer image acquisition time that is needed to achieve higher spatial information. Here, we demonstrated an example-based, computational method that aims to obtain super-resolution images using conventional imaging without increasing the imaging time. With a low-resolution image input, the method provides an estimate of its super-resolution image based on an example database that contains super- and low-resolution image pairs of biological structures of interest. The computational imaging of cellular microtubules agrees approximately with the experimental super-resolution STORM results. This new approach may offer potential improvements in temporal resolution for experimental super-resolution fluorescence microscopy and provide a new path for large-data aided biomedical imaging.
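    In its simplest form, the example-based idea amounts to a nearest-neighbour lookup in a database of paired low- and high-resolution patches; the toy sketch below illustrates only that lookup and omits the learning and regularization a practical method would need.

    ```python
    import numpy as np

    def example_based_upsample(low_img, lr_patches, hr_patches, patch=4, scale=2):
        """Toy example-based super-resolution: replace each low-res patch with the
        high-res patch paired to its nearest neighbour in the example database.
        `lr_patches` is (N, patch*patch); `hr_patches` is (N, (patch*scale)**2)."""
        H, W = low_img.shape
        out = np.zeros((H * scale, W * scale))
        hp = patch * scale
        for i in range(0, H - patch + 1, patch):
            for j in range(0, W - patch + 1, patch):
                query = low_img[i:i+patch, j:j+patch].ravel()
                idx = np.argmin(((lr_patches - query) ** 2).sum(axis=1))
                out[i*scale:i*scale+hp, j*scale:j*scale+hp] = hr_patches[idx].reshape(hp, hp)
        return out

    # Tiny synthetic example: 100 random example pairs and a 16x16 input image.
    rng = np.random.default_rng(0)
    lr_db = rng.random((100, 16))     # 4x4 low-res patches, flattened
    hr_db = rng.random((100, 64))     # 8x8 high-res counterparts
    sr = example_based_upsample(rng.random((16, 16)), lr_db, hr_db)
    ```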

  11. Atomistic and coarse-grained computer simulations of raft-like lipid mixtures.

    PubMed

    Pandit, Sagar A; Scott, H Larry

    2007-01-01

    Computer modeling can provide insights into the existence, structure, size, and thermodynamic stability of localized raft-like regions in membranes. However, the challenges in the construction and simulation of accurate models of heterogeneous membranes are great. The primary obstacle in modeling the lateral organization within a membrane is the relatively slow lateral diffusion rate for lipid molecules. Microsecond or longer time-scales are needed to fully model the formation and stability of a raft in a membrane. Atomistic simulations currently are not able to reach this scale, but they do provide quantitative information on the intermolecular forces and correlations that are involved in lateral organization. In this chapter, the steps needed to carry out and analyze atomistic simulations of hydrated lipid bilayers having heterogeneous composition are outlined. It is then shown how the data from a molecular dynamics simulation can be used to construct a coarse-grained model for the heterogeneous bilayer that can predict the lateral organization and stability of rafts at up to millisecond time-scales.

  12. Time-dependent transport of energetic particles in magnetic turbulence: computer simulations versus analytical theory

    NASA Astrophysics Data System (ADS)

    Arendt, V.; Shalchi, A.

    2018-06-01

    We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
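    Running diffusion coefficients of the kind computed by such test-particle codes are commonly defined as d(t) = <(Δx)²>/(2t), averaged over the particle ensemble; a minimal NumPy sketch with a synthetic random-walk ensemble is shown below (the trajectory model is illustrative, not the turbulence simulation used in the paper).

    ```python
    import numpy as np

    def running_diffusion(positions, times):
        """Running diffusion coefficient d(t) = <(x(t) - x(0))^2> / (2 t) averaged
        over an ensemble; `positions` has shape (n_particles, n_times)."""
        dx2 = np.mean((positions - positions[:, :1]) ** 2, axis=0)
        return dx2[1:] / (2.0 * times[1:])      # skip t = 0 to avoid division by zero

    # Synthetic check: ordinary diffusion with d = 0.5 should give a flat curve.
    rng = np.random.default_rng(1)
    dt, n_steps, n_particles = 0.01, 2000, 500
    increments = rng.normal(scale=np.sqrt(dt), size=(n_particles, n_steps))
    x = np.concatenate([np.zeros((n_particles, 1)), np.cumsum(increments, axis=1)], axis=1)
    t = np.arange(n_steps + 1) * dt
    d_run = running_diffusion(x, t)             # approaches 0.5 at late times
    ```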

  13. Accelerating separable footprint (SF) forward and back projection on GPU

    NASA Astrophysics Data System (ADS)

    Xie, Xiaobin; McGaffin, Madison G.; Long, Yong; Fessler, Jeffrey A.; Wen, Minhua; Lin, James

    2017-03-01

    Statistical image reconstruction (SIR) methods for X-ray CT can improve image quality and reduce radiation dosages over conventional reconstruction methods, such as filtered back projection (FBP). However, SIR methods require much longer computation time. The separable footprint (SF) forward and back projection technique simplifies the calculation of intersecting volumes of image voxels and finite-size beams in a way that is both accurate and efficient for parallel implementation. We propose a new method to accelerate the SF forward and back projection on GPU with NVIDIA's CUDA environment. For the forward projection, we parallelize over all detector cells. For the back projection, we parallelize over all 3D image voxels. The simulation results show that the proposed method is faster than the acceleration method of the SF projectors proposed by Wu and Fessler [13]. We further accelerate the proposed method using multiple GPUs. The results show that the computation time is reduced approximately in proportion to the number of GPUs.
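    The write-conflict-free decomposition described above (one thread per detector cell for the forward projection, one per voxel for the back projection) can be mimicked on the CPU with a toy dense system matrix. The sketch below only shows why the two parallelizations avoid race conditions; it does not reproduce the separable-footprint kernels or the CUDA implementation.

    ```python
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    # Toy system matrix standing in for the projection model:
    # rows index detector cells, columns index image voxels.
    rng = np.random.default_rng(0)
    A = rng.random((2000, 500))
    x = rng.random(500)          # image
    y = rng.random(2000)         # sinogram

    def forward_row(i):
        # One "thread" per detector cell: each worker writes only y_hat[i].
        return A[i] @ x

    def back_voxel(j):
        # One "thread" per voxel: each worker writes only x_hat[j].
        return A[:, j] @ y

    with ThreadPoolExecutor() as pool:
        y_hat = np.fromiter(pool.map(forward_row, range(A.shape[0])), dtype=float)
        x_hat = np.fromiter(pool.map(back_voxel, range(A.shape[1])), dtype=float)

    assert np.allclose(y_hat, A @ x) and np.allclose(x_hat, A.T @ y)
    ```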

  14. 20 CFR 229.41 - When a spouse can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE When... annuity rate under the overall minimum. A spouse's inclusion in the computation of the overall minimum...

  15. Provider interaction with the electronic health record: the effects on patient-centered communication in medical encounters.

    PubMed

    Street, Richard L; Liu, Lin; Farber, Neil J; Chen, Yunan; Calvitti, Alan; Zuest, Danielle; Gabuzda, Mark T; Bell, Kristin; Gray, Barbara; Rick, Steven; Ashfaq, Shazia; Agha, Zia

    2014-09-01

    The computer with the electronic health record (EHR) is an additional 'interactant' in the medical consultation, as clinicians must simultaneously or in alternation engage patient and computer to provide medical care. Few studies have examined how clinicians' EHR workflow (e.g., gaze, keyboard activity, and silence) influences the quality of their communication, the patient's involvement in the encounter, and conversational control of the visit. Twenty-three primary care providers (PCPs) from USA Veterans Administration (VA) primary care clinics participated in the study. Up to 6 patients per PCP were recruited. The proportion of time PCPs spent gazing at the computer was captured in real time via video-recording. Mouse click/scrolling activity was captured through Morae, a usability software that logs mouse clicks and scrolling activity. Conversational silence was coded as the proportion of time in the visit when PCP and patient were not talking. After the visit, patients completed patient satisfaction measures. Trained coders independently viewed videos of the interactions and rated the degree to which PCPs were patient-centered (informative, supportive, partnering) and patients were involved in the consultation. Conversational control was measured as the proportion of time the PCP held the floor compared to the patient. The final sample included 125 consultations. PCPs who spent more time in the consultation gazing at the computer and whose visits had more conversational silence were rated lower in patient-centeredness. PCPs controlled more of the talk time in the visits that also had longer periods of mutual silence. PCPs were rated as having less effective communication when they spent more time looking at the computer and when there were more periods of silence in the consultation. Because PCPs increasingly are using the EHR in their consultations, more research is needed to determine effective ways that they can verbally engage patients while simultaneously managing data in the EHR. EHR activity consumes an increasing proportion of clinicians' time during consultations. To ensure effective communication with their patients, clinicians may benefit from using communication strategies that maintain the flow of conversation when working with the computer, as well as from learning EHR management skills that prevent extended periods of gaze at the computer and long periods of silence. Next-generation EHR design must address better usability and clinical workflow integration, including facilitating patient-clinician communication.

  16. Provider interaction with the electronic health record: The effects on patient-centered communication in medical encounters

    PubMed Central

    Street, Richard L.; Liu, Lin; Farber, Neil J.; Chen, Yunan; Calvitti, Alan; Zuest, Danielle; Gabuzda, Mark T.; Bell, Kristin; Gray, Barbara; Rick, Steven; Ashfaq, Shazia; Agha, Zia

    2015-01-01

    Objective The computer with the electronic health record (EHR) is an additional 'interactant' in the medical consultation, as clinicians must simultaneously or in alternation engage patient and computer to provide medical care. Few studies have examined how clinicians' EHR workflow (e.g., gaze, keyboard activity, and silence) influences the quality of their communication, the patient's involvement in the encounter, and conversational control of the visit. Methods Twenty-three primary care providers (PCPs) from USA Veterans Administration (VA) primary care clinics participated in the study. Up to 6 patients per PCP were recruited. The proportion of time PCPs spent gazing at the computer was captured in real time via video-recording. Mouse click/scrolling activity was captured through Morae, a usability software that logs mouse clicks and scrolling activity. Conversational silence was coded as the proportion of time in the visit when PCP and patient were not talking. After the visit, patients completed patient satisfaction measures. Trained coders independently viewed videos of the interactions and rated the degree to which PCPs were patient-centered (informative, supportive, partnering) and patients were involved in the consultation. Conversational control was measured as the proportion of time the PCP held the floor compared to the patient. Results The final sample included 125 consultations. PCPs who spent more time in the consultation gazing at the computer and whose visits had more conversational silence were rated lower in patient-centeredness. PCPs controlled more of the talk time in the visits that also had longer periods of mutual silence. Conclusions PCPs were rated as having less effective communication when they spent more time looking at the computer and when there were more periods of silence in the consultation. Because PCPs increasingly are using the EHR in their consultations, more research is needed to determine effective ways that they can verbally engage patients while simultaneously managing data in the EHR. Practice implications EHR activity consumes an increasing proportion of clinicians' time during consultations. To ensure effective communication with their patients, clinicians may benefit from using communication strategies that maintain the flow of conversation when working with the computer, as well as from learning EHR management skills that prevent extended periods of gaze at the computer and long periods of silence. Next-generation EHR design must address better usability and clinical workflow integration, including facilitating patient-clinician communication. PMID:24882086

  17. Octree-based, GPU implementation of a continuous cellular automaton for the simulation of complex, evolving surfaces

    NASA Astrophysics Data System (ADS)

    Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Gadea, R.; Sato, K.

    2011-03-01

    Presently, dynamic surface-based models are required to contain increasingly larger numbers of points and to propagate them over longer time periods. For large numbers of surface points, the octree data structure can be used as a balance between low memory occupation and relatively rapid access to the stored data. For evolution rules that depend on neighborhood states, extended simulation periods can be obtained by using simplified atomistic propagation models, such as the Cellular Automata (CA). This method, however, has an intrinsic parallel updating nature and the corresponding simulations are highly inefficient when performed on classical Central Processing Units (CPUs), which are designed for the sequential execution of tasks. In this paper, a series of guidelines is presented for the efficient adaptation of octree-based, CA simulations of complex, evolving surfaces into massively parallel computing hardware. A Graphics Processing Unit (GPU) is used as a cost-efficient example of the parallel architectures. For the actual simulations, we consider the surface propagation during anisotropic wet chemical etching of silicon as a computationally challenging process with a wide-spread use in microengineering applications. A continuous CA model that is intrinsically parallel in nature is used for the time evolution. Our study strongly indicates that parallel computations of dynamically evolving surfaces simulated using CA methods are significantly benefited by the incorporation of octrees as support data structures, substantially decreasing the overall computational time and memory usage.
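    A minimal point-region octree of the kind referred to above can be written in a few lines; this sketch shows only the insert-and-split logic (no depth limit, no GPU data layout) and is not the authors' implementation.

    ```python
    import random

    class OctreeNode:
        """Minimal point-region octree: each node covers a cube and splits into eight
        children once it holds more than `capacity` points. Assumes points are not
        all coincident (no depth limit, for brevity)."""
        def __init__(self, center, half_size, capacity=8):
            self.center, self.half = center, half_size
            self.capacity, self.points, self.children = capacity, [], None

        def _child_index(self, p):
            return (p[0] > self.center[0]) | ((p[1] > self.center[1]) << 1) | ((p[2] > self.center[2]) << 2)

        def insert(self, p):
            if self.children is None:
                self.points.append(p)
                if len(self.points) > self.capacity:
                    self._split()
                return
            self.children[self._child_index(p)].insert(p)

        def _split(self):
            h = self.half / 2.0
            self.children = [OctreeNode(
                (self.center[0] + (h if i & 1 else -h),
                 self.center[1] + (h if i & 2 else -h),
                 self.center[2] + (h if i & 4 else -h)), h, self.capacity) for i in range(8)]
            pts, self.points = self.points, []
            for q in pts:
                self.children[self._child_index(q)].insert(q)

    root = OctreeNode(center=(0.0, 0.0, 0.0), half_size=1.0)
    for _ in range(1000):
        root.insert((random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1)))
    ```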

  18. Notch filtering the nuclear environment of a spin qubit.

    PubMed

    Malinowski, Filip K; Martins, Frederico; Nissen, Peter D; Barnes, Edwin; Cywiński, Łukasz; Rudner, Mark S; Fallahi, Saeed; Gardner, Geoffrey C; Manfra, Michael J; Marcus, Charles M; Kuemmeth, Ferdinand

    2017-01-01

    Electron spins in gate-defined quantum dots provide a promising platform for quantum computation. In particular, spin-based quantum computing in gallium arsenide takes advantage of the high quality of semiconducting materials, reliability in fabricating arrays of quantum dots and accurate qubit operations. However, the effective magnetic noise arising from the hyperfine interaction with uncontrolled nuclear spins in the host lattice constitutes a major source of decoherence. Low-frequency nuclear noise, responsible for fast (10 ns) inhomogeneous dephasing, can be removed by echo techniques. High-frequency nuclear noise, recently studied via echo revivals, occurs in narrow-frequency bands related to differences in Larmor precession of the three isotopes ⁶⁹Ga, ⁷¹Ga and ⁷⁵As (refs 15,16,17). Here, we show that both low- and high-frequency nuclear noise can be filtered by appropriate dynamical decoupling sequences, resulting in a substantial enhancement of spin qubit coherence times. Using nuclear notch filtering, we demonstrate a spin coherence time (T2) of 0.87 ms, five orders of magnitude longer than typical exchange gate times, and exceeding the longest coherence times reported to date in Si/SiGe gate-defined quantum dots.
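
    For readers unfamiliar with how a pulse sequence acts as a spectral filter, the sketch below numerically computes the filter function of a standard n-pulse CPMG sequence from its +/-1 toggling function; noise at frequencies where this function is small is decoupled from the qubit. This is the textbook construction with arbitrary illustrative parameters, not the tailored notch-filter sequences used in the paper.

```python
# Sketch: spectral filter function of an n-pulse CPMG dynamical-decoupling
# sequence, computed numerically from its +/-1 toggling function.  Pulse
# number and total evolution time are arbitrary illustrative values.
import numpy as np

def filter_function(freqs_hz, n_pulses, total_time):
    """|f(omega)|^2 for ideal, instantaneous pi pulses at the CPMG times
    t_k = (k - 1/2) T / n.  Noise at frequencies where this is small is
    strongly suppressed."""
    t_pulses = (np.arange(1, n_pulses + 1) - 0.5) * total_time / n_pulses
    edges = np.concatenate(([0.0], t_pulses, [total_time]))
    signs = (-1.0) ** np.arange(len(edges) - 1)          # toggling function
    w = 2 * np.pi * np.asarray(freqs_hz)[:, None]
    # integral of sign(t) * exp(i w t) over each constant segment
    segs = signs * (np.exp(1j * w * edges[1:]) - np.exp(1j * w * edges[:-1])) / (1j * w)
    return np.abs(segs.sum(axis=1)) ** 2

freqs = np.linspace(1e3, 2e6, 2000)                      # 1 kHz .. 2 MHz
ff = filter_function(freqs, n_pulses=16, total_time=100e-6)
print("passband near", freqs[np.argmax(ff)], "Hz")       # ~ n / (2T) = 80 kHz
```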

  19. Implementation of real-time digital signal processing systems

    NASA Technical Reports Server (NTRS)

    Narasimha, M.; Peterson, A.; Narayan, S.

    1978-01-01

    Special purpose hardware implementation of DFT computers and digital filters is considered in the light of newly introduced algorithms and IC devices. Recent work by Winograd on high-speed convolution techniques for computing short-length DFTs has motivated the development of more efficient algorithms, compared to the FFT, for evaluating the transform of longer sequences. Among these, prime factor algorithms appear suitable for special purpose hardware implementations. Architectural considerations in designing DFT computers based on these algorithms are discussed. With the availability of monolithic multiplier-accumulators, a direct implementation of IIR and FIR filters, using random access memories in place of shift registers, appears attractive. The memory addressing scheme involved in such implementations is discussed. A simple counter set-up to address the data memory in the realization of FIR filters is also described. The combination of a set of simple filters (weighting network) and a DFT computer is shown to realize a bank of uniform bandpass filters. The usefulness of this concept in arriving at a modular design for a million-channel spectrum analyzer, based on microprocessors, is discussed.
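
    The counter-based addressing idea, in which the shift register of a direct-form FIR filter is replaced by a random-access memory whose read and write addresses come from a modulo counter, can be sketched behaviorally as follows. The coefficients and class name are illustrative; this is a software model of the addressing scheme, not the hardware design discussed in the paper.

```python
# Sketch of an FIR filter whose delay line is a random-access memory
# addressed by a modulo-N counter rather than a shift register: each new
# sample overwrites the oldest entry, and the multiply-accumulate loop
# walks the memory backwards from the write pointer.
class CounterAddressedFIR:
    def __init__(self, coeffs):
        self.coeffs = list(coeffs)
        self.ram = [0.0] * len(coeffs)   # data memory in place of a shift register
        self.ptr = 0                     # write-address counter

    def step(self, x):
        self.ram[self.ptr] = x           # overwrite the oldest sample
        acc = 0.0
        n = len(self.coeffs)
        for k in range(n):               # counter generates the read addresses
            addr = (self.ptr - k) % n
            acc += self.coeffs[k] * self.ram[addr]
        self.ptr = (self.ptr + 1) % n    # advance the counter, no data movement
        return acc

# 5-tap moving average as a trivial example
fir = CounterAddressedFIR([0.2] * 5)
print([round(fir.step(x), 2) for x in [1, 1, 1, 1, 1, 0, 0, 0]])
# -> [0.2, 0.4, 0.6, 0.8, 1.0, 0.8, 0.6, 0.4]
```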

  20. Radiolabeling, whole-body single photon emission computed tomography/computed tomography imaging, and pharmacokinetics of carbon nanohorns in mice

    PubMed Central

    Zhang, Minfang; Jasim, Dhifaf A; Ménard-Moyon, Cécilia; Nunes, Antonio; Iijima, Sumio; Bianco, Alberto; Yudasaka, Masako; Kostarelos, Kostas

    2016-01-01

    In this work, we report that the biodistribution and excretion of carbon nanohorns (CNHs) in mice are dependent on their size and functionalization. Small-sized CNHs (30–50 nm; S-CNHs) and large-sized CNHs (80–100 nm; L-CNHs) were chemically functionalized and radiolabeled with [111In]-diethylenetriaminepentaacetic acid and intravenously injected into mice. Their tissue distribution profiles at different time points were determined by single photon emission computed tomography/computed tomography. The results showed that the S-CNHs circulated longer in blood, while the L-CNHs accumulated faster in major organs like the liver and spleen. Small amounts of S-CNHs and L-CNHs were excreted in urine within the first few hours postinjection, followed by excretion of smaller quantities within the next 48 hours in both urine and feces. The kinetics of excretion for S-CNHs were more rapid than for L-CNHs. Both S-CNH and L-CNH material accumulated mainly in the liver and spleen; however, S-CNH accumulation in the spleen was more prominent than in the liver. PMID:27524892

  1. Late night activity regarding stroke codes: LuNAR strokes.

    PubMed

    Tafreshi, Gilda; Raman, Rema; Ernstrom, Karin; Rapp, Karen; Meyer, Brett C

    2012-08-01

    There is diurnal variation for cardiac arrest and sudden cardiac death. Stroke may show a similar pattern. We assessed whether strokes presenting during a particular time of day or night are more likely to be of vascular etiology. To compare emergency department stroke codes arriving between 22:00 and 8:00 hours (LuNAR strokes) vs. others (n-LuNAR strokes). The purpose was to determine if late night strokes are more likely to be true strokes or warrant acute tissue plasminogen activator evaluations. We reviewed prospectively collected cases in the University of California, San Diego Stroke Team database gathered over a four-year period. Stroke codes at six emergency departments were classified based on arrival time. Those arriving between 22:00 and 8:00 hours were classified as LuNAR stroke codes, the remainder were classified as 'n-LuNAR'. Patients were further classified as intracerebral hemorrhage, acute ischemic stroke not receiving tissue plasminogen activator, acute ischemic stroke receiving tissue plasminogen activator, transient ischemic attack, and nonstroke. Categorical outcomes were compared using Fisher's Exact test. Continuous outcomes were compared using Wilcoxon's Rank-sum test. A total of 1607 patients were included in our study, of which 299 (19%) were LuNAR code strokes. The overall median NIHSS was five, higher in the LuNAR group (n-LuNAR 5, LuNAR 7; P=0·022). There were no overall differences in patient diagnoses between LuNAR and n-LuNAR strokes (P=0·169) or diagnosis of acute ischemic stroke receiving tissue plasminogen activator (n-LuNAR 191 (14·6%), LuNAR 42 (14·0%); P=0·86). Mean arrival to computed tomography scan time was longer during LuNAR hours (n-LuNAR 54·9±76·3 min, LuNAR 62·5±87·7 min; P=0·027). There was no significant difference in 90-day mortality (n-LuNAR 15·0%, LuNAR 13·2%; P=0·45). Our stroke center experience showed no difference in diagnosis of acute ischemic stroke between day and night stroke codes. This similarity was further supported by similar rates of tissue plasminogen activator administration. Late night strokes may warrant a more rapid stroke specialist evaluation due to the longer time elapsed from symptom onset and the longer time to computed tomography scan. © 2011 The Authors. International Journal of Stroke © 2011 World Stroke Organization.

  2. Brownian dynamics simulations on a hypersphere in 4-space

    NASA Astrophysics Data System (ADS)

    Nissfolk, Jarl; Ekholm, Tobias; Elvingson, Christer

    2003-10-01

    We describe an algorithm for performing Brownian dynamics simulations of particles diffusing on S3, a hypersphere in four dimensions. The system is chosen due to recent interest in doing computer simulations in a closed space where periodic boundary conditions can be avoided. We specifically address the question of how to generate a random walk on the 3-sphere, starting from the solution of the corresponding diffusion equation, and we also discuss an efficient implementation based on controlled approximations. Since S3 is a closed manifold (space), the average square displacement during a random walk is no longer proportional to the elapsed time, as in R3. Instead, its time rate of change is continuously decreasing, and approaches zero as time becomes large. We show, however, that the effective diffusion coefficient can still be obtained from the time dependence of the square displacement.
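
    A minimal way to generate such a random walk numerically, assuming a small time step, is to draw a Gaussian displacement in the tangent space at the current point and renormalize back onto the unit 3-sphere; the sketch below does exactly that. It is a small-step approximation standing in for the paper's controlled scheme derived from the diffusion-equation solution.

```python
# Sketch of a Brownian-dynamics step on the unit 3-sphere S^3 embedded in R^4:
# draw a Gaussian displacement in the tangent space at the current point and
# renormalize back onto the sphere.  This is a small-step approximation, not
# the controlled, diffusion-equation-based scheme of the paper.
import numpy as np

rng = np.random.default_rng(0)

def brownian_step_s3(x, diffusion, dt):
    """x: unit vector in R^4; returns the new position on S^3."""
    g = rng.normal(size=4) * np.sqrt(2.0 * diffusion * dt)
    g -= np.dot(g, x) * x            # project onto the tangent space at x
    y = x + g
    return y / np.linalg.norm(y)     # back onto the sphere

x = np.array([1.0, 0.0, 0.0, 0.0])
start = x.copy()
for _ in range(10000):
    x = brownian_step_s3(x, diffusion=1.0, dt=1e-4)
# geodesic (great-circle) displacement from the start, in radians
theta = np.arccos(np.clip(np.dot(start, x), -1.0, 1.0))
print(f"angular displacement after 1 time unit: {theta:.3f} rad")
```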

  3. Profiling users in the UNIX os environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dao, V N P; Vemuri, R; Templeton, S J

    2000-09-29

    This paper presents results obtained by using a method of profiling a user based on the login host, the login time, the command set, and the command set execution time of the profiled user. It is assumed that the user is logging onto a UNIX host on a computer network. The paper concentrates on two areas: short-term and long-term profiling. In short-term profiling the focus is on profiling the user at a given session where user characteristics do not change much. In long-term profiling, the duration of observation is over a much longer period of time. The latter is more challenging because of a phenomenon called concept or profile drift. Profile drift occurs when a user logs onto a host for an extended period of time (over several sessions).

  4. Young children pause on phrase boundaries in self-paced music listening: The role of harmonic cues.

    PubMed

    Kragness, Haley E; Trainor, Laurel J

    2018-05-01

    Proper segmentation of auditory streams is essential for understanding music. Many cues, including meter, melodic contour, and harmony, influence adults' perception of musical phrase boundaries. To date, no studies have examined young children's musical grouping in a production task. We used a musical self-pacing method to investigate (1) whether dwell times index young children's musical phrase grouping and, if so, (2) whether children dwell longer on phrase boundaries defined by harmonic cues specifically. In Experiment 1, we asked 3-year-old children to self-pace through chord progressions from Bach chorales (sequences in which metrical, harmonic, and melodic contour grouping cues aligned) by pressing a computer key to present each chord in the sequence. Participants dwelled longer on chords in the 8th position, which corresponded to phrase endings. In Experiment 2, we tested 3-, 4-, and 7-year-old children's sensitivity to harmonic cues to phrase grouping when metrical regularity cues and melodic contour cues were misaligned with the harmonic phrase boundaries. In this case, 7 and 4 year olds but not 3 year olds dwelled longer on harmonic phrase boundaries, suggesting that the influence of harmonic cues on phrase boundary perception develops substantially between 3 and 4 years of age in Western children. Overall, we show that the musical dwell time method is child-friendly and can be used to investigate various aspects of young children's musical understanding, including phrase grouping and harmonic knowledge. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  5. References and benchmarks for pore-scale flow simulated using micro-CT images of porous media and digital rocks

    NASA Astrophysics Data System (ADS)

    Saxena, Nishank; Hofmann, Ronny; Alpak, Faruk O.; Berg, Steffen; Dietderich, Jesse; Agarwal, Umang; Tandon, Kunj; Hunter, Sander; Freeman, Justin; Wilson, Ove Bjorn

    2017-11-01

    We generate a novel reference dataset to quantify the impact of numerical solvers, boundary conditions, and simulation platforms. We consider a variety of microstructures ranging from idealized pipes to digital rocks. Pore throats of the digital rocks considered are large enough to be well resolved with state-of-the-art micro-computed tomography technology. Permeability is computed using multiple numerical engines, 12 in total, including Lattice-Boltzmann, computational fluid dynamics, voxel-based, fast semi-analytical, and known empirical models. Thus, we provide a measure of uncertainty associated with flow computations of digital media. Moreover, the reference and standards dataset generated is the first of its kind and can be used to test and improve new fluid flow algorithms. We find that there is an overall good agreement between solvers for pipes with idealized cross-section shapes. As expected, the disagreement increases with the complexity of the pore space. Numerical solutions for pipes with sinusoidal variation of cross section show larger variability compared to pipes of constant cross-section shapes. We notice relatively larger variability in the computed permeability of digital rocks, with a coefficient of variation of up to 25% between solvers. Still, these differences are small given other subsurface uncertainties. The observed differences between solvers can be attributed to several causes including differences in boundary conditions, numerical convergence criteria, and parameterization of fundamental physics equations. Solvers that perform additional meshing of irregular pore shapes require an additional step in practical workflows, which involves skill and can introduce further uncertainty. Computation times for digital rocks vary from minutes to several days depending on the algorithm and available computational resources. We find that more stringent convergence criteria can improve solver accuracy but at the expense of longer computation time.
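
    For the simplest of the idealized benchmarks mentioned above, a straight circular pipe, a reference permeability follows from combining Hagen-Poiseuille flow with Darcy's law, giving k = pi R^4 / (8 A), where A is the cross-sectional area of the sample containing the pipe. The sketch below evaluates this for illustrative dimensions; it is the kind of sanity check such solvers can be tested against, not part of the published benchmark dataset.

```python
# Sketch: analytical reference permeability of a straight circular pipe,
# obtained by equating Hagen-Poiseuille flow with Darcy's law:
#   Q = pi R^4 dP / (8 mu L)  and  Q = k A dP / (mu L)  =>  k = pi R^4 / (8 A).
# Radius, voxel size and sample cross-section are illustrative values only.
import math

def pipe_permeability(radius_m, sample_area_m2):
    return math.pi * radius_m**4 / (8.0 * sample_area_m2)

voxel = 1e-6                          # 1 micron voxels
radius = 20 * voxel                   # pipe resolved by ~40 voxels across
area = (100 * voxel) ** 2             # 100 x 100 voxel cross-section
k = pipe_permeability(radius, area)
print(f"reference permeability: {k:.3e} m^2 = {k / 9.869e-13:.1f} darcy")
```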

  6. Goal-Directed and Habit-Like Modulations of Stimulus Processing during Reinforcement Learning.

    PubMed

    Luque, David; Beesley, Tom; Morris, Richard W; Jack, Bradley N; Griffiths, Oren; Whitford, Thomas J; Le Pelley, Mike E

    2017-03-15

    Recent research has shown that perceptual processing of stimuli previously associated with high-value rewards is automatically prioritized even when rewards are no longer available. It has been hypothesized that such reward-related modulation of stimulus salience is conceptually similar to an "attentional habit." Recording event-related potentials in humans during a reinforcement learning task, we show strong evidence in favor of this hypothesis. Resistance to outcome devaluation (the defining feature of a habit) was shown by the stimulus-locked P1 component, reflecting activity in the extrastriate visual cortex. Analysis at longer latencies revealed a positive component (corresponding to the P3b, from 550-700 ms) sensitive to outcome devaluation. Therefore, distinct spatiotemporal patterns of brain activity were observed corresponding to habitual and goal-directed processes. These results demonstrate that reinforcement learning engages both attentional habits and goal-directed processes in parallel. Consequences for brain and computational models of reinforcement learning are discussed. SIGNIFICANCE STATEMENT The human attentional network adapts to detect stimuli that predict important rewards. A recent hypothesis suggests that the visual cortex automatically prioritizes reward-related stimuli, driven by cached representations of reward value; that is, stimulus-response habits. Alternatively, the neural system may track the current value of the predicted outcome. Our results demonstrate for the first time that visual cortex activity is increased for reward-related stimuli even when the rewarding event is temporarily devalued. In contrast, longer-latency brain activity was specifically sensitive to transient changes in reward value. Therefore, we show that both habit-like attention and goal-directed processes occur in the same learning episode at different latencies. This result has important consequences for computational models of reinforcement learning. Copyright © 2017 the authors 0270-6474/17/373009-09$15.00/0.

  7. 20 CFR 229.43 - When a divorced spouse can no longer be included in computing an annuity under the overall minimum.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... included in computing an annuity under the overall minimum. A divorced spouse's inclusion in the... spouse becomes entitled to a retirement or disability benefit under the Social Security Act based upon a...

  8. Variation in Patients' Travel Times among Imaging Examination Types at a Large Academic Health System.

    PubMed

    Rosenkrantz, Andrew B; Liang, Yu; Duszak, Richard; Recht, Michael P

    2017-08-01

    Patients' willingness to travel farther distances for certain imaging services may reflect their perceptions of the degree of differentiation of such services. We compare patients' travel times for a range of imaging examinations performed across a large academic health system. We searched the NYU Langone Medical Center Enterprise Data Warehouse to identify 442,990 adult outpatient imaging examinations performed over a recent 3.5-year period. Geocoding software was used to estimate typical driving times from patients' residences to imaging facilities. Variation in travel times was assessed among examination types. The mean expected travel time was 29.2 ± 20.6 minutes, but this varied significantly (p < 0.001) among examination types. By modality, travel times were shortest for ultrasound (26.8 ± 18.9) and longest for positron emission tomography-computed tomography (31.9 ± 21.5). For magnetic resonance imaging, travel times were shortest for musculoskeletal extremity (26.4 ± 19.2) and spine (28.6 ± 21.0) examinations and longest for prostate (35.9 ± 25.6) and breast (32.4 ± 22.3) examinations. For computed tomography, travel times were shortest for a range of screening examinations [colonography (25.5 ± 20.8), coronary artery calcium scoring (26.1 ± 19.2), and lung cancer screening (26.4 ± 14.9)] and longest for angiography (32.0 ± 22.6). For ultrasound, travel times were shortest for aortic aneurysm screening (22.3 ± 18.4) and longest for breast (30.1 ± 19.2) examinations. Overall, men (29.9 ± 21.6) had longer (p < 0.001) travel times than women (27.8 ± 20.3); this difference persisted for each modality individually (p ≤ 0.006). Patients' willingness to travel longer times for certain imaging examination types (particularly breast and prostate imaging) supports the role of specialized services in combating potential commoditization of imaging services. Disparities in travel times by gender warrant further investigation. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  9. Fluidity in the Networked Society--Self-Initiated learning as a Digital Literacy Competence

    ERIC Educational Resources Information Center

    Levinsen, Karin Tweddell

    2011-01-01

    In the globalized economies, e-permeation has become a basic condition of our everyday lives. ICT can no longer be understood solely as artefacts and tools, and computer-related literacy is no longer restricted to the ability to operate digital tools for specific purposes. The network society, and therefore also eLearning, are characterized by…

  10. Efficacy of Stent-Retriever Thrombectomy in Magnetic Resonance Imaging Versus Computed Tomographic Perfusion-Selected Patients in SWIFT PRIME Trial (Solitaire FR With the Intention for Thrombectomy as Primary Endovascular Treatment for Acute Ischemic Stroke).

    PubMed

    Menjot de Champfleur, Nicolas; Saver, Jeffrey L; Goyal, Mayank; Jahan, Reza; Diener, Hans-Christoph; Bonafe, Alain; Levy, Elad I; Pereira, Vitor M; Cognard, Christophe; Yavagal, Dileep R; Albers, Gregory W

    2017-06-01

    The majority of patients enrolled in the SWIFT PRIME trial (Solitaire FR With the Intention for Thrombectomy as Primary Endovascular Treatment for Acute Ischemic Stroke) had computed tomographic perfusion (CTP) imaging before randomization; 34 patients were randomized after magnetic resonance imaging (MRI). Patients with middle cerebral artery and distal carotid occlusions were randomized to treatment with tPA (tissue-type plasminogen activator) alone or tPA+stentriever thrombectomy. The primary outcome was the distribution of the modified Rankin Scale score at 90 days. Patients with the target mismatch profile for enrollment were identified on MRI and CTP. MRI selection was performed in 34 patients; CTP in 139 patients. Baseline National Institutes of Health Stroke Scale score was 17 in both groups. Target mismatch profile was present in 95% (MRI) versus 83% (CTP). A higher percentage of the MRI group was transferred from an outside hospital (P=0.02), and therefore, the time from stroke onset to randomization was longer in the MRI group (P=0.003). Time from emergency room arrival to randomization did not differ in CTP versus MRI-selected patients. Baseline ischemic core volumes were similar in both groups. Reperfusion rates (>90%/TICI [Thrombolysis in Cerebral Infarction] score 3) did not differ in the stentriever-treated patients in the MRI versus CTP groups. The primary efficacy analysis (90-day mRS score) demonstrated a statistically significant benefit in both subgroups (MRI, P=0.02; CTP, P=0.01). Infarct growth was reduced in the stentriever-treated group in both MRI and CTP groups. Time to randomization was significantly longer in MRI-selected patients; however, site arrival to randomization times were not prolonged, and the benefits of endovascular therapy were similar. URL: http://www.clinicaltrials.gov. Unique identifier: NCT01657461. © 2017 American Heart Association, Inc.

  11. A framework for building real-time expert systems

    NASA Technical Reports Server (NTRS)

    Lee, S. Daniel

    1991-01-01

    The Space Station Freedom is an example of a complex system that requires both traditional and artificial intelligence (AI) real-time methodologies. It was mandated that Ada be used for all new software development projects. The station also requires distributed processing. Catastrophic failures on the station can cause the transmission system to malfunction for a long period of time, during which ground-based expert systems cannot provide any assistance to the crisis situation on the station. This is even more critical for other NASA projects that would have longer transmission delays (e.g., the lunar base, Mars missions, etc.). To address these issues, a distributed agent architecture (DAA) is proposed that can support a variety of paradigms based on both traditional real-time computing and AI. The proposed testbed for DAA is an autonomous power expert (APEX), which is a real-time monitoring and diagnosis expert system for the electrical power distribution system of the space station.

  12. Short-Time Nonlinear Effects in the Exciton-Polariton System

    NASA Astrophysics Data System (ADS)

    Guevara, Cristi D.; Shipman, Stephen P.

    2018-04-01

    In the exciton-polariton system, a linear dispersive photon field is coupled to a nonlinear exciton field. Short-time analysis of the lossless system shows that, when the photon field is excited, the time required for that field to exhibit nonlinear effects is longer than the time required for the nonlinear Schrödinger equation, in which the photon field itself is nonlinear. When the initial condition is scaled by ε^α, it is found that the relative error committed by omitting the nonlinear term in the exciton-polariton system remains within ε for all times up to t = Cε^β, where β = (1 - α(p-1))/(p+2). This is in contrast to β = 1 - α(p-1) for the nonlinear Schrödinger equation. The result is proved for solutions in H^s(R^n) for s > n/2. Numerical computations indicate that the results are sharp and also hold in L^2(R^n).

  13. Comparison of computer-assisted surgery with conventional technique for the treatment of axial distal phalanx fractures in horses: an in vitro study.

    PubMed

    Andritzky, Juliane; Rossol, Melanie; Lischer, Christoph; Auer, Joerg A

    2005-01-01

    To compare the precision obtained with computer-assisted screw insertion for treatment of mid-sagittal articular fractures of the distal phalanx (P3) with results achieved with a conventional technique. In vitro experimental study. Thirty-two cadaveric equine limbs. Four groups of 8 limbs were studied. Either 1 or 2 screws were inserted perpendicular to an imaginary axial fracture of P3 using computer-assisted surgery (CAS) or a conventional technique. Screw insertion time, predetermined screw length, inserted screw length, fit of the screw, and errors in placement were recorded. The CAS technique took 15-20 minutes longer but resulted in greater precision of screw length and placement compared with the conventional technique. Improved precision in screw insertion with CAS makes insertion of 2 screws possible for repair of mid-sagittal P3 fractures. CAS, although expensive, improves precision of screw insertion into P3 and consequently should yield improved clinical outcomes.

  14. Tensor network method for reversible classical computation

    NASA Astrophysics Data System (ADS)

    Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.

    2018-03-01

    We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
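
    The elementary move behind such contraction-decomposition sweeps can be shown on a single bond: contract the two tensors that share it, then re-split them with a truncated singular value decomposition, which compresses the bond dimension. The sketch below performs this one move on random matrices; it is a generic illustration, not the authors' full ICD protocol or their vertex-model tensors.

```python
# Sketch of a single contraction-decomposition move on one lattice bond:
# two tensors sharing a bond are contracted and re-split with a truncated
# SVD, compressing the shared bond.  Generic illustration only.
import numpy as np

def compress_bond(a, b, max_dim, tol=1e-12):
    """a: (left, bond), b: (bond, right).  Returns new (a, b) with the
    shared bond truncated to at most max_dim singular values."""
    theta = a @ b                              # contract the shared bond
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    keep = min(max_dim, int(np.sum(s > tol * s[0])))
    u, s, vh = u[:, :keep], s[:keep], vh[:keep, :]
    return u * s, vh                           # absorb singular values to the left

rng = np.random.default_rng(1)
a = rng.normal(size=(8, 16))                   # bond dimension 16
b = rng.normal(size=(16, 8))
a2, b2 = compress_bond(a, b, max_dim=8)
print("bond 16 ->", a2.shape[1],
      " relative error:", np.linalg.norm(a @ b - a2 @ b2) / np.linalg.norm(a @ b))
```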

  15. On configurational forces for gradient-enhanced inelasticity

    NASA Astrophysics Data System (ADS)

    Floros, Dimosthenis; Larsson, Fredrik; Runesson, Kenneth

    2018-04-01

    In this paper we discuss how configurational forces can be computed in an efficient and robust manner when a constitutive continuum model of gradient-enhanced viscoplasticity is adopted, whereby a suitably tailored mixed variational formulation in terms of displacements and micro-stresses is used. It is demonstrated that such a formulation produces sufficient regularity to overcome numerical difficulties that are notorious for a local constitutive model. In particular, no nodal smoothing of the internal variable fields is required. Moreover, the pathological mesh sensitivity that has been reported in the literature for a standard local model is no longer present. Numerical results in terms of configurational forces are shown for (1) a smooth interface and (2) a discrete edge crack. The corresponding configurational forces are computed for different values of the intrinsic length parameter. It is concluded that the convergence of the computed configurational forces with mesh refinement depends strongly on this parameter value. Moreover, the convergence behavior for the limit situation of rate-independent plasticity is unaffected by the relaxation time parameter.

  16. Fast synthesis of topographic mask effects based on rigorous solutions

    NASA Astrophysics Data System (ADS)

    Yan, Qiliang; Deng, Zhijie; Shiely, James

    2007-10-01

    Topographic mask effects can no longer be ignored at technology nodes of 45 nm, 32 nm and beyond. As feature sizes become comparable to the mask topographic dimensions and the exposure wavelength, the popular thin mask model breaks down, because the mask transmission no longer follows the layout. A reliable mask transmission function has to be derived from Maxwell equations. Unfortunately, rigorous solutions of Maxwell equations are only manageable for limited field sizes, but impractical for full-chip optical proximity corrections (OPC) due to the prohibitive runtime. Approximation algorithms are in demand to achieve a balance between acceptable computation time and tolerable errors. In this paper, a fast algorithm is proposed and demonstrated to model topographic mask effects for OPC applications. The ProGen Topographic Mask (POTOMAC) model synthesizes the mask transmission functions out of small-sized Maxwell solutions from a finite-difference-in-time-domain (FDTD) engine, an industry leading rigorous simulator of topographic mask effect from SOLID-E. The integral framework presents a seamless solution to the end user. Preliminary results indicate the overhead introduced by POTOMAC is contained within the same order of magnitude in comparison to the thin mask approach.

  17. Hydrodynamic Simulations of Protoplanetary Disks with GIZMO

    NASA Astrophysics Data System (ADS)

    Rice, Malena; Laughlin, Greg

    2018-01-01

    Over the past several decades, the field of computational fluid dynamics has rapidly advanced as the range of available numerical algorithms and computationally feasible physical problems has expanded. The development of modern numerical solvers has provided a compelling opportunity to reconsider previously obtained results in search for yet undiscovered effects that may be revealed through longer integration times and more precise numerical approaches. In this study, we compare the results of past hydrodynamic disk simulations with those obtained from modern analytical resources. We focus our study on the GIZMO code (Hopkins 2015), which uses meshless methods to solve the homogeneous Euler equations of hydrodynamics while eliminating problems arising as a result of advection between grid cells. By comparing modern simulations with prior results, we hope to provide an improved understanding of the impact of fluid mechanics upon the evolution of protoplanetary disks.

  18. Aero-Thermo-Structural Design Optimization of Internally Cooled Turbine Blades

    NASA Technical Reports Server (NTRS)

    Dulikravich, G. S.; Martin, T. J.; Dennis, B. H.; Lee, E.; Han, Z.-X.

    1999-01-01

    A set of robust and computationally affordable inverse shape design and automatic constrained optimization tools has been developed for the improved performance of internally cooled gas turbine blades. The design methods are applicable to the aerodynamics, heat transfer, and thermoelasticity aspects of the turbine blade. Maximum use of the existing proven disciplinary analysis codes is possible with this design approach. Preliminary computational results demonstrate possibilities to design blades with minimized total pressure loss and maximized aerodynamic loading. At the same time, these blades are capable of sustaining significantly higher inlet hot gas temperatures while requiring remarkably lower coolant mass flow rates. These results suggest that it is possible to design internally cooled turbine blades that will cost less to manufacture, will have a longer life span, and will perform as well as, if not better than, film-cooled turbine blades.

  19. Source Stacking for Numerical Wavefield Computations - Application to Global Scale Seismic Mantle Tomography

    NASA Astrophysics Data System (ADS)

    MacLean, L. S.; Romanowicz, B. A.; French, S.

    2015-12-01

    Seismic wavefield computations using the Spectral Element Method are now regularly used to recover tomographic images of the upper mantle and crust at the local, regional, and global scales (e.g. Fichtner et al., GJI, 2009; Tape et al., Science 2010; Lekic and Romanowicz, GJI, 2011; French and Romanowicz, GJI, 2014). However, the heavy cost of these computations remains a challenge, and contributes to limiting the resolution of the produced images. Using source stacking, as suggested by Capdeville et al. (GJI, 2005), can considerably speed up the process by reducing the wavefield computations to only one per set of N sources. This method was demonstrated through synthetic tests on low frequency datasets, and therefore should work for global mantle tomography. However, the large amplitude of surface waves dominates the stacked seismograms, and the contributions of individual sources can no longer be separated by windowing in the time domain. We have developed a processing approach that helps address this issue and demonstrate its usefulness through a series of synthetic tests performed at long periods (T > 60 s) on toy upper mantle models. The summed synthetics are computed using the CSEM code (Capdeville et al., 2002). As for the inverse part of the procedure, we use a quasi-Newton method, computing Fréchet derivatives and the Hessian using normal mode perturbation theory.

  20. A simplified implementation of edge detection in MATLAB is faster and more sensitive than fast fourier transform for actin fiber alignment quantification.

    PubMed

    Kemeny, Steven Frank; Clyne, Alisa Morss

    2011-04-01

    Fiber alignment plays a critical role in the structure and function of cells and tissues. While fiber alignment quantification is important to experimental analysis and several different methods for quantifying fiber alignment exist, many studies focus on qualitative rather than quantitative analysis perhaps due to the complexity of current fiber alignment methods. Speed and sensitivity were compared in edge detection and fast Fourier transform (FFT) for measuring actin fiber alignment in cells exposed to shear stress. While edge detection using matrix multiplication was consistently more sensitive than FFT, image processing time was significantly longer. However, when MATLAB functions were used to implement edge detection, MATLAB's efficient element-by-element calculations and fast filtering techniques reduced computation cost 100 times compared to the matrix multiplication edge detection method. The new computation time was comparable to the FFT method, and MATLAB edge detection produced well-distributed fiber angle distributions that statistically distinguished aligned and unaligned fibers in half as many sample images. When the FFT sensitivity was improved by dividing images into smaller subsections, processing time grew larger than the time required for MATLAB edge detection. Implementation of edge detection in MATLAB is simpler, faster, and more sensitive than FFT for fiber alignment quantification.
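
    The gradient-based idea behind the edge-detection approach can be sketched as follows (in Python with SciPy rather than the paper's MATLAB): compute image gradients, keep the strongest-gradient pixels as fiber edges, and summarize alignment by the circular concentration of their orientations. The percentile threshold, the synthetic test images and the alignment metric are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch of gradient-based fiber orientation analysis: Sobel gradients give a
# per-pixel orientation, strong-gradient pixels are treated as fiber edges,
# and alignment is summarized by the circular concentration of their
# orientations.  Thresholds and test images are illustrative.
import numpy as np
from scipy import ndimage

def fiber_alignment(image, edge_percentile=90):
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    mag = np.hypot(gx, gy)
    edges = mag > np.percentile(mag, edge_percentile)
    theta = np.arctan2(gy[edges], gx[edges])        # gradient orientation per edge pixel
    # fibers are axial (theta and theta + pi equivalent), so work with 2*theta
    r = np.abs(np.mean(np.exp(2j * theta)))
    return r                                        # 1 = perfectly aligned, 0 = random

# synthetic test: horizontal stripes (aligned) vs uniform noise (unaligned)
yy = np.indices((128, 128))[0]
stripes = np.sin(yy / 3.0)
noise = np.random.default_rng(2).normal(size=(128, 128))
print("stripes:", round(fiber_alignment(stripes), 2),
      " noise:", round(fiber_alignment(noise), 2))
```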

  1. Stimulated echo diffusion tensor imaging (STEAM-DTI) with varying diffusion times as a probe of breast tissue.

    PubMed

    Teruel, Jose R; Cho, Gene Y; Moccaldi Rt, Melanie; Goa, Pål E; Bathen, Tone F; Feiweier, Thorsten; Kim, Sungheon G; Moy, Linda; Sigmund, Eric E

    2017-01-01

    To explore the application of diffusion tensor imaging (DTI) for breast tissue and breast pathologies using a stimulated-echo acquisition mode (STEAM) with variable diffusion times. In this Health Insurance Portability and Accountability Act-compliant study, approved by the local institutional review board, eight patients and six healthy volunteers underwent an MRI examination at 3 Tesla including STEAM-DTI with several diffusion times ranging from 68.5 to 902.5 ms. A DTI model was fitted to the data for each diffusion time, and parametric maps of mean diffusivity, fractional anisotropy, axial diffusivity, and radial diffusivity were computed for healthy fibroglandular tissue (FGT) and lesions. The median value of radial diffusivity for FGT was fitted to a linear decay to obtain an estimation of the surface-to-volume ratio, from which the radial diameter was calculated. For healthy FGT, radial diffusivity presented a linear decay with the square root of the diffusion time resulting in a range of estimated radial diameters from 202 to 496 µm, while axial diffusivity presented a nearly time-independent diffusion. Residual fat signal was reduced at longer diffusion times due to the shorter T1 of fat. Residual fat signal to the overall signal in the healthy volunteers' FGT was found to range from 2.39% to 2.55% (shortest mixing time), and from 0.40% to 0.51% (longest mixing time) for the b500 images. The use of variable diffusion times may provide an in vivo noninvasive tool to probe diffusion lengths in breast tissue and breast pathology, and might aid by improving fat suppression at longer diffusion times. 2 J. Magn. Reson. Imaging 2017;45:84-93. © 2016 International Society for Magnetic Resonance in Medicine.

  2. A Kepler study of starspot lifetimes with respect to light-curve amplitude and spectral type

    NASA Astrophysics Data System (ADS)

    Giles, Helen A. C.; Collier Cameron, Andrew; Haywood, Raphaëlle D.

    2017-12-01

    Wide-field high-precision photometric surveys such as Kepler have produced reams of data suitable for investigating stellar magnetic activity of cooler stars. Starspot activity produces quasi-sinusoidal light curves whose phase and amplitude vary as active regions grow and decay over time. Here we investigate, first, whether there is a correlation between the size of starspots - assumed to be related to the amplitude of the sinusoid - and their decay time-scale and, secondly, whether any such correlation depends on the stellar effective temperature. To determine this, we computed the auto-correlation functions of the light curves of samples of stars from Kepler and fitted them with apodised periodic functions. The light-curve amplitudes, representing spot size, were measured from the root-mean-squared scatter of the normalized light curves. We used a Monte Carlo Markov Chain to measure the periods and decay time-scales of the light curves. The results show a correlation between the decay time of starspots and their inferred size. The decay time also depends strongly on the temperature of the star. Cooler stars have spots that last much longer, in particular for stars with longer rotational periods. This is consistent with current theories of diffusive mechanisms causing starspot decay. We also find that the Sun is not unusually quiet for its spectral type - stars with solar-type rotation periods and temperatures tend to have (comparatively) smaller starspots than stars with mid-G or later spectral types.
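
    The analysis pipeline described above, autocorrelating the light curve and then fitting an apodised (damped) periodic function to read off the rotation period and the spot decay time-scale, can be sketched on synthetic data as follows. The damped-cosine form, the synthetic light curve and the least-squares fit (used here in place of the paper's MCMC) are illustrative assumptions.

```python
# Sketch: autocorrelation of a quasi-periodic light curve fitted with an
# exponentially damped cosine (a simple "apodised periodic function") to
# estimate the rotation period and the spot decay time-scale.  The light
# curve here is synthetic, not Kepler data.
import numpy as np
from scipy.optimize import curve_fit

def acf(x):
    x = x - x.mean()
    c = np.correlate(x, x, mode="full")[x.size - 1:]
    return c / c[0]

def damped_cosine(lag, period, decay):
    return np.exp(-lag / decay) * np.cos(2 * np.pi * lag / period)

dt = 0.02                                    # days, roughly long-cadence sampling
t = np.arange(0, 90, dt)
rng = np.random.default_rng(4)
flux = np.exp(-t / 25) * np.sin(2 * np.pi * t / 3.5) + 0.05 * rng.normal(size=t.size)

lags = np.arange(t.size) * dt
rho = acf(flux)
popt, _ = curve_fit(damped_cosine, lags, rho, p0=[3.0, 20.0])
print(f"period ~ {popt[0]:.2f} d, decay time-scale ~ {popt[1]:.1f} d")
```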

  3. Fault-Tolerant, Real-Time, Multi-Core Computer System

    NASA Technical Reports Server (NTRS)

    Gostelow, Kim P.

    2012-01-01

    A document discusses a fault-tolerant, self-aware, low-power, multi-core computer for space missions with thousands of simple cores, achieving speed through concurrency. The proposed machine decides how to achieve concurrency in real time, rather than depending on programmers. The driving features of the system are simple hardware that is modular in the extreme, with no shared memory, and software with significant runtime reorganizing capability. The document describes a mechanism for moving ongoing computations and data that is based on a functional model of execution. Because there is no shared memory, the processor connects to its neighbors through a high-speed data link. Messages are sent to a neighbor switch, which in turn forwards that message on to its neighbor until reaching the intended destination. Except for the neighbor connections, processors are isolated and independent of each other. The processors on the periphery also connect chip-to-chip, thus building up a large processor net. There is no particular topology to the larger net, as a function at each processor allows it to forward a message in the correct direction. Some chip-to-chip connections are not necessarily nearest neighbors, providing short cuts for some of the longer physical distances. The peripheral processors also provide the connections to sensors, actuators, radios, science instruments, and other devices with which the computer system interacts.

  4. Resolving prokaryotic taxonomy without rRNA: longer oligonucleotide word lengths improve genome and metagenome taxonomic classification.

    PubMed

    Alsop, Eric B; Raymond, Jason

    2013-01-01

    Oligonucleotide signatures, especially tetranucleotide signatures, have been used as a method for homology binning by exploiting an organism's inherent biases towards the use of specific oligonucleotide words. Tetranucleotide signatures have been especially useful in environmental metagenomics samples as many of these samples contain organisms from poorly classified phyla which cannot be easily identified using traditional homology methods, including NCBI BLAST. This study examines oligonucleotide signatures across 1,424 completed genomes from across the tree of life, substantially expanding upon previous work. A comprehensive analysis of mononucleotide through nonanucleotide word lengths suggests that longer word lengths substantially improve the classification of DNA fragments across a range of sizes of relevance to high throughput sequencing. We find that, at present, heptanucleotide signatures represent an optimal balance between prediction accuracy and computational time for resolving taxonomy using both genomic and metagenomic fragments. We directly compare the ability of tetranucleotide and heptanucleotide word lengths (tetranucleotide signatures are the current standard for oligonucleotide word usage analyses) for taxonomic binning of metagenome reads. We present evidence that heptanucleotide word lengths consistently provide more taxonomic resolving power, particularly in distinguishing between closely related organisms that are often present in metagenomic samples. This implies that longer oligonucleotide word lengths should replace tetranucleotide signatures for most analyses. Finally, we show that the application of longer word lengths to metagenomic datasets leads to more accurate taxonomic binning of DNA scaffolds and has the potential to substantially improve taxonomic assignment and assembly of metagenomic data.
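
    A k-mer (oligonucleotide) signature of the kind compared in this study is a normalized frequency vector over all 4^k words of length k; composition-based binning then compares such vectors between reads and reference genomes. The sketch below computes tetranucleotide (k = 4) and heptanucleotide (k = 7) signatures for a toy fragment; the sequence and any downstream distance measure are illustrative.

```python
# Sketch: normalized k-mer frequency signature of a DNA fragment.  k = 4
# gives the tetranucleotide signature; k = 7 gives the heptanucleotide
# signature favored by the study.  Binning would then compare signatures of
# reads and reference genomes with, e.g., a Euclidean or correlation distance.
from collections import Counter
from itertools import product

def kmer_signature(seq, k=7):
    seq = seq.upper()
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values()) or 1
    # fixed ordering over all 4**k possible words so signatures are comparable
    return [counts["".join(w)] / total for w in product("ACGT", repeat=k)]

frag = "ATGGCGTACGTTAGCATCGATCGGATCCGTAGCTAGCTAACGT" * 20
sig4 = kmer_signature(frag, k=4)     # 256-dimensional
sig7 = kmer_signature(frag, k=7)     # 16384-dimensional
print(len(sig4), len(sig7))
```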

  5. 78 FR 39617 - Data Practices, Computer III Further Remand: BOC Provision of Enhanced Services

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-02

    ... Docket No. 10-132; FCC 13-69] Data Practices, Computer III Further Remand: BOC Provision of Enhanced... eliminates comparably efficient interconnection (CEI) and open network architecture (ONA) narrowband... disseminates data, including by altering or eliminating collections that are no longer useful or necessary to...

  6. 20 CFR 229.42 - When a child can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate... second month after the month the child's disability ends, if the child is 18 years old or older, and not...

  7. Factors Influencing Adoption of Ubiquitous Internet amongst Students

    ERIC Educational Resources Information Center

    Juned, Mohammad; Adil, Mohd

    2015-01-01

    Weiser's (1991) conceptualisation of a world wherein humans' interaction with computer technology would no longer be limited to conventional input and output devices has now been translated into reality, with humans' constant interaction with multiple interconnected computers and sensors embedded in rooms, furniture, clothes, tools, and other…

  8. Secondhand Is First-Rate

    ERIC Educational Resources Information Center

    Demski, Jennifer

    2010-01-01

    The idea of reusing a computer may bring to mind struggling with a dusty old Commodore 64 that's bogged down by the previous user's data. But that association no longer sticks. In this article, the author discusses how refurbished computers can offer better value and performance than new units, while lessening IT's environmental footprint.

  9. Using the computer in the clinical consultation; setting the stage, reviewing, recording, and taking actions: multi-channel video study

    PubMed Central

    Kumarapeli, Pushpa; de Lusignan, Simon

    2013-01-01

    Background and objective Electronic patient record (EPR) systems are widely used. This study explores the context and use of systems to provide insights into improving their use in clinical practice. Methods We used video to observe 163 consultations by 16 clinicians using four EPR brands. We made a visual study of the consultation room and coded interactions between clinician, patient, and computer. Few patients (6.9%, n=12) declined to participate. Results Patients looked at the computer twice as much (47.6 s vs 20.6 s, p<0.001) when it was within their gaze. A quarter of consultations were interrupted (27.6%, n=45); and in half the clinician left the room (12.3%, n=20). The core consultation takes about 87% of the total session time; 5% of time is spent pre-consultation, reading the record and calling the patient in; and 8% of time is spent post-consultation, largely entering notes. Consultations with more than one person and where prescribing took place were longer (R2 adj=22.5%, p<0.001). The core consultation can be divided into 61% of direct clinician–patient interaction, of which 15% is examination, 25% computer use with no patient involvement, and 14% simultaneous clinician–computer–patient interplay. The proportions of computer use are similar between consultations (mean=40.6%, SD=13.7%). There was more data coding in problem-orientated EPR systems, though clinicians often used vague codes. Conclusions The EPR system is used for a consistent proportion of the consultation and should be designed to facilitate multi-tasking. Clinicians who want to promote screen sharing should change their consulting room layout. PMID:23242763

  10. Travel time and concurrent-schedule choice: retrospective versus prospective control.

    PubMed

    Davison, M; Elliffe, D

    2000-01-01

    Six pigeons were trained on concurrent variable-interval schedules in which two different travel times between alternatives, 4.5 and 0.5 s, were randomly arranged. In Part 1, the next travel time was signaled while the subjects were responding on each alternative. Generalized matching analyses of performance in the presence of the two travel-time signals showed significantly higher response and time sensitivity when the longer travel time was signaled compared to when the shorter time was signaled. When the data were analyzed as a function of the previous travel time, there were no differences in sensitivity. Dwell times on the alternatives were consistently longer in the presence of the stimulus that signaled the longer travel time than they were in the presence of the stimulus that signaled the shorter travel time. These results are in accord with a recent quantitative account of the effects of travel time. In Part 2, no signals indicating the next travel time were given. When these data were analyzed as a function of the previous travel time, time-allocation sensitivity after the 4.5-s travel time was significantly greater than that after the 0.5-s travel time, but no such difference was found for response allocation. Dwell times were also longer when the previous travel time had been longer.

  11. Neural correlates of mathematical problem solving.

    PubMed

    Lin, Chun-Ling; Jung, Melody; Wu, Ying Choon; She, Hsiao-Ching; Jung, Tzyy-Ping

    2015-03-01

    This study explores electroencephalography (EEG) brain dynamics associated with mathematical problem solving. EEG and solution latencies (SLs) were recorded as 11 neurologically healthy volunteers worked on intellectually challenging math puzzles that involved combining four single-digit numbers through basic arithmetic operators (addition, subtraction, division, multiplication) to create an arithmetic expression equaling 24. Estimates of EEG spectral power were computed in three frequency bands - θ (4-7 Hz), α (8-13 Hz) and β (14-30 Hz) - over a widely distributed montage of scalp electrode sites. The magnitude of power estimates was found to change in a linear fashion with SLs - that is, relative to a baseline power spectrum, theta power increased with longer SLs, while alpha and beta power tended to decrease. Further, the topographic distribution of spectral fluctuations was characterized by more pronounced asymmetries along the left-right and anterior-posterior axes for solutions that involved a longer search phase. These findings reveal for the first time the topography and dynamics of EEG spectral activities important for sustained solution search during arithmetical problem solving.
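
    Band-limited spectral power of the kind reported here is commonly estimated from a power spectral density; the sketch below does this for one synthetic channel with Welch's method, using the theta, alpha and beta band edges quoted in the abstract. The sampling rate, epoch length and test signal are assumptions, not details of the study.

```python
# Sketch: band-power estimates for one EEG channel using Welch's method,
# with the theta/alpha/beta bands given in the abstract.  The signal here is
# synthetic; sampling rate and epoch length are illustrative.
import numpy as np
from scipy.signal import welch

FS = 250                                   # Hz (assumed sampling rate)
BANDS = {"theta": (4, 7), "alpha": (8, 13), "beta": (14, 30)}

def band_powers(x, fs=FS):
    freqs, psd = welch(x, fs=fs, nperseg=2 * fs)
    return {name: np.trapz(psd[(freqs >= lo) & (freqs <= hi)],
                           freqs[(freqs >= lo) & (freqs <= hi)])
            for name, (lo, hi) in BANDS.items()}

t = np.arange(0, 10, 1 / FS)
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t) \
      + 0.2 * np.random.default_rng(3).normal(size=t.size)
print({k: round(v, 3) for k, v in band_powers(eeg).items()})
```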

  12. Utilization of Short-Simulations for Tuning High-Resolution Climate Model

    NASA Astrophysics Data System (ADS)

    Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.

    2016-12-01

    Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning the models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning, which is guided by a combination of short (<10 days) and longer (1 year) Perturbed Parameter Ensemble (PPE) simulations at low resolution to identify model feature sensitivity to parameter changes. The CAPT tests have been found to be effective in numerous previous studies in identifying model biases due to parameterized fast physics, and we demonstrate that the approach is also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "home in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choices for the reduction of large model biases dramatically improves the turnaround time for the tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations. An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former to survey the likely responses and narrow the parameter space, and the latter to verify the results in a climate context along with assessment in greater detail once an educated set of parameter choices is selected. Limitations on using short-term simulations for tuning climate models are also discussed.

  13. Simulation of stochastic diffusion via first exit times

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lötstedt, Per, E-mail: perl@it.uu.se; Meinecke, Lina, E-mail: lina.meinecke@it.uu.se

    2015-11-01

    In molecular biology it is of interest to simulate diffusion stochastically. In the mesoscopic model we partition a biological cell into unstructured subvolumes. In each subvolume the number of molecules is recorded at each time step and molecules can jump between neighboring subvolumes to model diffusion. The jump rates can be computed by discretizing the diffusion equation on that unstructured mesh. If the mesh is of poor quality, due to a complicated cell geometry, standard discretization methods can generate negative jump coefficients, which no longer allows the interpretation as the probability to jump between the subvolumes. We propose a method based on the mean first exit time of a molecule from a subvolume, which guarantees positive jump coefficients. Two approaches to exit times, a global and a local one, are presented and tested in simulations on meshes of different quality in two and three dimensions.
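
    The key idea, deriving an always-positive jump propensity from the mean first exit time of a molecule out of a subvolume, can be illustrated in one dimension, where the mean exit time from an interval of width h is known analytically. The Monte Carlo estimate below is checked against that value and inverted into a rate; the uniform initial position and the 1-D geometry are simplifying assumptions, not the unstructured-mesh construction of the paper.

```python
# Sketch: a positive jump propensity for mesoscopic diffusion obtained from
# the mean first exit time of a molecule from a (here 1-D) subvolume,
# estimated by Monte Carlo and checked against the analytic value h^2/(12 D)
# for a uniformly distributed (well-mixed) starting position.
import numpy as np

rng = np.random.default_rng(5)

def mc_exit_time(h, D, dt=1e-6, n_walkers=2000):
    x = rng.uniform(0.0, h, size=n_walkers)        # well-mixed initial positions
    t = np.zeros(n_walkers)
    alive = np.ones(n_walkers, dtype=bool)
    while alive.any():
        x[alive] += rng.normal(0.0, np.sqrt(2 * D * dt), size=alive.sum())
        t[alive] += dt
        alive &= (x > 0.0) & (x < h)
    return t.mean()

h, D = 0.1, 1.0
tau = mc_exit_time(h, D)
print(f"mean exit time: {tau:.5f}  (analytic h^2/12D = {h * h / (12 * D):.5f})")
print(f"total jump propensity 1/E[tau] = {1 / tau:.1f} per unit time")
```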

  14. Cancel and rethink in the Wason selection task: further evidence for the heuristic-analytic dual process theory.

    PubMed

    Wada, Kazushige; Nittono, Hiroshi

    2004-06-01

    The reasoning process in the Wason selection task was examined by measuring card inspection times in the letter-number and drinking-age problems. 24 students were asked to solve the problems presented on a computer screen. Only the card touched with a mouse pointer was visible, and the total exposure time of each card was measured. Participants were allowed to cancel their previous selections at any time. Although rethinking was encouraged, the cards once selected were rarely cancelled (10% of the total selections). Moreover, most of the cancelled cards were reselected (89% of the total cancellations). Consistent with previous findings, inspection times were longer for selected cards than for nonselected cards. These results suggest that card selections are determined largely by initial heuristic processes and rarely reversed by subsequent analytic processes. The present study gives further support for the heuristic-analytic dual process theory.

  15. Simulation of stochastic diffusion via first exit times

    PubMed Central

    Lötstedt, Per; Meinecke, Lina

    2015-01-01

    In molecular biology it is of interest to simulate diffusion stochastically. In the mesoscopic model we partition a biological cell into unstructured subvolumes. In each subvolume the number of molecules is recorded at each time step and molecules can jump between neighboring subvolumes to model diffusion. The jump rates can be computed by discretizing the diffusion equation on that unstructured mesh. If the mesh is of poor quality, due to a complicated cell geometry, standard discretization methods can generate negative jump coefficients, which no longer allows the interpretation as the probability to jump between the subvolumes. We propose a method based on the mean first exit time of a molecule from a subvolume, which guarantees positive jump coefficients. Two approaches to exit times, a global and a local one, are presented and tested in simulations on meshes of different quality in two and three dimensions. PMID:26600600

  16. Weather Knowledge: No Longer the Privilege of Meteorologists and Weather Services - Information and the Overturning of the Gods

    NASA Astrophysics Data System (ADS)

    Leon, V. C.

    2006-05-01

    Advances in communications technology and in the sharing of data and information are enabling the development of knowledge that was impossible a decade ago. A prime example: meteorology students, regardless of their location, can now access and use massive amounts of current and historic hydro-meteorological data. In the not-too-distant past, this ability was the province of national weather services with their expensive equipment. Now, one only needs inexpensive personal computers and access to the Internet (with the help and vision of groups like Unidata) to study phenomena that affect society. There is no longer a need to operate expensive ground stations to be able to analyze satellite imagery, etc. Investigations of atmospheric phenomena are no longer restricted to students of Meteorology. Learners in diverse disciplines and increasingly amateurs are joining a vibrantly expanding community. There was a time when a medical doctor was a god. Now, as technology has allowed us to become better informed, we are increasingly capable of questioning diagnoses and making truly informed decisions. This talk will reflect the author's experience, thoughts, and some perspectives for the future, on "the extension of free and open information sharing in the pursuit of incubating international collaborations".

  17. Examining Hurricane Track Length and Stage Duration Since 1980

    NASA Astrophysics Data System (ADS)

    Fandrich, K. M.; Pennington, D.

    2017-12-01

    Each year, tropical systems impact thousands of people worldwide. Current research shows a correlation between the intensity and frequency of hurricanes and the changing climate. However, little is known about other prominent hurricane features. This includes information about hurricane track length (the total distance traveled from tropical depression through a hurricane's final category assignment) and how this distance may have changed with time. Also unknown is the typical duration of a hurricane stage, such as tropical storm to category one, and if the time spent in each stage has changed in recent decades. This research aims to examine changes in hurricane stage duration and track lengths for the 319 storms in NOAA's National Ocean Service Hurricane Reanalysis dataset that reached Category 2-5 from 1980-2015. Based on evident ocean warming, it is hypothesized that a general increase in track length with time will be detected; thus, modern hurricanes are traveling a longer distance than past hurricanes. It is also expected that stage durations are decreasing with time so that hurricanes mature faster than in past decades. For each storm, coordinates are acquired at 4-times daily intervals throughout its duration and track lengths are computed for each 6-hour period. Total track lengths are then computed and storms are analyzed graphically and statistically by category for temporal track length changes. The stage durations of each storm are calculated as the time difference between two consecutive stages. Results indicate that average track lengths for Cat 2 and 3 hurricanes are increasing through time. These findings show that these hurricanes are traveling a longer distance than earlier Cat 2 and 3 hurricanes. In contrast, average track lengths for Cat 4 and 5 hurricanes are decreasing through time, showing less distance traveled than in earlier decades. Stage durations for all Cat 2, 4 and 5 storms decrease through the decades, but Cat 3 storms show an increase through time. This complements the results of the track length analysis, indicating that as storms intensify faster, they are doing so over a shorter distance. It is expected that this research could be used to improve hurricane track forecasting and provide information about the effects of climate change on tropical systems and the tropical environment.
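
    Track length from 6-hourly best-track fixes is naturally accumulated as the sum of great-circle distances between consecutive positions; the sketch below does this with the haversine formula. The coordinate list is invented for illustration and is not taken from the NOAA reanalysis dataset used in the study.

```python
# Sketch: great-circle track length from 6-hourly best-track positions using
# the haversine formula.  Coordinates below are made-up illustrative fixes.
import math

R_EARTH_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * R_EARTH_KM * math.asin(math.sqrt(a))

def track_length_km(fixes):
    """fixes: list of (lat, lon) at consecutive 6-hour intervals."""
    return sum(haversine_km(*a, *b) for a, b in zip(fixes, fixes[1:]))

fixes = [(13.0, -40.0), (13.5, -42.0), (14.2, -44.1), (15.0, -46.3), (16.1, -48.2)]
print(f"track length: {track_length_km(fixes):.0f} km over {(len(fixes) - 1) * 6} h")
```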

  18. Adiabatic invariants in stellar dynamics. 1: Basic concepts

    NASA Technical Reports Server (NTRS)

    Weinberg, Martin D.

    1994-01-01

    The adiabatic criterion, widely used in astronomical dynamics, is based on the harmonic oscillator. It asserts that the change in action under a slowly varying perturbation is exponentially small. Recent mathematical results that precisely define the conditions for invariance show that this model does not apply in general. In particular, a slowly varying perturbation may cause significant evolution in stellar dynamical systems even if its time scale is longer than any internal orbital time scale. This additional 'heating' may have serious implications for the evolution of star clusters and dwarf galaxies, which are subject to long-term environmental forces. The mathematical developments leading to these results are reviewed, and the conditions for applicability to, and further implications for, stellar systems are discussed. Companion papers present a computational method for a general time-dependent disturbance and a detailed example.

  19. Concept of Operations Evaluation for Using Remote-Guidance Ultrasound for Exploration Spaceflight.

    PubMed

    Hurst, Victor W; Peterson, Sean; Garcia, Kathleen; Ebert, Douglas; Ham, David; Amponsah, David; Dulchavsky, Scott

    2015-12-01

    Remote-guidance (RG) techniques aboard the International Space Station (ISS) have enabled astronauts to collect diagnostic-level ultrasound (US) images. Exploration-class missions will likely require nonformally trained sonographers to operate with greater autonomy, given longer communication delays (> 6 s for missions beyond the Moon) and blackouts. Training requirements for autonomous collection of US images by non-US experts are being determined. Novice US operators were randomly assigned to one of three groups to collect standardized US images while drawing expertise from A) RG only, B) a computer training tool only, or C) both RG and a computer training tool. Images were assessed for quality and examination duration. All operators were given a 10-min standardized generic training session in US scanning. The imaging task included: 1) bone fracture assessment in a phantom and 2) a Focused Assessment with Sonography in Trauma (FAST) examination in a healthy volunteer. A human factors questionnaire was also completed. Mean time for group B during FAST was shorter (20.4 vs. 22.7 min) than for the other groups. Image quality scores in group B were lower than in groups A or C, but all groups produced images of acceptable diagnostic quality. RG produces US images of higher quality than those produced with only computer-based instruction. Extended communication delays in exploration missions will eliminate the option of real-time guidance, thus requiring autonomous operation. The computer program used appears effective and could be a model for future digital US expertise banks. Terrestrially, it also provides adequate self-training and mentoring mechanisms.

  20. Friction at ice-Ih / water interfaces

    NASA Astrophysics Data System (ADS)

    Louden, Patrick B.; Gezelter, J. Daniel

    We present evidence that the prismatic and secondary prism facets of ice-Ih crystals possess structural features that alter the effective hydrophilicity of the ice / water interface. This is shown through molecular dynamics simulations of solid-liquid friction, where the prismatic { 10 1 0 } , secondary prism { 11 2 0 } , basal { 0001 } , and pyramidal { 20 2 1 } facets are drawn through liquid water. We find that the two prismatic facets exhibit differential solid-liquid friction coefficients when compared with the basal and pyramidal facets. These results are complemented by a model solid/liquid interface with tunable hydrophilicity. These simulations provide evidence that the two prismatic faces have a significantly smaller effective surface area in contact with the liquid water. The ice / water interfacial widths for all four crystal facets are similar (using both structural and dynamic measures), and were found to be independent of the shear rate. Additionally, decomposition of the orientational time correlation functions shows position dependence for the short- and longer-time decay components close to the interface. Support for this project was provided by the National Science Foundation under Grant CHE-1362211. Computational time was provided by the Center for Research Computing (CRC) at the University of Notre Dame.

  1. Monte Carlo simulation of electrothermal atomization on a desktop personal computer

    NASA Astrophysics Data System (ADS)

    Histen, Timothy E.; Güell, Oscar A.; Chavez, Iris A.; Holcombe, James A.

    1996-07-01

    Monte Carlo simulations have been applied to electrothermal atomization (ETA) using a tubular atomizer (e.g. graphite furnace) because of the complexity in the geometry, heating, molecular interactions, etc. The intensive computation time needed to accurately model ETA often limited its effective implementation to supercomputers. However, with the advent of more powerful desktop processors, this is no longer the case. A C-based program has been developed and can be used under Windows™ or DOS. With this program, basic parameters such as furnace dimensions, sample placement, furnace heating and kinetic parameters such as activation energies for desorption and adsorption can be varied to show the absorbance profile dependence on these parameters. Even data such as the time-dependent spatial distribution of analyte inside the furnace can be collected. The DOS version also permits input of external temperature-time data to permit comparison of simulated profiles with experimentally obtained absorbance data. The run-time versions are provided along with the source code. This article is an electronic publication in Spectrochimica Acta Electronica (SAE), the electronic section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by a diskette with a program (PC format), data files and text files.
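
    The physical picture described above, analyte atoms desorbing from the furnace wall and migrating through the tube, can be caricatured in a few lines of Python. This is only a rough illustration of the Monte Carlo idea under invented parameters (release probability, tube length, step size); it is not the published C program.

        import random

        N_ATOMS, N_STEPS = 2000, 400
        k_release = 0.02        # per-step desorption probability (assumed)
        half_length = 50.0      # |x| beyond this means the atom has left the tube (arbitrary units)

        pos = [None] * N_ATOMS  # None = still adsorbed on the furnace wall
        profile = []
        for t in range(N_STEPS):
            for i in range(N_ATOMS):
                if pos[i] is None:
                    if random.random() < k_release:           # first-order release from the wall
                        pos[i] = 0.0
                elif abs(pos[i]) < half_length:
                    pos[i] += random.choice((-1.0, 1.0))      # unbiased 1-D random walk along the tube
            # absorbance-like signal: fraction of atoms currently inside the observation volume
            profile.append(sum(1 for x in pos if x is not None and abs(x) < half_length) / N_ATOMS)

        peak = max(profile)
        print(f"peak signal {peak:.2f} at step {profile.index(peak)}")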

  2. The Association of Daytime Maternal Napping and Exercise With Nighttime Sleep in First-Time Mothers Between 3 and 6 Months Postpartum.

    PubMed

    Lillis, Teresa A; Hamilton, Nancy A; Pressman, Sarah D; Khou, Christina S

    2016-10-19

    This study investigated the relationship of daytime maternal napping, exercise, caffeine, and alcohol intake to objective and subjective sleep indices. Sixty healthy, nondepressed, first-time mothers between 3 and 6 months postpartum. Seven consecutive days of online behavior diaries, sleep diaries, and wrist actigraphy, collecting Total Sleep Time (TST), Sleep Onset Latency (SOL), and Wake After Sleep Onset (WASO). After controlling for infant age, employment status, infant feeding method, and infant sleeping location, mixed linear models showed that longer average exercise durations were associated with longer average TST, and longer average nap durations were associated with longer average WASO durations. Significant within-person differences in TST and SOL were also observed, such that, on days when participants exercised and napped longer than average, their respective TST and SOL durations that night were longer. Shorter nap durations and longer exercise durations were associated with longer TST, shorter SOL, and reduced WASO. Even small changes in daily exercise and napping behaviors could lead to reliable improvements in postpartum maternal sleep.
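
    Associations of this kind, combining between-person averages and within-person (day-level) effects while adjusting for covariates, are typically fit with a mixed linear model. A minimal sketch using statsmodels is shown below; the synthetic data frame and column names are hypothetical stand-ins, not the authors' dataset or covariate coding.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Toy stand-in for the diary/actigraphy data: one row per mother-day
        rng = np.random.default_rng(0)
        n_mothers, n_days = 60, 7
        df = pd.DataFrame({
            "id": np.repeat(np.arange(n_mothers), n_days),
            "nap_min": rng.uniform(0, 90, n_mothers * n_days),
            "exercise_min": rng.uniform(0, 60, n_mothers * n_days),
            "infant_age_wk": np.repeat(rng.uniform(12, 26, n_mothers), n_days),
        })
        df["tst_min"] = 380 + 0.5 * df["exercise_min"] - 0.3 * df["nap_min"] + rng.normal(0, 30, len(df))

        # Random intercept per mother, fixed effects for the behaviours of interest
        model = smf.mixedlm("tst_min ~ nap_min + exercise_min + infant_age_wk", df, groups=df["id"])
        print(model.fit().summary())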

  3. Mathematical String Sculptures: A Case Study in Computationally-Enhanced Mathematical Crafts

    ERIC Educational Resources Information Center

    Eisenberg, Michael

    2007-01-01

    Mathematical string sculptures constitute an extremely beautiful realm of mathematical crafts. This snapshot begins with a description of a marvelous (and no longer manufactured) toy called Space Spider, which provided a framework with which children could experiment with string sculptures. Using a computer-controlled laser cutter to create frames…

  4. Enhancing Instruction through Software Infusion.

    ERIC Educational Resources Information Center

    Sia, Archie P.

    The presence of the computer in the classroom is no longer considered an oddity; it has become an ordinary resource for teachers to use for the enhancement of instruction. This paper presents an examination of software infusion, i.e., the use of computer software to enrich instruction in an academic curriculum. The process occurs when a chosen…

  5. Evaluating the risk of appendiceal perforation when using ultrasound as the initial diagnostic imaging modality in children with suspected appendicitis.

    PubMed

    Alerhand, Stephen; Meltzer, James; Tay, Ee Tein

    2017-08-01

    Ultrasound scan has gained attention for diagnosing appendicitis due to its avoidance of ionizing radiation. However, studies show that ultrasound scan carries inferior sensitivity to computed tomography scan. A non-diagnostic ultrasound scan could increase the time to diagnosis and appendicectomy, particularly if follow-up computed tomography scan is needed. Some studies suggest that delaying appendicectomy increases the risk of perforation. To investigate the risk of appendiceal perforation when using ultrasound scan as the initial diagnostic imaging modality in children with suspected appendicitis. We retrospectively reviewed 1411 charts of children ≤17 years old diagnosed with appendicitis at two urban academic medical centers. Patients who underwent ultrasound scan first were compared to those who underwent computed tomography scan first. In the sub-group analysis, patients who only received ultrasound scan were compared to those who received initial ultrasound scan followed by computed tomography scan. Main outcome measures were appendiceal perforation rate and time from triage to appendicectomy. In 720 children eligible for analysis, there was no significant difference in perforation rate between those who had initial ultrasound scan and those who had initial computed tomography scan (7.3% vs. 8.9%, p = 0.44), nor in those who had ultrasound scan only and those who had initial ultrasound scan followed by computed tomography scan (8.0% vs. 5.6%, p = 0.42). Those patients who had ultrasound scan first had a shorter triage-to-incision time than those who had computed tomography scan first (9.2 (IQR: 5.9, 14.0) vs. 10.2 (IQR: 7.3, 14.3) hours, p = 0.03), whereas those who had ultrasound scan followed by computed tomography scan took longer than those who had ultrasound scan only (7.8 (IQR: 5.3, 11.6) vs. 15.1 (IQR: 10.6, 20.6), p < 0.001). Children < 12 years old receiving ultrasound scan first had lower perforation rate (p = 0.01) and shorter triage-to-incision time (p = 0.003). Children with suspected appendicitis receiving ultrasound scan as the initial diagnostic imaging modality do not have increased risk of perforation compared to those receiving computed tomography scan first. We recommend that children <12 years of age receive ultrasound scan first.

  6. Perforation of the IVC: rule rather than exception after longer indwelling times for the Günther Tulip and Celect retrievable filters.

    PubMed

    Durack, Jeremy C; Westphalen, Antonio C; Kekulawela, Stephanie; Bhanu, Shiv B; Avrin, David E; Gordon, Roy L; Kerlan, Robert K

    2012-04-01

    This study was designed to assess the incidence, magnitude, and impact upon retrievability of vena caval perforation by Günther Tulip and Celect conical inferior vena cava (IVC) filters on computed tomographic (CT) imaging. Günther Tulip and Celect IVC filters placed between July 2007 and May 2009 were identified from medical records. Of 272 IVC filters placed, 50 (23 Günther Tulip, 46%; 27 Celect, 54%) were retrospectively assessed on follow-up abdominal CT scans performed for reasons unrelated to the filter. Computed tomography scans were examined for evidence of filter perforation through the vena caval wall, tilt, or pericaval tissue injury. Procedure records were reviewed to determine whether IVC filter retrieval was attempted and successful. Perforation of at least one filter component through the IVC was observed in 43 of 50 (86%) filters on CT scans obtained between 1 and 880 days after filter placement. All filters imaged after 71 days showed some degree of vena caval perforation, often as a progressive process. Filter tilt was seen in 20 of 50 (40%) filters, and all tilted filters also demonstrated vena caval perforation. Transjugular removal was attempted in 12 of 50 (24%) filters and was successful in 11 of 12 (92%). Longer indwelling times usually result in vena caval perforation by retrievable Günther Tulip and Celect IVC filters. Although infrequently reported in the literature, clinical sequelae from IVC filter components breaching the vena cava can be significant. We advocate filter retrieval as early as clinically indicated and increased attention to the appearance of IVC filters on all follow-up imaging studies.

  7. Perforation of the IVC: Rule Rather Than Exception After Longer Indwelling Times for the Guenther Tulip and Celect Retrievable Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durack, Jeremy C., E-mail: jeremy.durack@ucsf.edu; Westphalen, Antonio C.; Kekulawela, Stephanie

    Purpose: This study was designed to assess the incidence, magnitude, and impact upon retrievability of vena caval perforation by Guenther Tulip and Celect conical inferior vena cava (IVC) filters on computed tomographic (CT) imaging. Methods: Guenther Tulip and Celect IVC filters placed between July 2007 and May 2009 were identified from medical records. Of 272 IVC filters placed, 50 (23 Guenther Tulip, 46%; 27 Celect, 54%) were retrospectively assessed on follow-up abdominal CT scans performed for reasons unrelated to the filter. Computed tomography scans were examined for evidence of filter perforation through the vena caval wall, tilt, or pericaval tissue injury. Procedure records were reviewed to determine whether IVC filter retrieval was attempted and successful. Results: Perforation of at least one filter component through the IVC was observed in 43 of 50 (86%) filters on CT scans obtained between 1 and 880 days after filter placement. All filters imaged after 71 days showed some degree of vena caval perforation, often as a progressive process. Filter tilt was seen in 20 of 50 (40%) filters, and all tilted filters also demonstrated vena caval perforation. Transjugular removal was attempted in 12 of 50 (24%) filters and was successful in 11 of 12 (92%). Conclusions: Longer indwelling times usually result in vena caval perforation by retrievable Guenther Tulip and Celect IVC filters. Although infrequently reported in the literature, clinical sequelae from IVC filter components breaching the vena cava can be significant. We advocate filter retrieval as early as clinically indicated and increased attention to the appearance of IVC filters on all follow-up imaging studies.

  8. Signatures of hypermassive neutron star lifetimes on r-process nucleosynthesis in the disc ejecta from neutron star mergers

    NASA Astrophysics Data System (ADS)

    Lippuner, Jonas; Fernández, Rodrigo; Roberts, Luke F.; Foucart, Francois; Kasen, Daniel; Metzger, Brian D.; Ott, Christian D.

    2017-11-01

    We investigate the nucleosynthesis of heavy elements in the winds ejected by accretion discs formed in neutron star mergers. We compute the element formation in disc outflows from hypermassive neutron star (HMNS) remnants of variable lifetime, including the effect of angular momentum transport in the disc evolution. We employ long-term axisymmetric hydrodynamic disc simulations to model the ejecta, and compute r-process nucleosynthesis with tracer particles using a nuclear reaction network containing ∼8000 species. We find that the previously known strong correlation between HMNS lifetime, ejected mass and average electron fraction in the outflow is directly related to the amount of neutrino irradiation on the disc, which dominates mass ejection at early times in the form of a neutrino-driven wind. Production of lanthanides and actinides saturates at short HMNS lifetimes (≲10 ms), with additional ejecta contributing to a blue optical kilonova component for longer-lived HMNSs. We find good agreement between the abundances from the disc outflow alone and the solar r-process distribution only for short HMNS lifetimes (≲10 ms). For longer lifetimes, the rare-earth and third r-process peaks are significantly underproduced compared to the solar pattern, requiring additional contributions from the dynamical ejecta. The nucleosynthesis signature from a spinning black hole (BH) can only overlap with that from an HMNS of moderate lifetime (≲60 ms). Finally, we show that angular momentum transport not only contributes with a late-time outflow component, but that it also enhances the neutrino-driven component by moving material to shallower regions of the gravitational potential, in addition to providing additional heating.

  9. Decision support in psychiatry – a comparison between the diagnostic outcomes using a computerized decision support system versus manual diagnosis

    PubMed Central

    Bergman, Lars G; Fors, Uno GH

    2008-01-01

    Background Correct diagnosis in psychiatry may be improved by novel diagnostic procedures. Computerized Decision Support Systems (CDSS) are suggested to be able to improve diagnostic procedures, but some studies indicate possible problems. Therefore, it could be important to investigate CDSS systems with regard to their feasibility to improve diagnostic procedures as well as to save time. Methods This study was undertaken to compare the traditional 'paper and pencil' diagnostic method SCID1 with the computer-aided diagnostic system CB-SCID1 to ascertain processing time and accuracy of diagnoses suggested. 63 clinicians volunteered to participate in the study and to solve two paper-based cases either with a CDSS or manually. Results No major difference between paper and pencil and computer-supported diagnosis was found. Where a difference was found, it was in favour of paper and pencil. For example, a significantly shorter time was found for paper and pencil for the difficult case, as compared to computer support. A significantly higher number of correct diagnoses was found in the difficult case for the diagnosis 'Depression' using the paper and pencil method. Although a majority of the clinicians found the computer method supportive and easy to use, it took a longer time and yielded fewer correct diagnoses than paper and pencil. Conclusion This study could not detect any major difference in diagnostic outcome between traditional paper and pencil methods and computer support for psychiatric diagnosis. Where there were significant differences, traditional paper and pencil methods were better than the tested CDSS, and thus we conclude that CDSS for diagnostic procedures may interfere with diagnostic accuracy. A limitation was that most clinicians had not previously used the CDSS system under study. The results of this study, however, confirm that CDSS development for diagnostic purposes in psychiatry has much to deal with before it can be used for routine clinical purposes. PMID:18261222

  10. Music and Sound in Time Processing of Children with ADHD

    PubMed Central

    Carrer, Luiz Rogério Jorgensen

    2015-01-01

    ADHD involves cognitive and behavioral aspects with impairments in many environments of children and their families’ lives. Music, with its playful, spontaneous, affective, motivational, temporal, and rhythmic dimensions, can be of great help for studying the aspects of time processing in ADHD. In this article, we studied time processing with simple sounds and music in children with ADHD, with the hypothesis that children with ADHD perform differently from children with normal development in tasks of time estimation and production. The main objective was to develop sound and musical tasks to evaluate and correlate the performance of children with ADHD, with and without methylphenidate, compared to a control group with typical development. The study involved 36 participants of age 6–14 years, recruited at NANI-UNIFESP/SP, subdivided into three groups with 12 children in each. Data were collected through a musical keyboard using Logic Audio Software 9.0 on a computer that recorded the participant’s performance in the tasks. Tasks were divided into sections: spontaneous time production, time estimation with simple sounds, and time estimation with music. Results: (1) the performance of the ADHD groups in temporal estimation of simple sounds at short time intervals (30 ms) was statistically lower than that of the control group (p < 0.05); (2) in the task comparing musical excerpts of the same duration (7 s), the ADHD groups considered the tracks longer when the musical notes had longer durations, while in the control group, the duration was related to the density of musical notes in the track. The positive average performance observed in the three groups in most tasks perhaps indicates the possibility that music can, in some way, positively modulate the symptoms of inattention in ADHD. PMID:26441688

  11. Music and Sound in Time Processing of Children with ADHD.

    PubMed

    Carrer, Luiz Rogério Jorgensen

    2015-01-01

    ADHD involves cognitive and behavioral aspects with impairments in many environments of children and their families' lives. Music, with its playful, spontaneous, affective, motivational, temporal, and rhythmic dimensions, can be of great help for studying the aspects of time processing in ADHD. In this article, we studied time processing with simple sounds and music in children with ADHD, with the hypothesis that children with ADHD perform differently from children with normal development in tasks of time estimation and production. The main objective was to develop sound and musical tasks to evaluate and correlate the performance of children with ADHD, with and without methylphenidate, compared to a control group with typical development. The study involved 36 participants of age 6-14 years, recruited at NANI-UNIFESP/SP, subdivided into three groups with 12 children in each. Data were collected through a musical keyboard using Logic Audio Software 9.0 on a computer that recorded the participant's performance in the tasks. Tasks were divided into sections: spontaneous time production, time estimation with simple sounds, and time estimation with music. (1) The performance of the ADHD groups in temporal estimation of simple sounds at short time intervals (30 ms) was statistically lower than that of the control group (p < 0.05); (2) in the task comparing musical excerpts of the same duration (7 s), the ADHD groups considered the tracks longer when the musical notes had longer durations, while in the control group, the duration was related to the density of musical notes in the track. The positive average performance observed in the three groups in most tasks perhaps indicates the possibility that music can, in some way, positively modulate the symptoms of inattention in ADHD.

  12. Helicopter time-domain electromagnetic numerical simulation based on Leapfrog ADI-FDTD

    NASA Astrophysics Data System (ADS)

    Guan, S.; Ji, Y.; Li, D.; Wu, Y.; Wang, A.

    2017-12-01

    We present a three-dimensional (3D) leapfrog alternating-direction-implicit finite-difference time-domain (Leapfrog ADI-FDTD) method for the simulation of helicopter time-domain electromagnetic (HTEM) detection. This method is different from the traditional explicit FDTD or ADI-FDTD. Compared with explicit FDTD, the leapfrog ADI-FDTD algorithm is no longer limited by the Courant-Friedrichs-Lewy (CFL) condition, so a longer time step can be used. Compared with ADI-FDTD, we reduce the number of equations from 12 to 6, and the leapfrog ADI-FDTD method is easier to use for general simulation. First, we determine initial conditions, which are adopted from the existing method presented by Wang and Tripp (1993). Second, we derive the Maxwell equations using a new finite difference formulation by the leapfrog ADI-FDTD method; the purpose is to eliminate the sub-time step and retain the unconditional stability characteristics. Third, we add the convolutional perfectly matched layer (CPML) absorbing boundary condition into the leapfrog ADI-FDTD simulation and study the absorbing effect of different parameters; different parameters affect the absorbing ability, and we found suitable values after many numerical experiments. Fourth, we compare the response with a 1-D numerical result for a homogeneous half-space to verify the correctness of our algorithm. When the model contains 107*107*53 grid points and the conductivity is 0.05 S/m, the results show that leapfrog ADI-FDTD needs less simulation time and computer storage space than ADI-FDTD: the calculation time decreases by nearly a factor of four, and memory occupation decreases by about 32.53%. Thus, this algorithm is more efficient than the conventional ADI-FDTD method for HTEM detection, and is more precise than explicit FDTD at late times.
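
    The CFL limit that the leapfrog ADI-FDTD scheme escapes is easy to illustrate: for an explicit 3-D FDTD update the stable time step is bounded by the cell size and the wave speed, whereas an ADI scheme is unconditionally stable and can take a step chosen for accuracy alone. A rough sketch follows; the grid spacing and the ADI step multiplier are arbitrary illustrative values, not the paper's settings.

        import math

        c = 3.0e8                      # wave speed used in the explicit update (m/s)
        dx = dy = dz = 10.0            # illustrative grid spacings (m)

        # Explicit FDTD: Courant-Friedrichs-Lewy stability bound in 3-D
        dt_cfl = 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

        # Leapfrog ADI-FDTD is unconditionally stable, so the step is set by accuracy,
        # e.g. some multiple of the CFL step (the factor 5 here is purely illustrative).
        dt_adi = 5.0 * dt_cfl

        print(f"explicit FDTD dt <= {dt_cfl:.3e} s, ADI-FDTD dt = {dt_adi:.3e} s")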

  13. An efficient user-oriented method for calculating compressible flow in and about three-dimensional inlets. [panel method]

    NASA Technical Reports Server (NTRS)

    Hess, J. L.; Mack, D. P.; Stockman, N. O.

    1979-01-01

    A panel method is used to calculate incompressible flow about arbitrary three-dimensional inlets, with or without centerbodies, for four fundamental flow conditions: unit onset flows parallel to each of the coordinate axes plus static operation. The computing time is scarcely longer than for a single solution. A linear superposition of these solutions quite rigorously gives incompressible flow about the inlet for any angle of attack, angle of yaw, and mass flow rate. Compressibility is accounted for by applying a well-proven correction to the incompressible flow. Since the computing times for the combination and the compressibility correction are small, flows at a large number of inlet operating conditions are obtained rather cheaply. Geometric input is aided by an automatic generating program. A number of graphical output features are provided to aid the user, including surface streamline tracing and automatic generation of curves of constant pressure, Mach number, and flow inclination at selected inlet cross sections. The inlet method and use of the program are described. Illustrative results are presented.
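
    The superposition step described above, combining the four fundamental unit solutions (onset flows along each axis plus static operation) to obtain the flow at any angle of attack, yaw and mass-flow rate, can be sketched as a weighted sum of surface-velocity fields. The weights below (free-stream components and a mass-flow scaling) and the placeholder panel data are an illustrative reconstruction, not the program's actual input format.

        import math
        import numpy as np

        # Hypothetical surface-velocity fields (one 3-vector per surface panel) for the
        # four fundamental solutions: unit onset flow along x, y, z, and static operation.
        n_panels = 1000
        v_x, v_y, v_z, v_static = (np.random.rand(n_panels, 3) for _ in range(4))

        def combined_surface_velocity(v_inf, alpha_deg, beta_deg, mdot_scale):
            """Superpose the unit solutions for a given onset flow and mass-flow rate."""
            a, b = math.radians(alpha_deg), math.radians(beta_deg)
            # Components of the onset flow along the coordinate axes
            wx = v_inf * math.cos(a) * math.cos(b)
            wy = v_inf * math.sin(b)
            wz = v_inf * math.sin(a) * math.cos(b)
            return wx * v_x + wy * v_y + wz * v_z + mdot_scale * v_static

        v_surface = combined_surface_velocity(v_inf=80.0, alpha_deg=5.0, beta_deg=2.0, mdot_scale=1.2)
        print(v_surface.shape)  # one combined velocity vector per panel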

  14. Ab initio molecular dynamics simulation of LiBr association in water

    NASA Astrophysics Data System (ADS)

    Izvekov, Sergei; Philpott, Michael R.

    2000-12-01

    A computationally economical scheme which unifies the density functional description of an ionic solute and the classical description of a solvent was developed. The density functional part of the scheme comprises Car-Parrinello and related formalisms. The substantial saving in the computer time is achieved by performing the ab initio molecular dynamics of the solute electronic structure in a relatively small basis set constructed from lowest energy Kohn-Sham orbitals calculated for a single anion in vacuum, instead of using plane wave basis. The methodology permits simulation of an ionic solution for longer time scales while keeping accuracy in the prediction of the solute electronic structure. As an example the association of the Li+-Br- ion-pair system in water is studied. The results of the combined molecular dynamics simulation are compared with that obtained from the classical simulation with ion-ion interaction described by the pair potential of Born-Huggins-Mayer type. The comparison reveals an important role played by the polarization of the Br- ion in the dynamics of ion pair association.

  15. High-Fidelity Single-Shot Toffoli Gate via Quantum Control.

    PubMed

    Zahedinejad, Ehsan; Ghosh, Joydip; Sanders, Barry C

    2015-05-22

    A single-shot Toffoli, or controlled-controlled-not, gate is desirable for classical and quantum information processing. The Toffoli gate alone is universal for reversible computing and, accompanied by the Hadamard gate, forms a universal gate set for quantum computing. The Toffoli gate is also a key ingredient for (nontopological) quantum error correction. Currently Toffoli gates are achieved by decomposing into sequentially implemented single- and two-qubit gates, which require much longer times and yields lower overall fidelities compared to a single-shot implementation. We develop a quantum-control procedure to construct a single-shot Toffoli gate for three nearest-neighbor-coupled superconducting transmon systems such that the fidelity is 99.9% and is as fast as an entangling two-qubit gate under the same realistic conditions. The gate is achieved by a nongreedy quantum control procedure using our enhanced version of the differential evolution algorithm.
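
    For reference, the controlled-controlled-not acts on three qubits by flipping the target only when both controls are 1; the 8x8 unitary can be written down directly, as in the short check below. This is a generic sketch of the gate's action, unrelated to the superconducting-transmon pulse shapes the paper optimizes.

        import numpy as np

        # Toffoli (CCNOT): identity on all basis states except |110> <-> |111>
        toffoli = np.eye(8)
        toffoli[6, 6] = toffoli[7, 7] = 0.0
        toffoli[6, 7] = toffoli[7, 6] = 1.0

        # Apply to the basis state |110> (qubit order: control, control, target)
        state = np.zeros(8)
        state[0b110] = 1.0
        out = toffoli @ state
        print(np.argmax(out) == 0b111)                              # True: target flipped, both controls are 1
        print(np.allclose(toffoli @ toffoli.conj().T, np.eye(8)))   # unitarity check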

  16. A computationally efficient modelling of laminar separation bubbles

    NASA Technical Reports Server (NTRS)

    Maughmer, Mark D.

    1988-01-01

    The goal of this research is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. To this end, a model of the bubble is under development and will be incorporated in the analysis section of the Eppler and Somers program. As a first step in this direction, an existing bubble model was inserted into the program. It was decided to address the problem of the short bubble before attempting the prediction of the long bubble. In addition, an integral boundary-layer method is believed to be more desirable than a finite-difference approach: while these two methods achieve similar prediction accuracy, finite-difference methods tend to involve significantly longer computer run times than the integral methods. Finally, as the boundary-layer analysis in the Eppler and Somers program employs the momentum and kinetic energy integral equations, a short-bubble model compatible with these equations is most preferable.

  17. Acceleration of the Particle Swarm Optimization for Peierls-Nabarro modeling of dislocations in conventional and high-entropy alloys

    NASA Astrophysics Data System (ADS)

    Pei, Zongrui; Eisenbach, Markus

    2017-06-01

    Dislocations are among the most important defects in determining the mechanical properties of both conventional alloys and high-entropy alloys. The Peierls-Nabarro model supplies an efficient pathway to their geometries and mobility. The difficulty in solving the integro-differential Peierls-Nabarro equation is how to effectively avoid the local minima in the energy landscape of a dislocation core. Among the methods available to optimize dislocation core structures, we choose Particle Swarm Optimization, an algorithm that simulates the social behavior of organisms. By employing more particles (a bigger swarm) and more iterative steps (allowing them to explore for a longer time), local minima can be effectively avoided, though at the price of more computational cost. The advantage of this algorithm is that it is readily parallelized on modern high-performance computing architectures. We demonstrate that the performance of our parallelized algorithm scales linearly with the number of employed cores.
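
    As a concrete reminder of how Particle Swarm Optimization trades extra particles and iterations for a better chance of escaping local minima, here is a minimal, generic PSO loop on a toy multi-minimum objective. The inertia and acceleration constants are common textbook defaults, not the parameters used for the Peierls-Nabarro energy functional.

        import numpy as np

        def pso_minimize(f, dim, n_particles=40, n_iters=200, bounds=(-5.0, 5.0), seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n_particles, dim))            # particle positions
            v = np.zeros_like(x)                                    # particle velocities
            pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
            gbest = pbest[np.argmin(pbest_f)].copy()
            w, c1, c2 = 0.7, 1.5, 1.5                               # inertia and acceleration weights
            for _ in range(n_iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                fx = np.apply_along_axis(f, 1, x)
                improved = fx < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], fx[improved]
                gbest = pbest[np.argmin(pbest_f)].copy()
            return gbest, pbest_f.min()

        # Rastrigin function: many local minima; larger swarms/iterations find the global one more reliably
        rastrigin = lambda z: 10 * len(z) + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
        best_x, best_f = pso_minimize(rastrigin, dim=5)
        print(best_f)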

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivasseau, Vincent, E-mail: vincent.rivasseau@th.u-psud.fr; Tanasa, Adrian, E-mail: adrian.tanasa@ens-lyon.org

    The Loop Vertex Expansion (LVE) is a quantum field theory (QFT) method which explicitly computes the Borel sum of Feynman perturbation series. This LVE relies in a crucial way on symmetric tree weights which define a measure on the set of spanning trees of any connected graph. In this paper we generalize this method by defining new tree weights. They depend on the choice of a partition of a set of vertices of the graph, and when the partition is non-trivial, they are no longer symmetric under permutation of vertices. Nevertheless we prove they have the required positivity property to lead to a convergent LVE; in fact we formulate this positivity property precisely for the first time. Our generalized tree weights are inspired by the Brydges-Battle-Federbush work on cluster expansions and could be particularly suited to the computation of connected functions in QFT. Several concrete examples are explicitly given.

  19. Quantum information processing with superconducting circuits: a review.

    PubMed

    Wendin, G

    2017-10-01

    During the last ten years, superconducting circuits have passed from being interesting physical devices to becoming contenders for near-future useful and scalable quantum information processing (QIP). Advanced quantum simulation experiments have been shown with up to nine qubits, while a demonstration of quantum supremacy with fifty qubits is anticipated in just a few years. Quantum supremacy means that the quantum system can no longer be simulated by the most powerful classical supercomputers. Integrated classical-quantum computing systems are already emerging that can be used for software development and experimentation, even via web interfaces. Therefore, the time is ripe for describing some of the recent development of superconducting devices, systems and applications. As such, the discussion of superconducting qubits and circuits is limited to devices that are proven useful for current or near future applications. Consequently, the centre of interest is the practical applications of QIP, such as computation and simulation in Physics and Chemistry.

  20. Quantum information processing with superconducting circuits: a review

    NASA Astrophysics Data System (ADS)

    Wendin, G.

    2017-10-01

    During the last ten years, superconducting circuits have passed from being interesting physical devices to becoming contenders for near-future useful and scalable quantum information processing (QIP). Advanced quantum simulation experiments have been shown with up to nine qubits, while a demonstration of quantum supremacy with fifty qubits is anticipated in just a few years. Quantum supremacy means that the quantum system can no longer be simulated by the most powerful classical supercomputers. Integrated classical-quantum computing systems are already emerging that can be used for software development and experimentation, even via web interfaces. Therefore, the time is ripe for describing some of the recent development of superconducting devices, systems and applications. As such, the discussion of superconducting qubits and circuits is limited to devices that are proven useful for current or near future applications. Consequently, the centre of interest is the practical applications of QIP, such as computation and simulation in Physics and Chemistry.

  1. The Relationship of Obesity to Increasing Health-Care Burden in the Setting of Orthopaedic Polytrauma.

    PubMed

    Licht, Heather; Murray, Mark; Vassaur, John; Jupiter, Daniel C; Regner, Justin L; Chaput, Christopher D

    2015-11-18

    With the rise of obesity in the American population, there has been a proportionate increase of obesity in the trauma population. The purpose of this study was to use a computed tomography-based measurement of adiposity to determine if obesity is associated with an increased burden to the health-care system in patients with orthopaedic polytrauma. A prospective comprehensive trauma database at a level-I trauma center was utilized to identify 301 patients with polytrauma who had orthopaedic injuries and intensive care unit admission from 2006 to 2011. Routine thoracoabdominal computed tomographic scans allowed for measurement of the truncal adiposity volume. The truncal three-dimensional reconstruction body mass index was calculated from the computed tomography-based volumes based on a previously validated algorithm. A truncal three-dimensional reconstruction body mass index of <30 kg/m² denoted non-obese patients and ≥30 kg/m² denoted obese patients. The need for an orthopaedic surgical procedure, in-hospital mortality, length of stay, hospital charges, and discharge disposition were compared between the two groups. Of the 301 patients, 21.6% were classified as obese (truncal three-dimensional reconstruction body mass index of ≥30 kg/m²). A higher truncal three-dimensional reconstruction body mass index was associated with longer hospital length of stay (p = 0.02), more days spent in the intensive care unit (p = 0.03), more frequent discharge to a long-term care facility (p < 0.0002), a higher rate of orthopaedic surgical intervention (p < 0.01), and increased total hospital charges (p < 0.001). Computed tomographic scans, routinely obtained at the time of admission, can be utilized to calculate truncal adiposity and to investigate the impact of obesity on patients with polytrauma. Obese patients were found to have higher total hospital charges, longer hospital stays, more frequent discharge to a continuing-care facility, and a higher rate of orthopaedic surgical intervention. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.

  2. Quantitative ROESY analysis of computational models: structural studies of citalopram and β-cyclodextrin complexes by (1)H-NMR and computational methods.

    PubMed

    Ali, Syed Mashhood; Shamim, Shazia

    2015-07-01

    Complexation of racemic citalopram with β-cyclodextrin (β-CD) in aqueous medium was investigated to determine the atom-accurate structure of the inclusion complexes. (1)H-NMR chemical shift change data of β-CD cavity protons in the presence of citalopram confirmed the formation of 1 : 1 inclusion complexes. The ROESY spectrum confirmed the presence of an aromatic ring in the β-CD cavity, but it was not clear whether this was one of the two rings or both. Molecular mechanics and molecular dynamics calculations showed entry of the fluoro-ring from the wider side of the β-CD cavity as the most favored mode of inclusion. Minimum-energy computational models were analyzed for their accuracy in atomic coordinates by comparison of calculated and experimental intermolecular ROESY peak intensities, which were not found to be in agreement. Several least-energy computational models were refined and analyzed until calculated and experimental intensities were compatible. The results demonstrate that computational models of CD complexes need to be analyzed for atom-accuracy, and quantitative ROESY analysis is a promising method. Moreover, the study also validates that the quantitative use of ROESY is feasible even with longer mixing times if peak intensity ratios instead of absolute intensities are used. Copyright © 2015 John Wiley & Sons, Ltd.
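
    The quantitative comparison described above rests on the approximately r^-6 dependence of cross-peak intensity on interproton distance in the initial-rate regime, which is why intensity ratios against a reference peak are convenient. A minimal sketch of that ratio comparison follows; the proton-pair labels, distances and intensities are invented placeholders, not values from the paper.

        # Compare experimental ROESY intensity ratios with ratios predicted from a
        # computational model under the initial-rate / isolated spin-pair approximation,
        # where cross-peak intensity scales roughly as r**-6.
        model_distances = {"H3cd-Har1": 2.6, "H5cd-Har1": 3.1, "H3cd-Har2": 3.8}     # angstrom (assumed)
        exp_intensities = {"H3cd-Har1": 1.00, "H5cd-Har1": 0.36, "H3cd-Har2": 0.10}  # relative (assumed)

        ref = "H3cd-Har1"
        calc_ratio = {k: (model_distances[ref] / d) ** 6 for k, d in model_distances.items()}
        exp_ratio = {k: i / exp_intensities[ref] for k, i in exp_intensities.items()}

        for k in model_distances:
            print(f"{k}: calc {calc_ratio[k]:.2f} vs exp {exp_ratio[k]:.2f}")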

  3. Modeling the transport of nitrogen in an NPP-2006 reactor circuit

    NASA Astrophysics Data System (ADS)

    Stepanov, O. E.; Galkin, I. Yu.; Sledkov, R. M.; Melekh, S. S.; Strebnev, N. A.

    2016-07-01

    Efficient radiation protection of the public and personnel requires detecting an accident-initiating event quickly. Specifically, if a heat-exchange tube in a steam generator is ruptured, the ¹⁶N radioactive nitrogen isotope, which contributes to a sharp increase in the steam activity before the turbine, may serve as the signaling component. This isotope is produced in the core coolant and is transported along the circulation circuit. The aim of the present study was to model the transport of ¹⁶N in the primary and the secondary circuits of a VVER-1000 reactor facility (RF) under nominal operation conditions. KORSAR/GP and RELAP5/Mod.3.2 codes were used to perform the calculations. Computational models incorporating the major components of the primary and the secondary circuits of an NPP-2006 RF were constructed. These computational models were subjected to cross-verification, and the calculation results were compared to the experimental data on the distribution of the void fraction over the steam generator height. The models were proven to be valid. It was found that the time of nitrogen transport from the core to the heat-exchange tube leak was no longer than 1 s under RF operation at a power level of 100% N_nom with all primary circuit pumps activated. The time of nitrogen transport from the leak to the γ-radiation detection unit under the same operating conditions was no longer than 9 s, and the nitrogen concentration in steam was no less than 1.4% (by mass) of its concentration at the reactor outlet. These values were obtained using conservative approaches to estimating the leak flow and the transport time, but the radioactive decay of nitrogen was not taken into account. Further research concerned with the calculation of thermohydraulic processes should be focused on modeling the transport of nitrogen under RF operation with some primary circuit pumps deactivated.

  4. Does It Matter Whether One Takes a Test on an iPad or a Desktop Computer?

    ERIC Educational Resources Information Center

    Ling, Guangming

    2016-01-01

    To investigate possible iPad related mode effect, we tested 403 8th graders in Indiana, Maryland, and New Jersey under three mode conditions through random assignment: a desktop computer, an iPad alone, and an iPad with an external keyboard. All students had used an iPad or computer for six months or longer. The 2-hour test included reading, math,…

  5. A MATLAB-based graphical user interface program for computing functionals of the geopotential up to ultra-high degrees and orders

    NASA Astrophysics Data System (ADS)

    Bucha, Blažej; Janák, Juraj

    2013-07-01

    We present a novel graphical user interface program GrafLab (GRAvity Field LABoratory) for spherical harmonic synthesis (SHS) created in MATLAB®. This program allows the user to comfortably compute 38 various functionals of the geopotential up to ultra-high degrees and orders of spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches according to the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); (iii) the extended-range arithmetic (up to an arbitrary maximum degree). For the maximum degree 2190, the SHS with fnALFs evaluated using the extended-range arithmetic approach takes only approximately 2-3 times longer than its standard arithmetic counterpart, i.e. the standard forward column method. In GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, while the input coordinates can either be read from a data file or entered manually. For the computation on a regular grid, we decided to apply the lumped coefficients approach due to the significant time-efficiency of this method. Furthermore, if a full variance-covariance matrix of spherical harmonic coefficients is available, it is possible to compute the commission errors of the functionals. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
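
    The "standard forward column method" mentioned for the fnALFs is a short two-term recursion over degree for each fixed order. A compact sketch is shown below (geodesy 4π normalization; plain 64-bit floats, which is why this approach breaks down at very high degree and GrafLab switches to extended-range arithmetic there). It is a generic reconstruction of the recursion, not GrafLab code.

        import math

        def fnalf_column(m, n_max, theta):
            """Fully normalized associated Legendre functions P_nm(cos theta) for a fixed
            order m and degrees m..n_max, via the standard forward column recursion."""
            t, u = math.cos(theta), math.sin(theta)
            # Sectoral seed: P_00 = 1, P_11 = sqrt(3) u, P_mm = u sqrt((2m+1)/(2m)) P_{m-1,m-1}
            p_mm = 1.0
            for k in range(1, m + 1):
                p_mm *= u * (math.sqrt(3.0) if k == 1 else math.sqrt((2 * k + 1) / (2 * k)))
            values = {m: p_mm}
            p_prev2, p_prev1 = 0.0, p_mm
            for n in range(m + 1, n_max + 1):
                a = math.sqrt((2 * n - 1) * (2 * n + 1) / ((n - m) * (n + m)))
                b = math.sqrt((2 * n + 1) * (n + m - 1) * (n - m - 1)
                              / ((n - m) * (n + m) * (2 * n - 3)))
                p_n = a * t * p_prev1 - b * p_prev2
                values[n] = p_n
                p_prev2, p_prev1 = p_prev1, p_n
            return values

        # Check: P_20(cos 60 deg) should equal sqrt(5) * (3 cos^2 60 - 1) / 2
        vals = fnalf_column(m=0, n_max=2, theta=math.radians(60.0))
        print(vals[2], math.sqrt(5) * (3 * 0.25 - 1) / 2)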

  6. The multimedia computer for low-literacy patient education: a pilot project of cancer risk perceptions.

    PubMed

    Wofford, J L; Currin, D; Michielutte, R; Wofford, M M

    2001-04-20

    Inadequate reading literacy is a major barrier to better educating patients. Despite its high prevalence, practical solutions for detecting and overcoming low literacy in a busy clinical setting remain elusive. In exploring the potential role for the multimedia computer in improving office-based patient education, we compared the accuracy of information captured from audio-computer interviewing of patients with that obtained from subsequent verbal questioning. The study took place in an adult medicine clinic at an urban community health center, with a convenience sample of patients awaiting clinic appointments (n = 59). Exclusion criteria included obvious psychoneurologic impairment or a primary language other than English. The intervention was a multimedia computer presentation that used audio-computer interviewing with localized imagery and voices to elicit responses to 4 questions on prior computer use and cancer risk perceptions. Three patients refused or were unable to interact with the computer at all, and 3 patients required restarting the presentation from the beginning but ultimately completed the computerized survey. Of the 51 evaluable patients (72.5% African-American, 66.7% female, mean age 47.5 [+/- 18.1]), the mean time in the computer presentation was significantly longer with older age and with no prior computer use, but did not differ by gender or race. Despite a high proportion of patients with no prior computer use (60.8%), there was a high rate of agreement (88.7% overall) between audio-computer interviewing and subsequent verbal questioning. Audio-computer interviewing is feasible in this urban community health center. The computer offers a partial solution for overcoming literacy barriers inherent in written patient education materials and provides an efficient means of data collection that can be used to better target patients' educational needs.

  7. Evolution of a standard microprocessor-based space computer

    NASA Technical Reports Server (NTRS)

    Fernandez, M.

    1980-01-01

    An existing in-inventory computer hardware/software package (B-1 RFS/ECM) was repackaged and applied to multiple missile/space programs. Concurrent with the application efforts, low-risk modifications were made to the computer from program to program to take advantage of newer, advanced technology and to meet increasingly more demanding requirements (computational and memory capabilities, longer life, and fault-tolerant autonomy). It is concluded that microprocessors hold promise in a number of critical areas for future space computer applications. However, the benefits of the DoD VHSIC Program are required, and the old proliferation problem must be revisited.

  8. Dose-time relationships for post-irradiation cutaneous telangiectasia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, L.; Ubaldi, S.E.

    1977-01-01

    Seventy-five patients who had received electron beam radiation a year or more previously were studied. The irradiated skin portals were photographed and late reactions graded in terms of the number and severity of telangiectatic lesions observed. The skin dose, number of fractions, overall treatment time and irradiated volume were recorded in each case. A Strandqvist-type iso-effect line was derived for this response. A multi-probit search program also was used to derive best-fitting cell population kinetic parameters for the same data. From these parameters a comprehensive iso-effect table could be computed for a wide range of treatment schedules including daily treatment as well as fractionation at shorter and longer intervals; this provided a useful set of normal tissue tolerance limits for late effects.

  9. Axial to transverse energy mixing dynamics in octupole-based magnetostatic antihydrogen traps

    NASA Astrophysics Data System (ADS)

    Zhong, M.; Fajans, J.; Zukor, A. F.

    2018-05-01

    The nature of the trajectories of antihydrogen atoms confined in an octupole minimum-B trap is of great importance for upcoming spectroscopy, cooling, and gravity experiments. Of particular interest is the mixing time between the axial and transverse energies for the antiatoms. Here, using computer simulations, we establish that almost all trajectories are chaotic, and then quantify the characteristic mixing time between the axial and transverse energies. We find that there are two classes of trajectories: for trajectories whose axial energy is higher than about 20% of the total energy, the axial energy substantially mixes within about 10 s, whereas for trajectories whose axial energy is lower than about 10% of the total energy, the axial energy remains nearly constant for 1000 s or longer.

  10. Facilitating Analysis of Multiple Partial Data Streams

    NASA Technical Reports Server (NTRS)

    Maimone, Mark W.; Liebersbach, Robert R.

    2008-01-01

    Robotic Operations Automation: Mechanisms, Imaging, Navigation report Generation (ROAMING) is a set of computer programs that facilitates and accelerates both tactical and strategic analysis of time-sampled data, especially the disparate and often incomplete streams of Mars Exploration Rover (MER) telemetry data described in the immediately preceding article. As used here, tactical refers to activities over a relatively short time (one Martian day in the original MER application) and strategic refers to a longer time (the entire multi-year MER missions in the original application). Prior to installation, ROAMING must be configured with the types of data of interest, and parsers must be modified to understand the format of the input data (many example parsers are provided, including for general CSV files). Thereafter, new data from multiple disparate sources are automatically resampled into a single common annotated spreadsheet stored in a readable space-separated format, and these data can be processed or plotted at any time scale. Such processing or plotting makes it possible to study not only the details of a particular activity spanning only a few seconds, but also longer-term trends. ROAMING makes it possible to generate mission-wide plots of multiple engineering quantities [e.g., vehicle tilt as in Figure 1(a), motor current, numbers of images] that heretofore could be found only in thousands of separate files. ROAMING also supports automatic annotation of both images and graphs. In the MER application, labels given to terrain features by rover scientists and engineers are automatically plotted in all received images based on their associated camera models (see Figure 2), times measured in seconds are mapped to Mars local time, and command names or arbitrary time-labeled events can be used to label engineering plots, as in Figure 1(b).

  11. PDAs in Teacher Education: A Case Study Examining Mobile Technology Integration

    ERIC Educational Resources Information Center

    Franklin, Teresa; Sexton, Colleen; Lu, Young; Ma, Hongyan

    2007-01-01

    The classroom computer is no longer confined to a box on the desk. Mobile handheld computing devices have evolved into powerful and affordable learning tools. Handheld technologies are changing the way people access and work with information. The use of Personal Digital Assistants (PDAs or handhelds) has been an evolving part of the business world…

  12. Surgical problems and complex procedures: issues for operative time in robotic totally endoscopic coronary artery bypass grafting.

    PubMed

    Wiedemann, Dominik; Bonaros, Nikolaos; Schachner, Thomas; Weidinger, Felix; Lehr, Eric J; Vesely, Mark; Bonatti, Johannes

    2012-03-01

    Robotically assisted totally endoscopic coronary artery bypass grafting (TECAB) is a viable option for closed-chest coronary surgery, but it requires learning curves and longer operative times. This study evaluated the effect of extended operation times on the outcome of patients undergoing TECAB. From 2001 to 2009, 325 patients underwent TECAB with the da Vinci telemanipulation system. Correlations between operative times and preoperative, intraoperative, and early postoperative parameters were investigated. Receiver operating characteristic analysis was used to define the threshold of procedure duration above which intensive care unit stay and ventilation time were prolonged. Demographic data, intraoperative and postoperative parameters, and survival data were compared. Patients with prolonged operative times more often underwent multivessel revascularization (P < .001) and beating-heart TECAB (P = .023). Other preoperative parameters were not associated with longer operative times. Incidences of technical difficulties and conversions (P < .001) were higher among patients with longer operative times. Prolonged intensive care unit stay, mechanical ventilation, hospital stay, and requirement for blood products were associated with longer operative times. Receiver operating characteristic analysis showed operative times >445 minutes and >478 minutes to predict prolonged (>48 hours) intensive care unit stay and mechanical ventilation, respectively. Patients with procedures >478 minutes had longer hospital stays and higher perioperative morbidity and mortality. Kaplan-Meier analysis revealed decreased survival among patients with operative times >478 minutes. Multivessel revascularization and conversions lead to prolonged operative times in totally endoscopic coronary artery bypass grafting. Longer operative times significantly influence early postoperative and midterm outcomes. Copyright © 2012 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.

  13. Statistical Inference for Big Data Problems in Molecular Biophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramanathan, Arvind; Savol, Andrej; Burger, Virginia

    2012-01-01

    We highlight the role of statistical inference techniques in providing biological insights from analyzing long time-scale molecular simulation data. Technological and algorithmic improvements in computation have brought molecular simulations to the forefront of techniques applied to investigating the basis of living systems. While these longer simulations, increasingly complex and presently reaching petabyte scales, promise a detailed view into microscopic behavior, teasing out the important information has now become a true challenge on its own. Mining this data for important patterns is critical to automating therapeutic intervention discovery, improving protein design, and fundamentally understanding the mechanistic basis of cellular homeostasis.

  14. Optimal Trajectories For Orbital Transfers Using Low And Medium Thrust Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Cobb, Shannon S.

    1992-01-01

    For many problems it is reasonable to expect that the minimum time solution is also the minimum fuel solution. However, if one allows the propulsion system to be turned off and back on, it is clear that these two solutions may differ. In general, high thrust transfers resemble the well-known impulsive transfers where the burn arcs are of very short duration. The low and medium thrust transfers differ in that their thrust acceleration levels yield longer burn arcs which will require more revolutions, thus making the low thrust transfer computationally intensive. Here, we consider optimal low and medium thrust orbital transfers.

  15. An improved ant colony optimization algorithm with fault tolerance for job scheduling in grid computing systems

    PubMed Central

    Idris, Hajara; Junaidu, Sahalu B.; Adewumi, Aderemi O.

    2017-01-01

    The Grid scheduler schedules user jobs on the best available resource in terms of resource characteristics by optimizing job execution time. Resource failure in the Grid is no longer an exception but a regularly occurring event, as resources are increasingly being used by the scientific community to solve computationally intensive problems which typically run for days or even months. It is therefore absolutely essential that these long-running applications are able to tolerate failures and avoid re-computation from scratch after a resource failure has occurred, to satisfy the user's Quality of Service (QoS) requirement. Job scheduling with fault tolerance in Grid computing using Ant Colony Optimization is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper is the use of the resource failure rate, as well as a checkpoint-based rollback recovery strategy. Checkpointing aims at reducing the amount of work that is lost upon failure of the system by immediately saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented fault tolerance scheduling algorithm show that there is an improvement in the user's QoS requirement over the existing ACO algorithm, which has no fault tolerance integrated in it. The performance evaluation of the two algorithms was measured in terms of the three main scheduling performance metrics: makespan, throughput and average turnaround time. PMID:28545075
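
    The checkpoint-based rollback recovery mentioned above can be pictured as a job that periodically saves its progress and, after a resource failure, resumes from the last checkpoint instead of restarting from scratch. A toy simulation of the work saved follows; the failure probability, job length and checkpoint interval are arbitrary illustrative numbers, not the paper's experimental settings.

        import random

        def run_job(work_units, p_fail, checkpoint_every=None, seed=1):
            """Simulate executing a job unit by unit; each unit may hit a resource failure.
            Returns the total units of work actually spent, including redone work."""
            rng = random.Random(seed)
            done, spent, last_ckpt = 0, 0, 0
            while done < work_units:
                spent += 1
                if rng.random() < p_fail:                        # resource failure
                    done = last_ckpt if checkpoint_every else 0  # roll back, or restart from scratch
                else:
                    done += 1
                    if checkpoint_every and done % checkpoint_every == 0:
                        last_ckpt = done                         # state saved at the checkpoint
            return spent

        print("no fault tolerance :", run_job(200, p_fail=0.01))
        print("checkpoint every 20:", run_job(200, p_fail=0.01, checkpoint_every=20))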

  16. Computed tomography-guided screening of surfactant effect on blood circulation time of emulsions: application to the design of an emulsion formulation for paclitaxel.

    PubMed

    Lee, Eun-Hye; Hong, Soon-Seok; Kim, So Hee; Lee, Mi-Kyung; Lim, Joon Seok; Lim, Soo-Jeong

    2014-08-01

    In an effort to apply the imaging techniques currently used in disease diagnosis to monitoring the pharmacokinetics and biodisposition of particulate drug carriers, we sought to use computed tomography (CT) scanning to investigate the impact of surfactant on the blood residence time of emulsions. We prepared emulsions of the iodinated oil Lipiodol with different surfactant compositions and investigated the impact of surfactant on the blood residence time of the emulsions by CT scanning. The blood circulation time of the emulsions was prolonged by including Tween 80 or DSPE-PEG (polyethylene glycol 2000) in the emulsions. Tween 80 was less effective than DSPE-PEG in terms of the prolongation effect, but the blood circulation time of the emulsions was prolonged in a Tween 80 content-dependent manner. As a proof-of-concept demonstration of the usefulness of CT-guided screening in the process of formulating drugs that need to be loaded in emulsions, paclitaxel was loaded in emulsions prepared with 87 or 65% Tween 80-containing surfactant mixtures. A pharmacokinetics study showed that paclitaxel loaded in 87% Tween 80 emulsions circulated longer in the bloodstream than that in 65% Tween 80 emulsions, as predicted by CT imaging. CT-visible Lipiodol emulsions enabled simple evaluation of the effects of surfactant composition on the biodisposition of emulsions.

  17. Estimation of Geostrophic Currents in the Bay of Bengal using In-situ Observations.

    NASA Astrophysics Data System (ADS)

    T, V. R.

    2014-12-01

    Geostrophic Currents (GCs) can be estimated from temperature and salinity observations. In this study an attempt has been made to compute GCs using temperature and salinity observations from Expendable Bathythermograph (XBT) and CTD profiles over the Bay of Bengal (BoB). Although Argo observations have become available in recent years, they cover only a limited period at coarse temporal resolution, and in the BoB not enough simultaneous hydrographic temperature and salinity data are available with reasonable spatial resolution (~one degree) over a long period. To overcome the limitations of GCs computed from XBT profiles, temperature-salinity relationships derived from simultaneous temperature and salinity observations were used. We have demonstrated that GCs can be computed with an accuracy of better than 8.5 cm/s (root mean square error) at the surface using temperature from XBT and salinity from climatological records. This error reduces with increasing depth. Finally, we demonstrated the application of this approach to study the temporal variation of the GCs during 1992 to 2012 along an XBT transect.
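    For orientation, the classical dynamic method computes the geostrophic velocity between two hydrographic stations from the horizontal difference of dynamic height anomaly divided by the Coriolis parameter and station separation. The sketch below uses made-up dynamic heights, latitude, and spacing; it is only an illustration of the relation, not the computation performed in this study.

```python
# Minimal dynamic-method sketch: relative geostrophic velocity between two
# hydrographic stations from dynamic height anomaly (values below are made up).
import numpy as np

OMEGA = 7.2921e-5                        # Earth's rotation rate, rad/s

lat = 15.0                               # deg N, central latitude of the station pair
f = 2 * OMEGA * np.sin(np.radians(lat))  # Coriolis parameter

L = 100e3                                # station separation, m (assumed)
# Dynamic height anomaly (m^2/s^2) at several pressure levels, referenced to an
# assumed level of no motion at the deepest level; purely illustrative numbers.
D_A = np.array([4.0, 2.6, 1.3, 0.0])
D_B = np.array([4.6, 3.0, 1.4, 0.0])

v_rel = (D_B - D_A) / (f * L)            # m/s, sign depends on station ordering
print(np.round(v_rel, 3))
```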

  18. 3D transient electromagnetic simulation using a modified correspondence principle for wave and diffusion fields

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Ji, Y.; Egbert, G. D.

    2015-12-01

    The fictitious time domain (FTD) method, based on the correspondence principle for wave and diffusion fields, has been developed and used over the past few years primarily for marine electromagnetic (EM) modeling. Here we present results of our efforts to apply the FTD approach to land and airborne TEM problems; the approach can reduce computation time by several orders of magnitude while preserving high accuracy. In contrast to the marine case, where sources are in the conductive sea water, we must model the EM fields in the air; to allow for topography, air layers must be explicitly included in the computational domain. Furthermore, because sources for most TEM applications generally must be modeled as finite loops, it is useful to solve directly for the impulse response appropriate to the problem geometry, instead of the point-source Green functions typically used for marine problems. Our approach can be summarized as follows: (1) The EM diffusion equation is transformed to a fictitious wave equation. (2) The FTD wave equation is solved with an explicit finite difference time-stepping scheme, with CPML (Convolutional PML) boundary conditions for the whole computational domain including the air and earth, and with an FTD-domain source corresponding to the actual transmitter geometry. Resistivity of the air layers is kept as low as possible, as a compromise between efficiency (longer fictitious time step) and accuracy; we have generally found a host/air resistivity contrast of 10^-3 to be sufficient. (3) A "Modified" Fourier Transform (MFT) allows us to recover the system's impulse response from the fictitious time domain in the diffusion (frequency) domain. (4) The result is multiplied by the Fourier transform (FT) of the real source current, avoiding time-consuming convolutions in the time domain. (5) The inverse FT is employed to obtain the final full-waveform, full-time response of the system in the time domain. In general, this method can be used to efficiently solve most time-domain EM simulation problems for non-point sources.

  19. Long-term influence of asteroids on planet longitudes and chaotic dynamics of the solar system

    NASA Astrophysics Data System (ADS)

    Woillez, E.; Bouchet, F.

    2017-11-01

    Over timescales much longer than an orbital period, the solar system exhibits large-scale chaotic behavior and can thus be viewed as a stochastic dynamical system. The aim of the present paper is to compare different sources of stochasticity in the solar system. More precisely, we studied the importance of the long-term influence of asteroids on the chaotic dynamics of the solar system. We show that the effect of asteroids on the planets is similar to a white noise process when considered on a timescale much larger than the correlation time τϕ ≃ 10^4 yr of the asteroid trajectories. We computed the timescale τe after which the effects of the stochastic evolution of the asteroids lead to a loss of information about the initial conditions of the perturbed Laplace-Lagrange secular dynamics. The order of magnitude of this timescale is precisely determined by theoretical arguments, and we find that τe ≃ 10^4 Myr. Although comparable to the full main-sequence lifetime of the Sun, this timescale is considerably longer than the Lyapunov time τI ≃ 10 Myr of the solar system without asteroids. This shows that the external sources of chaos arise as a small perturbation in the stochastic secular behavior of the solar system, which is rather due to intrinsic chaos.

  20. Optimal remediation of unconfined aquifers: Numerical applications and derivative calculations

    NASA Astrophysics Data System (ADS)

    Mansfield, Christopher M.; Shoemaker, Christine A.

    1999-05-01

    This paper extends earlier work on derivative-based optimization for cost-effective remediation to unconfined aquifers, which have more complex, nonlinear flow dynamics than confined aquifers. Most previous derivative-based optimization of contaminant removal has been limited to consideration of confined aquifers; however, contamination is more common in unconfined aquifers. Exact derivative equations are presented, and two computationally efficient approximations, the quasi-confined (QC) and head independent from previous (HIP) unconfined-aquifer finite element equation derivative approximations, are presented and demonstrated to be highly accurate. The derivative approximations can be used with any nonlinear optimization method requiring derivatives for computation of either time-invariant or time-varying pumping rates. The QC and HIP approximations are combined with the nonlinear optimal control algorithm SALQR into the unconfined-aquifer algorithm, which is shown to compute solutions for unconfined aquifers in CPU times that were not significantly longer than those required by the confined-aquifer optimization model. Two of the three example unconfined-aquifer cases considered obtained pumping policies with substantially lower objective function values with the unconfined model than were obtained with the confined-aquifer optimization, even though the mean differences in hydraulic heads predicted by the unconfined- and confined-aquifer models were small (less than 0.1%). We suggest a possible geophysical index based on differences in drawdown predictions between unconfined- and confined-aquifer models to estimate which aquifers require unconfined-aquifer optimization and which can be adequately approximated by the simpler confined-aquifer analysis.

  1. Particle acceleration due to shocks in the interplanetary field: High time resolution data and simulation results

    NASA Technical Reports Server (NTRS)

    Kessel, R. L.; Armstrong, T. P.; Nuber, R.; Bandle, J.

    1985-01-01

    Data were examined from two experiments aboard the Explorer 50 (IMP 8) spacecraft. The Johns Hopkins University/Applied Physics Laboratory Charged Particle Measurement Experiment (CPME) provides 10.12-second-resolution ion and electron count rates as well as 5.5 minute or longer averages of the same, with data sampled in the ecliptic plane. The high time resolution of the data allows for an explicit, point-by-point merging of the magnetic field and particle data and thus a close examination of the pre- and post-shock conditions and particle fluxes associated with large-angle oblique shocks in the interplanetary field. A computer simulation has been developed wherein sample particle trajectories, taken from observed fluxes, are allowed to interact with a planar shock either forward or backward in time. One event, the 1974 Day 312 shock, is examined in detail.

  2. Comparative analysis of the modified enclosed energy metric for self-focusing holograms from digital lensless holographic microscopy.

    PubMed

    Trujillo, Carlos; Garcia-Sucerquia, Jorge

    2015-06-01

    A comparative analysis of the performance of the modified enclosed energy (MEE) method for self-focusing holograms recorded with digital lensless holographic microscopy is presented. Although the MEE method has been published previously, no extended analysis of its performance has been reported. We have tested the MEE in terms of the minimum axial distance allowed between the set of reconstructed holograms used to search for the focal plane and the elapsed time to obtain the focused image. These parameters have been compared with those of some of the methods already reported in the literature. The MEE achieves better results in terms of self-focusing quality but at a higher computational cost. Despite its longer processing time, the method remains fast enough to be technologically attractive. Modeled and experimental holograms have been utilized in this work to perform the comparative study.

  3. [Decompression problems in diving in mountain lakes].

    PubMed

    Bühlmann, A A

    1989-08-01

    The relationship between the tolerated tissue nitrogen overpressure and the ambient pressure is practically linear. The tolerated nitrogen overpressure decreases at altitude, as the ambient pressure is lower. Additionally, tissues with short nitrogen half-times have a higher tolerance than tissues which retain nitrogen for longer durations. For the purpose of determining safe decompression routines, the human body can be regarded as consisting of 16 compartments with half-times from 4 to 635 minutes for nitrogen. The coefficients for calculating the tolerated nitrogen overpressure in the tissues can be deduced directly from the half-times for nitrogen. As an application, we show the results of 573 simulated air dives in the pressure chamber and 544 real dives in mountain lakes in Switzerland (1400-2600 m above sea level) and in Lake Titicaca (3800 m above sea level). They are in accordance with the computed limits of tolerance.
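    A minimal sketch of the linear tolerance relation described above, using the widely published ZH-L16 form in which each compartment's coefficients are derived from its nitrogen half-time. The half-time subset, dive profile, and inspired-gas simplifications below are illustrative assumptions, not the values or procedure of this study.

```python
# Illustrative ZH-L16-style calculation: tissue nitrogen loading and the
# linear ambient-pressure tolerance line p_amb_tol = (p_tissue - a) * b.
import math

def coefficients(half_time_min):
    """Compartment coefficients deduced from the nitrogen half-time (ZH-L16A form)."""
    a = 2.0 * half_time_min ** (-1.0 / 3.0)        # bar
    b = 1.005 - half_time_min ** (-0.5)
    return a, b

def tissue_loading(p0, p_insp, minutes, half_time_min):
    """Haldane exponential uptake/washout of inspired N2 partial pressure (bar)."""
    k = math.log(2.0) / half_time_min
    return p_insp + (p0 - p_insp) * math.exp(-k * minutes)

# Example: 30 min at 30 m depth (4 bar ambient) breathing air, starting at sea level.
half_times = [4.0, 12.5, 38.3, 109.0, 635.0]        # subset of the 16 compartments
p_insp_surface = 0.79 * 1.0                         # N2 fraction of air at 1 bar
p_insp_depth = 0.79 * 4.0                           # water vapor neglected here

for th in half_times:
    a, b = coefficients(th)
    p_t = tissue_loading(p_insp_surface, p_insp_depth, 30.0, th)
    p_amb_tol = (p_t - a) * b                       # minimum tolerated ambient pressure
    print(f"half-time {th:5.1f} min: tissue N2 {p_t:4.2f} bar, "
          f"tolerated ambient {p_amb_tol:4.2f} bar")
```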

  4. Object motion computation for the initiation of smooth pursuit eye movements in humans.

    PubMed

    Wallace, Julian M; Stone, Leland S; Masson, Guillaume S

    2005-04-01

    Pursuing an object with smooth eye movements requires an accurate estimate of its two-dimensional (2D) trajectory. This 2D motion computation requires that different local motion measurements are extracted and combined to recover the global object-motion direction and speed. Several combination rules have been proposed, such as vector averaging (VA), intersection of constraints (IOC), or 2D feature tracking (2DFT). To examine this computation, we investigated the time course of smooth pursuit eye movements driven by simple objects of different shapes. For a type II diamond (where the direction of true object motion is dramatically different from the vector average of the one-dimensional edge motions, i.e., VA ≠ IOC = 2DFT), ocular tracking is initiated in the vector average direction. Over a period of less than 300 ms, the eye-tracking direction converges on the true object motion. The reduction of the tracking error starts before the closing of the oculomotor loop. For type I diamonds (where the direction of true object motion is identical to the vector average direction, i.e., VA = IOC = 2DFT), there is no such bias. We quantified this effect by calculating the direction error between responses to types I and II and measuring its maximum value and time constant. At low contrast and high speeds, the initial bias in tracking direction is larger and takes longer to converge onto the actual object-motion direction. This effect is attenuated with the introduction of more 2D information, to the extent that it was totally obliterated with a texture-filled type II diamond. These results suggest a flexible 2D computation for motion integration, which combines all available one-dimensional (edge) and 2D (feature) motion information to refine the estimate of object-motion direction over time.
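    A small numerical illustration of the vector-average versus intersection-of-constraints distinction discussed above: each edge only signals the motion component normal to its orientation, and averaging those components can point away from the true object motion. The edge orientations and velocity below are arbitrary illustrative values, not the stimuli used in the study.

```python
# Vector average (VA) of 1D edge-normal motions vs. the true 2D object motion (IOC).
import numpy as np

v_true = np.array([1.0, 0.0])                 # true object velocity (rightward)

# Edge normals of a "type II"-like configuration: both normals lie on the same
# side of the true direction, so their average is biased away from it.
angles = np.radians([50.0, 80.0])
normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)

# Each edge measures only the component of object motion along its normal.
components = [(v_true @ n) * n for n in normals]
v_va = np.mean(components, axis=0)

def direction_deg(v):
    return np.degrees(np.arctan2(v[1], v[0]))

print("true direction:       %.1f deg" % direction_deg(v_true))
print("vector-average dir.:  %.1f deg" % direction_deg(v_va))   # biased estimate
```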

  5. Better, Cheaper, Faster Molecular Dynamics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    Recent, revolutionary progress in genomics and structural, molecular and cellular biology has created new opportunities for molecular-level computer simulations of biological systems by providing vast amounts of data that require interpretation. These opportunities are further enhanced by the increasing availability of massively parallel computers. For many problems, the method of choice is classical molecular dynamics (iterative solving of Newton's equations of motion). It focuses on two main objectives. One is to calculate the relative stability of different states of the system. A typical problem that has such an objective is computer-aided drug design. Another common objective is to describe the evolution of the system towards a low energy (possibly the global minimum energy), "native" state. Perhaps the best example of such a problem is protein folding. Both types of problems share the same difficulty. Often, different states of the system are separated by high energy barriers, which implies that transitions between these states are rare events. This, in turn, can greatly impede exploration of phase space. In some instances this can lead to "quasi non-ergodicity", whereby a part of phase space is inaccessible on the time scales of the simulation. To overcome this difficulty and to extend molecular dynamics to "biological" time scales (millisecond or longer), new physical formulations and new algorithmic developments are required. To be efficient they should account for natural limitations of multi-processor computer architecture. I will present work along these lines done in my group. In particular, I will focus on a new approach to calculating the free energies (stability) of different states and to overcoming "the curse of rare events". I will also discuss algorithmic improvements to multiple time step methods and to the treatment of slowly decaying, long-ranged, electrostatic effects.
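    Since the abstract centers on classical molecular dynamics, i.e. iteratively solving Newton's equations of motion, a minimal velocity Verlet integrator is sketched below for a 1D harmonic oscillator; the potential, mass, and time step are illustrative stand-ins for a real force field, not part of the work described.

```python
# Minimal velocity Verlet integration of Newton's equations (1D harmonic oscillator).
import numpy as np

def force(x, k=1.0):
    return -k * x                       # F = -dU/dx for U = k * x^2 / 2

def velocity_verlet(x0, v0, dt=0.01, steps=10000, m=1.0):
    x, v = x0, v0
    f = force(x)
    traj = np.empty(steps)
    for i in range(steps):
        x += v * dt + 0.5 * (f / m) * dt * dt
        f_new = force(x)
        v += 0.5 * (f + f_new) / m * dt
        f = f_new
        traj[i] = x
    return traj

traj = velocity_verlet(x0=1.0, v0=0.0)
print("max |x| over run:", float(np.max(np.abs(traj))))   # stays near 1: energy is conserved
```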

  6. Tools and Techniques for Adding Fault Tolerance to Distributed and Parallel Programs

    DTIC Science & Technology

    1991-12-07

    The scale of parallel computing systems is rapidly approaching dimensions where fault tolerance can no longer be ignored. No matter how reliable the individual components may be, the sheer ... those employed in the Tandem [7] and Stratus [35] systems, is clearly impractical. ...

  7. Real-Time Global Flood Estimation Using Satellite-Based Precipitation and a Coupled Land Surface and Routing Model

    NASA Technical Reports Server (NTRS)

    Wu, Huan; Adler, Robert F.; Tian, Yudong; Huffman, George J.; Li, Hongyi; Wang, JianJian

    2014-01-01

    A widely used land surface model, the Variable Infiltration Capacity (VIC) model, is coupled with a newly developed hierarchical dominant river tracing-based runoff-routing model to form the Dominant river tracing-Routing Integrated with VIC Environment (DRIVE) model, which serves as the new core of the real-time Global Flood Monitoring System (GFMS). The GFMS uses real-time satellite-based precipitation to derive flood monitoring parameters for the latitude band 50 deg. N - 50 deg. S at relatively high spatial (approximately 12 km) and temporal (3 hourly) resolution. Examples of model results for recent flood events are computed using the real-time GFMS (http://flood.umd.edu). To evaluate the accuracy of the new GFMS, the DRIVE model is run retrospectively for 15 years using both research-quality and real-time satellite precipitation products. Evaluation results are slightly better for the research-quality input and significantly better for longer duration events (3 day events versus 1 day events). Basins with fewer dams tend to provide lower false alarm ratios. For events longer than three days in areas with few dams, the probability of detection is approximately 0.9 and the false alarm ratio is approximately 0.6. In general, these statistical results are better than those of the previous system. Streamflow was evaluated at 1121 river gauges across the quasi-global domain. Validation using real-time precipitation across the tropics (30 deg. S - 30 deg. N) gives positive daily Nash-Sutcliffe Coefficients for 107 out of 375 (28%) stations with a mean of 0.19 and 51% of the same gauges at monthly scale with a mean of 0.33. There were poorer results in higher latitudes, probably due to larger errors in the satellite precipitation input.
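    The streamflow validation above is reported in terms of the Nash-Sutcliffe coefficient; a minimal sketch of that metric is shown below, using made-up observed and simulated discharge series rather than any data from the study.

```python
# Nash-Sutcliffe efficiency (NSE): 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = [120.0, 150.0, 300.0, 220.0, 180.0, 140.0]   # observed daily discharge (made up)
sim = [110.0, 160.0, 260.0, 240.0, 170.0, 150.0]   # simulated discharge (made up)
print("NSE =", round(nash_sutcliffe(obs, sim), 3))  # > 0 means better than the mean flow
```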

  8. Monitoring of body position and motion in children with severe cerebral palsy for 24 hours.

    PubMed

    Sato, Haruhiko; Iwasaki, Toshiyuki; Yokoyama, Misako; Inoue, Takenobu

    2014-01-01

    To investigate differences in position and body movements between children with severe cerebral palsy (CP) and children with typical development (TD) during the daytime and while asleep at night. Fifteen children with severe quadriplegic CP living at home (GMFCS level V, 7 males, 8 females; mean age 8 years 3 months; range 3-20 years) and 15 children with TD (6 males, 9 females; mean age 8 years 7 months; range 1-16 years) participated. Body position and movements were recorded for 24 h by a body position monitor and a physical activity monitor, respectively. The amount of time spent in one position and the durations of inactive periods during the daytime and during night-time sleep were computed and analyzed for group differences. In children with CP, the mean longest time spent in one position was longer than that in children with TD during night-time sleep (5.6 ± 3.5 h versus 1.6 ± 1.2 h). In contrast, no significant differences were found between the groups during the daytime (1.9 ± 1.1 h versus 1.6 ± 0.7 h). The mean longest time the body remained inactive was longer in the children with CP during both daytime and nighttime sleep (0.6 ± 0.3 h versus 0.3 ± 0.3 h for daytime, 1.4 ± 0.8 h versus 0.7 ± 0.3 h for nighttime). Children with severe CP living at home showed prolonged immobilized posture during night-time sleep when their caregivers would be likely to also be asleep. This may suggest that these children should receive postural care assistance at night.

  9. Real-time global flood estimation using satellite-based precipitation and a coupled land surface and routing model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Huan; Adler, Robert F.; Tian, Yudong

    2014-03-01

    A widely used land surface model, the Variable Infiltration Capacity (VIC) model, is coupled with a newly developed hierarchical dominant river tracing-based runoff-routing model to form the Dominant river tracing-Routing Integrated with VIC Environment (DRIVE) model, which serves as the new core of the real-time Global Flood Monitoring System (GFMS). The GFMS uses real-time satellite-based precipitation to derive flood monitoring parameters for the latitude band 50°N–50°S at relatively high spatial (~12 km) and temporal (3 hourly) resolution. Examples of model results for recent flood events are computed using the real-time GFMS (http://flood.umd.edu). To evaluate the accuracy of the new GFMS, the DRIVE model is run retrospectively for 15 years using both research-quality and real-time satellite precipitation products. Evaluation results are slightly better for the research-quality input and significantly better for longer duration events (3 day events versus 1 day events). Basins with fewer dams tend to provide lower false alarm ratios. For events longer than three days in areas with few dams, the probability of detection is ~0.9 and the false alarm ratio is ~0.6. In general, these statistical results are better than those of the previous system. Streamflow was evaluated at 1121 river gauges across the quasi-global domain. Validation using real-time precipitation across the tropics (30°S–30°N) gives positive daily Nash-Sutcliffe Coefficients for 107 out of 375 (28%) stations with a mean of 0.19 and 51% of the same gauges at monthly scale with a mean of 0.33. Finally, there were poorer results in higher latitudes, probably due to larger errors in the satellite precipitation input.

  10. Explicit Building Block Multiobjective Evolutionary Computation: Methods and Applications

    DTIC Science & Technology

    2005-06-16

    ...which was introduced in 1990 by Richard Dawkins in his book "The Selfish Gene" [34] ... Pareto Envelope-based Selection Algorithm I and II ... IGC: Intelligent Gene Collector; OED: Orthogonal Experimental Design; MED: Main Effect ... complete one experiment ... the string length held within the computer (can be longer than the number of genes) ...

  11. Aberrant leukocyte telomere length in Birdshot Uveitis.

    PubMed

    Vazirpanah, Nadia; Verhagen, Fleurieke H; Rothova, Anna; Missotten, Tom O A R; van Velthoven, Mirjam; Den Hollander, Anneke I; Hoyng, Carel B; Radstake, Timothy R D J; Broen, Jasper C A; Kuiper, Jonas J W

    2017-01-01

    Birdshot Uveitis (BU) is an archetypical chronic inflammatory eye disease, with poor visual prognosis, that provides an excellent model for studying chronic inflammation. BU typically affects patients in the fifth decade of life. This suggests that it may represent an age-related chronic inflammatory disease, which has been linked to increased erosion of the telomere length of leukocytes. To study this in detail, we exploited a sensitive standardized quantitative real-time polymerase chain reaction to determine the peripheral blood leukocyte telomere length (LTL) in 91 genotyped Dutch BU patients and 150 unaffected Dutch controls. Although LTL erosion rates were very similar between BU patients and healthy controls, we observed that BU patients displayed longer LTL, with a median of log(LTL) = 4.87 (= 74,131 base pairs) compared to 4.31 (= 20,417 base pairs) in unaffected controls (P<0.0001). The cause underpinning the difference in LTL could not be explained by clinical parameters, immune cell-subtype distribution, or genetic predisposition based upon the computed weighted genetic risk score of genotyped validated variants in TERC, TERT, NAF1, OBFC1 and RTEL1. These findings suggest that BU is accompanied by significantly longer LTL.
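    For context, a weighted genetic risk score of the kind mentioned above is typically computed as a weighted sum of risk-allele dosages. The variant labels, weights, and genotypes below are entirely hypothetical placeholders, not values from this study.

```python
# Hypothetical weighted genetic risk score: sum over variants of (effect weight x allele dosage).
variants = ["TERC_rsA", "TERT_rsB", "NAF1_rsC", "OBFC1_rsD", "RTEL1_rsE"]  # placeholder IDs
weights = {"TERC_rsA": 0.12, "TERT_rsB": 0.08, "NAF1_rsC": 0.05,
           "OBFC1_rsD": 0.07, "RTEL1_rsE": 0.10}          # illustrative effect sizes
dosages = {"TERC_rsA": 2, "TERT_rsB": 1, "NAF1_rsC": 0,
           "OBFC1_rsD": 1, "RTEL1_rsE": 2}                 # risk-allele counts (0/1/2)

grs = sum(weights[v] * dosages[v] for v in variants)
print("weighted genetic risk score:", round(grs, 3))
```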

  12. Dynamic interactions of eye and head movements when reading with single-vision and progressive lenses in a simulated computer-based environment.

    PubMed

    Han, Ying; Ciuffreda, Kenneth J; Selenow, Arkady; Ali, Steven R

    2003-04-01

    To assess dynamic interactions of eye and head movements during return-sweep saccades (RSS) when reading with single-vision (SVL) versus progressive-addition (PAL) lenses in a simulated computer-based business environment. Horizontal eye and head movements were recorded objectively and simultaneously at a rate of 60 Hz during reading of single-page (SP; 14 degrees horizontal [H]) and double-page (DP; 37 degrees H) formats at 60 cm with binocular viewing. Subjects included 11 individuals with normal presbyopic vision aged 45 to 71 years selected by convenience sampling from a clinic population. Reading was performed with three types of spectacle lenses with a different clear near field of view (FOV): a SVL (60 degrees H clear FOV), a PAL-I with a relatively wide intermediate zone (7.85 mm; 18 degrees H clear FOV), and a PAL-II with a relatively narrow intermediate zone (5.60 mm; 13 degrees H clear FOV). Eye movements were initiated before head movements in the SP condition, and the reverse was found in the DP condition, with all three lens types. Duration of eye movements increased as the zone of clear vision decreased in the SP condition, and they were longer with the PALs than with the SVL in the DP condition. Gaze stabilization occurred later with the PALs than with the SVL in both the SP and DP conditions. The duration of head movements was longer with the PAL-II than with the SVL in both the SP and DP conditions. Eye movement peak velocity was greater with the SVL than the PALs in the DP condition. Eye movement and head movement strategies and timing were contingent on viewing conditions. The longer eye movement duration and gaze-stabilization times suggested that additional eye movements were needed to locate the clear-vision zone and commence reading after the RSS. Head movements with PALs for the SP condition were similarly optically induced. These eye movement and head movement results may contribute to the reduced reading rate and related symptoms reported by some PAL wearers. The dynamic interactions of eye movements and head movements during reading with the PALs appear to be a sensitive indicator of the effect of lens optical design parameters on overall reading performance, because the movements can discriminate between SVL and PAL designs and at times even between PALs.

  13. Real-time image processing for passive mmW imagery

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Paolini, Aaron; Bonnett, James; Harrity, Charles; Mackrides, Daniel; Dillon, Thomas E.; Martin, Richard D.; Schuetz, Christopher A.; Kelmelis, Eric; Prather, Dennis W.

    2015-05-01

    The transmission characteristics of millimeter waves (mmWs) make them suitable for many applications in defense and security, from airport preflight scanning to penetrating degraded visual environments such as brownout or heavy fog. While the cold sky provides sufficient illumination for these images to be taken passively in outdoor scenarios, this utility comes at a cost; the diffraction limit of the longer wavelengths involved leads to lower resolution imagery compared to the visible or IR regimes, and the low power levels inherent to passive imagery allow the data to be more easily degraded by noise. Recent techniques leveraging optical upconversion have shown significant promise, but are still subject to fundamental limits in resolution and signal-to-noise ratio. To address these issues we have applied techniques developed for visible and IR imagery to decrease noise and increase resolution in mmW imagery. We have developed these techniques into fieldable software, making use of GPU platforms for real-time operation of computationally complex image processing algorithms. We present data from a passive, 77 GHz, distributed aperture, video-rate imaging platform captured during field tests at full video rate. These videos demonstrate the increase in situational awareness that can be gained through applying computational techniques in real-time without needing changes in detection hardware.

  14. How Patient Interactions With a Computer-Based Video Intervention Affect Decisions to Test for HIV.

    PubMed

    Aronson, Ian David; Rajan, Sonali; Marsch, Lisa A; Bania, Theodore C

    2014-06-01

    The current study examines predictors of HIV test acceptance among emergency department patients who received an educational video intervention designed to increase HIV testing. A total of 202 patients in the main treatment areas of a high-volume, urban hospital emergency department used inexpensive netbook computers to watch brief educational videos about HIV testing and respond to pre-postintervention data collection instruments. After the intervention, computers asked participants if they would like an HIV test: Approximately 43% (n = 86) accepted. Participants who accepted HIV tests at the end of the intervention took longer to respond to postintervention questions, which included the offer of an HIV test, F(1, 195) = 37.72, p < .001, compared with participants who did not accept testing. Participants who incorrectly answered pretest questions about HIV symptoms were more likely to accept testing F(14, 201) = 4.48, p < .001. White participants were less likely to accept tests than Black, Latino, or "Other" patients, χ2(3, N = 202) = 10.39, p < .05. Time spent responding to postintervention questions emerged as the strongest predictor of HIV testing, suggesting that patients who agreed to test spent more time thinking about their response to the offer of an HIV test. Examining intervention usage data, pretest knowledge deficits, and patient demographics can potentially inform more effective behavioral health interventions for underserved populations in clinical settings. © 2013 Society for Public Health Education.

  15. How Patient Interactions With a Computer-Based Video Intervention Affect Decisions to Test for HIV

    PubMed Central

    Aronson, Ian David; Rajan, Sonali; Marsch, Lisa A.; Bania, Theodore C.

    2014-01-01

    The current study examines predictors of HIV test acceptance among emergency department patients who received an educational video intervention designed to increase HIV testing. A total of 202 patients in the main treatment areas of a high-volume, urban hospital emergency department used inexpensive netbook computers to watch brief educational videos about HIV testing and respond to pre–postintervention data collection instruments. After the intervention, computers asked participants if they would like an HIV test: Approximately 43% (n = 86) accepted. Participants who accepted HIV tests at the end of the intervention took longer to respond to postintervention questions, which included the offer of an HIV test, F(1, 195) = 37.72, p < .001, compared with participants who did not accept testing. Participants who incorrectly answered pretest questions about HIV symptoms were more likely to accept testing F(14, 201) = 4.48, p < .001. White participants were less likely to accept tests than Black, Latino, or “Other” patients, χ2(3, N = 202) = 10.39, p < .05. Time spent responding to postintervention questions emerged as the strongest predictor of HIV testing, suggesting that patients who agreed to test spent more time thinking about their response to the offer of an HIV test. Examining intervention usage data, pretest knowledge deficits, and patient demographics can potentially inform more effective behavioral health interventions for underserved populations in clinical settings. PMID:24225031

  16. Flood Forecasting in Wales: Challenges and Solutions

    NASA Astrophysics Data System (ADS)

    How, Andrew; Williams, Christopher

    2015-04-01

    With steep, fast-responding river catchments, exposed coastal reaches with large tidal ranges and large population densities in some of the most at-risk areas, flood forecasting in Wales presents many varied challenges. Advances in computing power and learning from best practice within the United Kingdom and abroad have brought significant improvements in recent years; however, many challenges still remain. Developments in computing and increased processing power come with a significant price tag; greater numbers of data sources and ensemble feeds bring a better understanding of uncertainty, but the wealth of data needs careful management to ensure a clear message of risk is disseminated; new modelling techniques utilise better and faster computation, but lack the history of record and experience gained from the continued use of more established forecasting models. As a flood forecasting team we work to develop coastal and fluvial forecasting models, set them up for operational use and manage the duty role that runs the models in real time. An overview of our current operational flood forecasting system will be presented, along with a discussion on some of the solutions we have in place to address the challenges we face. These include: • real-time updating of fluvial models • rainfall forecasting verification • ensemble forecast data • longer range forecast data • contingency models • offshore to nearshore wave transformation • calculation of wave overtopping

  17. An automatic generation of non-uniform mesh for CFD analyses of image-based multiscale human airway models

    NASA Astrophysics Data System (ADS)

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long

    2014-11-01

    The authors have developed a method to automatically generate non-uniform CFD meshes for image-based human airway models. The sizes of the generated tetrahedral elements vary in both the radial and longitudinal directions to account for the boundary layer and the multiscale nature of pulmonary airflow. The proposed method takes advantage of our previously developed centerline-based geometry reconstruction method. In order to generate the mesh branch by branch in parallel, we used the open-source programs Gmsh and TetGen for surface and volume meshes, respectively. Both programs can specify element sizes by means of a background mesh. The size of an arbitrary element in the domain is a function of wall distance, element size on the wall, and element size at the center of the airway lumen. The element sizes on the wall are computed based on the local flow rate and airway diameter. The total number of elements in the non-uniform mesh (10 M) was about half of that in the uniform mesh, although the computational time for the non-uniform mesh was about twice as long (170 min). The proposed method generates CFD meshes with fine elements near the wall and a smooth variation of element size in the longitudinal direction, which are required, e.g., for simulations with high flow rates. NIH Grants R01-HL094315, U01-HL114494, and S10-RR022421. Computer time provided by XSEDE.
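    A minimal sketch of a background size field of the kind described above, in which the target element size grows smoothly from a wall value to a lumen-center value as a function of wall distance. The linear blend and all parameter values are illustrative assumptions, not the authors' exact specification.

```python
# Illustrative background-mesh size function: element size vs. distance from the airway wall.
import numpy as np

def element_size(wall_distance, radius, size_wall, size_center):
    """Blend linearly from the wall size to the lumen-center size across the radius."""
    t = np.clip(wall_distance / radius, 0.0, 1.0)
    return (1.0 - t) * size_wall + t * size_center

radius = 2.0e-3                          # airway radius, m (assumed)
size_wall = 0.05e-3                      # fine boundary-layer elements near the wall
size_center = 0.4e-3                     # coarser elements at the lumen center

d = np.linspace(0.0, radius, 5)
print(np.round(element_size(d, radius, size_wall, size_center) * 1e3, 3))  # sizes in mm
```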

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berrada, K., E-mail: kberrada@ictp.it; The Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, Miramare-Trieste; Ooi, C. H. Raymond

    Robustness of the geometric phase (GP) with respect to different noise effects is a basic condition for effective quantum computation. Here, we propose a useful quantum system with real physical parameters by studying the GP of a pair of Stokes and anti-Stokes photons, involving Raman emission processes with and without the photonic band gap (PBG) effect. We show that the properties of the GP are very sensitive to the change of the Rabi frequency and time, exhibiting a collapse phenomenon as the time becomes large. The system allows us to obtain a state which remains with zero GP for longer times. This result plays a significant role in enhancing the stabilization and control of the system dynamics. Finally, we investigate the nonlocal correlation (entanglement) between the pair of photons by taking into account the effect of different parameters. An interesting correlation between the GP and entanglement is observed, showing that the PBG stabilizes the fluctuations in the system and makes the entanglement more robust against the change of time and frequency.

  19. Fixed-interval matching-to-sample: intermatching time and intermatching error runs1

    PubMed Central

    Nelson, Thomas D.

    1978-01-01

    Four pigeons were trained on a matching-to-sample task in which reinforcers followed either the first matching response (fixed interval) or the fifth matching response (tandem fixed-interval fixed-ratio) that occurred 80 seconds or longer after the last reinforcement. Relative frequency distributions of the matching-to-sample responses that concluded intermatching times and runs of mismatches (intermatching error runs) were computed for the final matching responses directly followed by grain access and also for the three matching responses immediately preceding the final match. Comparison of these two distributions showed that the fixed-interval schedule arranged for the preferential reinforcement of matches concluding relatively extended intermatching times and runs of mismatches. Differences in matching accuracy and rate during the fixed interval, compared to the tandem fixed-interval fixed-ratio, suggested that reinforcers following matches concluding various intermatching times and runs of mismatches influenced the rate and accuracy of the last few matches before grain access, but did not control rate and accuracy throughout the entire fixed-interval period. PMID:16812032

  20. Next Generation Extended Lagrangian Quantum-based Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Negre, Christian

    2017-06-01

    A new framework for extended Lagrangian first-principles molecular dynamics simulations is presented, which overcomes shortcomings of regular, direct Born-Oppenheimer molecular dynamics, while maintaining important advantages of the unified extended Lagrangian formulation of density functional theory pioneered by Car and Parrinello three decades ago. The new framework allows, for the first time, energy conserving, linear-scaling Born-Oppenheimer molecular dynamics simulations, which is necessary to study larger and more realistic systems over longer simulation times than previously possible. Expensive self-consistent-field optimizations are avoided and the normal integration time steps of regular, direct Born-Oppenheimer molecular dynamics can be used. Linear scaling electronic structure theory is presented using a graph-based approach that is ideal for parallel calculations on hybrid computer platforms. For the first time, quantum-based Born-Oppenheimer molecular dynamics simulation is becoming a practically feasible approach in simulations of 100,000+ atoms, representing a competitive alternative to classical polarizable force field methods. In collaboration with: Anders Niklasson, Los Alamos National Laboratory.
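    As a rough illustration of the extended Lagrangian idea (an auxiliary electronic degree of freedom is propagated alongside the nuclei with a time-reversible integrator so that no self-consistency loop is needed each step), the toy below makes a scalar auxiliary variable n shadow a model "ground-state density" ρ(x). The model system, frequency, and coupling term are illustrative assumptions only, not the formulation presented in this talk.

```python
# Toy extended-Lagrangian propagation: auxiliary variable n shadows the model
# "ground-state density" rho(x) without any per-step self-consistency loop.
import math

def rho_of_x(x):
    return math.tanh(x)            # stand-in for the SCF ground-state density

dt = 0.05
omega = 1.0 / dt                   # curvature of the harmonic coupling (illustrative)
x, v = 0.5, 0.0                    # toy nuclear coordinate and velocity
n_prev = n = rho_of_x(x)           # initialize the auxiliary variable on the ground state

for step in range(200):
    # Verlet-style, time-reversible update of the auxiliary electronic variable.
    n_next = 2.0 * n - n_prev + (dt * omega) ** 2 * (rho_of_x(x) - n)
    n_prev, n = n, n_next
    # Toy nuclear dynamics in a harmonic well, using n in place of a fresh SCF density.
    force = -x + 0.1 * (n - rho_of_x(x))   # illustrative coupling term
    v += force * dt
    x += v * dt

print("final |n - rho(x)|:", abs(n - rho_of_x(x)))   # auxiliary variable stays close
```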

  1. Improved confidence intervals when the sample is counted an integer times longer than the blank.

    PubMed

    Potter, William Edward; Strzelczyk, Jadwiga Jodi

    2011-05-01

    Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
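    A small numerical sketch of the quantities described above: with the gross and blank counts modeled as independent Poisson variables and the net count defined as OC = gross − IRR × blank, the distribution of OC can be tabulated by summing over blank outcomes. The mean values and IRR below are illustrative, and this brute-force tabulation is not the tabulation technique used in the paper.

```python
# Distribution of the net count OC = gross - IRR * blank for Poisson gross and blank counts.
import math

def poisson_pmf(k, mu):
    return math.exp(-mu) * mu ** k / math.factorial(k)

IRR = 3                    # sample counted an integer (IRR) times longer than the blank
mu_blank = 2.0             # expected blank count in the *blank* count time (assumed)
mu_sample = 5.0            # expected sample contribution in the sample count time (assumed)
mu_gross = mu_sample + IRR * mu_blank   # expected blank contribution scales with IRR

# P(OC = n) = sum_b P(blank = b) * P(gross = n + IRR * b)
pmf = {}
for b in range(0, 40):
    pb = poisson_pmf(b, mu_blank)
    for g in range(0, 80):
        n = g - IRR * b
        pmf[n] = pmf.get(n, 0.0) + pb * poisson_pmf(g, mu_gross)

mean_oc = sum(n * p for n, p in pmf.items())
print("sum of probabilities:", round(sum(pmf.values()), 6))
print("mean net count:", round(mean_oc, 3))        # ~ mu_sample = 5.0
```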

  2. Skeletal muscle tensile strain dependence: hyperviscoelastic nonlinearity

    PubMed Central

    Wheatley, Benjamin B; Morrow, Duane A; Odegard, Gregory M; Kaufman, Kenton R; Donahue, Tammy L Haut

    2015-01-01

    Introduction: Computational modeling of skeletal muscle requires characterization at the tissue level. While most skeletal muscle studies focus on hyperelasticity, the goal of this study was to examine and model the nonlinear behavior of both time-independent and time-dependent properties of skeletal muscle as a function of strain. Materials and Methods: Nine tibialis anterior muscles from New Zealand White rabbits were subject to five consecutive stress relaxation cycles of roughly 3% strain. Individual relaxation steps were fit with a three-term linear Prony series. Prony series coefficients and relaxation ratio were assessed for strain dependence using a general linear statistical model. A fully nonlinear constitutive model was employed to capture the strain dependence of both the viscoelastic and instantaneous components. Results: Instantaneous modulus (p<0.0005) and mid-range relaxation (p<0.0005) increased significantly with strain level, while relaxation at longer time periods decreased with strain (p<0.0005). Time constants and overall relaxation ratio did not change with strain level (p>0.1). Additionally, the fully nonlinear hyperviscoelastic constitutive model provided an excellent fit to experimental data, while other models which included linear components failed to capture muscle function as accurately. Conclusions: Material properties of skeletal muscle are strain-dependent at the tissue level. This strain dependence can be included in computational models of skeletal muscle performance with a fully nonlinear hyperviscoelastic model. PMID:26409235
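    For reference, a three-term linear Prony series of the kind fit above is commonly written as a normalized relaxation function G(t) = g_inf + sum_i g_i * exp(-t / tau_i). The sketch below evaluates such a series with illustrative coefficients, not the values measured in this study.

```python
# Three-term Prony series relaxation function with illustrative coefficients.
import numpy as np

def prony(t, g_inf, g, tau):
    """Normalized relaxation: G(t) = g_inf + sum_i g_i * exp(-t / tau_i)."""
    t = np.asarray(t, float)
    return g_inf + sum(gi * np.exp(-t / ti) for gi, ti in zip(g, tau))

g = [0.25, 0.20, 0.15]          # illustrative Prony coefficients
tau = [0.5, 5.0, 50.0]          # illustrative time constants, s
g_inf = 1.0 - sum(g)            # long-time (equilibrium) fraction

t = np.array([0.0, 1.0, 10.0, 100.0])
print(np.round(prony(t, g_inf, g, tau), 3))   # decays from 1.0 toward g_inf
```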

  3. Emergent multicellular life cycles in filamentous bacteria owing to density-dependent population dynamics.

    PubMed

    Rossetti, Valentina; Filippini, Manuela; Svercel, Miroslav; Barbour, A D; Bagheri, Homayoun C

    2011-12-07

    Filamentous bacteria are the oldest and simplest known multicellular life forms. By using computer simulations and experiments that address cell division in a filamentous context, we investigate some of the ecological factors that can lead to the emergence of a multicellular life cycle in filamentous life forms. The model predicts that if cell division and death rates are dependent on the density of cells in a population, a predictable cycle between short and long filament lengths is produced. During exponential growth, there will be a predominance of multicellular filaments, while at carrying capacity, the population converges to a predominance of short filaments and single cells. Model predictions are experimentally tested and confirmed in cultures of heterotrophic and phototrophic bacterial species. Furthermore, by developing a formulation of generation time in bacterial populations, it is shown that changes in generation time can alter length distributions. The theory predicts that given the same population growth curve and fitness, species with longer generation times have longer filaments during comparable population growth phases. Characterization of the environmental dependence of morphological properties such as length, and the number of cells per filament, helps in understanding the pre-existing conditions for the evolution of developmental cycles in simple multicellular organisms. Moreover, the theoretical prediction that strains with the same fitness can exhibit different lengths at comparable growth phases has important implications. It demonstrates that differences in fitness attributed to morphology are not the sole explanation for the evolution of life cycles dominated by multicellularity.
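    A toy simulation of the density-dependent idea described above: while division is frequent and fragmentation rare at low density, filaments lengthen, and as the population approaches carrying capacity division slows and filaments break back toward single cells. All rates and the fragmentation rule are illustrative assumptions, not the published model.

```python
# Toy density-dependent filament dynamics: mean filament length rises during
# exponential growth and falls as the population nears carrying capacity.
import random

random.seed(0)
K = 5000                      # carrying capacity in total cells (assumed)
filaments = [1] * 50          # start from 50 single cells

for step in range(200):
    total_cells = sum(filaments)
    p_divide = max(0.0, 0.5 * (1.0 - total_cells / K))   # division slows with density
    p_break = 0.02 + 0.2 * (total_cells / K)             # fragmentation rises with density
    new_filaments = []
    for length in filaments:
        grown = length + sum(random.random() < p_divide for _ in range(length))
        if random.random() < p_break and grown > 1:      # split the filament roughly in half
            cut = grown // 2
            new_filaments += [cut, grown - cut]
        else:
            new_filaments.append(grown)
    filaments = new_filaments
    if step % 50 == 0:
        print(step, "cells:", sum(filaments),
              "mean length: %.2f" % (sum(filaments) / len(filaments)))
```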

  4. Computational and Spectroscopic Investigations of the Molecular Scale Structure and Dynamics of Geologically Important Fluids and Mineral-Fluid Interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. James Kirkpatrick; Andrey G. Kalinichev

    2008-11-25

    Research supported by this grant focuses on molecular scale understanding of central issues related to the structure and dynamics of geochemically important fluids, fluid-mineral interfaces, and confined fluids using computational modeling and experimental methods. Molecular scale knowledge about fluid structure and dynamics, how these are affected by mineral surfaces and molecular-scale (nano-) confinement, and how water molecules and dissolved species interact with surfaces is essential to understanding the fundamental chemistry of a wide range of low-temperature geochemical processes, including sorption and geochemical transport. Our principal efforts are devoted to continued development of relevant computational approaches, application of these approaches to important geochemical questions, relevant NMR and other experimental studies, and application of computational modeling methods to understanding the experimental results. The combination of computational modeling and experimental approaches is proving highly effective in addressing otherwise intractable problems. In 2006-2007 we have made significant advances in new, highly promising research directions, along with completing on-going projects and publishing work completed in previous years. New computational directions are focusing on modeling proton exchange reactions in aqueous solutions using ab initio molecular dynamics (AIMD), metadynamics (MTD), and empirical valence bond (EVB) approaches. Proton exchange is critical to understanding the structure, dynamics, and reactivity at mineral-water interfaces and for oxy-ions in solution, but has traditionally been difficult to model with molecular dynamics (MD). Our ultimate objective is to develop this capability, because MD is much less computationally demanding than quantum-chemical approaches. We have also extended our previous MD simulations of metal binding to natural organic matter (NOM) to a much longer time scale (up to 10 ns) for significantly larger systems. These calculations have allowed us, for the first time, to study the effects of metal cations with different charges and charge density on the NOM aggregation in aqueous solutions. Other computational work has looked at the longer-time-scale dynamical behavior of aqueous species at mineral-water interfaces investigated simultaneously by NMR spectroscopy. Our experimental NMR studies have focused on understanding the structure and dynamics of water and dissolved species at mineral-water interfaces and in two-dimensional nano-confinement within clay interlayers. A combined NMR and MD study of H2O, Na+, and Cl- interactions with the surface of quartz has direct implications regarding interpretation of sum frequency vibrational spectroscopic experiments for this phase and will be an important reference for future studies. We also used NMR to examine the behavior of K+ and H2O in the interlayer and at the surfaces of the clay minerals hectorite and illite-rich illite-smectite. This is the first time K+ dynamics has been characterized spectroscopically in geochemical systems. Preliminary experiments were also performed to evaluate the potential of 75As NMR as a probe of arsenic geochemical behavior. The 75As NMR study used advanced signal enhancement methods, introduced a new data acquisition approach to minimize the time investment in ultra-wide-line NMR experiments, and provides the first evidence of a strong relationship between the chemical shift and structural parameters for this experimentally challenging nucleus. We have also initiated a series of inelastic and quasi-elastic neutron scattering measurements of water dynamics in the interlayers of clays and layered double hydroxides. The objective of these experiments is to probe the correlations of water molecular motions in confined spaces over the scale of times and distances most directly comparable to our MD simulations and on a time scale different than that probed by NMR. This work is being done in collaboration with Drs. C.-K. Loong, N. de Souza, and A.I. Kolesnikov at the Intense Pulsed Neutron Source facility of the Argonne National Lab, and Dr. A. Faraone at the NIST Center for Neutron Research. A manuscript reporting the first results of these experiments, which are highly complementary to our previous NMR, X-ray, and infra-red results for these phases, is currently in preparation. In total, in 2006-2007 our work has resulted in the publication of 14 peer-reviewed research papers. We also devoted considerable effort to making our work known to a wide range of researchers, as indicated by the 24 contributed abstracts and 14 invited presentations.

  5. Informatics in dental education: a horizon of opportunity.

    PubMed

    Abbey, L M

    1989-11-01

    Computers have presented society with the largest array of opportunities since the printing press. More specifically in dental education they represent the path to freedom from the memory-based curriculum. Computers allow us to be constantly in touch with the entire scope of knowledge necessary for decision making in every aspect of the process of preparing young men and women to practice dentistry. No longer is it necessary to spend the energy or time previously used to memorize facts, test for retention of facts or be concerned with remembering facts when dealing with our patients. Modern information management systems can assume that task allowing dentists to concentrate on understanding, skill, judgement and wisdom while helping patients deal with their problems within a health care system that is simultaneously baffling in its complexity and overflowing with options. This paper presents a summary of the choices facing dental educators as computers continue to afford us the freedom to look differently at teaching, research and practice. The discussion will elaborate some of the ways dental educators must think differently about the educational process in order to utilize fully the power of computers in curriculum development and tracking, integration of basic and clinical teaching, problem solving, patient management, record keeping and research. Some alternative strategies will be discussed that may facilitate the transition from the memory-based to the computer-based curriculum and practice.

  6. Visualizing a silicon quantum computer

    NASA Astrophysics Data System (ADS)

    Sanders, Barry C.; Hollenberg, Lloyd C. L.; Edmundson, Darran; Edmundson, Andrew

    2008-12-01

    Quantum computation is a fast-growing, multi-disciplinary research field. The purpose of a quantum computer is to execute quantum algorithms that efficiently solve computational problems intractable within the existing paradigm of 'classical' computing built on bits and Boolean gates. While collaboration between computer scientists, physicists, chemists, engineers, mathematicians and others is essential to the project's success, traditional disciplinary boundaries can hinder progress and make communicating the aims of quantum computing and future technologies difficult. We have developed a four minute animation as a tool for representing, understanding and communicating a silicon-based solid-state quantum computer to a variety of audiences, either as a stand-alone animation to be used by expert presenters or embedded into a longer movie as short animated sequences. The paper includes a generally applicable recipe for successful scientific animation production.

  7. A synthetic visual plane algorithm for visibility computation in consideration of accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Yu, Jieqing; Wu, Lixin; Hu, Qingsong; Yan, Zhigang; Zhang, Shaoliang

    2017-12-01

    Visibility computation is of great interest to location optimization, environmental planning, ecology, and tourism. Many algorithms have been developed for visibility computation. In this paper, we propose a novel method of visibility computation, called synthetic visual plane (SVP), to achieve better performance with respect to efficiency, accuracy, or both. The method uses a global horizon, which is a synthesis of the line-of-sight information of all nearer points, to determine the visibility of a point, which makes it an accurate method. We used a discretization of the horizon to gain good performance in efficiency. After discretization, the accuracy and efficiency of SVP depend on the scale of discretization (i.e., zone width). The method is more accurate at smaller zone widths, but this requires a longer operating time. Users must strike a balance between accuracy and efficiency at their discretion. According to our experiments, SVP is less accurate but more efficient than R2 if the zone width is set to one grid. However, SVP becomes more accurate than R2 when the zone width is set to 1/24 grid, while it continues to perform as fast or faster than R2. Although SVP performs worse than reference plane and depth map with respect to efficiency, it is superior in accuracy to these other two algorithms.
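    The core idea of tracking a horizon built from all nearer points can be illustrated on a single terrain profile: a cell is visible from the viewpoint if its elevation angle exceeds the maximum angle seen so far. This 1D sketch with made-up elevations only illustrates the principle; it is not the SVP algorithm itself.

```python
# Horizon-based visibility along one terrain profile: a cell is visible if its
# elevation angle from the viewpoint exceeds the horizon formed by nearer cells.
import math

elevations = [100.0, 102.0, 101.0, 108.0, 105.0, 112.0, 104.0, 120.0]  # made-up profile
cell_size = 30.0                      # horizontal spacing, m (assumed)
viewer_height = 1.7                   # observer height above the first cell

viewer_z = elevations[0] + viewer_height
horizon = -math.inf
visible = [True]                      # the viewpoint cell itself

for i, z in enumerate(elevations[1:], start=1):
    angle = math.atan2(z - viewer_z, i * cell_size)
    visible.append(angle > horizon)   # above every nearer point's line of sight?
    horizon = max(horizon, angle)

print(visible)
```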

  8. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill; Feiereisen, William (Technical Monitor)

    2000-01-01

    The term "Grid" refers to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. The vision for NASN's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks that will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focussed primarily on two types of users: The scientist / design engineer whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user if the tool designer: The computational scientists who convert physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. This paper describes the current state of IPG (the operational testbed), the set of capabilities being put into place for the operational prototype IPG, as well as some of the longer term R&D tasks.

  9. The effects of Medieval dams on genetic divergence and demographic history in brown trout populations

    PubMed Central

    2014-01-01

    Background: Habitat fragmentation has accelerated within the last century, but may have been ongoing over longer time scales. We analyzed the timing and genetic consequences of fragmentation in two isolated lake-dwelling brown trout populations. They are from the same river system (the Gudenå River, Denmark) and have been isolated from downstream anadromous trout by dams established ca. 600–800 years ago. For reference, we included ten other anadromous populations and two hatchery strains. Based on analysis of 44 microsatellite loci we investigated if the lake populations have been naturally genetically differentiated from anadromous trout for thousands of years, or have diverged recently due to the establishment of dams. Results: Divergence time estimates were based on 1) Approximate Bayesian Computation and 2) a coalescent-based isolation-with-gene-flow model. Both methods suggested divergence times ca. 600–800 years bp, providing strong evidence for establishment of dams in the Medieval as the factor causing divergence. Bayesian cluster analysis showed influence of stocked trout in several reference populations, but not in the focal lake and anadromous populations. Estimates of effective population size using a linkage disequilibrium method ranged from 244 to > 1,000 in all but one anadromous population, but were lower (153 and 252) in the lake populations. Conclusions: We show that genetic divergence of lake-dwelling trout in two Danish lakes reflects establishment of water mills and impassable dams ca. 600–800 years ago rather than a natural genetic population structure. Although effective population sizes of the two lake populations are not critically low they may ultimately limit response to selection and thereby future adaptation. Our results demonstrate that populations may have been affected by anthropogenic disturbance over longer time scales than normally assumed. PMID:24903056
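    Divergence times here were estimated with, among other methods, Approximate Bayesian Computation. The sketch below shows the generic ABC rejection loop on a deliberately simplified toy model (one summary statistic produced by a made-up simulator), not the genetic model, priors, or data of this study.

```python
# Generic ABC rejection sampling on a toy "divergence time" problem (illustrative only).
import random

random.seed(42)

def simulate_summary(divergence_time):
    """Stand-in simulator: a noisy summary statistic that grows with divergence time."""
    return 0.001 * divergence_time + random.gauss(0.0, 0.05)

observed = 0.7          # "observed" summary statistic (made up)
tolerance = 0.05
accepted = []

for _ in range(100_000):
    t = random.uniform(0, 2000)              # prior: divergence time in years bp
    if abs(simulate_summary(t) - observed) < tolerance:
        accepted.append(t)                   # keep draws whose simulated data are close

accepted.sort()
print("posterior median divergence time: %.0f years bp" % accepted[len(accepted) // 2])
print("accepted draws:", len(accepted))
```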

  10. The effects of Medieval dams on genetic divergence and demographic history in brown trout populations.

    PubMed

    Hansen, Michael M; Limborg, Morten T; Ferchaud, Anne-Laure; Pujolar, José-Martin

    2014-06-05

    Habitat fragmentation has accelerated within the last century, but may have been ongoing over longer time scales. We analyzed the timing and genetic consequences of fragmentation in two isolated lake-dwelling brown trout populations. They are from the same river system (the Gudenå River, Denmark) and have been isolated from downstream anadromous trout by dams established ca. 600-800 years ago. For reference, we included ten other anadromous populations and two hatchery strains. Based on analysis of 44 microsatellite loci we investigated if the lake populations have been naturally genetically differentiated from anadromous trout for thousands of years, or have diverged recently due to the establishment of dams. Divergence time estimates were based on 1) Approximate Bayesian Computation and 2) a coalescent-based isolation-with-gene-flow model. Both methods suggested divergence times ca. 600-800 years bp, providing strong evidence for establishment of dams in the Medieval as the factor causing divergence. Bayesian cluster analysis showed influence of stocked trout in several reference populations, but not in the focal lake and anadromous populations. Estimates of effective population size using a linkage disequilibrium method ranged from 244 to > 1,000 in all but one anadromous population, but were lower (153 and 252) in the lake populations. We show that genetic divergence of lake-dwelling trout in two Danish lakes reflects establishment of water mills and impassable dams ca. 600-800 years ago rather than a natural genetic population structure. Although effective population sizes of the two lake populations are not critically low they may ultimately limit response to selection and thereby future adaptation. Our results demonstrate that populations may have been affected by anthropogenic disturbance over longer time scales than normally assumed.
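
    The divergence-time estimates above rest on Approximate Bayesian Computation. As an illustration of the accept/reject logic only (not the authors' coalescent pipeline), the following minimal rejection-ABC sketch draws divergence times from a flat prior, simulates a hypothetical differentiation summary statistic, and keeps draws whose simulated statistic falls within a tolerance of the observed value; the toy simulator and all numbers are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_summary(t_div):
            """Hypothetical stand-in for a coalescent simulator: returns a genetic
            differentiation summary statistic as a noisy function of divergence time."""
            return 0.001 * t_div + rng.normal(0.0, 0.01)

        observed = 0.7          # hypothetical observed summary statistic
        tolerance = 0.02        # acceptance threshold on |simulated - observed|
        n_draws = 100_000

        accepted = []
        for _ in range(n_draws):
            t = rng.uniform(0, 5000)          # flat prior on divergence time (years)
            if abs(simulate_summary(t) - observed) < tolerance:
                accepted.append(t)

        accepted = np.array(accepted)
        print(f"posterior mean divergence time ~ {accepted.mean():.0f} years "
              f"(95% interval {np.percentile(accepted, 2.5):.0f}-{np.percentile(accepted, 97.5):.0f})")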

  11. Programming and execution of movement in Parkinson's disease.

    PubMed

    Sheridan, M R; Flowers, K A; Hurrell, J

    1987-10-01

    Programming and execution of arm movements in Parkinson's disease were investigated in choice and simple reaction time (RT) situations in which subjects made aimed movements at a target. A no-aiming condition was also studied. Reaction time was fractionated using surface EMG recording into premotor (central) and motor (peripheral) components. Premotor RT was found to be greater for parkinsonian patients than normal age-matched controls in the simple RT condition, but not in the choice condition. This effect did not depend on the parameters of the impending movement. Thus, paradoxically, parkinsonian patients were not inherently slower at initiating aiming movements from the starting position, but seemed unable to use advance information concerning motor task demands to speed up movement initiation. For both groups, low velocity movements took longer to initiate than high velocity ones. In the no-aiming condition parkinsonian RTs were markedly shorter than when aiming, but were still significantly longer than control RTs. Motor RT was constant across all conditions and was not different for patient and control subjects. In all conditions, parkinsonian movements were around 37% slower than control movements, and their movement times were more variable, the differences showing up early on in the movement, that is, during the initial ballistic phase. The within-subject variability of movement endpoints was also greater in patients. The motor dysfunction displayed in Parkinson's disease involves a number of components: (1) a basic central problem with simply initiating movements, even when minimal programming is required (no-aiming condition); (2) difficulty in maintaining computed forces for motor programs over time (simple RT condition); (3) a basic slowness of movement (bradykinesia) in all conditions; and (4) increased variability of movement in both time and space, presumably caused by inherent variability in force production.

  12. A simplified technique for polymethyl methacrylate cranioplasty: combined cotton stacking and finger fracture method.

    PubMed

    Kung, Woon-Man; Lin, Muh-Shi

    2012-01-01

    Polymethyl methacrylate (PMMA) is one of the most frequently used cranioplasty materials. However, limitations exist with PMMA cranioplasty including longer operative time, greater blood loss and a higher infection rate. To reduce these disadvantages, it is proposed to introduce a new surgical method for PMMA cranioplasty. Retrospective review of nine patients who received nine PMMA implants using combined cotton stacking and finger fracture method from January 2008 to July 2011. The definitive height of skull defect was quantified by computer-based image analysis of computed tomography (CT) scans. Aesthetic outcomes as measured by post-reduction radiographs and cranial index of symmetry (CIS), cranial nerve V and VII function and complications (wound infection, hardware extrusions, meningitis, osteomyelitis and brain abscess) were evaluated. The mean operation time for implant moulding was 24.56 ± 4.6 minutes and 178.0 ± 53 minutes for skin-to-skin. Average blood loss was 169 mL. All post-operative radiographs revealed excellent reduction. The mean CIS score was 95.86 ± 1.36%, indicating excellent symmetry. These results indicate the safety, practicability, excellent cosmesis, craniofacial symmetry and stability of this new surgical technique.

  13. Eyes Open on Sleep and Wake: In Vivo to In Silico Neural Networks

    PubMed Central

    Vanvinckenroye, Amaury; Vandewalle, Gilles; Chellappa, Sarah L.

    2016-01-01

    Functional and effective connectivity of cortical areas are essential for normal brain function under different behavioral states. Appropriate cortical activity during sleep and wakefulness is ensured by the balanced activity of excitatory and inhibitory circuits. Ultimately, fast, millisecond cortical rhythmic oscillations shape cortical function in time and space. On a much longer time scale, brain function also depends on prior sleep-wake history and circadian processes. However, much remains to be established on how the brain operates at the neuronal level in humans during sleep and wakefulness. A key limitation of human neuroscience is the difficulty in isolating neuronal excitation/inhibition drive in vivo. Therefore, computational models are noninvasive approaches of choice to indirectly access hidden neuronal states. In this review, we present a physiologically driven in silico approach, Dynamic Causal Modelling (DCM), as a means to comprehend brain function under different experimental paradigms. Importantly, DCM has allowed for the understanding of how brain dynamics underscore brain plasticity, cognition, and different states of consciousness. In a broader perspective, noninvasive computational approaches, such as DCM, may help to puzzle out the spatial and temporal dynamics of human brain function at different behavioural states. PMID:26885400

  14. Computer Aided Simulation Machining Programming In 5-Axis Nc Milling Of Impeller Leaf

    NASA Astrophysics Data System (ADS)

    Huran, Liu

    At present, CAD/CAM (computer-aided design and manufacture) has found increasingly wide application in the mechanical industry. For complex surfaces, traditional machine tools can no longer satisfy the requirements of such demanding tasks; only with the help of CAD/CAM can they be met. The machining of the vane surface of the impeller leaf has been considered one of the hardest challenges: because of the complex shape, a 5-axis CNC machine tool is needed, the material is hard to cut, and the requirements for surface finish and clearance are very high, so the manufacturing quality of the impeller leaf represents the level of 5-axis machining. This paper opens a new approach to machining such complicated surfaces, based on a more rigorous mathematical foundation, and the theory presented here is more systematic. For lack of theoretical guidance, earlier work had to rely on repeated machining trials; that situation can now change. The cutter movement determined by this method is fully specified, the residual is smallest while the number of tool passes is fewest, and the criterion is simple and the calculation easy.

  15. Global distribution of the Energetic Neutral Atom (ENA) / precipitating ion particulate albedo from Low Altitude Emission (LAE) source regions over the last solar maximum

    NASA Astrophysics Data System (ADS)

    Mackler, D. A.; Jahn, J.; Mukherjee, J.; Pollock, C. J.

    2012-12-01

    Charge exchange between ring current ions spiraling into the upper atmosphere and terrestrial neutral constituents produces a non-isotropic distribution of escaping Energetic Neutral Atoms (ENA). These ENAs are no longer tied to the magnetic field, and can therefore be observed remotely from orbiting platforms. Of particular interest are Low Altitude Emissions (LAE) of ENAs. These ENA emissions occur near the oxygen exobase and constitute the brightest ENA signatures during geomagnetic storms. In this study we build on previous work described in Pollock et al. [2009] in which IMAGE/MENA data were used to compute the Invariant Latitude (IL) and Magnetic Local Time (MLT) distributions of ENAs observed in the 29 October 2003 storm. The algorithms developed in Pollock et al. [2009] are used to compute the IL and MLT of LAE source regions for 76 identified storms at different phases of solar cycle 23. The ENA flux from the source regions is divided by in-situ ion precipitation obtained by DMSP-SSJ4 and NOAA-TED to give a global mapping of the particulate albedo during storm times.

  16. The photoisomerization of a peptidic derivative of azobenzene: A nonadiabatic dynamics simulation of a supramolecular system

    NASA Astrophysics Data System (ADS)

    Ciminelli, Cosimo; Granucci, Giovanni; Persico, Maurizio

    2008-06-01

    The aim of this work is to investigate the mechanism of photoisomerization of an azobenzenic chromophore in a supramolecular environment, where the primary photochemical act produces important changes in the whole system. We have chosen a derivative of azobenzene, with two cyclopeptides attached in the para positions, linked by hydrogen bonds when the chromophore is in the cis geometry. We have run computational simulations of the cis → trans photoisomerization of such derivative of azobenzene, by means of a surface hopping method. The potential energy surfaces and nonadiabatic couplings are computed "on the fly" with a hybrid QM/MM strategy, in which the quantum mechanical subsystem is treated semiempirically. The simulations show that the photoisomerization is fast (about 200 fs) and occurs with high quantum yields, as in free azobenzene. However, the two cyclopeptides are not promptly separated, and the breaking of the hydrogen bonds requires longer times (at least several picoseconds), with the intervention of the solvent molecules (water). As a consequence, the resulting trans-azobenzene is severely distorted, and we show how its approach to the equilibrium geometry could be monitored by time-resolved absorption spectroscopy.

  17. Computer model predictions of the local effects of large, solid-fuel rocket motors on stratospheric ozone. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zittel, P.F.

    1994-09-10

    The solid-fuel rocket motors of large space launch vehicles release gases and particles that may significantly affect stratospheric ozone densities along the vehicle's path. In this study, standard rocket nozzle and flowfield computer codes have been used to characterize the exhaust gases and particles through the afterburning region of the solid-fuel motors of the Titan IV launch vehicle. The models predict that a large fraction of the HCl gas exhausted by the motors is converted to Cl and Cl2 in the plume afterburning region. Estimates of the subsequent chemistry suggest that on expansion into the ambient daytime stratosphere, the highly reactive chlorine may significantly deplete ozone in a cylinder around the vehicle track that ranges from 1 to 5 km in diameter over the altitude range of 15 to 40 km. The initial ozone depletion is estimated to occur on a time scale of less than 1 hour. After the initial effects, the dominant chemistry of the problem changes, and new models are needed to follow the further expansion, or closure, of the ozone hole on a longer time scale.

  18. Unification of the family of Garrison-Wright's phases.

    PubMed

    Cui, Xiao-Dong; Zheng, Yujun

    2014-07-24

    Inspired by Garrison and Wright's seminal work on complex-valued geometric phases, we generalize the concept of Pancharatnam's "in-phase" in interferometry and further develop a theoretical framework for unification of the abelian geometric phases for a biorthogonal quantum system modeled by a parameterized or time-dependent nonhermitian hamiltonian with a finite and nondegenerate instantaneous spectrum, that is, the family of Garrison-Wright's phases, which are no longer confined to the adiabatic and nonadiabatic cyclic cases. In addition, we employ a typical example, the Bethe-Lamb model, to illustrate how to apply our theory to obtain an explicit result for the Garrison-Wright noncyclic geometric phase, and also to present its potential applications in quantum computation and information.

  19. Magnetic circular dichroism of chlorofullerenes: Experimental and computational study

    NASA Astrophysics Data System (ADS)

    Štěpánek, Petr; Straka, Michal; Šebestík, Jaroslav; Bouř, Petr

    2016-03-01

    Magnetic circular dichroism (MCD) spectra of C60Cl6, C70Cl10 and C60Cl24 were measured and interpreted using a sum-over-state (SOS) protocol exploiting time dependent density functional theory (TDDFT). Unlike for plain absorption, the MCD spectra exhibited easily recognizable features specific to each chlorinated molecule and appear as a useful tool for chlorofullerene identification. The MCD spectrum of C60Cl24 was partially obscured below 400 nm due to scattering and low solubility. In all cases a finer vibrational structure of the electronic bands was observed at longer wavelengths. The TDDFT simulations provided a reasonable basis for interpretation of the most prominent spectral features.

  20. Imaging in lung transplants: Checklist for the radiologist.

    PubMed

    Madan, Rachna; Chansakul, Thanissara; Goldberg, Hilary J

    2014-10-01

    Post lung transplant complications can have overlapping clinical and imaging features, and hence, the time point at which they occur is a key distinguisher. Complications of lung transplantation may occur along a continuum in the immediate or longer postoperative period, including surgical and mechanical problems due to size mismatch and vascular as well as airway anastomotic complication, injuries from ischemia and reperfusion, acute and chronic rejection, pulmonary infections, and post-transplantation lymphoproliferative disorder. Life expectancy after lung transplantation has been limited primarily by chronic rejection and infection. Multiple detector computed tomography (MDCT) is critical for evaluation and early diagnosis of complications to enable selection of effective therapy and decrease morbidity and mortality among lung transplant recipients.

  1. Does telomere elongation lead to a longer lifespan if cancer is considered?

    NASA Astrophysics Data System (ADS)

    Masa, Michael; Cebrat, Stanisław; Stauffer, Dietrich

    2006-05-01

    As cell proliferation is limited due to the loss of telomere repeats in DNA of normal somatic cells during division, telomere attrition can possibly play an important role in determining the maximum life span of organisms as well as contribute to the process of biological ageing. With computer simulations of cell culture development in organisms, which consist of tissues of normal somatic cells with finite growth, we obtain an increase in life span and life expectancy for longer telomeric DNA in the zygote. By additionally considering a two-mutation model for carcinogenesis and indefinite proliferation by the activation of telomerase, we demonstrate that the risk of dying due to cancer can outweigh the positive effect of longer telomeres on longevity.
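
    The simulation idea can be caricatured in a few lines. The toy model below (an assumption for illustration, not the authors' model, and without the carcinogenesis component) tracks a fixed-size tissue in which dying cells are replaced by division, each division costs telomere repeats, and the organism fails once too few cells can still divide; longer zygote telomeres then translate into longer simulated lifespans.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate_lifespan(zygote_telomere, loss_per_division=50,
                              death_rate=0.05, n_cells=1000, max_age=10_000):
            """Toy tissue model: each time step a fixed fraction of cells dies and is
            replaced by division of cells that still have enough telomere repeats; the
            organism 'dies' once too few cells can divide to replace the losses."""
            telomeres = np.full(n_cells, zygote_telomere)
            for age in range(1, max_age):
                n_dead = int(death_rate * n_cells)
                dead = rng.choice(n_cells, size=n_dead, replace=False)
                parents = np.setdiff1d(np.flatnonzero(telomeres >= loss_per_division), dead)
                if parents.size < n_dead:
                    return age                                  # tissue can no longer renew itself
                chosen = rng.choice(parents, size=n_dead, replace=False)
                telomeres[chosen] -= loss_per_division          # each parent divides once
                telomeres[dead] = telomeres[chosen]             # daughters replace the dead cells
            return max_age

        for zygote_telomere in (3000, 6000, 9000):
            print(zygote_telomere, "bp ->", simulate_lifespan(zygote_telomere), "time steps")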

  2. 12 CFR 220.5 - Special memorandum account.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... may be effected by an increase or reduction in the entry. When computing the equity in a margin...) Proceeds of a sale of securities or cash no longer required on any expired or liquidated security position...

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaplan, C.R.; Shaddix, C.R.; Smyth, K.C.

    This paper presents time-dependent numerical simulations of both steady and time-varying CH4/air diffusion flames to examine the differences in combustion conditions which lead to the observed enhancement in soot production in the flickering flames. The numerical model solves the two-dimensional, time-dependent, reactive-flow Navier-Stokes equations coupled with submodels for soot formation and radiation transport. Qualitative comparisons between the experimental and computed steady flame show good agreement for the soot burnout height and overall flame shape except near the burner lip. Quantitative comparisons between experimental and computed radial profiles of temperature and soot volume fraction for the steady flame show good to excellent agreement at mid-flame heights, but some discrepancies near the burner lip and at high flame heights. For the time-varying CH4/air flame, the simulations successfully predict that the maximum soot concentration increases by over four times compared to the steady flame with the same mean fuel and air velocities. By numerically tracking fluid parcels in the flowfield, the temperature and stoichiometry history were followed along their convective pathlines. Results for the pathline which passes through the maximum sooting region show that flickering flames exhibit much longer residence times during which the local temperatures and stoichiometries are favorable for soot production. The simulations also suggest that soot inception occurs later in flickering flames, and at slightly higher temperatures and under somewhat leaner conditions compared to the steady flame. The integrated soot model of Syed et al., which was developed from a steady CH4/air flame, successfully predicts soot production in the time-varying CH4/air flames.

  4. Thermosphere Global Time Response to Geomagnetic Storms Caused by Coronal Mass Ejections

    NASA Astrophysics Data System (ADS)

    Oliveira, D. M.; Zesta, E.; Schuck, P. W.; Sutton, E. K.

    2017-10-01

    We investigate, for the first time with a spatial superposed epoch analysis study, the thermosphere global time response to 159 geomagnetic storms caused by coronal mass ejections (CMEs) observed in the solar wind at Earth's orbit during the period of September 2001 to September 2011. The thermosphere neutral mass density is obtained from the CHAMP (CHAllenge Mini-Satellite Payload) and GRACE (Gravity Recovery Climate Experiment) spacecraft. All density measurements are intercalibrated against densities computed by the Jacchia-Bowman 2008 empirical model under the regime of very low geomagnetic activity. We explore both the effects of the pre-CME shock impact on the thermosphere and of the storm main phase onset by taking their times of occurrence as zero epoch times (CME impact and interplanetary magnetic field Bz southward turning) for each storm. We find that the shock impact produces quick and transient responses at the two high-latitude regions with minimal propagation toward lower latitudes. In both cases, the thermosphere is heated in very high latitude regions within several minutes. The Bz southward turning of the storm onset has a fast heating manifestation at the two high-latitude regions, and it takes approximately 3 h for that heating to propagate down to equatorial latitudes and to globalize in the thermosphere. This heating propagation is presumably accomplished, at least in part, with traveling atmospheric disturbances and complex meridional wind structures. Current models use longer lag times in computing thermosphere density dynamics during storms. Our results suggest that the thermosphere response time scales are shorter and should be accordingly adjusted in thermospheric empirical models.
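
    The core of a superposed epoch analysis is simply to align many event time series on a common zero epoch and average them. A minimal sketch with a hypothetical density series and made-up onset times (all values below are assumptions for illustration, not CHAMP/GRACE data):

        import numpy as np

        def superposed_epoch(times, values, epoch_times, window=(-6.0, 24.0), step=0.5):
            """Align a time series on a list of zero-epoch times (hours) and average.
            `times`/`values` are the full series; `epoch_times` mark each event onset."""
            lags = np.arange(window[0], window[1] + step, step)
            stack = [np.interp(t0 + lags, times, values) for t0 in epoch_times]
            return lags, np.mean(stack, axis=0)

        # Hypothetical example: a "density" series with a bump ~3 h after each of three onsets
        t = np.arange(0.0, 500.0, 0.5)
        onsets = [100.0, 250.0, 400.0]
        rho = np.ones_like(t)
        for t0 in onsets:
            rho += 0.5 * np.exp(-0.5 * ((t - t0 - 3.0) / 2.0) ** 2)

        lags, mean_rho = superposed_epoch(t, rho, onsets)
        print("lag of peak mean response: %.1f h" % lags[np.argmax(mean_rho)])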

  5. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Mid-year report FY17 Q2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Pugmire, David; Rogers, David

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  6. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY17.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Pugmire, David; Rogers, David

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  7. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem. Mid-year report FY16 Q2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  8. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  9. A comparison between anisotropic analytical and multigrid superposition dose calculation algorithms in radiotherapy treatment planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Vincent W.C., E-mail: htvinwu@polyu.edu.hk; Tse, Teddy K.H.; Ho, Cola L.M.

    2013-07-01

    Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) by the Eclipse treatment planning system and multigrid superposition (MGS) by the XiO treatment planning system are 2 commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using their respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) soft tissue-bone boundary (Soft/Bone), (5) soft tissue-air boundary (Soft/Air), and (6) bone-air boundary (Bone/Air), were measured and compared using the mean absolute percentage error (MAPE), which was a function of the percentage dose deviations from MC. Besides, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than AAA in all types of cancers (p<0.001). With regards to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6%±2.1) were significantly lower than that of AAA (3.7%±2.5) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of the MGS (p<0.001). Both AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.
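
    The comparison metric here, the mean absolute percentage error of point doses relative to the Monte Carlo reference, is straightforward to compute. A minimal sketch with hypothetical dose values (the numbers are assumptions, not the study's data):

        import numpy as np

        def mape(doses, reference):
            """Mean absolute percentage error of point doses relative to a reference
            (here, a Monte Carlo) calculation."""
            doses, reference = np.asarray(doses, float), np.asarray(reference, float)
            return 100.0 * np.mean(np.abs(doses - reference) / reference)

        # Hypothetical point doses (Gy) at six reference points for one plan
        mc  = np.array([2.00, 1.85, 0.95, 1.60, 1.20, 0.70])   # Monte Carlo reference
        aaa = np.array([2.04, 1.91, 0.91, 1.54, 1.14, 0.66])   # AAA-like calculation
        mgs = np.array([2.02, 1.88, 0.93, 1.57, 1.17, 0.68])   # MGS-like calculation

        print(f"MAPE AAA vs MC: {mape(aaa, mc):.1f}%")
        print(f"MAPE MGS vs MC: {mape(mgs, mc):.1f}%")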

  10. Operator performance and localized muscle fatigue in a simulated space vehicle control task

    NASA Technical Reports Server (NTRS)

    Lewis, J. L., Jr.

    1979-01-01

    Fourier transforms in a special purpose computer were utilized to obtain power spectral density functions from electromyograms of the biceps brachii, triceps brachii, brachioradialis, flexor carpi ulnaris, brachialis, and pronator teres in eight subjects performing isometric tracking tasks in two directions utilizing a prototype spacecraft rotational hand controller. Analysis of these spectra in general purpose computers aided in defining muscles involved in performing the task, and yielded a derived measure potentially useful in predicting task termination. The triceps was the only muscle to show significant differences in all possible tests for simple effects in both tasks and, overall, was the most consistently involved of the six muscles. The total power monitored for triceps, biceps, and brachialis dropped to minimal levels across all subjects earlier than for other muscles. However, smaller variances existed for the biceps, brachioradialis, brachialis, and flexor carpi ulnaris muscles and could provide longer predictive times due to smaller standard deviations for a greater population range.
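
    Power spectral density estimation of an EMG channel of this kind is routinely done with Welch's method. A minimal sketch on a synthetic EMG-like signal (the sampling rate, the signal model, and the median-frequency index are assumptions for illustration, not the study's processing chain):

        import numpy as np
        from scipy.signal import welch

        fs = 1000.0                                   # assumed sampling rate, Hz
        t = np.arange(0, 5.0, 1.0 / fs)

        # Synthetic EMG-like signal: white noise band-limited around ~80 Hz
        rng = np.random.default_rng(0)
        raw = rng.normal(size=t.size)
        kernel = np.sin(2 * np.pi * 80 * np.arange(0, 0.05, 1 / fs))
        emg = np.convolve(raw, kernel, mode="same")

        f, pxx = welch(emg, fs=fs, nperseg=1024)      # power spectral density estimate

        total_power = np.trapz(pxx, f)
        median_freq = f[np.searchsorted(np.cumsum(pxx) / np.sum(pxx), 0.5)]
        print(f"total power: {total_power:.3g}, median frequency: {median_freq:.1f} Hz")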

  11. Chemical application of diffusion quantum Monte Carlo

    NASA Technical Reports Server (NTRS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1984-01-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the single-triplet splitting of the energy of the methylene molecule CH sub 2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX, are discussed. The computational time dependence obtained versus the number of basis functions is discussed and this is compared with that obtained from traditional quantum chemistry codes and that obtained from traditional computer architectures.
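
    The essence of diffusion QMC - walkers that diffuse and are replicated or killed according to the local potential, with a reference energy steered to hold the population steady - can be sketched for the 1D harmonic oscillator, whose exact ground-state energy is 0.5 in natural units. This is a minimal unguided sketch (no importance sampling), not the production algorithm discussed above; all parameter values are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def dmc_harmonic(n_target=2000, n_steps=5000, dt=0.01, alpha=1.0):
            """Minimal diffusion Monte Carlo for the 1D harmonic oscillator
            (hbar = m = omega = 1).  Walkers diffuse freely, then are replicated or
            killed with probability set by exp(-dt*(V - E_ref)); E_ref is steered to
            keep the population near n_target and averages to the ground-state energy."""
            x = rng.normal(0.0, 1.0, n_target)                       # initial walkers
            e_ref = 0.5
            samples = []
            for step in range(n_steps):
                x = x + rng.normal(0.0, np.sqrt(dt), x.size)         # diffusion step
                v = 0.5 * x**2                                       # harmonic potential
                w = np.exp(-dt * (v - e_ref))                        # branching factor
                copies = (w + rng.uniform(size=x.size)).astype(int)  # stochastic rounding
                x = np.repeat(x, copies)                             # walker birth/death
                e_ref = np.mean(0.5 * x**2) + alpha * (1.0 - x.size / n_target)
                if step > n_steps // 2:
                    samples.append(e_ref)
            return float(np.mean(samples))

        print("DMC ground-state energy estimate:", round(dmc_harmonic(), 3), "(exact value: 0.5)")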

  12. A generalized global alignment algorithm.

    PubMed

    Huang, Xiaoqiu; Chao, Kun-Mao

    2003-01-22

    Homologous sequences are sometimes similar over some regions but different over other regions. Homologous sequences have a much lower global similarity if the different regions are much longer than the similar regions. We present a generalized global alignment algorithm for comparing sequences with intermittent similarities, an ordered list of similar regions separated by different regions. A generalized global alignment model is defined to handle sequences with intermittent similarities. A dynamic programming algorithm is designed to compute an optimal general alignment in time proportional to the product of sequence lengths and in space proportional to the sum of sequence lengths. The algorithm is implemented as a computer program named GAP3 (Global Alignment Program Version 3). The generalized global alignment model is validated by experimental results produced with GAP3 on both DNA and protein sequences. The GAP3 program extends the ability of standard global alignment programs to recognize homologous sequences of lower similarity. The GAP3 program is freely available for academic use at http://bioinformatics.iastate.edu/aat/align/align.html.
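
    For orientation, the standard global alignment dynamic program that GAP3 generalizes runs in time proportional to the product of the sequence lengths. A minimal Needleman-Wunsch-style sketch with a linear gap penalty follows (illustrative scoring values; this is the plain model, not GAP3's generalized model for intermittent similarities):

        def global_alignment_score(a, b, match=1, mismatch=-1, gap=-2):
            """Needleman-Wunsch global alignment score with a linear gap penalty,
            computed in time proportional to len(a) * len(b)."""
            m, n = len(a), len(b)
            # dp[i][j] = best score aligning a[:i] with b[:j]
            dp = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(1, m + 1):
                dp[i][0] = i * gap
            for j in range(1, n + 1):
                dp[0][j] = j * gap
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
            return dp[m][n]

        print(global_alignment_score("GATTACA", "GCATGCT"))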

  13. Machinability of an experimental Ti-Ag alloy in terms of tool life in a dental CAD/CAM system.

    PubMed

    Inagaki, Ryoichi; Kikuchi, Masafumi; Takahashi, Masatoshi; Takada, Yukyo; Sasaki, Keiichi

    2015-01-01

    Titanium is difficult to machine because of its intrinsic properties. In a previous study, the machinability of titanium was improved by alloying with silver. This study aimed to evaluate the durability of tungsten carbide burs after the fabrication of frameworks using a Ti-20%Ag alloy and titanium with a computer-aided design and computer-aided manufacturing system. There was a significant difference in attrition area ratio between the two metals. Compared with titanium, the ratio of the area of attrition of machining burs was significantly lower for the experimental Ti-20%Ag alloy. The difference in the area of attrition for titanium and Ti-20%Ag became remarkable with increasing number of machining operations. The results show that the same burs can be used for a longer time with Ti-20%Ag than with pure titanium. Therefore, in terms of tool life, the machinability of the Ti-20%Ag alloy is superior to that of titanium.

  14. Acceleration of the Particle Swarm Optimization for Peierls–Nabarro modeling of dislocations in conventional and high-entropy alloys

    DOE PAGES

    Pei, Zongrui; Max-Planck-Inst. für Eisenforschung, Düsseldorf; Eisenbach, Markus

    2017-02-06

    Dislocations are among the most important defects in determining the mechanical properties of both conventional alloys and high-entropy alloys. The Peierls-Nabarro model supplies an efficient pathway to their geometries and mobility. The difficulty in solving the integro-differential Peierls-Nabarro equation is how to effectively avoid the local minima in the energy landscape of a dislocation core. Among the other methods to optimize the dislocation core structures, we choose the algorithm of Particle Swarm Optimization, an algorithm that simulates the social behaviors of organisms. By employing more particles (bigger swarm) and more iterative steps (allowing them to explore for longer time), the local minima can be effectively avoided. But this would require more computational cost. The advantage of this algorithm is that it is readily parallelized in modern high computing architecture. We demonstrate the performance of our parallelized algorithm scales linearly with the number of employed cores.
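
    The basic Particle Swarm Optimization update that the paper parallelizes is short: each particle's velocity is pulled toward its own best position and the swarm's global best. A minimal serial sketch on a standard multimodal test function (all parameter values are assumptions; the paper applies the method to the Peierls-Nabarro energy functional, not to this toy objective):

        import numpy as np

        rng = np.random.default_rng(0)

        def pso_minimize(f, dim, n_particles=40, n_steps=200,
                         w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
            """Basic PSO: bigger swarms and more steps reduce the chance of being
            trapped in a local minimum, at a higher computational cost."""
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n_particles, dim))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_val = np.apply_along_axis(f, 1, x)
            gbest = pbest[np.argmin(pbest_val)].copy()
            for _ in range(n_steps):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                vals = np.apply_along_axis(f, 1, x)
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                gbest = pbest[np.argmin(pbest_val)].copy()
            return gbest, float(pbest_val.min())

        # Example: the Rastrigin test function, which has many local minima
        rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
        best_x, best_val = pso_minimize(rastrigin, dim=2)
        print("best value found:", round(best_val, 4), "at", np.round(best_x, 3))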

  15. Core Collapse: The Race Between Stellar Evolution and Binary Heating

    NASA Astrophysics Data System (ADS)

    Converse, Joseph M.; Chandar, R.

    2012-01-01

    The dynamical formation of binary stars can dramatically affect the evolution of their host star clusters. In relatively small clusters (M < 6000 Msun) the most massive stars rapidly form binaries, heating the cluster and preventing any significant contraction of the core. The situation in much larger globular clusters (M ~ 10^5 Msun) is quite different, with many showing collapsed cores, implying that binary formation did not affect them as severely as lower mass clusters. More massive clusters, however, should take longer to form their binaries, allowing stellar evolution more time to prevent the heating by causing the larger stars to die off. Here, we simulate the evolution of clusters between those of open and globular clusters in order to find at what size a star cluster is able to experience true core collapse. Our simulations make use of a new GPU-based computing cluster recently purchased at the University of Toledo. We also present some benchmarks of this new computational resource.

  16. Energy balance and mass conservation in reduced order models of fluid flows

    NASA Astrophysics Data System (ADS)

    Mohebujjaman, Muhammad; Rebholz, Leo G.; Xie, Xuping; Iliescu, Traian

    2017-10-01

    In this paper, we investigate theoretically and computationally the conservation properties of reduced order models (ROMs) for fluid flows. Specifically, we investigate whether the ROMs satisfy the same (or similar) energy balance and mass conservation as those satisfied by the Navier-Stokes equations. All of our theoretical findings are illustrated and tested in numerical simulations of a 2D flow past a circular cylinder at a Reynolds number Re = 100. First, we investigate the ROM energy balance. We show that using the snapshot average for the centering trajectory (which is a popular treatment of nonhomogeneous boundary conditions in ROMs) yields an incorrect energy balance. Then, we propose a new approach, in which we replace the snapshot average with the Stokes extension. Theoretically, the Stokes extension produces an accurate energy balance. Numerically, the Stokes extension yields more accurate results than the standard snapshot average, especially for longer time intervals. Our second contribution centers around ROM mass conservation. We consider ROMs created using two types of finite elements: the standard Taylor-Hood (TH) element, which satisfies the mass conservation weakly, and the Scott-Vogelius (SV) element, which satisfies the mass conservation pointwise. Theoretically, the error estimates for the SV-ROM are sharper than those for the TH-ROM. Numerically, the SV-ROM yields significantly more accurate results, especially for coarser meshes and longer time intervals.

  17. A non-voxel-based broad-beam (NVBB) framework for IMRT treatment planning.

    PubMed

    Lu, Weiguo

    2010-12-07

    We present a novel framework that enables very large scale intensity-modulated radiation therapy (IMRT) planning in limited computation resources with improvements in cost, plan quality and planning throughput. Current IMRT optimization uses a voxel-based beamlet superposition (VBS) framework that requires pre-calculation and storage of a large amount of beamlet data, resulting in large temporal and spatial complexity. We developed a non-voxel-based broad-beam (NVBB) framework for IMRT capable of direct treatment parameter optimization (DTPO). In this framework, both objective function and derivative are evaluated based on the continuous viewpoint, abandoning 'voxel' and 'beamlet' representations. Thus pre-calculation and storage of beamlets are no longer needed. The NVBB framework has linear complexities (O(N^3)) in both space and time. The low memory, full computation and data parallelization nature of the framework renders it well suited to efficient implementation on the graphics processing unit (GPU). We implemented the NVBB framework and incorporated it with the TomoTherapy treatment planning system (TPS). The new TPS runs on a single workstation with one GPU card (NVBB-GPU). Extensive verification/validation tests were performed in house and via third parties. Benchmarks on dose accuracy, plan quality and throughput were compared with the commercial TomoTherapy TPS that is based on the VBS framework and uses a computer cluster with 14 nodes (VBS-cluster). For all tests, the dose accuracy of these two TPSs is comparable (within 1%). Plan qualities were comparable with no clinically significant difference for most cases except that superior target uniformity was seen in the NVBB-GPU for some cases. However, the planning time using the NVBB-GPU was reduced manyfold over the VBS-cluster. In conclusion, we developed a novel NVBB framework for IMRT optimization. The continuous viewpoint and DTPO nature of the algorithm eliminate the need for beamlets and lead to better plan quality. The computation parallelization on a GPU instead of a computer cluster significantly reduces hardware and service costs. Compared with using the current VBS framework on a computer cluster, the planning time is significantly reduced using the NVBB framework on a single workstation with a GPU card.

  18. Orbitals for classical arbitrary anisotropic colloidal potentials

    NASA Astrophysics Data System (ADS)

    Girard, Martin; Nguyen, Trung Dac; de la Cruz, Monica Olvera

    2017-11-01

    Coarse-grained potentials are ubiquitous in mesoscale simulations. While various methods to compute effective interactions for spherically symmetric particles exist, anisotropic interactions are seldom used, due to their complexity. Here we describe a general formulation, based on a spatial decomposition of the density fields around the particles, akin to atomic orbitals. We show that anisotropic potentials can be efficiently computed in numerical simulations using Fourier-based methods. We validate the field formulation and characterize its computational efficiency with a system of colloids that have Gaussian surface charge distributions. We also investigate the phase behavior of charged Janus colloids immersed in screened media, with screening lengths comparable to the colloid size. The system shows rich behaviors, exhibiting vapor, liquid, gel, and crystalline morphologies, depending on temperature and screening length. The crystalline phase only appears for symmetric Janus particles. For very short screening lengths, the system undergoes a direct transition from a vapor to a crystal on cooling; while, for longer screening lengths, a vapor-liquid-crystal transition is observed. The proposed formulation can be extended to model force fields that are time or orientation dependent, such as those in systems of polymer-grafted particles and magnetic colloids.

  19. Group Velocity Dispersion Curves from Wigner-Ville Distributions

    NASA Astrophysics Data System (ADS)

    Lloyd, Simon; Bokelmann, Goetz; Sucic, Victor

    2013-04-01

    With the widespread adoption of ambient noise tomography, and the increasing number of local earthquakes recorded worldwide due to dense seismic networks and many very dense temporary experiments, we consider it worthwhile to evaluate alternative methods to measure surface wave group velocity dispersion curves. Moreover, the increased computing power of even a simple desktop computer makes it feasible to routinely use methods other than the typically employed multiple filtering technique (MFT). To that end we perform tests with synthetic and observed seismograms using the Wigner-Ville distribution (WVD) frequency time analysis, and compare dispersion curves measured with WVD and MFT with each other. Initial results suggest WVD to be at least as good as MFT at measuring dispersion, albeit at a greater computational expense. We therefore need to investigate if, and under which circumstances, WVD yields better dispersion curves than MFT, before considering routinely applying the method. As both MFT and WVD generally work well for teleseismic events and at longer periods, we explore how well the WVD method performs at shorter periods and for local events with smaller epicentral distances. Such dispersion information could potentially be beneficial for improving velocity structure resolution within the crust.
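
    For reference, the multiple filtering technique that WVD is being compared against can be sketched compactly: narrow-band Gaussian filters are applied around a set of center periods, and the envelope peak of each filtered trace gives the group arrival time and hence the group velocity. The filter bandwidth, the synthetic dispersed signal, and all numbers below are assumptions for illustration, not a production MFT implementation:

        import numpy as np

        def mft_group_velocity(signal, dt, distance_km, periods):
            """Multiple filtering technique sketch: Gaussian narrow-band filters in the
            frequency domain; the envelope peak at each period gives the group arrival
            time, hence the group velocity U = distance / t_peak."""
            n = signal.size
            t = np.arange(n) * dt
            freqs = np.fft.rfftfreq(n, dt)
            spec = np.fft.rfft(signal)
            velocities = []
            for period in periods:
                fc = 1.0 / period
                width = 0.1 * fc                          # assumed relative filter bandwidth
                gauss = np.exp(-0.5 * ((freqs - fc) / width) ** 2)
                # analytic narrow-band trace: inverse FFT of the one-sided filtered spectrum
                analytic = 2 * np.fft.ifft(np.concatenate([spec * gauss,
                                                           np.zeros(n - freqs.size)]))
                envelope = np.abs(analytic)
                velocities.append(distance_km / t[np.argmax(envelope)])
            return np.array(velocities)

        # Hypothetical dispersed wave train: longer periods arrive earlier (faster group velocity)
        dt, n, distance = 1.0, 2048, 1000.0
        t = np.arange(n) * dt
        signal = sum(np.exp(-0.5 * ((t - distance / (2.5 + 0.01 * p)) / 30) ** 2)
                     * np.cos(2 * np.pi * t / p) for p in (20, 30, 40))
        print(np.round(mft_group_velocity(signal, dt, distance, [20, 30, 40]), 2))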

  20. THEORETICAL p-MODE OSCILLATION FREQUENCIES FOR THE RAPIDLY ROTATING δ SCUTI STAR α OPHIUCHI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deupree, Robert G., E-mail: bdeupree@ap.smu.ca

    2011-11-20

    A rotating, two-dimensional stellar model is evolved to match the approximate conditions of α Oph. Both axisymmetric and nonaxisymmetric oscillation frequencies are computed for two-dimensional rotating models which approximate the properties of α Oph. These computed frequencies are compared to the observed frequencies. Oscillation calculations are made assuming the eigenfunction can be fitted with six Legendre polynomials, but comparison calculations with eight Legendre polynomials show the frequencies agree to within about 0.26% on average. The surface horizontal shape of the eigenfunctions for the two sets of assumed number of Legendre polynomials agrees less well, but all calculations show significant departures from that of a single Legendre polynomial. It is still possible to determine the large separation, although the small separation is more complicated to estimate. With the addition of the nonaxisymmetric modes with |m| ≤ 4, the frequency space becomes sufficiently dense that it is difficult to comment on the adequacy of the fit of the computed to the observed frequencies. While the nonaxisymmetric frequency mode splitting is no longer uniform, the frequency difference between the frequencies for positive and negative values of the same m remains 2m times the rotation rate.

  1. Cost effectiveness of a computer-delivered intervention to improve HIV medication adherence

    PubMed Central

    2013-01-01

    Background High levels of adherence to medications for HIV infection are essential for optimal clinical outcomes and to reduce viral transmission, but many patients do not achieve required levels. Clinician-delivered interventions can improve patients’ adherence, but usually require substantial effort by trained individuals and may not be widely available. Computer-delivered interventions can address this problem by reducing required staff time for delivery and by making the interventions widely available via the Internet. We previously developed a computer-delivered intervention designed to improve patients’ level of health literacy as a strategy to improve their HIV medication adherence. The intervention was shown to increase patients’ adherence, but it was not clear that the benefits resulting from the increase in adherence could justify the costs of developing and deploying the intervention. The purpose of this study was to evaluate the relation of development and deployment costs to the effectiveness of the intervention. Methods Costs of intervention development were drawn from accounting reports for the grant under which its development was supported, adjusted for costs primarily resulting from the project’s research purpose. Effectiveness of the intervention was drawn from results of the parent study. The relation of the intervention’s effects to changes in health status, expressed as utilities, was also evaluated in order to assess the net cost of the intervention in terms of quality adjusted life years (QALYs). Sensitivity analyses evaluated ranges of possible intervention effectiveness and durations of its effects, and costs were evaluated over several deployment scenarios. Results The intervention’s cost effectiveness depends largely on the number of persons using it and the duration of its effectiveness. Even with modest effects for a small number of patients the intervention was associated with net cost savings in some scenarios and for durations greater than three months and longer it was usually associated with a favorable cost per QALY. For intermediate and larger assumed effects and longer durations of intervention effectiveness, the intervention was associated with net cost savings. Conclusions Computer-delivered adherence interventions may be a cost-effective strategy to improve adherence in persons treated for HIV. Trial registration Clinicaltrials.gov identifier NCT01304186. PMID:23446180

  2. Cost effectiveness of a computer-delivered intervention to improve HIV medication adherence.

    PubMed

    Ownby, Raymond L; Waldrop-Valverde, Drenna; Jacobs, Robin J; Acevedo, Amarilis; Caballero, Joshua

    2013-02-28

    High levels of adherence to medications for HIV infection are essential for optimal clinical outcomes and to reduce viral transmission, but many patients do not achieve required levels. Clinician-delivered interventions can improve patients' adherence, but usually require substantial effort by trained individuals and may not be widely available. Computer-delivered interventions can address this problem by reducing required staff time for delivery and by making the interventions widely available via the Internet. We previously developed a computer-delivered intervention designed to improve patients' level of health literacy as a strategy to improve their HIV medication adherence. The intervention was shown to increase patients' adherence, but it was not clear that the benefits resulting from the increase in adherence could justify the costs of developing and deploying the intervention. The purpose of this study was to evaluate the relation of development and deployment costs to the effectiveness of the intervention. Costs of intervention development were drawn from accounting reports for the grant under which its development was supported, adjusted for costs primarily resulting from the project's research purpose. Effectiveness of the intervention was drawn from results of the parent study. The relation of the intervention's effects to changes in health status, expressed as utilities, was also evaluated in order to assess the net cost of the intervention in terms of quality adjusted life years (QALYs). Sensitivity analyses evaluated ranges of possible intervention effectiveness and durations of its effects, and costs were evaluated over several deployment scenarios. The intervention's cost effectiveness depends largely on the number of persons using it and the duration of its effectiveness. Even with modest effects for a small number of patients the intervention was associated with net cost savings in some scenarios and for durations greater than three months and longer it was usually associated with a favorable cost per QALY. For intermediate and larger assumed effects and longer durations of intervention effectiveness, the intervention was associated with net cost savings. Computer-delivered adherence interventions may be a cost-effective strategy to improve adherence in persons treated for HIV. Clinicaltrials.gov identifier NCT01304186.
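
    The cost-effectiveness arithmetic reduces to dividing the net cost (development plus per-user deployment minus per-user savings) by the QALYs gained. A toy calculation with entirely hypothetical figures shows why the result depends so strongly on the number of persons using the intervention:

        def cost_per_qaly(development_cost, deployment_cost_per_user, n_users,
                          savings_per_user, qaly_gain_per_user):
            """Net cost per quality-adjusted life year gained, spread over all users.
            Negative values mean the intervention is cost-saving overall."""
            net_cost = development_cost + n_users * (deployment_cost_per_user - savings_per_user)
            return net_cost / (n_users * qaly_gain_per_user)

        # Hypothetical scenario: a fixed development cost amortized over more and more users
        for n_users in (100, 1_000, 10_000):
            ratio = cost_per_qaly(development_cost=200_000, deployment_cost_per_user=20,
                                  n_users=n_users, savings_per_user=50,
                                  qaly_gain_per_user=0.01)
            print(f"{n_users:>6} users -> ${ratio:,.0f} per QALY")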

  3. MOBBED: a computational data infrastructure for handling large collections of event-rich time series datasets in MATLAB

    PubMed Central

    Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A.

    2013-01-01

    Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. Source and issue reports for MOBBED are maintained at http://vislab.github.com/MobbedMatlab/ PMID:24124417

  4. MOBBED: a computational data infrastructure for handling large collections of event-rich time series datasets in MATLAB.

    PubMed

    Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A

    2013-01-01

    Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. Source and issue reports for MOBBED are maintained at http://vislab.github.com/MobbedMatlab/

  5. Quantum Metrology beyond the Classical Limit under the Effect of Dephasing

    NASA Astrophysics Data System (ADS)

    Matsuzaki, Yuichiro; Benjamin, Simon; Nakayama, Shojun; Saito, Shiro; Munro, William J.

    2018-04-01

    Quantum sensors have the potential to outperform their classical counterparts. For classical sensing, the uncertainty of the estimation of the target fields scales inversely with the square root of the measurement time T. On the other hand, by using quantum resources, we can reduce this scaling of the uncertainty with time to 1/T. However, as quantum states are susceptible to dephasing, it has not been clear whether we can achieve sensitivities with a scaling of 1/T for a measurement time longer than the coherence time. Here, we propose a scheme that estimates the amplitude of globally applied fields with the uncertainty of 1/T for an arbitrary time scale under the effect of dephasing. We use one-way quantum-computing-based teleportation between qubits to prevent any increase in the correlation between the quantum state and its local environment from building up and have shown that such a teleportation protocol can suppress the local dephasing while the information from the target fields keeps growing. Our method has the potential to realize a quantum sensor with a sensitivity far beyond that of any classical sensor.
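
    Written compactly, the two scalings contrasted above are (schematic standard notation for the uncertainty of the estimated field amplitude, not the authors' full derivation):

        % Scaling of the field-estimation uncertainty with total measurement time T
        \[
          \delta\omega_{\text{classical}} \propto \frac{1}{\sqrt{T}},
          \qquad
          \delta\omega_{\text{quantum}} \propto \frac{1}{T},
        \]
        % the open question being whether the 1/T scaling survives for T longer than the
        % coherence time; the teleportation-based protocol above aims to preserve it.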

  6. Short-term outcome of 1,465 computer-navigated primary total knee replacements 2005-2008.

    PubMed

    Gøthesen, Oystein; Espehaug, Birgitte; Havelin, Leif; Petursson, Gunnar; Furnes, Ove

    2011-06-01

    Background and purpose: Improvement of positioning and alignment by the use of computer-assisted surgery (CAS) might improve longevity and function in total knee replacements, but there is little evidence. In this study, we evaluated the short-term results of computer-navigated knee replacements based on data from the Norwegian Arthroplasty Register. Primary total knee replacements without patella resurfacing, reported to the Norwegian Arthroplasty Register during the years 2005-2008, were evaluated. The 5 most common implants and the 3 most common navigation systems were selected. Cemented, uncemented, and hybrid knees were included. With the risk of revision for any cause as the primary endpoint and intraoperative complications and operating time as secondary outcomes, 1,465 computer-navigated knee replacements (CAS) and 8,214 conventionally operated knee replacements (CON) were compared. Kaplan-Meier survival analysis and Cox regression analysis with adjustment for age, sex, prosthesis brand, fixation method, previous knee surgery, preoperative diagnosis, and ASA category were used. Kaplan-Meier estimated survival at 2 years was 98% (95% CI: 97.5-98.3) in the CON group and 96% (95% CI: 95.0-97.8) in the CAS group. The adjusted Cox regression analysis showed a higher risk of revision in the CAS group (RR = 1.7, 95% CI: 1.1-2.5; p = 0.02). The LCS Complete knee had a higher risk of revision with CAS than with CON (RR = 2.1, 95% CI: 1.3-3.4; p = 0.004). The differences were not statistically significant for the other prosthesis brands. Mean operating time was 15 min longer in the CAS group. With the introduction of computer-navigated knee replacement surgery in Norway, the short-term risk of revision has increased for computer-navigated replacement with the LCS Complete. The mechanisms of failure of these implantations should be explored in greater depth, and in this study we have not been able to draw conclusions regarding causation.

  7. The impact on revenue of increasing patient volume at surgical suites with relatively high operating room utilization.

    PubMed

    Dexter, F; Macario, A; Lubarsky, D A

    2001-05-01

    We previously studied hospitals in the United States of America that are losing money despite limiting the hours that operating room (OR) staff are available to care for patients undergoing elective surgery. These hospitals routinely keep utilization relatively high to maximize revenue. We tested, using discrete-event computer simulation, whether increasing patient volume while being reimbursed less for each additional patient can reliably achieve an increase in revenue when initial adjusted OR utilization is 90%. We found that increasing the volume of referred patients by the amount expected to fill the surgical suite (100%/90%) would increase utilization by <1% for a hospital surgical suite (with longer duration cases) and 4% for an ambulatory surgery suite (with short cases). The increase in patient volume would result in longer patient waiting times for surgery and more patients leaving the surgical queue. With a 15% reduction in payment for the new patients, the increase in volume may not increase revenue and can even decrease the contribution margin for the hospital surgical suite. The implication is that for hospitals with a relatively high OR utilization, signing discounted contracts to increase patient volume by the amount expected to "fill" the OR can have the net effect of decreasing the contribution margin (i.e., profitability). Hospitals may try to attract new surgical volume by offering discounted rates. For hospitals with a relatively high operating room utilization (e.g., 90%), computer simulations predict that increasing patient volume by the amount expected to "fill" the operating room can have the net effect of decreasing contribution margin (i.e., profitability).
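
    As a rough illustration of the mechanism (a deliberately simplified single-OR queueing sketch in Python, not the authors' discrete-event model; every parameter below is a hypothetical placeholder), adding referrals to an already highly utilized suite mostly lengthens the queue and the number of patients who give up, rather than raising utilization:

        import math
        import random

        OR_HOURS_PER_DAY = 10.0     # staffed elective hours per day (assumed)
        MEAN_CASE_H = 2.0           # mean case duration in hours (assumed)
        SIGMA = 0.5                 # lognormal shape parameter (assumed)
        MAX_WAIT_DAYS = 14          # patience before leaving the surgical queue (assumed)
        MU = math.log(MEAN_CASE_H) - SIGMA**2 / 2   # so the lognormal mean equals MEAN_CASE_H

        def poisson(lam, rng):
            """Draw a Poisson variate by inversion (adequate for small lam)."""
            limit, k, p = math.exp(-lam), 0, 1.0
            while True:
                p *= rng.random()
                if p <= limit:
                    return k
                k += 1

        def simulate(referrals_per_day, days=20000, seed=0):
            rng = random.Random(seed)
            queue = []                      # (arrival_day, case_duration_hours)
            used_hours, left_queue = 0.0, 0
            waits = []
            for day in range(days):
                # new referrals arrive
                for _ in range(poisson(referrals_per_day, rng)):
                    queue.append((day, rng.lognormvariate(MU, SIGMA)))
                # patients who have waited too long abandon the queue
                still = [(d, h) for (d, h) in queue if day - d <= MAX_WAIT_DAYS]
                left_queue += len(queue) - len(still)
                queue = still
                # fill today's staffed hours, strict first-come first-served
                hours_left = OR_HOURS_PER_DAY
                while queue and queue[0][1] <= hours_left:
                    arr_day, dur = queue.pop(0)
                    hours_left -= dur
                    used_hours += dur
                    waits.append(day - arr_day)
            util = used_hours / (OR_HOURS_PER_DAY * days)
            mean_wait = sum(waits) / len(waits) if waits else float("nan")
            return util, mean_wait, left_queue / days

        for rate in (4.5, 5.0):             # hypothetical referral rates (cases/day)
            util, wait, lost = simulate(rate)
            print(f"referrals/day={rate}: utilization={util:.1%}, "
                  f"mean wait={wait:.1f} days, abandonments/day={lost:.2f}")

    Under these assumptions, pushing referrals from 4.5 to 5.0 cases/day raises utilization only slightly while waiting time and queue abandonment grow disproportionately, which is the qualitative behaviour the simulation study describes.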

  8. Deep brain stimulation abolishes slowing of reactions to unlikely stimuli.

    PubMed

    Antoniades, Chrystalina A; Bogacz, Rafal; Kennard, Christopher; FitzGerald, James J; Aziz, Tipu; Green, Alexander L

    2014-08-13

    The cortico-basal-ganglia circuit plays a critical role in decision making on the basis of probabilistic information. Computational models have suggested how this circuit could compute the probabilities of actions being appropriate according to Bayes' theorem. These models predict that the subthalamic nucleus (STN) provides feedback that normalizes the neural representation of probabilities, such that if the probability of one action increases, the probabilities of all other available actions decrease. Here we report the results of an experiment testing a prediction of this theory that disrupting information processing in the STN with deep brain stimulation should abolish the normalization of the neural representation of probabilities. In our experiment, we asked patients with Parkinson's disease to saccade to a target that could appear in one of two locations, and the probability of the target appearing in each location was periodically changed. When the stimulator was switched off, the target probability affected the reaction times (RT) of patients in a similar way to healthy participants. Specifically, the RTs were shorter for more probable targets and, importantly, they were longer for the unlikely targets. When the stimulator was switched on, the patients were still faster for more probable targets, but critically they did not increase RTs as the target was becoming less likely. This pattern of results is consistent with the prediction of the model that the patients on DBS no longer normalized their neural representation of prior probabilities. We discuss alternative explanations for the data in the context of other published results. Copyright © 2014 the authors 0270-6474/14/3410844-09$15.00/0.
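
    For context, the divisive normalization the model predicts can be stated generically through Bayes' theorem (the textbook form, not the specific cortico-basal-ganglia implementation):

        P(a_i \mid s) = \frac{P(s \mid a_i)\, P(a_i)}{\sum_j P(s \mid a_j)\, P(a_j)}

    Because the denominator sums over all available actions, raising the prior probability of one action necessarily lowers the normalized posteriors of the alternatives; the experiment asks whether STN stimulation removes precisely this normalization while leaving the facilitation of likely targets intact.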

  9. Minimum-domain impulse theory for unsteady aerodynamic force

    NASA Astrophysics Data System (ADS)

    Kang, L. L.; Liu, L. Q.; Su, W. D.; Wu, J. Z.

    2018-01-01

    We extend the impulse theory for unsteady aerodynamics from its classic global form to a finite-domain formulation, then to a minimum-domain form, and from incompressible to compressible flows. For incompressible flow, the minimum-domain impulse theory raises the finding of Li and Lu ["Force and power of flapping plates in a fluid," J. Fluid Mech. 712, 598-613 (2012)] to a theorem: The entire force with a discrete wake is completely determined by only the time rate of the impulse of those vortical structures still connected to the body, along with a Lamb-vector integral that captures the contribution of all the remaining, disconnected vortical structures. For compressible flows, we find that the global form in terms of the curl of momentum ∇ × (ρu), obtained by Huang [Unsteady Vortical Aerodynamics (Shanghai Jiaotong University Press, 1994)], can be generalized to an arbitrary finite domain, but the formula is cumbersome and in general ∇ × (ρu) no longer has discrete structures, hence no minimum-domain theory exists. Nevertheless, as a measure of the transverse process only, the unsteady field of vorticity ω or ρω may still have a discrete wake. This leads to a minimum-domain compressible vorticity-moment theory in terms of ρω (but it is beyond the classic concept of impulse). These new findings and applications have been confirmed by our numerical experiments. The results not only open an avenue to combining the theory with computation and experiment in a wide range of applications but also reveal a physical truth: it is no longer necessary to account for all wake vortical structures in computing the force and moment.
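
    For orientation, the classic global form that the paper generalizes can be written as follows (incompressible flow with all vorticity contained in the control volume V; this is the textbook statement, the finite- and minimum-domain versions being the paper's contribution):

        \mathbf{I} = \frac{\rho}{n-1} \int_{V} \mathbf{x} \times \boldsymbol{\omega}\, \mathrm{d}V, \qquad \mathbf{F} = -\frac{\mathrm{d}\mathbf{I}}{\mathrm{d}t}

    with n = 2 or 3 the spatial dimension. The minimum-domain theorem replaces the integral over all vorticity by an integral over only the structures still connected to the body, plus a Lamb-vector term accounting for everything that has been shed.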

  10. Speed and Cardiac Recovery Variables Predict the Probability of Elimination in Equine Endurance Events

    PubMed Central

    Younes, Mohamed; Robert, Céline; Cottin, François; Barrey, Eric

    2015-01-01

    Nearly 50% of the horses participating in endurance events are eliminated at a veterinary examination (a vet gate). Detecting unfit horses before a health problem occurs and treatment is required is a challenge for veterinarians but is essential for improving equine welfare. We hypothesized that it would be possible to detect unfit horses earlier in the event by measuring heart rate recovery variables. Hence, the objective of the present study was to compute logistic regressions of heart rate, cardiac recovery time and average speed data recorded at the previous vet gate (n-1) and thus predict the probability of elimination during successive phases (n and following) in endurance events. Speed and heart rate data were extracted from an electronic database of endurance events (80–160 km in length) organized in four countries. Overall, 39% of the horses that started an event were eliminated—mostly due to lameness (64%) or metabolic disorders (15%). For each vet gate, logistic regressions of explanatory variables (average speed, cardiac recovery time and heart rate measured at the previous vet gate) and categorical variables (age and/or event distance) were computed to estimate the probability of elimination. The predictive logistic regressions for vet gates 2 to 5 correctly classified between 62% and 86% of the eliminated horses. The robustness of these results was confirmed by high areas under the receiving operating characteristic curves (0.68–0.84). Overall, a horse has a 70% chance of being eliminated at the next gate if its cardiac recovery time is longer than 11 min at vet gate 1 or 2, or longer than 13 min at vet gates 3 or 4. Heart rate recovery and average speed variables measured at the previous vet gate(s) enabled us to predict elimination at the following vet gate. These variables should be checked at each veterinary examination, in order to detect unfit horses as early as possible. Our predictive method may help to improve equine welfare and ethical considerations in endurance events. PMID:26322506
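
    As a sketch of the modelling approach (not the authors' fitted model; the data and coefficients below are synthetic placeholders), a logistic regression of elimination at the next gate on speed, heart rate, and cardiac recovery time measured at the previous gate could be set up as follows:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 500

        # Synthetic stand-ins for measurements at the previous vet gate (assumed units)
        speed = rng.normal(15, 3, n)          # average speed, km/h
        heart_rate = rng.normal(58, 6, n)     # heart rate at the gate, beats/min
        recovery = rng.normal(10, 3, n)       # cardiac recovery time, min

        # Hypothetical generating process: longer recovery and higher speed raise risk
        logit = -9.5 + 0.4 * recovery + 0.2 * speed + 0.03 * heart_rate
        eliminated = rng.random(n) < 1 / (1 + np.exp(-logit))

        X = np.column_stack([speed, heart_rate, recovery])
        model = LogisticRegression().fit(X, eliminated)

        probs = model.predict_proba(X)[:, 1]
        print("AUC on the synthetic data:", round(roc_auc_score(eliminated, probs), 2))
        print("Predicted elimination risk for a 12 min recovery time:",
              round(model.predict_proba([[15, 58, 12]])[0, 1], 2))

    The published study fits such regressions separately per vet gate, adds categorical terms for age and event distance, and evaluates them with the areas under the ROC curves quoted above.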

  11. Speed and Cardiac Recovery Variables Predict the Probability of Elimination in Equine Endurance Events.

    PubMed

    Younes, Mohamed; Robert, Céline; Cottin, François; Barrey, Eric

    2015-01-01

    Nearly 50% of the horses participating in endurance events are eliminated at a veterinary examination (a vet gate). Detecting unfit horses before a health problem occurs and treatment is required is a challenge for veterinarians but is essential for improving equine welfare. We hypothesized that it would be possible to detect unfit horses earlier in the event by measuring heart rate recovery variables. Hence, the objective of the present study was to compute logistic regressions of heart rate, cardiac recovery time and average speed data recorded at the previous vet gate (n-1) and thus predict the probability of elimination during successive phases (n and following) in endurance events. Speed and heart rate data were extracted from an electronic database of endurance events (80-160 km in length) organized in four countries. Overall, 39% of the horses that started an event were eliminated--mostly due to lameness (64%) or metabolic disorders (15%). For each vet gate, logistic regressions of explanatory variables (average speed, cardiac recovery time and heart rate measured at the previous vet gate) and categorical variables (age and/or event distance) were computed to estimate the probability of elimination. The predictive logistic regressions for vet gates 2 to 5 correctly classified between 62% and 86% of the eliminated horses. The robustness of these results was confirmed by high areas under the receiving operating characteristic curves (0.68-0.84). Overall, a horse has a 70% chance of being eliminated at the next gate if its cardiac recovery time is longer than 11 min at vet gate 1 or 2, or longer than 13 min at vet gates 3 or 4. Heart rate recovery and average speed variables measured at the previous vet gate(s) enabled us to predict elimination at the following vet gate. These variables should be checked at each veterinary examination, in order to detect unfit horses as early as possible. Our predictive method may help to improve equine welfare and ethical considerations in endurance events.

  12. Making tomorrow's mistakes today: Evolutionary prototyping for risk reduction and shorter development time

    NASA Astrophysics Data System (ADS)

    Friedman, Gary; Schwuttke, Ursula M.; Burliegh, Scott; Chow, Sanguan; Parlier, Randy; Lee, Lorrine; Castro, Henry; Gersbach, Jim

    1993-03-01

    In the early days of JPL's solar system exploration, each spacecraft mission required its own dedicated data system with all software applications written in the mainframe's native assembly language. Although these early telemetry processing systems were a triumph of engineering in their day, since that time the computer industry has advanced to the point where it is now advantageous to replace these systems with more modern technology. The Space Flight Operations Center (SFOC) Prototype group was established in 1985 as a workstation and software laboratory. The charter of the lab was to determine if it was possible to construct a multimission telemetry processing system using commercial, off-the-shelf computers that communicated via networks. The staff of the lab mirrored that of a typical skunk works operation -- a small, multi-disciplinary team with a great deal of autonomy that could get complex tasks done quickly. In an effort to determine which approaches would be useful, the prototype group experimented with all types of operating systems, inter-process communication mechanisms, network protocols, packet size parameters. Out of that pioneering work came the confidence that a multi-mission telemetry processing system could be built using high-level languages running in a heterogeneous, networked workstation environment. Experience revealed that the operating systems on all nodes should be similar (i.e., all VMS or all PC-DOS or all UNIX), and that a unique Data Transport Subsystem tool needed to be built to address the incompatibilities of network standards, byte ordering, and socket buffering. The advantages of building a telemetry processing system based on emerging industry standards were numerous: by employing these standards, we would no longer be locked into a single vendor. When new technology came to market which offered ten times the performance at one eighth the cost, it would be possible to attach the new machine to the network, re-compile the application code, and run. In addition, we would no longer be plagued with lack of manufacturer support when we encountered obscure bugs. And maybe, hopefully, the eternal elusive goal of software portability across different vendors' platforms would finally be available. Some highlights of our prototyping efforts are described.

  13. Making tomorrow's mistakes today: Evolutionary prototyping for risk reduction and shorter development time

    NASA Technical Reports Server (NTRS)

    Friedman, Gary; Schwuttke, Ursula M.; Burliegh, Scott; Chow, Sanguan; Parlier, Randy; Lee, Lorrine; Castro, Henry; Gersbach, Jim

    1993-01-01

    In the early days of JPL's solar system exploration, each spacecraft mission required its own dedicated data system with all software applications written in the mainframe's native assembly language. Although these early telemetry processing systems were a triumph of engineering in their day, since that time the computer industry has advanced to the point where it is now advantageous to replace these systems with more modern technology. The Space Flight Operations Center (SFOC) Prototype group was established in 1985 as a workstation and software laboratory. The charter of the lab was to determine if it was possible to construct a multimission telemetry processing system using commercial, off-the-shelf computers that communicated via networks. The staff of the lab mirrored that of a typical skunk works operation -- a small, multi-disciplinary team with a great deal of autonomy that could get complex tasks done quickly. In an effort to determine which approaches would be useful, the prototype group experimented with all types of operating systems, inter-process communication mechanisms, network protocols, packet size parameters. Out of that pioneering work came the confidence that a multi-mission telemetry processing system could be built using high-level languages running in a heterogeneous, networked workstation environment. Experience revealed that the operating systems on all nodes should be similar (i.e., all VMS or all PC-DOS or all UNIX), and that a unique Data Transport Subsystem tool needed to be built to address the incompatibilities of network standards, byte ordering, and socket buffering. The advantages of building a telemetry processing system based on emerging industry standards were numerous: by employing these standards, we would no longer be locked into a single vendor. When new technology came to market which offered ten times the performance at one eighth the cost, it would be possible to attach the new machine to the network, re-compile the application code, and run. In addition, we would no longer be plagued with lack of manufacturer support when we encountered obscure bugs. And maybe, hopefully, the eternal elusive goal of software portability across different vendors' platforms would finally be available. Some highlights of our prototyping efforts are described.

  14. Distractions, distractions: does instant messaging affect college students' performance on a concurrent reading comprehension task?

    PubMed

    Fox, Annie Beth; Rosen, Jonathan; Crawford, Mary

    2009-02-01

    Instant messaging (IM) has become one of the most popular forms of computer-mediated communication (CMC) and is especially prevalent on college campuses. Previous research suggests that IM users often multitask while conversing online. To date, no one has yet examined the cognitive effect of concurrent IM use. Participants in the present study (N = 69) completed a reading comprehension task uninterrupted or while concurrently holding an IM conversation. Participants who IMed while performing the reading task took significantly longer to complete the task, indicating that concurrent IM use negatively affects efficiency. Concurrent IM use did not affect reading comprehension scores. Additional analyses revealed that the more time participants reported spending on IM, the lower their reading comprehension scores. Finally, we found that the more time participants reported spending on IM, the lower their self-reported GPA. Implications and future directions are discussed.

  15. Induced mood and selective attention.

    PubMed

    Brand, N; Verspui, L; Oving, A

    1997-04-01

    Subjects (N = 60) were randomly assigned to an elated, depressed, or neutral mood-induction condition to assess the effect of mood state on cognitive functioning. In the elated condition, film fragments expressing happiness and euphoria were shown. In the depressed condition, frightening and distressing film fragments were presented. The neutral group watched no film. Mood states were measured using the Profile of Mood States, and a Stroop task assessed selective attention. Both were presented by computer. The induction groups differed significantly in the expected direction on the mood subscales Anger, Tension, Depression, Vigour, and Fatigue, and also in the mean scale response times, i.e., slower responses for the depressed condition and faster for the elated one. Differences between conditions were also found in the errors on the Stroop task: the depressed condition produced the fewest errors and significantly longer error reaction times. Error speed was associated with self-reported fatigue.

  16. Ultrasonic Imaging Modalities for Medical Applications

    NASA Astrophysics Data System (ADS)

    Ahmed, Mahfuz; Wade, Glen; Wang, Keith

    1980-06-01

    The ability to "see" with sound has long been an intriguing concept. Certain animals, such as bats and dolphins, can do it readily, but the human species is not so endowed by nature. However, this lack of natural ability has been overcome by developing an appropriate technology. For example, in various laboratories recently, workers were able to obtain true-focused orthographic images in real time of objects irradiated with sound rather than with light. Cross-sectional images have been available for a much longer period of time, stemming from the development of pulse-echo techniques first used in the sonar systems of World War I. By now a wide variety of system concepts for acoustic imaging exist and have been or are being applied for medical diagnosis. The newer systems range from tomographic types using computers to holographic ones using lasers. These are dealt with briefly here.

  17. Ultrafast spectroscopy reveals subnanosecond peptide conformational dynamics and validates molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Spörlein, Sebastian; Carstens, Heiko; Satzger, Helmut; Renner, Christian; Behrendt, Raymond; Moroder, Luis; Tavan, Paul; Zinth, Wolfgang; Wachtveitl, Josef

    2002-06-01

    Femtosecond time-resolved spectroscopy on model peptides with built-in light switches combined with computer simulation of light-triggered motions offers an attractive integrated approach toward the understanding of peptide conformational dynamics. It was applied to monitor the light-induced relaxation dynamics occurring on subnanosecond time scales in a peptide that was backbone-cyclized with an azobenzene derivative as optical switch and spectroscopic probe. The femtosecond spectra permit the clear distinguishing and characterization of the subpicosecond photoisomerization of the chromophore, the subsequent dissipation of vibrational energy, and the subnanosecond conformational relaxation of the peptide. The photochemical cis/trans-isomerization of the chromophore and the resulting peptide relaxations have been simulated with molecular dynamics calculations. The calculated reaction kinetics, as monitored by the energy content of the peptide, were found to match the spectroscopic data. Thus we verify that all-atom molecular dynamics simulations can quantitatively describe the subnanosecond conformational dynamics of peptides, strengthening confidence in corresponding predictions for longer time scales.

  18. TongueToSpeech (TTS): Wearable wireless assistive device for augmented speech.

    PubMed

    Marjanovic, Nicholas; Piccinini, Giacomo; Kerr, Kevin; Esmailbeigi, Hananeh

    2017-07-01

    Speech is an important aspect of human communication; individuals with speech impairment are unable to communicate vocally in real time. Our team has developed the TongueToSpeech (TTS) device with the goal of augmenting speech communication for the vocally impaired. The proposed device is a wearable wireless assistive device that incorporates a capacitive touch keyboard interface embedded inside a discrete retainer. This device connects to a computer, tablet, or smartphone via a Bluetooth connection. The developed TTS application converts text typed by the tongue into audible speech. Our studies have concluded that an 8-contact point configuration between the tongue and the TTS device would yield the best user precision and speed performance. On average, typing a phrase with the TTS device inside the oral cavity takes 2.5 times longer than typing the same phrase with the pointer finger on a T9 (Text on 9 keys) keyboard configuration. In conclusion, we have developed a discrete noninvasive wearable device that allows vocally impaired individuals to communicate in real time.

  19. Comparison of the results of several heat transfer computer codes when applied to a hypothetical nuclear waste repository

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Claiborne, H.C.; Wagner, R.S.; Just, R.A.

    1979-12-01

    A direct comparison of transient thermal calculations was made with the heat transfer codes HEATING5, THAC-SIP-3D, ADINAT, SINDA, TRUMP, and TRANCO for a hypothetical nuclear waste repository. With the exception of TRUMP and SINDA (actually closer to the earlier CINDA3G version), the other codes agreed to within ±5% for the temperature rises as a function of time. The TRUMP results agreed within ±5% up to about 50 years, where the maximum temperature occurs, and then began an oscillatory behavior with up to 25% deviations at longer times. This could have resulted from time steps that were too large or from some unknown system problems. The available version of the SINDA code was not compatible with the IBM compiler without using an alternative method for handling a variable thermal conductivity. The results were about 40% low, but a reasonable agreement was obtained by assuming a uniform thermal conductivity; however, a programming error was later discovered in the alternative method. Some work is required on the IBM version to make it compatible with the system and still use the recommended method of handling variable thermal conductivity. TRANCO can only be run as a 2-D model, and TRUMP and CINDA apparently required longer running times and did not agree in the 2-D case; therefore, only HEATING5, THAC-SIP-3D, and ADINAT were used for the 3-D model calculations. The codes agreed within ±5%; at distances of about 1 ft from the waste canister edge, temperature rises were also close to those predicted by the 3-D model.

  20. Iris colour in relation to myopia among Chinese school-aged children.

    PubMed

    Pan, Chen-Wei; Qiu, Qin-Xiao; Qian, Deng-Juan; Hu, Dan-Ning; Li, Jun; Saw, Seang-Mei; Zhong, Hua

    2018-01-01

    Understanding the association of iris colour and myopia may provide further insights into the role of the wavelength of lights in the pathophysiology of myopia. We aim to assess the association of iris colour and myopia in a school-based sample of Chinese students. Two thousand three hundred and forty-six Year 7 students from 10 middle schools (93.5% response rate) aged 13-14 years in Mojiang, a small county located in Southwestern China, participated in the study. We obtained standardised slit lamp photographs and developed a grading system assessing iris colour (higher grade denoting a darker iris). Refractive error was measured after cycloplegia using an autorefractor by optometrists or trained technicians. An IOLMaster (www.zeiss.com) was used to measure ocular biometric parameters including axial length (AL). Of all the study participants, 693 (29.5%) were affected by myopia with the prevalence estimates being higher in girls (36.8%; 95% confidence interval [CI]: 34.0, 39.6) than in boys (22.8%; 95% CI: 20.4, 25.1) (p < 0.001). After adjusting for gender, height, parental history of myopia, time spent on computer, time spent watching TV, time spent outdoors, and time spent reading and writing, participants with a darker iris colour tended to have a higher prevalence of myopia, a more myopic refraction and a longer AL. Dose-response relationships were observed in all regression models (p for trend <0.05). Darker iris colour was associated with more myopic refractive errors and longer ALs among Chinese school-aged children and this association was independent of other known myopia-related risk factors. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.

  1. Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds

    NASA Astrophysics Data System (ADS)

    Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano

    Grid computing has evolved considerably over the past years, and its capabilities have found their way even into business products and are no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific open source or industrial grid products, but rather comprises a set of capabilities that can be embedded in virtually any kind of software to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active field of grid computing applications is the full virtualization of scientific instruments, which increases their availability and decreases operating and maintenance costs. Computational and information grids allow real-world objects to be managed in a service-oriented way using widespread industrial standards.

  2. Cutoff size need not strongly influence molecular dynamics results for solvated polypeptides.

    PubMed

    Beck, David A C; Armen, Roger S; Daggett, Valerie

    2005-01-18

    The correct treatment of van der Waals and electrostatic nonbonded interactions in molecular force fields is essential for performing realistic molecular dynamics (MD) simulations of solvated polypeptides. The most computationally tractable treatment of nonbonded interactions in MD utilizes a spherical distance cutoff (typically 8-12 Å) to reduce the number of pairwise interactions. In this work, we assess three spherical atom-based cutoff approaches for use with all-atom explicit solvent MD: abrupt truncation, a CHARMM-style electrostatic shift truncation, and our own force-shifted truncation. The chosen system for this study is an end-capped 17-residue alanine-based alpha-helical peptide, selected because of its use in previous computational and experimental studies. We compare the time-averaged helical content calculated from these MD trajectories with experiment. We also examine the effect of varying the cutoff treatment and distance on energy conservation. We find that the abrupt truncation approach is pathological in its inability to conserve energy. The CHARMM-style shift truncation performs quite well but suffers from energetic instability. On the other hand, the force-shifted spherical cutoff method conserves energy, correctly predicts the experimental helical content, and shows convergence in simulation statistics as the cutoff is increased. This work demonstrates that by using proper and rigorous techniques, it is possible to correctly model polypeptide dynamics in solution with a spherical cutoff. The inherent computational advantage of spherical cutoffs over Ewald summation (and related) techniques is essential in accessing longer MD time scales.
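
    As a generic illustration (the standard force-shifted form found in the MD literature; the functional form used by the authors may differ in detail), a pair potential V(r) truncated at r_c can be shifted so that both the energy and the force go continuously to zero at the cutoff:

        V_{\mathrm{fs}}(r) = V(r) - V(r_c) - (r - r_c) \left.\frac{\mathrm{d}V}{\mathrm{d}r}\right|_{r_c} \quad (r \le r_c), \qquad V_{\mathrm{fs}}(r) = 0 \quad (r > r_c)

    Removing the force discontinuity at the cutoff is what permits the energy conservation reported above, in contrast to abrupt truncation.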

  3. Wearable wireless User Interface Cursor-Controller (UIC-C).

    PubMed

    Marjanovic, Nicholas; Kerr, Kevin; Aranda, Ricardo; Hickey, Richard; Esmailbeigi, Hananeh

    2017-07-01

    Controlling a computer or a smartphone's cursor allows the user to access a world full of information. For millions of people with limited upper-extremity motor function, controlling the cursor becomes profoundly difficult. Our team has developed the User Interface Cursor-Controller (UIC-C) to assist impaired individuals in regaining control over the cursor. The UIC-C is a hands-free device that utilizes the tongue muscle to control cursor movements. The entire device is housed inside a subject-specific retainer. The user maneuvers the cursor by manipulating a joystick embedded inside the retainer with the tongue. The joystick movement commands are sent to an electronic device via a Bluetooth connection, and the device is readily recognizable as a cursor controller by any Bluetooth-enabled electronic device. Device testing has shown that the time it takes the user to control the cursor accurately via the UIC-C is about three times longer than with a standard computer mouse controlled by hand. The device does not require any permanent modifications to the body; therefore, it could be used during the period of acute rehabilitation of the hands. With the development of modern smart homes and of electronics controlled by the computer, the UIC-C could be integrated into a system that gives individuals with permanent impairment the ability to control the cursor. In conclusion, the UIC-C device is designed with the goal of allowing the user to accurately control a cursor during periods of either acute or permanent upper-extremity impairment.

  4. Discharge-measurement system using an acoustic Doppler current profiler with applications to large rivers and estuaries

    USGS Publications Warehouse

    Simpson, Michael R.; Oltmann, Richard N.

    1993-01-01

    Discharge measurement of large rivers and estuaries is difficult, time-consuming, and sometimes dangerous. Frequently, discharge measurements cannot be made in tide-affected rivers and estuaries using conventional discharge-measurement techniques because of dynamic discharge conditions. The acoustic Doppler discharge-measurement system (ADDMS) was developed by the U.S. Geological Survey using a vessel-mounted acoustic Doppler current profiler coupled with specialized computer software to measure horizontal water velocity at 1-meter vertical intervals in the water column. The system computes discharge from water- and vessel-velocity data supplied by the ADDMS using vector-algebra algorithms included in the discharge-measurement software. With this system, a discharge measurement can be obtained by engaging the computer software and traversing a river or estuary from bank to bank; discharge in parts of the river or estuarine cross section that cannot be measured because of ADDMS depth limitations is estimated by the system. Comparisons of ADDMS-measured discharges with ultrasonic-velocity-meter-measured discharges, along with error-analysis data, have confirmed that discharges provided by the ADDMS are at least as accurate as those produced using conventional methods. In addition, the advantage of a much shorter measurement time (2 minutes using the ADDMS compared with 1 hour or longer using conventional methods) has enabled use of the ADDMS for several applications where conventional discharge methods could not have been used with the required accuracy because of dynamic discharge conditions.
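
    Schematically, the moving-vessel discharge computation takes the cross-product form (a generic statement of the method; the implementation details in this report may differ):

        Q = \int_{0}^{T} \int_{z_1}^{z_2} \left( \mathbf{v}_w \times \mathbf{v}_b \right) \cdot \hat{\mathbf{k}} \, \mathrm{d}z \, \mathrm{d}t

    where \mathbf{v}_w is the measured water velocity, \mathbf{v}_b is the vessel velocity, \hat{\mathbf{k}} is the vertical unit vector, and the integration runs over the profiled depth range and the duration of the bank-to-bank transect; the unmeasured near-surface, near-bottom, and near-bank zones are estimated separately, as noted above.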

  5. Some key considerations in evolving a computer system and software engineering support environment for the space station program

    NASA Technical Reports Server (NTRS)

    Mckay, C. W.; Bown, R. L.

    1985-01-01

    The space station data management system involves networks of computing resources that must work cooperatively and reliably over an indefinite life span. This program requires a long schedule of modular growth and an even longer period of maintenance and operation. The development and operation of space station computing resources will involve a spectrum of systems and software life cycle activities distributed across a variety of hosts, an integration, verification, and validation host with test bed, and distributed targets. The requirement for the early establishment and use of an appropriate Computer Systems and Software Engineering Support Environment is identified. This environment will support meeting the Research and Development Productivity challenges presented by the space station computing system.

  6. XVIS: Visualization for the Extreme-Scale Scientific-Computation Ecosystem Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geveci, Berk; Maynard, Robert

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project brought together collaborators from predominant DOE projects for visualization on accelerators and combined their respective features into a new visualization toolkit called VTK-m.

  7. Spatiotemporal correlation structure of the Earth's surface temperature

    NASA Astrophysics Data System (ADS)

    Fredriksen, Hege-Beate; Rypdal, Kristoffer; Rypdal, Martin

    2015-04-01

    We investigate the spatiotemporal temperature variability for several gridded instrumental and climate model data sets. The temporal variability is analysed by estimating the power spectral density and studying the differences between local and global temperatures, land and sea, and among local temperature records at different locations. The spatiotemporal correlation structure is analysed through cross-spectra that allow us to compute frequency-dependent spatial autocorrelation functions (ACFs). Our results are then compared to theoretical spectra and frequency-dependent spatial ACFs derived from a fractional stochastic-diffusive energy balance model (FEBM). From the FEBM we expect both local and global temperatures to have a long-range persistent temporal behaviour, and the spectral exponent (β) is expected to increase by a factor of two when going from local to global scales. Our comparison of the average local spectrum and the global spectrum shows good agreement with this model, although the FEBM has so far only been studied for a pure land planet and a pure ocean planet, with no seasonal forcing. Hence it cannot capture the substantial variability among the local spectra, in particular between the spectra for land and sea, and for equatorial and non-equatorial temperatures. Both models and observational data show that land temperatures in general have a low persistence, while sea surface temperatures show a higher, and also more variable, degree of persistence. Near the equator the spectra deviate from the power-law shape expected from the FEBM. Instead we observe large variability at time scales of a few years due to ENSO, and a flat spectrum at longer time scales, making the spectrum more reminiscent of that of a red noise process. From the frequency-dependent spatial ACFs we observe that the spatial correlation length increases with increasing time scale, which is also consistent with the FEBM. One consequence of this is that longer-lasting structures must also be wider in space. The spatial correlation length is also observed to be longer for land than for sea. The climate model simulations studied are mainly CMIP5 control runs of length 500-1000 yr. On time scales up to several centuries we do not observe that the difference between the local and global spectral exponents vanishes. This also follows from the FEBM and shows that the dynamics is spatiotemporal (not just temporal) even on these time scales.
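
    In shorthand, the persistence description above corresponds to a power-law spectral density

        S(f) \propto f^{-\beta}

    with the FEBM prediction, restated from above, that the exponent roughly doubles in going from local to globally averaged temperature, \beta_{\mathrm{global}} \approx 2\,\beta_{\mathrm{local}}.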

  8. Modelling the time at which overcrowding and feed interruption emerge on the swine premises under movement restrictions during a classical swine fever outbreak.

    PubMed

    Weng, H Y; Yadav, S; Olynk Widmar, N J; Croney, C; Ash, M; Cooper, M

    2017-03-01

    A stochastic risk model was developed to estimate the time elapsed before overcrowding (TOC) or feed interruption (TFI) emerged on the swine premises under movement restrictions during a classical swine fever (CSF) outbreak in Indiana, USA. Nursery (19 to 65 days of age) and grow-to-finish (40 to 165 days of age) pork production operations were modelled separately. Overcrowding was defined as the total weight of pigs on premises exceeding 100% to 115% of the maximum capacity of the premises, which was computed as the total weight of the pigs at harvest/transition age. Algorithms were developed to estimate age-specific weight of the pigs on premises and to compare the daily total weight of the pigs with the threshold weight defining overcrowding to flag the time when the total weight exceeded the threshold (i.e. when overcrowding occurred). To estimate TFI, an algorithm was constructed to model a swine producer's decision to discontinue feed supply by incorporating the assumptions that a longer estimated epidemic duration, a longer time interval between the age of pigs at the onset of the outbreak and the harvest/transition age, or a longer progression of an ongoing outbreak would increase the probability of a producer's decision to discontinue the feed supply. Adverse animal welfare conditions were modelled to emerge shortly after an interruption of feed supply. Simulations were run with 100 000 iterations each for a 365-day period. Overcrowding occurred in all simulated iterations, and feed interruption occurred in 30% of the iterations. The median (5th and 95th percentiles) TOC was 24 days (10, 43) in nursery operations and 78 days (26, 134) in grow-to-finish operations. Most feed interruptions, if they emerged, occurred within 15 days of an outbreak. The median (5th and 95th percentiles) time at which either overcrowding or feed interruption emerged was 19 days (4, 42) in nursery and 57 days (4, 130) in grow-to-finish operations. The study findings suggest that overcrowding and feed interruption could emerge early during a CSF outbreak among swine premises under movement restrictions. The outputs derived from the risk model could be used to estimate and evaluate associated mitigation strategies for alleviating adverse animal welfare conditions resulting from movement restrictions.
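
    A minimal sketch of the overcrowding (TOC) logic described above, written in Python with entirely hypothetical growth and capacity parameters (the published model uses production data, age-specific growth curves, and 100 000 stochastic iterations):

        def pig_weight_kg(age_days):
            """Hypothetical growth curve: ~2 kg at birth with ~0.75 kg/day average gain."""
            return 2.0 + 0.75 * age_days

        def time_to_overcrowding(ages_at_ban, harvest_age=165.0, threshold_frac=1.10,
                                 horizon_days=365):
            """Return the first day the on-premises weight exceeds the capacity threshold.

            Capacity is defined as in the study: the total weight of the same pigs at
            harvest/transition age, scaled by threshold_frac (100% to 115% in the paper).
            """
            capacity = threshold_frac * sum(pig_weight_kg(harvest_age) for _ in ages_at_ban)
            for day in range(horizon_days):
                total = sum(pig_weight_kg(age + day) for age in ages_at_ban)
                if total > capacity:
                    return day
            return None  # never overcrowded within the horizon

        # Example: a grow-to-finish barn placed under movement restrictions at 120 days of age.
        ages = [120.0] * 1000
        print("Days until overcrowding:", time_to_overcrowding(ages))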

  9. Human-computer interaction: psychological aspects of the human use of computing.

    PubMed

    Olson, Gary M; Olson, Judith S

    2003-01-01

    Human-computer interaction (HCI) is a multidisciplinary field in which psychology and other social sciences unite with computer science and related technical fields with the goal of making computing systems that are both useful and usable. It is a blend of applied and basic research, both drawing from psychological research and contributing new ideas to it. New technologies continuously challenge HCI researchers with new options, as do the demands of new audiences and uses. A variety of usability methods have been developed that draw upon psychological principles. HCI research has expanded beyond its roots in the cognitive processes of individual users to include social and organizational processes involved in computer usage in real environments as well as the use of computers in collaboration. HCI researchers need to be mindful of the longer-term changes brought about by the use of computing in a variety of venues.

  10. Should We Remove the Retrievable Cook Celect Inferior Vena Cava Filter? Eight Years of Experience at a Single Center.

    PubMed

    Son, Joohyung; Bae, Miju; Chung, Sung Woon; Lee, Chung Won; Huh, Up; Song, Seunghwan

    2017-12-01

    The inferior vena cava filter (IVCF) is very effective for preventing pulmonary embolism in patients who cannot undergo anticoagulation therapy. However, if a filter is placed in the body permanently, it may lead to other complications. A retrospective study was performed of 159 patients who underwent retrievable Cook Celect IVCF implantation between January 2007 and April 2015 at a single center. Baseline characteristics, indications, and complications caused by the filter were investigated. The most common underlying disease of patients receiving the filter was cancer (24.3%). Venous thrombolysis or thrombectomy was the most common indication for IVCF insertion in this study (47.2%). The most common complication was inferior vena cava penetration, the risk of which increased the longer the filter remained in the body (p=0.032, Exp(B)=1.004). If the patient is able to retry anticoagulation therapy and the filter is no longer needed, the filter should be removed, even if a long time has elapsed since implantation. If the filter cannot be removed, it is recommended that follow-up computed tomography be performed regularly to monitor the progress of venous thromboembolisms as well as any filter-related complications.

  11. Effects of ketoconazole on the pharmacokinetics and pharmacodynamics of morphine in healthy Greyhounds.

    PubMed

    Kukanich, Butch; Borum, Stacy L

    2008-05-01

    To assess pharmacokinetics and pharmacodynamics of morphine and the effects of ketoconazole on the pharmacokinetics and pharmacodynamics of morphine in healthy Greyhounds. 6 healthy Greyhounds, 3 male and 3 female. Morphine sulfate (0.5 mg/kg, IV) was administered to Greyhounds prior to and after 5 days of ketoconazole (12.7 +/- 0.6 mg/kg, PO) treatment. Plasma samples were obtained from blood samples that were collected at predetermined time points for measurement of morphine and ketoconazole concentrations by mass spectrometry. Pharmacokinetics of morphine were estimated by use of computer software. Pharmacodynamic effects of morphine in Greyhounds were similar to those of other studies in dogs and were similar between treatment groups. Morphine was rapidly eliminated with a half-life of 1.28 hours and a plasma clearance of 32.55 mL/min/kg. The volume of distribution was 3.6 L/kg. No significant differences in the pharmacokinetics of morphine were found after treatment with ketoconazole. Plasma concentrations of ketoconazole were high and persisted longer than expected in Greyhounds. Ketoconazole had no significant effect on morphine pharmacokinetics, and the pharmacodynamics were similar between treatment groups. Plasma concentrations of ketoconazole were higher than expected and persisted longer than expected in Greyhounds.
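
    The reported values are mutually consistent under the standard one-compartment relation between half-life, volume of distribution, and clearance:

        t_{1/2} = \frac{\ln 2 \cdot V_d}{CL} = \frac{0.693 \times 3.6\ \mathrm{L/kg}}{32.55\ \mathrm{mL/min/kg}} \approx \frac{2.49\ \mathrm{L/kg}}{1.95\ \mathrm{L/(h\cdot kg)}} \approx 1.28\ \mathrm{h}

    which matches the half-life quoted above.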

  12. National performance on door-in to door-out time among patients transferred for primary percutaneous coronary intervention.

    PubMed

    Herrin, Jeph; Miller, Lauren E; Turkmani, Dima F; Nsa, Wato; Drye, Elizabeth E; Bernheim, Susannah M; Ling, Shari M; Rapp, Michael T; Han, Lein F; Bratzler, Dale W; Bradley, Elizabeth H; Nallamothu, Brahmajee K; Ting, Henry H; Krumholz, Harlan M

    2011-11-28

    Delays in treatment time are commonplace for patients with ST-segment elevation acute myocardial infarction who must be transferred to another hospital for percutaneous coronary intervention. Experts have recommended that door-in to door-out (DIDO) time (ie, time from arrival at the first hospital to transfer from that hospital to the percutaneous coronary intervention hospital) should not exceed 30 minutes. We sought to describe national performance in DIDO time using a new measure developed by the Centers for Medicare & Medicaid Services. We report national median DIDO time and examine associations with patient characteristics (age, sex, race, contraindication to fibrinolytic therapy, and arrival time) and hospital characteristics (number of beds, geographic region, location [rural or urban], and number of cases reported) using a mixed effects multivariable model. Among 13,776 included patients from 1034 hospitals, only 1343 (9.7%) had a DIDO time within 30 minutes, and DIDO exceeded 90 minutes for 4267 patients (31.0%). Mean estimated times (95% CI) to transfer based on multivariable analysis were 8.9 (5.6-12.2) minutes longer for women, 9.1 (2.7-16.0) minutes longer for African Americans, 6.9 (1.6-11.9) minutes longer for patients with contraindication to fibrinolytic therapy, shorter for all age categories (except >75 years) relative to the category of 18 to 35 years, 15.3 (7.3-23.5) minutes longer for rural hospitals, and 14.4 (6.6-21.3) minutes longer for hospitals with 9 or fewer transfers vs 15 or more in 2009 (all P < .001). Among patients presenting to emergency departments and requiring transfer to another facility for percutaneous coronary intervention, the DIDO time rarely met the recommended 30 minutes.

  13. National Performance on Door-In to Door-Out Time Among Patients Transferred for Primary Percutaneous Coronary Intervention

    PubMed Central

    Herrin, Jeph; Miller, Lauren E.; Turkmani, Dima F.; Nsa, Wato; Drye, Elizabeth E.; Bernheim, Susannah M.; Ling, Shari M.; Rapp, Michael T.; Han, Lein F.; Bratzler, Dale W.; Bradley, Elizabeth H.; Nallamothu, Brahmajee K.; Ting, Henry H.; Krumholz, Harlan M.

    2015-01-01

    Background Delays in treatment time are commonplace for patients with ST-segment elevation acute myocardial infarction who must be transferred to another hospital for percutaneous coronary intervention. Experts have recommended that door-in to door-out (DIDO) time (ie, time from arrival at the first hospital to transfer from that hospital to the percutaneous coronary intervention hospital) should not exceed 30 minutes. We sought to describe national performance in DIDO time using a new measure developed by the Centers for Medicare & Medicaid Services. Methods We report national median DIDO time and examine associations with patient characteristics (age, sex, race, contraindication to fibrinolytic therapy, and arrival time) and hospital characteristics (number of beds, geographic region, location [rural or urban], and number of cases reported) using a mixed effects multivariable model. Results Among 13 776 included patients from 1034 hospitals, only 1343 (9.7%) had a DIDO time within 30 minutes, and DIDO exceeded 90 minutes for 4267 patients (31.0%). Mean estimated times (95% CI) to transfer based on multivariable analysis were 8.9 (5.6-12.2) minutes longer for women, 9.1 (2.7-16.0) minutes longer for African Americans, 6.9 (1.6-11.9) minutes longer for patients with contraindication to fibrinolytic therapy, shorter for all age categories (except >75 years) relative to the category of 18 to 35 years, 15.3 (7.3-23.5) minutes longer for rural hospitals, and 14.4 (6.6-21.3) minutes longer for hospitals with 9 or fewer transfers vs 15 or more in 2009 (all P<.001). Conclusion Among patients presenting to emergency departments and requiring transfer to another facility for percutaneous coronary intervention, the DIDO time rarely met the recommended 30 minutes. PMID:22123793

  14. GAMUT: GPU accelerated microRNA analysis to uncover target genes through CUDA-miRanda

    PubMed Central

    2014-01-01

    Background Non-coding sequences such as microRNAs have important roles in disease processes. Computational microRNA target identification (CMTI) is becoming increasingly important since traditional experimental methods for target identification pose many difficulties. These methods are time-consuming, costly, and often need guidance from computational methods to narrow down candidate genes anyway. However, most CMTI methods are computationally demanding, since they need to handle not only several million query microRNA and reference RNA pairs, but also several million nucleotide comparisons within each given pair. Thus, the need to perform microRNA identification at such a large scale has increased the demand for parallel computing. Methods Although most CMTI programs (e.g., the miRanda algorithm) are based on a modified Smith-Waterman (SW) algorithm, the existing parallel SW implementations (e.g., CUDASW++ 2.0/3.0, SWIPE) are unable to meet this demand in CMTI tasks. We present CUDA-miRanda, a fast microRNA target identification algorithm that takes advantage of massively parallel computing on Graphics Processing Units (GPU) using NVIDIA's Compute Unified Device Architecture (CUDA). CUDA-miRanda specifically focuses on the local alignment of short (i.e., ≤ 32 nucleotides) sequences against longer reference sequences (e.g., 20K nucleotides). Moreover, the proposed algorithm is able to report multiple alignments (up to 191 top scores) and the corresponding traceback sequences for any given (query sequence, reference sequence) pair. Results Speeds over 5.36 Giga Cell Updates Per Second (GCUPs) are achieved on a server with 4 NVIDIA Tesla M2090 GPUs. Compared to the original miRanda algorithm, which was evaluated on an Intel Xeon E5620@2.4 GHz CPU, the experimental results show performance gains of up to 166 times in execution time. In addition, we have verified that the exact same targets were predicted in both CUDA-miRanda and the original miRanda implementations through multiple test datasets. Conclusions We offer a GPU-based alternative to conventional high-performance computing (HPC) that can be developed locally at a relatively small cost. The community of GPU developers in biomedical research, particularly for genome analysis, is still growing. With increasing shared resources, this community will be able to advance CMTI in a very significant manner. Our source code is available at https://sourceforge.net/projects/cudamiranda/. PMID:25077821
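
    For orientation, a plain-Python sketch of the Smith-Waterman cell-update recursion that GCUPs counts (scoring parameters here are illustrative; CUDA-miRanda's actual scoring scheme, gap handling, and multi-alignment traceback differ and run on the GPU):

        def smith_waterman_score(query, ref, match=2, mismatch=-1, gap=-2):
            """Best local-alignment score between query and ref (linear gap penalty).

            Each (i, j) entry of the dynamic-programming matrix is one "cell update";
            GCUPs in the paper counts billions of such updates per second.
            """
            rows, cols = len(query) + 1, len(ref) + 1
            H = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    s = match if query[i - 1] == ref[j - 1] else mismatch
                    H[i][j] = max(0,
                                  H[i - 1][j - 1] + s,   # match / mismatch
                                  H[i - 1][j] + gap,     # gap in reference
                                  H[i][j - 1] + gap)     # gap in query
                    best = max(best, H[i][j])
            return best

        # Example: a short query against a longer reference fragment.
        print(smith_waterman_score("ACACACTA", "AGCACACAAGGTTCACACTA"))

    A query of length m against a reference of length n costs m*n cell updates, which is why aligning millions of (query, reference) pairs motivates the GPU parallelization described above.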

  15. Development of Sensors for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Medelius, Pedro

    2005-01-01

    Advances in technology have led to the availability of smaller and more accurate sensors. Computer power to process large amounts of data is no longer the prevailing issue; thus, multiple and redundant sensors can be used to obtain more accurate and comprehensive measurements in a space vehicle. The successful integration and commercialization of micro- and nanotechnology for aerospace applications require that a close and interactive relationship be developed between the technology provider and the end user early in the project. Close coordination between the developers and the end users is critical since qualification for flight is time-consuming and expensive. The successful integration of micro- and nanotechnology into space vehicles requires a coordinated effort throughout the design, development, installation, and integration processes.

  16. Post World War II trends in tropical Pacific surface trades

    NASA Technical Reports Server (NTRS)

    Harrison, D. E.

    1989-01-01

    Multidecadal time series of surface winds from central tropical Pacific islands are used to compute trends in the trade winds between the end of WWII and 1985. Over this period, averaged over the whole region, there is no statistically significant trend in speed or zonal or meridional wind (or pseudostress). However, there is some tendency, within a few degrees of the equator, toward weakening of the easterlies and increased meridional flow toward the equator. Anomalous conditions subsequent to the 1972-73 ENSO event make a considerable contribution to the long-term trends. The period 1974-80 has been noted previously to have been anomalous, and trends over that period are sharply greater than those over the longer records.

  17. Development and operation of a high-throughput accurate-wavelength lens-based spectrometer

    DOE PAGES

    Bell, Ronald E.

    2014-07-11

    A high-throughput spectrometer for the 400-820 nm wavelength range has been developed for charge exchange recombination spectroscopy or general spectroscopy. A large 2160 mm⁻¹ grating is matched with fast f/1.8 200 mm lenses, which provide stigmatic imaging. A precision optical encoder measures the grating angle with an accuracy ≤ 0.075 arc seconds. A high quantum efficiency, low-etaloning CCD detector allows operation at longer wavelengths. A patch panel allows input fibers to interface with interchangeable fiber holders that attach to a kinematic mount behind the entrance slit. The computer-controlled hardware allows automated control of wavelength, timing, and f-number, as well as automated data collection and wavelength calibration.

  18. Spin Path Integrals and Generations

    NASA Astrophysics Data System (ADS)

    Brannen, Carl

    2010-11-01

    The spin of a free electron is stable but its position is not. Recent quantum information research by G. Svetlichny, J. Tolar, and G. Chadzitaskos has shown that the Feynman position path integral can be mathematically defined as a product of incompatible states; that is, as a product of mutually unbiased bases (MUBs). Since the more common use of MUBs is in finite-dimensional Hilbert spaces, this raises the question “what happens when spin path integrals are computed over products of MUBs?” Such an assumption makes spin no longer stable. We show that the usual spin-1/2 is obtained in the long-time limit in three orthogonal solutions that we associate with the three elementary particle generations. We give applications to the masses of the elementary leptons.

  19. Proceedings of the FAA-NASA Symposium on the Continued Airworthiness of Aircraft Structures. Volume 1

    NASA Technical Reports Server (NTRS)

    Bigelow, Catherine A. (Compiler)

    1997-01-01

    This publication contains the fifty-two technical papers presented at the FAA-NASA Symposium on the Continued Airworthiness of Aircraft Structures. The symposium, hosted by the FAA Center of Excellence for Computational Modeling of Aircraft Structures at Georgia Institute of Technology, was held to disseminate information on recent developments in advanced technologies to extend the life of high-time aircraft and design longer-life aircraft. Affiliations of the participants included 33% from government agencies and laboratories, 19% from academia, and 48% from industry; in all 240 people were in attendance. Technical papers were selected for presentation at the symposium, after a review of extended abstracts received by the Organizing Committee from a general call for papers.

  20. Proceedings of the FAA-NASA Symposium on the Continued Airworthiness of Aircraft Structures. Volume 2

    NASA Technical Reports Server (NTRS)

    Bigelow, Catherine A. (Compiler)

    1997-01-01

    This publication contains the fifty-two technical papers presented at the FAA-NASA Symposium on the Continued Airworthiness of Aircraft Structures. The symposium, hosted by the FAA Center of Excellence for Computational Modeling of Aircraft Structures at Georgia Institute of Technology, was held to disseminate information on recent developments in advanced technologies to extend the life of high-time aircraft and design longer-life aircraft. Affiliations of the participants included 33% from government agencies and laboratories, 19% from academia, and 48% from industry; in all 240 people were in attendance. Technical papers were selected for presentation at the symposium, after a review of extended abstracts received by the Organizing Committee from a general call for papers.

  1. Wavelength-scale photonic-crystal laser formed by electron-beam-induced nano-block deposition.

    PubMed

    Seo, Min-Kyo; Kang, Ju-Hyung; Kim, Myung-Ki; Ahn, Byeong-Hyeon; Kim, Ju-Young; Jeong, Kwang-Yong; Park, Hong-Gyu; Lee, Yong-Hee

    2009-04-13

    A wavelength-scale cavity is generated by printing a carbonaceous nano-block on a photonic-crystal waveguide. The nanometer-size carbonaceous block is grown at a pre-determined region by the electron-beam-induced deposition method. The wavelength-scale photonic-crystal cavity operates as a single-mode laser near 1550 nm with a threshold of approximately 100 μW at room temperature. Finite-difference time-domain computations show that a high-quality-factor cavity mode is defined around the nano-block with a resonant wavelength slightly longer than the dispersion edge of the photonic-crystal waveguide. Measured near-field images exhibit a photon distribution well localized in the proximity of the printed nano-block. Linearly polarized emission along the vertical direction is also observed.

  2. Time and timing in the acoustic recognition system of crickets

    PubMed Central

    Hennig, R. Matthias; Heller, Klaus-Gerhard; Clemens, Jan

    2014-01-01

    The songs of many insects exhibit precise timing as the result of repetitive and stereotyped subunits on several time scales. As these signals encode the identity of a species, time and timing are important for the recognition system that analyzes these signals. Crickets are a prominent example as their songs are built from sound pulses that are broadcast in a long trill or as a chirped song. This pattern appears to be analyzed on two timescales, short and long. Recent evidence suggests that song recognition in crickets relies on two computations with respect to time: a short linear-nonlinear (LN) model that operates as a filter for pulse rate, and a longer integration time window for monitoring song energy over time. Therefore, there is a twofold role for timing. A filter for pulse rate shows differentiating properties for which the specific timing of excitation and inhibition is important. For an integrator, however, the duration of the time window is more important than the precise timing of events. Here, we first review evidence for the role of LN-models and integration time windows for song recognition in crickets. We then parameterize the filter part by Gabor functions and explore the effects of duration, frequency, phase, and offset as these will correspond to differently timed patterns of excitation and inhibition. These filter properties were compared with known preference functions of crickets and katydids. In a comparative approach, the power for song discrimination by LN-models was tested with the songs of over 100 cricket species. It is demonstrated how the acoustic signals of crickets occupy a simple 2-dimensional space for song recognition that arises from timing, described by a Gabor function, and time, the integration window. Finally, we discuss the evolution of recognition systems in insects based on simple sensory computations. PMID:25161622
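
    To make the two-stage picture above concrete, the sketch below builds a Gabor-function filter, uses it as the linear stage of an LN model on a toy pulse train, rectifies the output, and then averages the result over a longer integration window. All names and parameter values (filter duration, carrier frequency, pulse rate, window length) are arbitrary choices for illustration, not values taken from the study.

```python
import numpy as np

def gabor_filter(t, duration, freq, phase, offset):
    """Gabor function: a Gaussian envelope times a cosine carrier.
    duration sets the envelope width (s), freq the carrier (Hz),
    phase the carrier phase (rad), offset a DC shift."""
    envelope = np.exp(-0.5 * (t - t.mean()) ** 2 / duration ** 2)
    return offset + envelope * np.cos(2 * np.pi * freq * t + phase)

def ln_response(pulse_train, dt, flt, threshold=0.0):
    """Linear stage: convolve the pulse pattern with the filter.
    Nonlinear stage: half-wave rectify above a threshold."""
    linear = np.convolve(pulse_train, flt, mode="same") * dt
    return np.maximum(linear - threshold, 0.0)

# Toy stimulus: ~30 pulses/s pulse train at 1 ms resolution
dt = 0.001
t_filter = np.arange(0.0, 0.04, dt)            # 40 ms filter window
flt = gabor_filter(t_filter, duration=0.01, freq=30.0, phase=0.0, offset=0.0)
stimulus = np.zeros(2000)
stimulus[::33] = 1.0                            # one pulse every 33 ms
drive = ln_response(stimulus, dt, flt)

# Integration window: average the rectified drive over a long window
window_s = 0.5
kernel = np.ones(int(window_s / dt)) / (window_s / dt)
preference = np.convolve(drive, kernel, mode="same")
print(f"peak integrated response: {preference.max():.4f}")
```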

  3. An iterative method for hydrodynamic interactions in Brownian dynamics simulations of polymer dynamics

    NASA Astrophysics Data System (ADS)

    Miao, Linling; Young, Charles D.; Sing, Charles E.

    2017-07-01

    Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2)-O(N^2.25), and explicit solvent methods scale as O(N); however both incur a significant constant per-time step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.
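
    A minimal sketch of the self-consistency loop described above, under strong simplifying assumptions: Brownian dynamics is run with a fixed, trajectory-averaged diffusion matrix (so the Cholesky factorization is done once per outer iteration rather than every time step), the averaged matrix is rebuilt from the trajectory it produced, and the loop repeats. The scalar, Oseen-like coupling, the bead-spring parameters, and the eigenvalue clipping are stand-ins for brevity; the published method uses a proper hydrodynamic tensor such as Rotne-Prager-Yamakawa.

```python
import numpy as np

rng = np.random.default_rng(0)
N, a, k, dt, steps = 10, 0.5, 10.0, 1e-3, 2000   # beads, bead radius, spring k, time step, steps

def spring_forces(x):
    # Harmonic springs (zero rest length) between neighbouring beads
    f = np.zeros_like(x)
    bond = x[1:] - x[:-1]
    f[:-1] += k * bond
    f[1:] -= k * bond
    return f

def averaged_diffusion(samples):
    # Crude, scalar Oseen-like coupling D_ij ~ a/<r_ij> (an assumption standing
    # in for the full Rotne-Prager-Yamakawa tensor), averaged over the trajectory
    D = np.eye(N)
    for i in range(N):
        for j in range(N):
            if i != j:
                rij = np.mean([np.linalg.norm(x[i] - x[j]) for x in samples])
                D[i, j] = min(a / rij, 0.5)
    D = 0.5 * (D + D.T)
    w, V = np.linalg.eigh(D)                      # clip eigenvalues so the
    return (V * np.clip(w, 1e-3, None)) @ V.T     # matrix stays positive definite

def run_bd(D):
    # One BD trajectory with a *fixed* diffusion matrix (kT = 1): the Cholesky
    # factor is computed once per outer iteration, not every time step
    B = np.linalg.cholesky(D)
    x = np.cumsum(rng.normal(size=(N, 3)), axis=0)   # random-walk initial chain
    samples = []
    for s in range(steps):
        x = x + dt * (D @ spring_forces(x)) + np.sqrt(2.0 * dt) * (B @ rng.normal(size=(N, 3)))
        if s % 100 == 0:
            samples.append(x.copy())
    return samples

# Iterate to self-consistency: run with the current averaged matrix, then
# rebuild the average from the trajectory it produced
D = np.eye(N)                                     # start from free-draining beads
for it in range(5):
    samples = run_bd(D)
    D_new = averaged_diffusion(samples)
    print(f"iteration {it}: max matrix change {np.abs(D_new - D).max():.3f}")
    D = D_new
```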

  4. Evidence for Neural Computations of Temporal Coherence in an Auditory Scene and Their Enhancement during Active Listening.

    PubMed

    O'Sullivan, James A; Shamma, Shihab A; Lalor, Edmund C

    2015-05-06

    The human brain has evolved to operate effectively in highly complex acoustic environments, segregating multiple sound sources into perceptually distinct auditory objects. A recent theory seeks to explain this ability by arguing that stream segregation occurs primarily due to the temporal coherence of the neural populations that encode the various features of an individual acoustic source. This theory has received support from both psychoacoustic and functional magnetic resonance imaging (fMRI) studies that use stimuli which model complex acoustic environments. Termed stochastic figure-ground (SFG) stimuli, they are composed of a "figure" and background that overlap in spectrotemporal space, such that the only way to segregate the figure is by computing the coherence of its frequency components over time. Here, we extend these psychoacoustic and fMRI findings by using the greater temporal resolution of electroencephalography to investigate the neural computation of temporal coherence. We present subjects with modified SFG stimuli wherein the temporal coherence of the figure is modulated stochastically over time, which allows us to use linear regression methods to extract a signature of the neural processing of this temporal coherence. We do this under both active and passive listening conditions. Our findings show an early effect of coherence during passive listening, lasting from ∼115 to 185 ms post-stimulus. When subjects are actively listening to the stimuli, these responses are larger and last longer, up to ∼265 ms. These findings provide evidence for early and preattentive neural computations of temporal coherence that are enhanced by active analysis of an auditory scene. Copyright © 2015 the authors 0270-6474/15/357256-08$15.00/0.
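
    As a hedged illustration of the regression step described above (not the authors' pipeline), the sketch below simulates a stochastically modulated coherence time course and a noisy response, then estimates the response kernel by ridge-regularized regression of the signal on time-lagged copies of the coherence. The sampling rate, kernel shape, noise level, and ridge penalty are all arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, dur = 128, 300                       # Hz, seconds (synthetic data)
n = fs * dur
coherence = rng.standard_normal(n)       # stochastic coherence modulation

# Ground-truth response kernel peaking near 150 ms (for the simulation only)
lags = np.arange(0, int(0.4 * fs))                  # 0-400 ms of lags
true_trf = np.exp(-((lags / fs - 0.15) ** 2) / (2 * 0.03 ** 2))
eeg = np.convolve(coherence, true_trf)[:n] + 5.0 * rng.standard_normal(n)

# Lagged design matrix and ridge-regularized least squares:
# eeg[t] ~ sum_k w[k] * coherence[t - k]
X = np.zeros((n, lags.size))
for lag in lags:
    X[lag:, lag] = coherence[: n - lag]
lam = 1e3
w = np.linalg.solve(X.T @ X + lam * np.eye(lags.size), X.T @ eeg)

peak_lag_ms = 1000 * lags[np.argmax(w)] / fs
print(f"estimated response peaks near {peak_lag_ms:.0f} ms after a coherence change")
```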

  5. Another View of "PC vs. Mac."

    ERIC Educational Resources Information Center

    DeMillion, John A.

    1998-01-01

    An article by Nan Wodarz in the November 1997 issue listed reasons why the Microsoft computer operating system was superior to the Apple Macintosh platform. This rebuttal contends the Macintosh is less expensive, lasts longer, and requires less technical staff for support. (MLF)

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uberuaga, Blas Pedro; Voter, Arthur F; Perez, Danny

    A long-standing limitation in the use of molecular dynamics (MD) simulation is that it can only be applied directly to processes that take place on very short timescales: nanoseconds if empirical potentials are employed, or picoseconds if we rely on electronic structure methods. Many processes of interest in chemistry, biochemistry, and materials science require study over microseconds and beyond, due either to the natural timescale for the evolution or to the duration of the experiment of interest. Ignoring the case of liquids, the dynamics on these time scales is typically characterized by infrequent-event transitions, from state to state, usually involving an energy barrier. There is a long and venerable tradition in chemistry of using transition state theory (TST) [10, 19, 23] to directly compute rate constants for these kinds of activated processes. If needed, dynamical corrections to the TST rate, and even quantum corrections, can be computed to achieve an accuracy suitable for the problem at hand. These rate constants then allow us to understand the system behavior on longer time scales than we can directly reach with MD. For complex systems with many reaction paths, the TST rates can be fed into a stochastic simulation procedure such as kinetic Monte Carlo, and a direct simulation of the advance of the system through its possible states can be obtained in a probabilistically exact way. A problem that has become more evident in recent years, however, is that for many systems of interest there is a complexity that makes it difficult, if not impossible, to determine all the relevant reaction paths to which TST should be applied. This is a serious issue, as omitted transition pathways can have uncontrollable consequences on the simulated long-time kinetics. Over the last decade or so, we have been developing a new class of methods for treating the long-time dynamics in these complex, infrequent-event systems. Rather than trying to guess in advance what reaction pathways may be important, we return instead to a molecular dynamics treatment, in which the trajectory itself finds an appropriate way to escape from each state of the system. Since a direct integration of the trajectory would be limited to nanoseconds, while we are seeking to follow the system for much longer times, we modify the dynamics in some way to cause the first escape to happen much more quickly, thereby accelerating the dynamics. The key is to design the modified dynamics in a way that does as little damage as possible to the probability for escaping along a given pathway - i.e., we try to preserve the relative rate constants for the different possible escape paths out of the state. We can then use this modified dynamics to follow the system from state to state, reaching much longer times than we could reach with direct MD. The dynamics within any one state may no longer be meaningful, but the state-to-state dynamics, in the best case, as we discuss in the paper, can be exact. We have developed three methods in this accelerated molecular dynamics (AMD) class, in each case appealing to TST, either implicitly or explicitly, to design the modified dynamics. Each of these methods has its own advantages, and we and others have applied these methods to a wide range of problems. The purpose of this article is to give the reader a brief introduction to how these methods work, and to discuss some of the recent developments that have been made to improve their power and applicability. Note that this brief review does not claim to be exhaustive: various other methods aiming at similar goals have been proposed in the literature. For the sake of brevity, our focus will exclusively be on the methods developed by the group.
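
    Where the text mentions feeding TST rate constants into kinetic Monte Carlo, the standard rejection-free (residence-time) algorithm looks roughly like the sketch below; the three-state rate catalogue is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rate catalogue: for each state, the escape pathways and their
# TST rate constants (s^-1).  Values are made up for illustration.
rates = {
    "A": [("B", 1.0e3), ("C", 2.0e2)],
    "B": [("A", 5.0e2), ("C", 1.0e1)],
    "C": [("A", 1.0e2)],
}

def kmc(start, n_events):
    """Rejection-free (residence-time) kinetic Monte Carlo."""
    state, t = start, 0.0
    for _ in range(n_events):
        paths = rates[state]
        ks = np.array([k for _, k in paths])
        k_tot = ks.sum()
        # advance the clock by an exponentially distributed waiting time
        t += rng.exponential(1.0 / k_tot)
        # pick the escape path with probability k_i / k_tot
        state = paths[rng.choice(len(paths), p=ks / k_tot)][0]
    return state, t

final_state, elapsed = kmc("A", 10_000)
print(f"after 10k events: state {final_state}, simulated time {elapsed:.3e} s")
```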

  7. Delays in Referral and Enrolment Are Associated With Mitigated Benefits of Cardiac Rehabilitation After Coronary Artery Bypass Surgery.

    PubMed

    Marzolini, Susan; Blanchard, Chris; Alter, David A; Grace, Sherry L; Oh, Paul I

    2015-11-01

    Cardiac rehabilitation (CR) is recommended after coronary artery bypass graft surgery; however, the consequences of longer wait times to start CR have not been elucidated. Cardiopulmonary, demographic, and anthropometric assessments were conducted before and after 6 months of CR in consecutively enrolled patients from January 1995 to October 2012. Wait times were ascertained from referral forms and charts. Neighborhood characteristics were ascertained using census data and cross-referencing with patients' home geographic location. Among 6497 post-coronary artery bypass graft participants, the mean and median total wait times (time from surgery to first exercise session) were 101.1±47.9 days and 80 days, respectively. In multiple linear regression, correlates of longer total wait time and the 2 wait-time phases, time from surgery to CR referral and time from CR referral to first exercise session, were determined. Factors influencing longer wait times included female sex, greater age, being employed, less social support, longer drive time to CR, lower neighborhood socioeconomic status, higher systolic blood pressure, abdominal obesity, and a complex medical history. After adjusting for correlates of delayed entry, longer wait time for each of the total and 2 wait-time phases was significantly associated with less improvement in cardiopulmonary fitness (VO2peak; β=-0.165, P<0.001), body fat percentage (β=0.032, P<0.02), resting heart rate (β=0.066, P<0.001), and poorer attendance to CR classes (β=-0.081, P<0.001) and completion rate (β=2.741, P<0.001). Strategies for timely access to CR at each phase of the process are important given the negative impact that wait time has on key clinical outcomes. This is relevant because optimizing VO2peak and attendance to CR has been shown to confer a mortality advantage. © 2015 American Heart Association, Inc.

  8. How many hours do people sleep in Bangladesh? A country-representative survey.

    PubMed

    Yunus, Fakir M; Khan, Safayet; Akter, Tahera; Jhohura, Fatema T; Reja, Saifur; Islam, Akramul; Rahman, Mahfuzar

    2016-06-01

    This study investigated total sleep time in the Bangladeshi population and identified the proportion of the population at greater risk of developing chronic diseases due to inadequate sleep. Using a cross-sectional survey, total sleep time was captured and analysed in 3968 respondents aged between 6 and 106 years in 24 (of 64) districts in Bangladesh. Total sleep time was defined as the hours of total sleep in the previous 24 h. We used National Sleep Foundation (2015) guidelines to determine the recommended sleep hours in different age categories. Less or more than the recommended total sleep time (in hours) was considered 'shorter' and 'longer' sleep time, respectively. Linear and multinomial logistic regression models were used to determine the relationship between demographic variables and estimated risk of shorter and longer total sleep time. The mean (±standard deviation) total sleep time of children (6-13 years), teenagers (14-17 years), young adults and adults (18-64 years) and older adults (≥65 years) were 8.6 (±1.1), 8.1 (±1.0), 7.7 (±0.9) and 7.8 (±1.4) h, respectively, which were significantly different (P < 0.01). More than half of school-age children (55%) slept less than, and 28.2% of older adults slept longer than, recommended. Residents of all divisions in Bangladesh (except Chittagong) were less likely to sleep longer than residents of the Dhaka division. Rural populations had a 3.96× greater chance of sleeping for a shorter time than urban residents. The Bangladeshi population tends to sleep for longer and/or shorter times than their respective recommended sleep hours, which is detrimental to health. © 2016 European Sleep Research Society.

  9. Evaluation of a hybrid kinetics/mixing-controlled combustion model for turbulent premixed and diffusion combustion using KIVA-II

    NASA Technical Reports Server (NTRS)

    Nguyen, H. Lee; Wey, Ming-Jyh

    1990-01-01

    Two-dimensional calculations were made of spark-ignited premixed-charge combustion and direct-injection stratified-charge combustion in gasoline-fueled piston engines. Results are obtained using either a kinetic-controlled combustion submodel governed by a four-step global chemical reaction or a hybrid laminar kinetics/mixing-controlled combustion submodel that accounts for laminar kinetics and turbulent mixing effects. The numerical solutions are obtained by using the KIVA-2 computer code, which uses a kinetic-controlled combustion submodel governed by a four-step global chemical reaction (i.e., it assumes that the mixing time is shorter than the chemical time). A hybrid laminar/mixing-controlled combustion submodel was implemented into KIVA-2. In this model, chemical species approach their thermodynamic equilibrium at a rate that is a combination of the turbulent-mixing time and the chemical-kinetics time. The combination is formed in such a way that the longer of the two times has more influence on the conversion rate and the energy release. An additional element of the model is that the laminar-flame kinetics strongly influence the early flame development following ignition.
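
    Read literally, "the longer of the two times has more influence" suggests a characteristic-time relaxation toward equilibrium with an additive time scale. The expression below is a hedged reconstruction of that idea (the weighting f and the exact form are assumptions, not necessarily the KIVA-2 submodel as implemented):

```latex
\frac{dY_i}{dt} \;=\; -\,\frac{Y_i - Y_i^{*}}{\tau_c},
\qquad
\tau_c \;=\; \tau_{\mathrm{chem}} + f\,\tau_{\mathrm{mix}},
```

    Here Y_i is the mass fraction of species i, Y_i^* its local thermodynamic-equilibrium value, tau_chem the laminar chemical-kinetics time, tau_mix the turbulent-mixing time, and f a weighting that grows as combustion develops. Because the two time scales add, the longer one dominates tau_c and therefore the conversion rate and the energy release.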

  10. Evaluation of a hybrid kinetics/mixing-controlled combustion model for turbulent premixed and diffusion combustion using KIVA-2

    NASA Technical Reports Server (NTRS)

    Nguyen, H. Lee; Wey, Ming-Jyh

    1990-01-01

    Two-dimensional calculations were made of spark-ignited premixed-charge combustion and direct-injection stratified-charge combustion in gasoline-fueled piston engines. Results are obtained using either a kinetic-controlled combustion submodel governed by a four-step global chemical reaction or a hybrid laminar kinetics/mixing-controlled combustion submodel that accounts for laminar kinetics and turbulent mixing effects. The numerical solutions are obtained by using the KIVA-2 computer code, which uses a kinetic-controlled combustion submodel governed by a four-step global chemical reaction (i.e., it assumes that the mixing time is shorter than the chemical time). A hybrid laminar/mixing-controlled combustion submodel was implemented into KIVA-2. In this model, chemical species approach their thermodynamic equilibrium at a rate that is a combination of the turbulent-mixing time and the chemical-kinetics time. The combination is formed in such a way that the longer of the two times has more influence on the conversion rate and the energy release. An additional element of the model is that the laminar-flame kinetics strongly influence the early flame development following ignition.

  11. A real-time electronic imaging system for solar X-ray observations from sounding rockets

    NASA Technical Reports Server (NTRS)

    Davis, J. M.; Ting, J. W.; Gerassimenko, M.

    1979-01-01

    A real-time imaging system for displaying the solar coronal soft X-ray emission, focussed by a grazing incidence telescope, is described. The design parameters of the system, which is to be used primarily as part of a real-time control system for a sounding rocket experiment, are identified. Their achievement with a system consisting of a microchannel plate, for the conversion of X-rays into visible light, and a slow-scan vidicon, for recording and transmission of the integrated images, is described in detail. The system has a quantum efficiency better than 8% above 8 Å, a dynamic range of 1000 coupled with a sensitivity to single photoelectrons, and provides a spatial resolution of 15 arc seconds over a field of view of 40 x 40 square arc minutes. The incident radiation is filtered to eliminate wavelengths longer than 100 Å. Each image contains 3.93 x 10^5 bits of information and is transmitted to the ground where it is processed by a mini-computer and displayed in real-time on a standard TV monitor.

  12. Serious games for elderly continuous monitoring.

    PubMed

    Lemus-Zúñiga, Lenin-G; Navarro-Pardo, Esperanza; Moret-Tatay, Carmen; Pocinho, Ricardo

    2015-01-01

    Information technology (IT) and serious games allow the older population to remain independent for longer. Hence, when designing technology for this population, developmental changes, such as those in attention and/or perception, should be considered. For instance, a crucial developmental change relates to cognitive speed in terms of reaction time (RT). However, this variable presents a skewed distribution that makes data analysis difficult. An alternative strategy is to fit the data to an ex-Gaussian function. Furthermore, this procedure provides different parameters that have been related to underlying cognitive processes in the literature. Another issue to be considered is optimal data recording, storing, and processing. For that purpose, mobile devices (smartphones and tablets) are a good option for targeting serious games where valuable information can be stored (time spent in the application, reaction time, frequency of use, and a long etcetera). The data stored on smartphones and tablets can be sent to a central computer (cloud storage), not only to fit the distribution of reaction times to mathematical functions, but also to estimate parameters which may reflect cognitive processes underlying language, aging, and decisional processes.
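
    As a concrete example of the ex-Gaussian characterization mentioned above, SciPy's exponnorm distribution (a Gaussian convolved with an exponential) can be fit by maximum likelihood and its parameters mapped back to the conventional mu, sigma, and tau. The simulated reaction times and parameter values below are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated reaction times (seconds): Gaussian component plus exponential tail
mu, sigma, tau = 0.40, 0.05, 0.15
rts = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)

# scipy's exponnorm is the ex-Gaussian with K = tau/sigma, loc = mu, scale = sigma
K, loc, scale = stats.exponnorm.fit(rts)
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
print(f"mu={mu_hat:.3f}s  sigma={sigma_hat:.3f}s  tau={tau_hat:.3f}s")
```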

  13. NAPL source zone depletion model and its application to railroad-tank-car spills.

    PubMed

    Marruffo, Amanda; Yoon, Hongkyu; Schaeffer, David J; Barkan, Christopher P L; Saat, Mohd Rapik; Werth, Charles J

    2012-01-01

    We developed a new semi-analytical source zone depletion model (SZDM) for multicomponent light nonaqueous phase liquids (LNAPLs) and incorporated this into an existing screening model for estimating cleanup times for chemical spills from railroad tank cars that previously considered only single-component LNAPLs. Results from the SZDM compare favorably to those from a three-dimensional numerical model, and from another semi-analytical model that does not consider source zone depletion. The model was used to evaluate groundwater contamination and cleanup times for four complex mixtures of concern in the railroad industry. Among the petroleum hydrocarbon mixtures considered, the cleanup time of diesel fuel was much longer than E95, gasoline, and crude oil. This is mainly due to the high fraction of low solubility components in diesel fuel. The results demonstrate that the updated screening model with the newly developed SZDM is computationally efficient, and provides valuable comparisons of cleanup times that can be used in assessing the health and financial risk associated with chemical mixture spills from railroad-tank-car accidents. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.

  14. 35 years of Ambient Noise: Can We Evidence Daily to Climatic Relative Velocity Changes ?

    NASA Astrophysics Data System (ADS)

    Lecocq, T.; Pedersen, H.; Brenguier, F.; Stammler, K.

    2014-12-01

    The broadband Grafenberg array (Germany) was installed in 1976 and, thanks to visionary scientists and network maintainers, the continuous data acquired have been preserved until today. Using state-of-the-art pre-processing and cross-correlation techniques, we are able to extract cross-correlation functions (CCF) between sensor pairs. It has been shown recently that, provided enough computation power is available, there is no need to define a reference CCF against which all days are compared. Indeed, one can compare each day to all days, computing the "all-doublet" set. The number of calculations becomes huge (N vs. reference = N calculations; N vs. N = N*N), but the result, once inverted, is far more stable because of the N observations per day. This analysis has been done on a parallelized version of MSNoise (http://msnoise.org), running on the VEGA cluster hosted at the Université Libre de Bruxelles (ULB, Belgium). Here, we present preliminary results of the analysis of two stations, GRA1 and GRA2, the first two stations installed in March 1976. The interferogram (observation of the CCF through time) already shows interesting features in the ballistic wave shape, highly correlated with the seasons. A reasonably high correlation can still be seen outside the ballistic arrival, after ±5 s lag time. The lag times between 5 and 25 seconds are then used to compute the dv/v using the all-doublet method. We expect to evidence daily to seasonal, or even longer-period, dv/v variations and/or noise-source position changes using this method. Once done with one sensor pair, the full data of the Grafenberg array will be used to enhance the resolution even more.
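
    For readers unfamiliar with relative velocity-change estimation, the sketch below recovers dv/v between a reference and a current correlation function with the stretching method. It is a generic illustration on synthetic waveforms, not the MSNoise implementation (which typically relies on moving-window cross-spectral analysis), and the coda window and search grid are arbitrary assumptions.

```python
import numpy as np

def stretching_dvv(ref, cur, t, dvv_grid=np.linspace(-0.02, 0.02, 401)):
    """Grid-search the relative velocity change dv/v that best maps the
    current CCF onto the reference (stretching method): a trial dv/v
    rescales the lag axis of the current trace by (1 - dv/v)."""
    best_cc, best_dvv = -np.inf, 0.0
    for dvv in dvv_grid:
        stretched = np.interp(t * (1.0 - dvv), t, cur)
        cc = np.corrcoef(ref, stretched)[0, 1]
        if cc > best_cc:
            best_cc, best_dvv = cc, dvv
    return best_dvv, best_cc

# Synthetic coda window (5-25 s lag): the "current" trace simulates a medium
# that is 0.5% slower, so its wiggles arrive slightly later than the reference
t = np.linspace(5.0, 25.0, 2001)
ref = np.sin(2 * np.pi * 0.5 * t) * np.exp(-0.05 * t)
cur = np.sin(2 * np.pi * 0.5 * t * (1.0 - 0.005)) * np.exp(-0.05 * t)

dvv, cc = stretching_dvv(ref, cur, t)
print(f"estimated dv/v = {100 * dvv:+.2f}%  (correlation {cc:.3f})")
```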

  15. The interplay of attention economics and computer-aided detection marks in screening mammography

    NASA Astrophysics Data System (ADS)

    Schwartz, Tayler M.; Sridharan, Radhika; Wei, Wei; Lukyanchenko, Olga; Geiser, William; Whitman, Gary J.; Haygood, Tamara Miner

    2016-03-01

    Introduction: According to attention economists, overabundant information leads to decreased attention for individual pieces of information. Computer-aided detection (CAD) alerts radiologists to findings potentially associated with breast cancer but is notorious for creating an abundance of false-positive marks. We suspected that increased CAD marks do not lengthen mammogram interpretation time, as radiologists will selectively disregard these marks when present in larger numbers. We explore the relevance of attention economics in mammography by examining how the number of CAD marks affects interpretation time. Methods: We performed a retrospective review of bilateral digital screening mammograms obtained between January 1, 2011 and February 28, 2014, using only weekend interpretations to decrease distractions and the likelihood of trainee participation. We stratified data according to reader and used ANOVA to assess the relationship between number of CAD marks and interpretation time. Results: Ten radiologists, with a median experience after residency of 12.5 years (range, 6 to 24), interpreted 1849 mammograms. When accounting for number of images, Breast Imaging Reporting and Data System category, and breast density, increasing numbers of CAD marks were correlated with longer interpretation time only for the three radiologists with the fewest years of experience (median, 7 years). Conclusion: For the 7 most experienced readers, increasing CAD marks did not lengthen interpretation time. We surmise that as CAD marks increase, the attention given to individual marks decreases. Experienced radiologists may rapidly dismiss larger numbers of CAD marks as false-positive, having learned that devoting extra attention to such marks does not improve clinical detection.

  16. Patterns of sedentary behavior in overweight and moderately obese users of the Catalan primary-health care system

    PubMed Central

    Beltran, Angela-Maria; Martín-Borràs, Carme; Lasaosa-Medina, Lourdes; Real, Jordi; Trujillo, José-Manuel; Solà-Gonfaus, Mercè; Puigdomenech, Elisa; Castillo-Ramos, Eva; Puig-Ribera, Anna; Giné-Garriga, Maria; Serra-Paya, Noemi; Rodriguez-Roca, Beatriz; Gascón-Catalán, Ana; Martín-Cantera, Carlos

    2018-01-01

    Background and objectives Prolonged sitting time (ST) has negative consequences on health. Changing this behavior is paramount in overweight/obese individuals because they are more sedentary than those with normal weight. The aim of the study was to establish the pattern of sedentary behavior and its relationship to health, socio-demographics, occupation, and education level in Catalan overweight/obese individuals. Methods A descriptive study was performed at 25 healthcare centers in Catalonia (Spain) with 464 overweight/moderately obese patients, aged 25 to 65 years. Exclusion criteria were chronic diseases which contraindicated physical activity and language barriers. Face-to-face interviews were conducted to collect data on age, gender, educational level, social class, and marital status. The main outcome was 'sitting time' (collected by the Marshall questionnaire); chronic diseases and anthropometric measurements were registered. Results 464 patients, 58.4% women, mean age 51.9 years (SD 10.1), 76.1% married, 60% manual workers, and 48.7% had finished secondary education. Daily sitting time was 6.2 hours on working days (374 minutes/day, SD: 190), and about 6 hours on non-working ones (357 minutes/day, SD: 170). 50% of participants were sedentary ≥6 hours. The most frequent sedentary activities were: working/academic activities around 2 hours (128 minutes, SD: 183), followed by watching television, computer use, and commuting. Men sat longer than women (64 minutes more on working days and 54 minutes on non-working days), and individuals with office jobs (91 minutes), those with higher levels of education (42 minutes), and younger subjects (25 to 35 years) spent more time sitting. Conclusions In our study performed in overweight/moderately obese patients the mean sitting time was around 6 hours, which was mainly spent doing work/academic activities and watching television. Men, office workers, individuals with higher education, and younger subjects had longer sitting time. Our results may help design interventions targeted at these sedentary patients to decrease sitting time. PMID:29370176

  17. A comparison between different finite elements for elastic and aero-elastic analyses.

    PubMed

    Mahran, Mohamed; ELsabbagh, Adel; Negm, Hani

    2017-11-01

    In the present paper, a comparison between five different shell finite elements, including the Linear Triangular Element, Linear Quadrilateral Element, Linear Quadrilateral Element based on deformation modes, 8-node Quadrilateral Element, and 9-Node Quadrilateral Element, was presented. The shape functions and the element equations related to each element were presented through a detailed mathematical formulation. Additionally, the Jacobian matrix for the second-order derivatives was simplified and used to derive each element's strain-displacement matrix in bending. The elements were compared using carefully selected elastic and aero-elastic benchmark problems, regarding the number of elements needed to reach convergence, the resulting accuracy, and the required computation time. The most suitable element for elastic free-vibration analysis was found to be the Linear Quadrilateral Element with deformation-based shape functions, whereas the most suitable element for stress analysis was the 8-Node Quadrilateral Element, and the most suitable element for aero-elastic analysis was the 9-Node Quadrilateral Element. Although the linear triangular element was the last choice for modal and stress analyses, it gives more accurate results in aero-elastic analyses, albeit with much longer computation time.
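
    As background to the element comparison, the sketch below evaluates the shape functions, their parametric derivatives, and the Jacobian of the 4-node isoparametric (bilinear) quadrilateral at a Gauss point. It is a generic finite-element ingredient rather than code from the paper, and the example nodal coordinates are arbitrary.

```python
import numpy as np

def shape_quad4(xi, eta):
    """Bilinear shape functions and their (xi, eta) derivatives
    for the 4-node isoparametric quadrilateral."""
    N = 0.25 * np.array([(1 - xi) * (1 - eta),
                         (1 + xi) * (1 - eta),
                         (1 + xi) * (1 + eta),
                         (1 - xi) * (1 + eta)])
    dN = 0.25 * np.array([[-(1 - eta), -(1 - xi)],
                          [ (1 - eta), -(1 + xi)],
                          [ (1 + eta),  (1 + xi)],
                          [-(1 + eta),  (1 - xi)]])   # rows: nodes, cols: d/dxi, d/deta
    return N, dN

def jacobian(node_xy, dN):
    """Jacobian of the isoparametric map and the physical (x, y)
    derivatives of the shape functions at one integration point."""
    J = dN.T @ node_xy                  # 2x2 Jacobian matrix
    dN_xy = dN @ np.linalg.inv(J).T     # shape-function derivatives wrt x, y
    return J, dN_xy

# Example: a slightly distorted quadrilateral evaluated at a 2x2 Gauss point
nodes = np.array([[0.0, 0.0], [2.0, 0.1], [2.2, 1.0], [0.0, 1.2]])
g = 1.0 / np.sqrt(3.0)
N, dN = shape_quad4(g, g)
J, dN_xy = jacobian(nodes, dN)
print("det J =", np.linalg.det(J))
```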

  18. Evaluation of Aortic Valve Replacement via the Right Parasternal Approach without Rib Removal

    PubMed Central

    Hattori, Koji; Kato, Yasuyuki; Motoki, Manabu; Takahashi, Yosuke; Nishimura, Shinsuke; Shibata, Toshihiko

    2014-01-01

    Background: Although the right parasternal approach (RPA) decreases the incidence of mediastinal infection, this approach is associated with lung hernia and flail chest. Our RPA employs thoracotomy with bending of the rib cartilages and wound closure performed by repositioning the ribs with underlying sheet reinforcement. Methods: We evaluated 16 patients who underwent aortic valve replacement via the RPA from January 2010 to August 2013. We compared the outcomes of 15 male patients who had the RPA with those of 30 male patients who had full median sternotomy. Results: One patient with a history of radical breast cancer treatment underwent RPA with concomitant right coronary artery bypass grafting. No hospital deaths occurred. Four patients developed hospital-associated morbidity (re-exploration for bleeding, prolonged ventilation, cardiac tamponade, and perioperative myocardial infarction). There were no conversions to full median sternotomy, mediastinal infections, or lung hernias. Preoperative computed tomography showed that the distance from the right sternal border to the aortic root was significantly associated with operation times. With RPA, there was no significant difference in outcomes, despite significantly longer operation times compared with full median sternotomy. Conclusion: Our RPA provides satisfactory outcomes without lung hernia, especially in patients unsuitable for sternotomy. Preoperative computed tomography is useful for identifying appropriate candidates for the RPA. PMID:25167927

  19. Molecular Dynamics based on a Generalized Born solvation model: application to protein folding

    NASA Astrophysics Data System (ADS)

    Onufriev, Alexey

    2004-03-01

    An accurate description of the aqueous environment is essential for realistic biomolecular simulations, but may become very expensive computationally. We have developed a version of the Generalized Born model suitable for describing large conformational changes in macromolecules. The model represents the solvent implicitly as a continuum with the dielectric properties of water, and includes the charge-screening effects of salt. The computational cost associated with the use of this model in Molecular Dynamics simulations is generally considerably smaller than the cost of representing water explicitly. Also, compared to traditional Molecular Dynamics simulations based on explicit water representation, conformational changes occur much faster in an implicit solvation environment due to the absence of viscosity. The combined speed-up allows one to probe conformational changes that occur on much longer effective time-scales. We apply the model to the folding of a 46-residue three-helix bundle protein (residues 10-55 of protein A, PDB ID 1BDD). Starting from an unfolded structure at 450 K, the protein folds to the lowest energy state in 6 ns of simulation time, which takes about a day on a 16-processor SGI machine. The predicted structure differs from the native one by 2.4 A (backbone RMSD). Analysis of the structures seen on the folding pathway reveals details of the folding process unavailable from experiment.
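
    For context, Generalized Born models are usually written in the pairwise form introduced by Still and co-workers; the abstract does not specify the exact variant used, so the expression below should be read as a representative form rather than the authors' implementation:

```latex
\Delta G_{\mathrm{pol}}
  \;=\; -\frac{1}{2}\left(1-\frac{1}{\epsilon_{w}}\right)
  \sum_{i,j}\frac{q_i\,q_j}{f_{\mathrm{GB}}(r_{ij})},
\qquad
f_{\mathrm{GB}}(r_{ij})
  \;=\; \sqrt{r_{ij}^{2} + R_i R_j \exp\!\left(-\frac{r_{ij}^{2}}{4 R_i R_j}\right)},
```

    where the q_i are atomic partial charges, the R_i are effective Born radii, and epsilon_w is the dielectric constant of water. Salt effects of the kind mentioned in the abstract are commonly folded in by replacing 1/epsilon_w with a Debye-Hückel screening factor exp(-kappa f_GB)/epsilon_w.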

  20. Hyper-resolution hydrological modeling: Completeness of Formulation, Appropriateness of Discretization, and Physical Limits of Predictability

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.

    2017-12-01

    High-performance computing and the widespread availability of geospatial physiographic and forcing datasets have enabled consideration of flood impact predictions with longer lead times and more detailed spatial descriptions. We are now considering multi-hour flash flood forecast lead times at the subdivision level in so-called hydroblind regions away from the National Hydrography network. However, the computational demands of such models are high, necessitating a nested simulation approach. Research on hyper-resolution hydrologic modeling over the past three decades has illustrated some fundamental limits on predictability that are simultaneously related to runoff generation mechanism(s), antecedent conditions, rates and total amounts of precipitation, discretization of the model domain, and complexity or completeness of the model formulation. This latter point is an acknowledgement that in some ways hydrologic understanding in key areas related to land use, land cover, tillage practices, seasonality, and biological effects has some glaring deficiencies. This presentation reviews what is known about the interacting effects of precipitation amount, model spatial discretization, antecedent conditions, physiographic characteristics, and model formulation completeness for runoff predictions. These interactions define a region in multidimensional forcing, parameter, and process space where there are in some cases clear limits on predictability, and in other cases diminished uncertainty.

  1. Quantum Nuclear Dynamics Pumped and Probed by Ultrafast Polarization Controlled Steering of a Coherent Electronic State in LiH.

    PubMed

    Nikodem, Astrid; Levine, R D; Remacle, F

    2016-05-19

    The quantum wave packet dynamics following a coherent electronic excitation of LiH by an ultrashort, polarized, strong one-cycle infrared optical pulse is computed on several electronic states using a grid method. The coupling to the strong field of the pump and the probe pulses is included in the Hamiltonian used to solve the time-dependent Schrödinger equation. The polarization of the pump pulse allows us to control the localization in time and in space of the nonequilibrium coherent electronic motion and the subsequent nuclear dynamics. We show that transient absorption, resulting from the interaction of the total molecular dipole with the electric fields of the pump and the probe, is a very versatile probe of the different time scales of the vibronic dynamics. It allows probing both the ultrashort, femtosecond time scale of the electronic coherences and the longer, tens-of-femtoseconds time scales of the nuclear motion on the excited electronic states. The ultrafast beatings of the electronic coherences in space and in time are shown to be modulated by the different periods of the nuclear motion.

  2. Subsonic Scarf Inlets Investigated

    NASA Technical Reports Server (NTRS)

    Abbott, John M.

    2005-01-01

    A computational investigation is underway at the NASA Glenn Research Center to determine the aerodynamic performance of subsonic scarf inlets. These inlets are characterized as being longer over the lower portion of the inlet, as shown in the preceding figure. One of the key variables being investigated in the research is the circumferential extent of the longer portion of the inlet. The figure shows two specific geometries that are being examined: one in which the length of the inlet transitions from long to short over the full 180 deg. from bottom to top, and a second in which the length transitions over 67.5 deg.

  3. Good enough practices in scientific computing.

    PubMed

    Wilson, Greg; Bryan, Jennifer; Cranston, Karen; Kitzes, Justin; Nederbragt, Lex; Teal, Tracy K

    2017-06-01

    Computers are now essential in all branches of science, but most researchers are never taught the equivalent of basic lab skills for research computing. As a result, data can get lost, analyses can take much longer than necessary, and researchers are limited in how effectively they can work with software and data. Computing workflows need to follow the same practices as lab projects and notebooks, with organized data, documented steps, and the project structured for reproducibility, but researchers new to computing often don't know where to start. This paper presents a set of good computing practices that every researcher can adopt, regardless of their current level of computational skill. These practices, which encompass data management, programming, collaborating with colleagues, organizing projects, tracking work, and writing manuscripts, are drawn from a wide variety of published sources, from our daily lives, and from our work with volunteer organizations that have delivered workshops to over 11,000 people since 2010.

  4. Computer mouse movement patterns: A potential marker of mild cognitive impairment.

    PubMed

    Seelye, Adriana; Hagler, Stuart; Mattek, Nora; Howieson, Diane B; Wild, Katherine; Dodge, Hiroko H; Kaye, Jeffrey A

    2015-12-01

    Subtle changes in cognitively demanding activities occur in MCI but are difficult to assess with conventional methods. In an exploratory study, we examined whether patterns of computer mouse movements obtained from routine home computer use discriminated between older adults with and without MCI. Participants were 42 cognitively intact and 20 older adults with MCI enrolled in a longitudinal study of in-home monitoring technologies. Mouse pointer movement variables were computed during one week of routine home computer use using algorithms that identified and characterized mouse movements within each computer use session. MCI was associated with making significantly fewer total mouse moves (p < .01), and making mouse movements that were more variable, less efficient, and with longer pauses between movements (p < .05). Mouse movement measures were significantly associated with several cognitive domains (p's < .01-.05). Remotely monitored computer mouse movement patterns are a potential early marker of real-world cognitive changes in MCI.

  5. Integrating Technology into K-12 School Design.

    ERIC Educational Resources Information Center

    Syvertsen, Ken

    2002-01-01

    Asserting that advanced technology in schools is no longer reserved solely for spaces such as computer labs, media centers, and libraries, discusses how technology integration affects school design, addressing areas such as installation, space and proportion, lighting, furniture, and flexibility and simplicity. (EV)

  6. Time crawls when you're not having fun: feeling entitled makes dull tasks drag on.

    PubMed

    O'Brien, Edward H; Anastasio, Phyllis A; Bushman, Brad J

    2011-10-01

    All people have to complete dull tasks, but individuals who feel entitled may be more inclined to perceive them as a waste of their "precious" time, resulting in the perception that time drags. This hypothesis was confirmed in three studies. In Study 1, participants with higher trait entitlement (controlling for related variables) thought dull tasks took longer to complete; no link was found for fun tasks. In Study 2, participants exposed to entitled messages thought taking a dull survey was a greater waste of time and took longer to complete. In Study 3, participants subliminally exposed to entitled words thought dull tasks were less interesting, thought they took longer to complete, and walked away faster when leaving the laboratory. Like most resources, time is a resource valued more by entitled individuals. A time-entitlement link provides novel insight into mechanisms underlying self-focus and prosocial dynamics.

  7. Effect of spurt duration on the heat transfer dynamics during cryogen spray cooling.

    PubMed

    Aguilar, Guillermo; Wang, Guo-Xiang; Nelson, J Stuart

    2003-07-21

    Although cryogen spray cooling (CSC) is used to minimize the risk of epidermal damage during laser dermatologic surgery, optimization of the current cooling approach is needed to permit the safe use of higher light doses, which should improve the therapeutic outcome in many patients. The objective of this study was to measure the effect of spurt duration (delta t) on the heat transfer dynamics during CSC using a model skin phantom. A fast-response temperature sensor was constructed to record the changes in surface temperature during CSC. Temperature measurements as a function of delta t at two nozzle-to-skin distances (z = 50 and 20 mm) were performed. The average surface heat fluxes (q) and heat transfer coefficients (h) for each delta t were computed using an inverse heat conduction problem algorithm. It was observed that q undergoes a marked dynamic variation during the entire delta t, with a maximum heat flux (qc) occurring early in the spurt (5-15 ms), followed by a quick decrease. The estimated qc vary from 450 to 600 kW m^-2, corresponding to h maxima of 10 and 17-22 kW m^-2 K^-1 for z = 50 and 20 mm, respectively. For z = 50 mm, spurts longer than 40 ms do not increase the total heat removal (Q) within the first 200 ms. However, for z = 20 mm, delta t longer than 100 ms are required to achieve the same Q. It is shown that the heat transfer dynamics and the time it takes to reach qc during CSC can be understood through classic boiling theory as a transition from transient to nucleate boiling. Based on the results of this model skin phantom, it is shown that spurts longer than 40 ms have a negligible impact on both q and Q within clinically relevant cooling times (10-100 ms).

  8. Creating Learning Environments in the Early Grades That Support Teacher and Student Success: Profiles of Effective Practices in Three Expanded Learning Time Schools

    ERIC Educational Resources Information Center

    Farbman, David A.; Novoryta, Ami

    2016-01-01

    In "Creating Learning Environments in the Early Grades that Support Teacher and Student Success," the National Center on Time & Learning (NCTL) profiles three expanded-time elementary schools that leverage a longer school day to better serve young students. In particular, the report describes how a longer day opens up opportunities…

  9. SPA- STATISTICAL PACKAGE FOR TIME AND FREQUENCY DOMAIN ANALYSIS

    NASA Technical Reports Server (NTRS)

    Brownlow, J. D.

    1994-01-01

    The need for statistical analysis often arises when data is in the form of a time series. This type of data is usually a collection of numerical observations made at specified time intervals. Two kinds of analysis may be performed on the data. First, the time series may be treated as a set of independent observations using a time domain analysis to derive the usual statistical properties including the mean, variance, and distribution form. Secondly, the order and time intervals of the observations may be used in a frequency domain analysis to examine the time series for periodicities. In almost all practical applications, the collected data is actually a mixture of the desired signal and a noise signal which is collected over a finite time period with a finite precision. Therefore, any statistical calculations and analyses are actually estimates. The Spectrum Analysis (SPA) program was developed to perform a wide range of statistical estimation functions. SPA can provide the data analyst with a rigorous tool for performing time and frequency domain studies. In a time domain statistical analysis the SPA program will compute the mean, variance, standard deviation, mean square, and root mean square. It also lists the data maximum, data minimum, and the number of observations included in the sample. In addition, a histogram of the time domain data is generated, a normal curve is fit to the histogram, and a goodness-of-fit test is performed. These time domain calculations may be performed on both raw and filtered data. For a frequency domain statistical analysis the SPA program computes the power spectrum, cross spectrum, coherence, phase angle, amplitude ratio, and transfer function. The estimates of the frequency domain parameters may be smoothed with the use of Hann-Tukey, Hamming, Bartlett, or moving-average windows. Various digital filters are available to isolate data frequency components. Frequency components with periods longer than the data collection interval are removed by least-squares detrending. As many as ten channels of data may be analyzed at one time. Both tabular and plotted output may be generated by the SPA program. This program is written in FORTRAN IV and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 142K (octal) of 60 bit words. This core requirement can be reduced by segmentation of the program. The SPA program was developed in 1978.
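
    The same kinds of time- and frequency-domain estimates can be reproduced today with standard library routines. The sketch below, on synthetic two-channel data with arbitrary parameters, computes basic time-domain statistics, detrends by least squares, and then estimates the power spectrum, coherence, and cross-spectral phase with a Hann window; it mirrors the quantities SPA reports but is not the SPA code.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs, n = 100.0, 8192                       # sampling rate (Hz), samples
t = np.arange(n) / fs

# Two synthetic channels sharing a 5 Hz component, plus independent noise
x = np.sin(2 * np.pi * 5 * t) + rng.standard_normal(n)
y = 0.5 * np.sin(2 * np.pi * 5 * t + 0.3) + rng.standard_normal(n)

# Time-domain statistics
print(f"mean={x.mean():.3f} var={x.var():.3f} rms={np.sqrt(np.mean(x**2)):.3f}")

# Remove slow trends (periods longer than the record) by least squares
x = signal.detrend(x, type="linear")
y = signal.detrend(y, type="linear")

# Frequency-domain estimates with a Hann window (Welch averaging)
f, pxx = signal.welch(x, fs=fs, window="hann", nperseg=1024)
f, cxy = signal.coherence(x, y, fs=fs, window="hann", nperseg=1024)
f, pxy = signal.csd(x, y, fs=fs, window="hann", nperseg=1024)
phase = np.angle(pxy)                     # phase angle of the cross spectrum

peak = np.argmax(pxx)
print(f"peak PSD at {f[peak]:.2f} Hz, coherence there {cxy[peak]:.2f}")
```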

  10. Relationship of Attention Deficit Hyperactivity Disorder and Postconcussion Recovery in Youth Athletes.

    PubMed

    Mautner, Kenneth; Sussman, Walter I; Axtman, Matthew; Al-Farsi, Yahya; Al-Adawi, Samir

    2015-07-01

    To investigate whether attention deficit hyperactivity disorder (ADHD) influences postconcussion recovery, as measured by computerized neurocognitive testing. This is a retrospective case control study. Computer laboratories across 10 high schools in the greater Atlanta, Georgia area. Immediate postconcussion assessment and cognitive testing (ImPACT) scores of 70 athletes with a self-reported diagnosis of ADHD and who sustained a sport-related concussion were compared with a randomly selected age-matched control group. Immediate postconcussion assessment and cognitive testing scores over a 5-year interval were reviewed for inclusion. Postconcussion recovery was defined as a return to equivalent baseline neurocognitive score on the ImPACT battery, and a concussion symptom score of ≤7. Athletes with ADHD had on average a longer time to recovery when compared with the control group (16.5 days compared with 13.5 days), although not statistically significant. The number of previous concussions did not have any effect on the rate of recovery in the ADHD or the control group. In addition, baseline neurocognitive testing did not statistically differ between the 2 groups, except in verbal memory. Although not statistically significant, youth athletes with ADHD took on average 3 days longer to return to baseline neurocognitive testing compared with a control group without ADHD. Youth athletes with ADHD may have a marginally prolonged recovery as indexed by neurocognitive testing and should be considered when prognosticating time to recovery in this subset of student athletes.

  11. Theoretical approaches for dynamical ordering of biomolecular systems.

    PubMed

    Okumura, Hisashi; Higashi, Masahiro; Yoshida, Yuichiro; Sato, Hirofumi; Akiyama, Ryo

    2018-02-01

    Living systems are characterized by the dynamic assembly and disassembly of biomolecules. The dynamical ordering mechanism of these biomolecules has been investigated both experimentally and theoretically. The main theoretical approaches include quantum mechanical (QM) calculation, all-atom (AA) modeling, and coarse-grained (CG) modeling. The selected approach depends on the size of the target system (which differs among electrons, atoms, molecules, and molecular assemblies). These hierarchical approaches can be combined with molecular dynamics (MD) simulation and/or integral equation theories for liquids, which cover all size hierarchies. We review the framework of quantum mechanical/molecular mechanical (QM/MM) calculations, AA MD simulations, CG modeling, and integral equation theories. Applications of these methods to the dynamical ordering of biomolecular systems are also exemplified. The QM/MM calculation enables the study of chemical reactions. The AA MD simulation, which omits the QM calculation, can follow longer time-scale phenomena. By reducing the number of degrees of freedom and the computational cost, CG modeling can follow much longer time-scale phenomena than AA modeling. Integral equation theories for liquids elucidate the liquid structure, for example, through the radial distribution function. These theoretical approaches can analyze the dynamic behaviors of biomolecular systems. They also provide useful tools for exploring the dynamic ordering systems of biomolecules, such as self-assembly. This article is part of a Special Issue entitled "Biophysical Exploration of Dynamical Ordering of Biomolecular Systems" edited by Dr. Koichi Kato. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Insomnia symptoms among Greek adolescent students with excessive computer use

    PubMed Central

    Siomos, K E; Braimiotis, D; Floros, G D; Dafoulis, V; Angelopoulos, N V

    2010-01-01

    Background: The aim of the present study is to assess the intensity of computer use and insomnia epidemiology among Greek adolescents, to examine any possible age and gender differences and to investigate whether excessive computer use is a risk factor for developing insomnia symptoms. Patients and Methods: Cross-sectional study of a stratified sample of 2195 high school students. Demographic data were recorded and two specific questionnaires were used, the Adolescent Computer Addiction Test (ACAT) and the Athens Insomnia Scale (AIS). Results: Females scored higher than males on insomnia complaints but lower on computer use and addiction. A dose-mediated effect of computer use on insomnia complaints was recorded. Computer use had a larger effect size than sex on insomnia complaints. Duration of computer use was longer for those adolescents classified as suffering from insomnia compared to those who were not. Conclusions: Computer use can be a significant cause of insomnia complaints in an adolescent population regardless of whether the individual is classified as addicted or not. PMID:20981171

  13. Unequal Probability Marking Approach to Enhance Security of Traceback Scheme in Tree-Based WSNs.

    PubMed

    Huang, Changqin; Ma, Ming; Liu, Xiao; Liu, Anfeng; Zuo, Zhengbang

    2017-06-17

    Fog (from core to edge) computing is a newly emerging computing platform, which utilizes a large number of network devices at the edge of a network to provide ubiquitous computing, thus having great development potential. However, the issue of security poses an important challenge for fog computing. In particular, the Internet of Things (IoT) that constitutes the fog computing platform is crucial for preserving the security of a huge number of wireless sensors, which are vulnerable to attack. In this paper, a new unequal probability marking approach is proposed to enhance the security performance of logging and migration traceback (LM) schemes in tree-based wireless sensor networks (WSNs). The main contribution of this paper is to overcome the deficiency of the LM scheme that has a higher network lifetime and large storage space. In the unequal probability marking logging and migration (UPLM) scheme of this paper, different marking probabilities are adopted for different nodes according to their distances to the sink. A large marking probability is assigned to nodes in remote areas (areas at a long distance from the sink), while a small marking probability is applied to nodes in nearby areas (areas at a short distance from the sink). This reduces the consumption of storage and energy in addition to enhancing the security performance, lifetime, and storage capacity. Marking information will be migrated to nodes at a longer distance from the sink for increasing the amount of stored marking information, thus enhancing the security performance in the process of migration. The experimental simulation shows that for general tree-based WSNs, the UPLM scheme proposed in this paper can store 1.12-1.28 times the amount of stored marking information that the equal probability marking approach achieves, and has 1.15-1.26 times the storage utilization efficiency compared with other schemes.
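
    A minimal sketch of the unequal-marking idea described above: each forwarding node marks a packet with a probability that grows with its distance from the sink. The linear ramp between p_near and p_far and all numeric values are assumptions made for illustration; the abstract does not give UPLM's actual probability function.

```python
import random

def marking_probability(hops_to_sink, max_hops, p_near=0.05, p_far=0.6):
    """Unequal marking: nodes far from the sink mark packets more often.
    The linear ramp and the end-point values are illustrative assumptions,
    not the UPLM formula from the paper."""
    frac = hops_to_sink / max_hops
    return p_near + (p_far - p_near) * frac

def maybe_mark(packet, node_id, hops_to_sink, max_hops, rng=random):
    """Each forwarding node appends its ID with its own marking probability."""
    if rng.random() < marking_probability(hops_to_sink, max_hops):
        packet.setdefault("marks", []).append(node_id)
    return packet

# A packet generated 10 hops from the sink and forwarded inward
random.seed(1)
packet = {"payload": "sensor reading"}
for hops in range(10, 0, -1):            # node 10 is farthest from the sink
    maybe_mark(packet, f"node-{hops}", hops_to_sink=hops, max_hops=10)
print(packet.get("marks", []))
```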

  14. Unequal Probability Marking Approach to Enhance Security of Traceback Scheme in Tree-Based WSNs

    PubMed Central

    Huang, Changqin; Ma, Ming; Liu, Xiao; Liu, Anfeng; Zuo, Zhengbang

    2017-01-01

    Fog (from core to edge) computing is a newly emerging computing platform, which utilizes a large number of network devices at the edge of a network to provide ubiquitous computing, thus having great development potential. However, the issue of security poses an important challenge for fog computing. In particular, the Internet of Things (IoT) that constitutes the fog computing platform is crucial for preserving the security of a huge number of wireless sensors, which are vulnerable to attack. In this paper, a new unequal probability marking approach is proposed to enhance the security performance of logging and migration traceback (LM) schemes in tree-based wireless sensor networks (WSNs). The main contribution of this paper is to overcome the deficiency of the LM scheme that has a higher network lifetime and large storage space. In the unequal probability marking logging and migration (UPLM) scheme of this paper, different marking probabilities are adopted for different nodes according to their distances to the sink. A large marking probability is assigned to nodes in remote areas (areas at a long distance from the sink), while a small marking probability is applied to nodes in nearby area (areas at a short distance from the sink). This reduces the consumption of storage and energy in addition to enhancing the security performance, lifetime, and storage capacity. Marking information will be migrated to nodes at a longer distance from the sink for increasing the amount of stored marking information, thus enhancing the security performance in the process of migration. The experimental simulation shows that for general tree-based WSNs, the UPLM scheme proposed in this paper can store 1.12–1.28 times the amount of stored marking information that the equal probability marking approach achieves, and has 1.15–1.26 times the storage utilization efficiency compared with other schemes. PMID:28629135

  15. Social Cognitive Theory Predictors of Exercise Behavior in Endometrial Cancer Survivors

    PubMed Central

    Basen-Engquist, Karen; Carmack, Cindy L.; Li, Yisheng; Brown, Jubilee; Jhingran, Anuja; Hughes, Daniel C.; Perkins, Heidi Y.; Scruggs, Stacie; Harrison, Carol; Baum, George; Bodurka, Diane C.; Waters, Andrew

    2014-01-01

    Objective This study evaluated whether social cognitive theory (SCT) variables, as measured by questionnaire and ecological momentary assessment (EMA), predicted exercise in endometrial cancer survivors. Methods One hundred post-treatment endometrial cancer survivors received a 6-month home-based exercise intervention. EMAs were conducted using hand-held computers for 10- to 12-day periods every 2 months. Participants rated morning self-efficacy and positive and negative outcome expectations using the computer, recorded exercise information in real time and at night, and wore accelerometers. At the midpoint of each assessment period participants completed SCT questionnaires. Using linear mixed-effects models, we tested whether morning SCT variables predicted minutes of exercise that day (Question 1) and whether exercise minutes at time point Tj could be predicted by questionnaire measures of SCT variables from time point Tj-1 (Question 2). Results Morning self-efficacy significantly predicted that day’s exercise minutes (p<.0001). Morning positive outcome expectations were also associated with exercise minutes (p=0.0003), but the relationship was attenuated when self-efficacy was included in the model (p=0.4032). Morning negative outcome expectations were not associated with exercise minutes. Of the questionnaire measures of SCT variables, only exercise self-efficacy predicted exercise at the next time point (p=0.003). Conclusions The consistency of the relationship between self-efficacy and exercise minutes over short (same day) and longer (Tj-1 to Tj) time periods provides support for a causal relationship. The strength of the relationship between morning self-efficacy and exercise minutes suggests that real-time interventions that target daily variation in self-efficacy may benefit endometrial cancer survivors’ exercise adherence. PMID:23437853

  16. Approaches and Data Quality for Global Precipitation Estimation

    NASA Astrophysics Data System (ADS)

    Huffman, G. J.; Bolvin, D. T.; Nelkin, E. J.

    2015-12-01

    The space and time scales on which precipitation varies are small compared to the satellite coverage that we have, so it is necessary to merge "all" of the available satellite estimates. Differing retrieval capabilities from the various satellites require inter-calibration for the satellite estimates, while "morphing", i.e., Lagrangian time interpolation, is used to lengthen the period over which time interpolation is valid. Additionally, estimates from geostationary-Earth-orbit infrared data are plentiful, but of sufficiently lower quality compared to low-Earth-orbit passive microwave estimates that they are only used when needed. Finally, monthly surface precipitation gauge data can be used to reduce bias and improve patterns of occurrence for monthly satellite data, and short-interval satellite estimates can be improved with a simple scaling such that they sum to the monthly satellite-gauge combination. The presentation will briefly consider some of the design decisions for practical computation of the Global Precipitation Measurement (GPM) mission product Integrated Multi-satellitE Retrievals for GPM (IMERG), then examine design choices that maximize value for end users. For example, data fields are provided in the output file that provide insight into the basis for the estimated precipitation, including error, sensor providing the estimate, precipitation phase (solid/liquid), and intermediate precipitation estimates. Another important initiative is successive computations for the same data date/time at longer latencies as additional data are received, which for IMERG is currently done at 6 hours, 16 hours, and 3 months after observation time. Importantly, users require long records for each latency, which runs counter to the data archiving practices at most archive sites. In addition, the assignment of Digital Object Identifiers (DOIs) for near-real-time data sets (at 6 and 16 hours for IMERG) is not a settled issue.
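
    The month-to-short-interval adjustment mentioned here (scaling the short-interval satellite estimates so they sum to the monthly satellite-gauge combination) can be sketched as a simple ratio adjustment. The sketch below is illustrative only and omits the safeguards of the operational IMERG procedure; the ratio cap and tolerance are assumptions.

```python
import numpy as np

def scale_to_monthly(short_interval_fields, monthly_sat_gauge, eps=1e-6, max_ratio=10.0):
    """Scale short-interval precipitation fields so they sum to a monthly target.

    short_interval_fields : array of shape (n_times, ny, nx), satellite-only estimates
    monthly_sat_gauge     : array of shape (ny, nx), gauge-adjusted monthly combination

    The ratio cap is an illustrative safeguard; the operational procedure has its
    own limits and handling of zero-precipitation cells.
    """
    monthly_sat = short_interval_fields.sum(axis=0)
    ratio = np.where(monthly_sat > eps,
                     monthly_sat_gauge / np.maximum(monthly_sat, eps),
                     1.0)
    ratio = np.clip(ratio, 0.0, max_ratio)
    return short_interval_fields * ratio[None, :, :]
```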

  17. FPGA-Based X-Ray Detection and Measurement for an X-Ray Polarimeter

    NASA Technical Reports Server (NTRS)

    Gregory, Kyle; Hill, Joanne; Black, Kevin; Baumgartner, Wayne

    2013-01-01

    This technology enables detection and measurement of x-rays in an x-ray polarimeter using a field-programmable gate array (FPGA). The technology was developed for the Gravitational and Extreme Magnetism Small Explorer (GEMS) mission. It performs precision energy and timing measurements, as well as rejection of non-x-ray events. It enables the GEMS polarimeter to detect precisely when an event has taken place so that additional measurements can be made. The technology also enables this function to be performed in an FPGA using limited resources so that mass and power can be minimized while reliability for a space application is maximized and precise real-time operation is achieved. This design requires a low-noise, charge-sensitive preamplifier; a high-speed analog-to-digital converter (ADC); and an x-ray detector with a cathode terminal. It functions by computing a sum of differences for time-samples whose difference exceeds a programmable threshold. A state machine advances through states as a programmable number of consecutive samples exceeds or fails to exceed this threshold. The pulse height is recorded as the accumulated sum. The track length is also measured based on the time from the start to the end of accumulation. For track lengths longer than a certain value, the algorithm estimates the barycenter of charge deposit by comparing the accumulator value at the midpoint to the final accumulator value. The design also employs a number of techniques for rejecting background events. This innovation enables the function to be performed in space where it can operate autonomously with a rapid response time. This implementation combines advantages of computing system-based approaches with those of pure analog approaches. The result is an implementation that is highly reliable, performs in real-time, rejects background events, and consumes minimal power.
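
    A software model of the detection logic outlined above is sketched below, assuming the sum-of-differences accumulation, consecutive-sample state machine, and midpoint-versus-final accumulator comparison the abstract describes; the thresholds, sample counts, and output format are illustrative, and this is not the flight FPGA implementation.

```python
def detect_pulses(samples, diff_threshold=5, start_count=3, end_count=3):
    """Software model of the sum-of-differences pulse detector described above.

    A pulse starts after `start_count` consecutive sample-to-sample differences
    exceed `diff_threshold`, and ends after `end_count` consecutive differences
    fail to exceed it.  Pulse height is the accumulated sum of qualifying
    differences; track length is the sample count from start to end of
    accumulation.  Parameter values and the barycenter bookkeeping are
    illustrative assumptions.
    """
    pulses = []
    in_pulse, above, below = False, 0, 0
    acc, start, history = 0, 0, []
    for i in range(1, len(samples)):
        diff = samples[i] - samples[i - 1]
        exceeds = diff > diff_threshold
        if not in_pulse:
            above = above + 1 if exceeds else 0
            if above >= start_count:
                in_pulse, acc, start, below, history = True, 0, i, 0, []
        else:
            if exceeds:
                acc += diff
                below = 0
            else:
                below += 1
            history.append(acc)  # accumulator trace, used for the barycenter estimate
            if below >= end_count:
                length = i - start
                midpoint_acc = history[len(history) // 2]
                # Ratio below 0.5 means most of the charge arrived in the second half.
                barycenter = midpoint_acc / acc if acc else 0.5
                pulses.append({"height": acc, "length": length, "barycenter": barycenter})
                in_pulse, above = False, 0
    return pulses
```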

  18. Population coding and decoding in a neural field: a computational study.

    PubMed

    Wu, Si; Amari, Shun-Ichi; Nakahara, Hiroyuki

    2002-05-01

    This study uses a neural field model to investigate computational aspects of population coding and decoding when the stimulus is a single variable. A general prototype model for the encoding process is proposed, in which neural responses are correlated, with strength specified by a gaussian function of their difference in preferred stimuli. Based on the model, we study the effect of correlation on the Fisher information, compare the performances of three decoding methods that differ in the amount of encoding information being used, and investigate the implementation of the three methods by using a recurrent network. This study not only rediscovers main results in the existing literature in a unified way, but also reveals important new features, especially when the neural correlation is strong. As the neural correlation of firing becomes larger, the Fisher information decreases drastically. We confirm that as the width of correlation increases, the Fisher information saturates and no longer increases in proportion to the number of neurons. However, we prove that as the width increases further--wider than √2 times the effective width of the tuning function--the Fisher information increases again, and it increases without limit in proportion to the number of neurons. Furthermore, we clarify the asymptotic efficiency of the maximum likelihood inference (MLI) type of decoding methods for correlated neural signals. It shows that when the correlation covers a nonlocal range of population (excepting the uniform correlation and when the noise is extremely small), the MLI type of method, whose decoding error satisfies the Cauchy-type distribution, is not asymptotically efficient. This implies that the variance is no longer adequate to measure decoding accuracy.
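
    As hedged background rather than the paper's own derivation, the Fisher information of a Gaussian population code with correlated noise takes the standard form below; the limited-range correlation structure described in the abstract enters through the covariance matrix C.

```latex
% Standard (not paper-specific) Fisher information for a Gaussian population code
% with mean response f(\theta) and covariance C(\theta).  A limited-range correlation
% structure, as described in the abstract, would take the form
% C_{ij} = \sigma_i \sigma_j \, c \exp\!\bigl(-(\theta_i-\theta_j)^2/(2a^2)\bigr) for i \neq j.
\[
  I(\theta) \;=\; \mathbf{f}'(\theta)^{\mathsf{T}}\, C(\theta)^{-1}\, \mathbf{f}'(\theta)
  \;+\; \tfrac{1}{2}\,\mathrm{Tr}\!\left[\bigl(C(\theta)^{-1} C'(\theta)\bigr)^{2}\right]
\]
% The trace term vanishes when the covariance does not depend on \theta; the first
% (linear) term is the part that can saturate with population size under
% limited-range correlations, as discussed in the record above.
```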

  19. Comparison of circular orbit and Fourier power series ephemeris representations for backup use by the upper atmosphere research satellite onboard computer

    NASA Technical Reports Server (NTRS)

    Kast, J. R.

    1988-01-01

    The Upper Atmosphere Research Satellite (UARS) is a three-axis stabilized Earth-pointing spacecraft in a low-Earth orbit. The UARS onboard computer (OBC) uses a Fourier Power Series (FPS) ephemeris representation that includes 42 position and 42 velocity coefficients per axis, with position residuals at 10-minute intervals. New coefficients and 32 hours of residuals are uploaded daily. This study evaluated two backup methods that permit the OBC to compute an approximate spacecraft ephemeris in the event that new ephemeris data cannot be uplinked for several days: (1) extending the use of the FPS coefficients previously uplinked, and (2) switching to a simple circular orbit approximation designed and tested (but not implemented) for LANDSAT-D. The FPS method provides greater accuracy during the backup period and does not require additional ground operational procedures for generating and uplinking an additional ephemeris table. The tradeoff is that the high accuracy of the FPS will be degraded slightly by adopting the longer fit period necessary to obtain backup accuracy for an extended period of time. The results for UARS show that extended use of the FPS is superior to the circular orbit approximation for short-term ephemeris backup.
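
    The exact basis of the UARS Fourier Power Series representation is not given in this record, so the sketch below evaluates a generic truncated Fourier series in the orbital angular rate for one position (or velocity) axis; the coefficient layout and the omission of any polynomial terms are assumptions for illustration.

```python
import numpy as np

def eval_fourier_series(t, a0, cos_coeffs, sin_coeffs, omega):
    """Evaluate a truncated Fourier series for one ephemeris axis.

    t          : time since the reference epoch (seconds)
    a0         : constant term
    cos_coeffs : coefficients of cos(k*omega*t), k = 1..N (numpy array)
    sin_coeffs : coefficients of sin(k*omega*t), k = 1..N (numpy array)
    omega      : fundamental (orbital) angular rate, rad/s

    Generic sketch only; the operational UARS representation packs 42 coefficients
    per axis and may mix polynomial and harmonic terms.
    """
    k = np.arange(1, len(cos_coeffs) + 1)
    return (a0
            + np.sum(cos_coeffs * np.cos(k * omega * t))
            + np.sum(sin_coeffs * np.sin(k * omega * t)))
```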

  20. Where Is the Research on Negative Messages?

    ERIC Educational Resources Information Center

    DeKay, Sam H.

    2012-01-01

    Most business communication textbooks treat "unfavorable" communications as written documents--denials of credit, collection requests, rejections for employment, inability to meet deadlines, etc. These written "unfavorable" documents are no longer actually written by most employees. In fact, many of these communications are computer generated and…

  1. Integrative genomic mining for enzyme function to enable engineering of a non-natural biosynthetic pathway.

    PubMed

    Mak, Wai Shun; Tran, Stephen; Marcheschi, Ryan; Bertolani, Steve; Thompson, James; Baker, David; Liao, James C; Siegel, Justin B

    2015-11-24

    The ability to biosynthetically produce chemicals beyond what is commonly found in Nature requires the discovery of novel enzyme function. Here we utilize two approaches to discover enzymes that enable specific production of longer-chain (C5-C8) alcohols from sugar. The first approach combines bioinformatics and molecular modelling to mine sequence databases, resulting in a diverse panel of enzymes capable of catalysing the targeted reaction. The median catalytic efficiency of the computationally selected enzymes is 75-fold greater than a panel of naively selected homologues. This integrative genomic mining approach establishes a unique avenue for enzyme function discovery in the rapidly expanding sequence databases. The second approach uses computational enzyme design to reprogramme specificity. Both approaches result in enzymes with >100-fold increase in specificity for the targeted reaction. When enzymes from either approach are integrated in vivo, longer-chain alcohol production increases over 10-fold and represents >95% of the total alcohol products.

  2. Electron and ion heating by whistler turbulence: Three-dimensional particle-in-cell simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, R. Scott; Gary, S. Peter; Wang, Joseph

    2014-12-17

    Three-dimensional particle-in-cell simulations of decaying whistler turbulence are carried out on a collisionless, homogeneous, magnetized, electron-ion plasma model. The simulations use an initial ensemble of relatively long wavelength whistler modes with a broad range of initial propagation directions and an initial electron beta βe = 0.05. The computations follow the temporal evolution of the fluctuations as they cascade into broadband turbulent spectra at shorter wavelengths. Three simulations correspond to successively larger simulation boxes and successively longer wavelengths of the initial fluctuations. The computations confirm previous results showing that electron heating is preferentially parallel to the background magnetic field B0, and ion heating is preferentially perpendicular to B0. The new results here are that larger simulation boxes and longer initial whistler wavelengths yield weaker overall dissipation, consistent with linear dispersion theory predictions of decreased damping; stronger ion heating, consistent with a stronger ion Landau resonance; and weaker electron heating.

  3. Computer-Delivered Interventions to Reduce College Student Drinking: A Meta-Analysis

    PubMed Central

    Carey, Kate B.; Scott-Sheldon, Lori A. J.; Elliott, Jennifer C.; Bolles, Jamie R.; Carey, Michael P.

    2009-01-01

    Aims This meta-analysis evaluates the efficacy and moderators of computer-delivered interventions (CDIs) to reduce alcohol use among college students. Methods We included 35 manuscripts with 43 separate interventions, and calculated both between-group and within-group effect sizes for alcohol consumption and alcohol-related problems. Effect sizes were calculated for short-term (≤ 5 weeks) and longer-term (≥ 6 weeks) intervals. All studies were coded for study descriptors, participant characteristics, and intervention components. Results The effects of CDIs depended on the nature of the comparison condition: CDIs reduced quantity and frequency measures relative to assessment-only controls, but rarely differed from comparison conditions that included alcohol content. Small-to-medium within-group effect sizes can be expected for CDIs at short- and longer-term follow-ups; these changes are less than or equivalent to the within-group effect sizes observed for more intensive interventions. Conclusions CDIs reduce the quantity and frequency of drinking among college students. CDIs are generally equivalent to alternative alcohol-related comparison interventions. PMID:19744139

  4. The Effect of Interruption Duration and Demand on Resuming Suspended Goals

    ERIC Educational Resources Information Center

    Monk, Christopher A.; Trafton, J. Gregory; Boehm-Davis, Deborah A.

    2008-01-01

    The time to resume task goals after an interruption varied depending on the duration and cognitive demand of interruptions, as predicted by the memory for goals model (Altmann & Trafton, 2002). Three experiments using an interleaved tasks interruption paradigm showed that longer and more demanding interruptions led to longer resumption times in a…

  5. Computing the unconscious.

    PubMed

    Dougherty, Stephen

    2010-01-01

    This essay examines the unconscious as modeled by cognitive science and compares it to the psychoanalytic unconscious. In making this comparison, the author underscores the important but usually overlooked fact that computational psychology and psychoanalytic theory are both varieties of posthumanism. He argues that if posthumanism is to advance a vision for our future that is no longer fixated on a normative image of the human, then its own normative claims about the primacy of Darwinian functioning must be disrupted and undermined through a renewed emphasis on its Freudian heritage.

  6. Software For Computer-Security Audits

    NASA Technical Reports Server (NTRS)

    Arndt, Kate; Lonsford, Emily

    1994-01-01

    Information relevant to potential breaches of security gathered efficiently. Automated Auditing Tools for VAX/VMS program includes following automated software tools performing noted tasks: Privileged ID Identification, program identifies users and their privileges to circumvent existing computer security measures; Critical File Protection, critical files not properly protected identified; Inactive ID Identification, identifications of users no longer in use found; Password Lifetime Review, maximum lifetimes of passwords of all identifications determined; and Password Length Review, minimum allowed length of passwords of all identifications determined. Written in DEC VAX DCL language.

  7. Data handling and visualization for NASA's science programs

    NASA Technical Reports Server (NTRS)

    Bredekamp, Joseph H. (Editor)

    1995-01-01

    Advanced information systems capabilities are essential to conducting NASA's scientific research mission. Access to these capabilities is no longer a luxury for a select few within the science community, but rather an absolute necessity for carrying out scientific investigations. The dependence on high performance computing and networking, as well as ready and expedient access to science data, metadata, and analysis tools is the fundamental underpinning for the entire research endeavor. At the same time, advances in the whole range of information technologies continue on an almost explosive growth path, reaching beyond the research community to affect the population as a whole. Capitalizing on and exploiting these advances are critical to the continued success of space science investigations. NASA must remain abreast of developments in the field and strike an appropriate balance between being a smart buyer and a direct investor in the technology which serves its unique requirements. Another key theme deals with the need for the space and computer science communities to collaborate as partners to more fully realize the potential of information technology in the space science research environment.

  8. Online computer gaming: a comparison of adolescent and adult gamers.

    PubMed

    Griffiths, M D; Davies, Mark N O; Chappell, Darren

    2004-02-01

    Despite the growing popularity of online game playing, there have been no surveys comparing adolescent and adult players. Therefore, an online questionnaire survey was used to examine various factors of online computer game players (n = 540) who played the most popular online game Everquest. The survey examined basic demographic information, playing frequency (i.e. amount of time spent playing the game a week), playing history (i.e. how long they had been playing the game), who they played the game with, whether they had ever gender swapped their game character, the favourite and least favourite aspects of playing the game, and what they sacrificed (if anything) to play the game. Results showed that adolescent gamers were significantly more likely to be male, significantly less likely to gender swap their characters, and significantly more likely to sacrifice their education or work. In relation to favourite aspects of game play, the biggest difference between the groups was that significantly more adolescents than adults claimed their favourite aspect of playing was violence. Results also showed that in general, the younger the player, the longer they spent each week playing.

  9. A high-resolution optical measurement system for rapid acquisition of radiation flux density maps

    NASA Astrophysics Data System (ADS)

    Thelen, Martin; Raeder, Christian; Willsch, Christian; Dibowski, Gerd

    2017-06-01

    To identify the power and flux density of concentrated solar radiation the Institute of Solar Research at the German Aerospace Center (DLR - Deutsches Zentrum für Luft-und Raumfahrt e. V.) has used the camera-based measurement system FATMES (Flux and Temperature Measurement System) since 1995. The disadvantages of low resolution, difficult handling and poor computing power required a revision of the existing measurement system. The measurement system FMAS (Flux Mapping Acquisition System) is equipped with state-of-the-art hardware, is compatible with off-the-shelf computers and is programmed in LabView. The expenditure of time for an image evaluation is reduced by a factor of 60 compared to FATMES. The new measurement system is no longer tied to the Solar Furnace and High Flux Solar Simulator facilities at the DLR in Cologne but can also be used as a mobile system. The data and the algorithms are transparent throughout the complete process. The measurement accuracy of FMAS has so far been determined to be at most ±3 %. The measurement error of FATMES is at least 2 % higher according to the comparison tests conducted.

  10. Faster Double-Size Bipartite Multiplication out of Montgomery Multipliers

    NASA Astrophysics Data System (ADS)

    Yoshino, Masayuki; Okeya, Katsuyuki; Vuillaume, Camille

    This paper proposes novel algorithms for computing double-size modular multiplications with few modulus-dependent precomputations. Low-end devices such as smartcards are usually equipped with hardware Montgomery multipliers. However, due to progress in mathematical attacks, security institutions such as NIST have steadily demanded longer bit-lengths for public-key cryptography, making the multipliers quickly obsolete. In an attempt to extend the lifespan of such multipliers, double-size techniques compute modular multiplications with twice the bit-length of the multipliers. Techniques are known for extending the bit-length of classical Euclidean multipliers, of Montgomery multipliers and the combination thereof, namely bipartite multipliers. However, unlike classical and bipartite multiplications, Montgomery multiplications involve modulus-dependent precomputations, which amount to a large part of an RSA encryption or signature verification. The proposed double-size technique simulates double-size multiplications based on single-size Montgomery multipliers, and yet precomputations are essentially free: in a 2048-bit RSA encryption or signature verification with public exponent e = 2^16 + 1, the proposal with a 1024-bit Montgomery multiplier is at least 1.5 times faster than previous double-size Montgomery multiplications.
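
    The double-size constructions themselves are not reproduced here, but the primitive they build on can be illustrated. The sketch below shows single-size Montgomery multiplication (REDC) and the modulus-dependent precomputations (R^2 mod n and n' = -n^{-1} mod R) whose cost the paper seeks to avoid for double-size moduli; the modulus, bit-length, and variable names are illustrative.

```python
def montgomery_setup(n, r_bits):
    """Precompute constants for Montgomery arithmetic modulo an odd n.

    These modulus-dependent values (R^2 mod n and n' = -n^{-1} mod R) are the
    kind of precomputation the paper aims to make essentially free.
    """
    r = 1 << r_bits
    n_prime = (-pow(n, -1, r)) % r   # n' such that n*n' ≡ -1 (mod R); requires Python 3.8+
    r2 = (r * r) % n                 # used to convert operands into Montgomery form
    return r, n_prime, r2

def mont_mul(a_bar, b_bar, n, r_bits, n_prime):
    """REDC-based Montgomery product: returns a_bar*b_bar*R^{-1} mod n."""
    r_mask = (1 << r_bits) - 1
    t = a_bar * b_bar
    m = ((t & r_mask) * n_prime) & r_mask
    u = (t + m * n) >> r_bits
    return u - n if u >= n else u

# Usage sketch: multiply a*b mod n through the Montgomery domain.
n, r_bits = 0xF123, 16                          # illustrative small odd modulus
a, b = 1234, 5678
r, n_prime, r2 = montgomery_setup(n, r_bits)
a_bar = mont_mul(a, r2, n, r_bits, n_prime)     # a*R mod n
b_bar = mont_mul(b, r2, n, r_bits, n_prime)     # b*R mod n
prod_bar = mont_mul(a_bar, b_bar, n, r_bits, n_prime)
prod = mont_mul(prod_bar, 1, n, r_bits, n_prime)  # convert back out of Montgomery form
assert prod == (a * b) % n
```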

  11. Throwing computing into reverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank, Michael P.

    For more than 50 years, computers have made steady and dramatic improvements, all thanks to Moore’s Law—the exponential increase over time in the number of transistors that can be fabricated on an integrated circuit of a given size. Moore’s Law owed its success to the fact that as transistors were made smaller, they became simultaneously cheaper, faster, and more energy efficient. The payoff from this win-win-win scenario enabled reinvestment in semiconductor fabrication technology that could make even smaller, more densely-packed transistors. And so this virtuous cycle continued, decade after decade. Now though, experts in industry, academia, and government laboratories anticipate that semiconductor miniaturization won’t continue much longer—maybe 10 years or so, at best. Making transistors smaller no longer yields the improvements it used to. The physical characteristics of small transistors forced clock speeds to cease getting faster more than a decade ago, which drove the industry to start building chips with multiple cores. But even multi-core architectures must contend with increasing amounts of “dark silicon,” areas of the chip that must be powered off to avoid overheating.

  12. Outpatient Office Wait Times and Quality of Care for Medicaid Patients

    PubMed Central

    Oostrom, Tamar; Einav, Liran; Finkelstein, Amy

    2018-01-01

    Time spent in the doctor’s waiting room captures an important aspect of the healthcare experience. We analyzed data on 21 million outpatient visits obtained from electronic health record systems, allowing us to measure time spent in the waiting room beyond the scheduled appointment time. Median wait time was just over 4 minutes. Almost one-fifth of visits had waits longer than 20 minutes, and 10% were over 30 minutes. Waits were shorter for early morning appointments, younger patients, and at larger practices. Median wait time was 4.1 minutes for privately-insured and 4.6 minutes for Medicaid patients; adjusting for patient and appointment characteristics, Medicaid patients were 20% more likely than the privately-insured to wait longer than 20 minutes (P<0.001), with most of this disparity explained by differences in practices and providers they saw. Wait time for Medicaid patients relative to the privately-insured was longer in states with relatively lower Medicaid reimbursement rates. PMID:28461348

  13. Typical visual search performance and atypical gaze behaviors in response to faces in Williams syndrome.

    PubMed

    Hirai, Masahiro; Muramatsu, Yukako; Mizuno, Seiji; Kurahashi, Naoko; Kurahashi, Hirokazu; Nakamura, Miho

    2016-01-01

    Evidence indicates that individuals with Williams syndrome (WS) exhibit atypical attentional characteristics when viewing faces. However, the dynamics of visual attention captured by faces remain unclear, especially when explicit attentional forces are present. To clarify this, we introduced a visual search paradigm and assessed how the relative strength of visual attention captured by a face and explicit attentional control changes as search progresses. Participants (WS and controls) searched for a target (butterfly) within an array of distractors, which sometimes contained an upright face. We analyzed reaction time and location of the first fixation-which reflect the attentional profile at the initial stage-and fixation durations. These features represent aspects of attention at later stages of visual search. The strength of visual attention captured by faces and explicit attentional control (toward the butterfly) was characterized by the frequency of first fixations on a face or butterfly and on the duration of face or butterfly fixations. Although reaction time was longer in all groups when faces were present, and visual attention was not dominated by faces in any group during the initial stages of the search, when faces were present, attention to faces dominated in the WS group during the later search stages. Furthermore, for the WS group, reaction time correlated with eye-movement measures at different stages of searching such that longer reaction times were associated with longer face-fixations, specifically at the initial stage of searching. Moreover, longer reaction times were associated with longer face-fixations at the later stages of searching, while shorter reaction times were associated with longer butterfly fixations. The relative strength of attention captured by faces in people with WS is not observed at the initial stage of searching but becomes dominant as the search progresses. Furthermore, although behavioral responses are associated with some aspects of eye movements, they are not as sensitive as eye-movement measurements themselves at detecting atypical attentional characteristics in people with WS.

  14. An integrated computational tool for precipitation simulation

    NASA Astrophysics Data System (ADS)

    Cao, W.; Zhang, F.; Chen, S.-L.; Zhang, C.; Chang, Y. A.

    2011-07-01

    Computer aided materials design is of increasing interest because the conventional approach solely relying on experimentation is no longer viable within the constraint of available resources. Modeling of microstructure and mechanical properties during precipitation plays a critical role in understanding the behavior of materials and thus accelerating the development of materials. Nevertheless, an integrated computational tool coupling reliable thermodynamic calculation, kinetic simulation, and property prediction of multi-component systems for industrial applications is rarely available. In this regard, we are developing a software package, PanPrecipitation, under the framework of integrated computational materials engineering to simulate precipitation kinetics. It is seamlessly integrated with the thermodynamic calculation engine, PanEngine, to obtain accurate thermodynamic properties and atomic mobility data necessary for precipitation simulation.

  15. Atomistic details of protein dynamics and the role of hydration water

    DOE PAGES

    Khodadadi, Sheila; Sokolov, Alexei P.

    2016-05-04

    The importance of protein dynamics for their biological activity is now well recognized. Different experimental and computational techniques have been employed to study protein dynamics, the hierarchy of different processes and the coupling between protein and hydration water dynamics. However, understanding of the atomistic details of protein dynamics and the role of hydration water remains rather limited. Based on an overview of neutron scattering, molecular dynamics simulations, NMR and dielectric spectroscopy results, we present a general picture of protein dynamics covering time scales from faster than a picosecond to microseconds and the influence of hydration water on different relaxation processes. Internal protein dynamics spread over a wide time range, from faster than a picosecond to longer than microseconds. We suggest that the structural relaxation in hydrated proteins appears on the microsecond time scale, while faster processes present mostly motion of side groups and some domains. Hydration water plays a crucial role in protein dynamics on all time scales. It controls the coupled protein-hydration water relaxation on the 10-100 ps time scale. This process defines the friction for slower protein dynamics. Analysis suggests that changes in the amount of hydration water affect not only the general friction, but also significantly influence the protein's energy landscape.

  16. Atomistic details of protein dynamics and the role of hydration water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khodadadi, Sheila; Sokolov, Alexei P.

    The importance of protein dynamics for their biological activity is now well recognized. Different experimental and computational techniques have been employed to study protein dynamics, the hierarchy of different processes and the coupling between protein and hydration water dynamics. However, understanding of the atomistic details of protein dynamics and the role of hydration water remains rather limited. Based on an overview of neutron scattering, molecular dynamics simulations, NMR and dielectric spectroscopy results, we present a general picture of protein dynamics covering time scales from faster than a picosecond to microseconds and the influence of hydration water on different relaxation processes. Internal protein dynamics spread over a wide time range, from faster than a picosecond to longer than microseconds. We suggest that the structural relaxation in hydrated proteins appears on the microsecond time scale, while faster processes present mostly motion of side groups and some domains. Hydration water plays a crucial role in protein dynamics on all time scales. It controls the coupled protein-hydration water relaxation on the 10-100 ps time scale. This process defines the friction for slower protein dynamics. Analysis suggests that changes in the amount of hydration water affect not only the general friction, but also significantly influence the protein's energy landscape.

  17. Gamification and serious games for personalized health.

    PubMed

    McCallum, Simon

    2012-01-01

    Computer games are no longer just a trivial activity played by children in arcades. Social networking and casual gaming have broadened the market for, and acceptance of, games. This has coincided with a realization of their power to engage and motivate players. Good computer games are excellent examples of modern educational theory [1]. The military, health providers, governments, and educators, all use computer games. This paper focuses on Games for Health, discussing the range of areas and approaches to developing these games. We extend a taxonomy for Games for Health, describe a case study on games for dementia sufferers, and finally, present some challenges and research opportunities in this area.

  18. Specimen origin, type and testing laboratory are linked to longer turnaround times for HIV viral load testing in Malawi

    PubMed Central

    Chipungu, Geoffrey; Kim, Andrea A.; Sarr, Abdoulaye; Ali, Hammad; Mwenda, Reuben; Nkengasong, John N.; Singer, Daniel

    2017-01-01

    Background Efforts to reach UNAIDS’ treatment and viral suppression targets have increased demand for viral load (VL) testing and strained existing laboratory networks, affecting turnaround time. Longer VL turnaround times delay both initiation of formal adherence counseling and switches to second-line therapy for persons failing treatment and contribute to poorer health outcomes. Methods We utilized descriptive statistics and logistic regression to analyze VL testing data collected in Malawi between January 2013 and March 2016. The primary outcomes assessed were greater-than-median pretest phase turnaround time (days elapsed from specimen collection to receipt at the laboratory) and greater-than-median test phase turnaround time (days from receipt to testing). Results The median number of days between specimen collection and testing increased 3-fold between 2013 (8 days, interquartile range (IQR) = 6–16) and 2015 (24, IQR = 13–39) (p<0.001). Multivariable analysis indicated that the odds of longer pretest phase turnaround time were significantly higher for specimen collection districts without laboratories capable of conducting viral load tests (adjusted odds ratio (aOR) = 5.16; 95% confidence interval (CI) = 5.04–5.27) as well as for Malawi’s Northern and Southern regions. Longer test phase turnaround time was significantly associated with use of dried blood spots instead of plasma (aOR = 2.30; 95% CI = 2.23–2.37) and for certain testing months and testing laboratories. Conclusion Increased turnaround time for VL testing appeared to be driven in part by categorical factors specific to the phase of turnaround time assessed. Given the implications of longer turnaround time and the global effort to scale up VL testing, addressing these factors via increasing efficiencies, improving quality management systems and generally strengthening the VL spectrum should be considered essential components of controlling the HIV epidemic. PMID:28235013
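
    A minimal sketch of the kind of analysis described above (logistic regression for the odds of a greater-than-median pre-test turnaround time), assuming a flat table of specimen-level records; the file name, column names, and covariates are hypothetical, and statsmodels is used in place of whatever software the study employed.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: collect_date, receipt_date, specimen_type,
# district_has_vl_lab, region.  The study's actual dataset differs.
df = pd.read_csv("vl_tests.csv", parse_dates=["collect_date", "receipt_date"])
df["pretest_days"] = (df["receipt_date"] - df["collect_date"]).dt.days
df["long_pretest"] = (df["pretest_days"] > df["pretest_days"].median()).astype(int)

# Logistic regression for the odds of a greater-than-median pre-test turnaround time.
model = smf.logit(
    "long_pretest ~ C(specimen_type) + C(district_has_vl_lab) + C(region)",
    data=df,
).fit()
print(model.summary())  # exponentiated coefficients give odds ratios
```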

  19. Evolution of correlation structure of industrial indices of U.S. equity markets.

    PubMed

    Buccheri, Giuseppe; Marmi, Stefano; Mantegna, Rosario N

    2013-07-01

    We investigate the dynamics of correlations present between pairs of industry indices of U.S. stocks traded in U.S. markets by studying correlation-based networks and spectral properties of the correlation matrix. The study is performed by using 49 industry index time series computed by K. French and E. Fama during the time period from July 1969 to December 2011, which spans more than 40 years. We show that the correlation between industry indices presents both a fast and a slow dynamics. The slow dynamics has a time scale longer than 5 years, showing that a different degree of diversification of the investment is possible in different periods of time. Moreover, we also detect a fast dynamics associated with exogenous or endogenous events. The fast time scale we use is a monthly time scale and the evaluation time period is a 3-month time period. By investigating the correlation dynamics monthly, we are able to detect two examples of fast variations in the first and second eigenvalue of the correlation matrix. The first occurs during the dot-com bubble (from March 1999 to April 2001) and the second occurs during the period of highest impact of the subprime crisis (from August 2008 to August 2009).

  20. Evolution of correlation structure of industrial indices of U.S. equity markets

    NASA Astrophysics Data System (ADS)

    Buccheri, Giuseppe; Marmi, Stefano; Mantegna, Rosario N.

    2013-07-01

    We investigate the dynamics of correlations present between pairs of industry indices of U.S. stocks traded in U.S. markets by studying correlation-based networks and spectral properties of the correlation matrix. The study is performed by using 49 industry index time series computed by K. French and E. Fama during the time period from July 1969 to December 2011, which spans more than 40 years. We show that the correlation between industry indices presents both a fast and a slow dynamics. The slow dynamics has a time scale longer than 5 years, showing that a different degree of diversification of the investment is possible in different periods of time. Moreover, we also detect a fast dynamics associated with exogenous or endogenous events. The fast time scale we use is a monthly time scale and the evaluation time period is a 3-month time period. By investigating the correlation dynamics monthly, we are able to detect two examples of fast variations in the first and second eigenvalue of the correlation matrix. The first occurs during the dot-com bubble (from March 1999 to April 2001) and the second occurs during the period of highest impact of the subprime crisis (from August 2008 to August 2009).
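
    A minimal sketch of the monthly evaluation described in these two records, assuming a DataFrame of daily returns for the industry indices: correlation matrices are computed over rolling 3-month windows stepped monthly and their leading eigenvalues tracked. The window length, minimum sample size, and data layout are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def rolling_top_eigenvalues(returns, window="91D", step="MS", k=2):
    """Largest k eigenvalues of the correlation matrix over rolling 3-month windows.

    returns : DataFrame of daily index returns, one column per industry index
              (the study uses 49 Fama-French industry indices).
    """
    month_starts = pd.date_range(returns.index.min(), returns.index.max(), freq=step)
    records = []
    for start in month_starts:
        chunk = returns.loc[start:start + pd.Timedelta(window)]
        if len(chunk) < 30:          # skip windows with too few observations
            continue
        corr = chunk.corr().to_numpy()
        eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
        records.append((start, *eigvals[:k]))
    cols = ["window_start"] + [f"lambda_{i + 1}" for i in range(k)]
    return pd.DataFrame(records, columns=cols).set_index("window_start")
```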

  1. The change in spatial distribution of upper trapezius muscle activity is correlated to contraction duration.

    PubMed

    Farina, Dario; Leclerc, Frédéric; Arendt-Nielsen, Lars; Buttelli, Olivier; Madeleine, Pascal

    2008-02-01

    The aim of the study was to confirm the hypothesis that the longer a contraction is sustained, the larger are the changes in the spatial distribution of muscle activity. For this purpose, surface electromyographic (EMG) signals were recorded with a 13 x 5 grid of electrodes from the upper trapezius muscle of 11 healthy male subjects during static contractions with shoulders 90 degrees abducted until endurance. The entropy (degree of uniformity) and center of gravity of the EMG root mean square map were computed to assess spatial inhomogeneity in muscle activation and changes over time in EMG amplitude spatial distribution. At the endurance time, entropy decreased (mean+/-SD, percent change 2.0+/-1.6%; P<0.0001) and the center of gravity moved in the cranial direction (shift 11.2+/-6.1mm; P<0.0001) with respect to the beginning of the contraction. The shift in the center of gravity was positively correlated with endurance time (R(2)=0.46, P<0.05), thus subjects with larger shift in the activity map showed longer endurance time. The percent variation in average (over the grid) root mean square was positively correlated with the shift in the center of gravity (R(2)=0.51, P<0.05). Moreover, the shift in the center of gravity was negatively correlated to both initial and final (at the endurance) entropy (R(2)=0.54 and R(2)=0.56, respectively; P<0.01 in both cases), indicating that subjects with less uniform root mean square maps had larger shift of the center of gravity over time. The spatial changes in root mean square EMG were likely due to spatially-dependent changes in motor unit activation during the sustained contraction. It was concluded that the changes in spatial muscle activity distribution play a role in the ability to maintain a static contraction.
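
    A minimal sketch of the two map descriptors named above (entropy and center of gravity of the RMS map), following the standard definitions used for surface EMG amplitude maps; the normalization and the electrode spacing below are assumptions rather than the study's exact processing.

```python
import numpy as np

def rms_map_entropy_and_cog(rms_map, inter_electrode_distance_mm=8.0):
    """Entropy and center of gravity of an EMG amplitude map (e.g. a 13 x 5 RMS grid).

    The map is normalized to sum to one, entropy is the Shannon entropy of that
    distribution, and the center of gravity is the amplitude-weighted mean
    electrode position.  The 8 mm spacing is an illustrative assumption.
    """
    rms = np.asarray(rms_map, dtype=float)
    p = rms / rms.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))

    rows, cols = np.indices(rms.shape)
    cog_row = np.sum(rows * p) * inter_electrode_distance_mm  # cranio-caudal coordinate, mm
    cog_col = np.sum(cols * p) * inter_electrode_distance_mm  # medio-lateral coordinate, mm
    return entropy, (cog_row, cog_col)
```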

  2. A computer-based measure of resultant achievement motivation.

    PubMed

    Blankenship, V

    1987-08-01

    Three experiments were conducted to develop a computer-based measure of individual differences in resultant achievement motivation (RAM) on the basis of level-of-aspiration, achievement motivation, and dynamics-of-action theories. In Experiment 1, the number of atypical shifts and greater responsiveness to incentives on 21 trials with choices among easy, intermediate, and difficult levels of an achievement-oriented game were positively correlated and were found to differentiate the 62 subjects (31 men, 31 women) on the amount of time they spent at a nonachievement task (watching a color design) 1 week later. In Experiment 2, test-retest reliability was established with the use of 67 subjects (15 men, 52 women). Point and no-point trials were offered in blocks, with point trials first for half the subjects and no-point trials first for the other half. Reliability was higher for the atypical-shift measure than for the incentive-responsiveness measure and was higher when points were offered first. In Experiment 3, computer anxiety was manipulated by creating a simulated computer breakdown in the experimental condition. Fifty-nine subjects (13 men, 46 women) were randomly assigned to the experimental condition or to one of two control conditions (an interruption condition and a no-interruption condition). Subjects with low RAM, as demonstrated by a low number of typical shifts, took longer to choose the achievement-oriented task, as predicted by the dynamics-of-action theory. The difference was evident in all conditions and most striking in the computer-breakdown condition. A change of focus from atypical to typical shifts is discussed.

  3. Capabilities of VOS-based fluxes for estimating ocean heat budget and its variability

    NASA Astrophysics Data System (ADS)

    Gulev, S.; Belyaev, K.

    2016-12-01

    We consider here the perspective of using VOS observations by merchant ships, available from the ICOADS data, for estimating the ocean surface heat budget at different time scales. For this purpose we compute surface turbulent heat fluxes as well as short- and long-wave radiative fluxes from the ICOADS reports for the last several decades in the North Atlantic mid latitudes. Turbulent fluxes were derived using the COARE-3 algorithm, and for the computation of radiative fluxes new algorithms accounting for cloud types were used. Sampling uncertainties in the VOS-based fluxes were estimated by sub-sampling the recomputed reanalysis (ERA-Interim) fluxes according to the VOS sampling scheme. For the turbulent heat fluxes we suggest an approach to minimize sampling uncertainties. The approach is based on the integration of the turbulent heat fluxes in the coordinates of steering parameters (vertical surface temperature and humidity gradients on one hand and wind speed on the other) for which theoretical probability distributions are known. For short-wave radiative fluxes sampling uncertainties were minimized by "rotating local observation time around the clock" and using probability density functions for the cloud cover occurrence distributions. Analysis was performed for the North Atlantic latitudinal band from 25 N to 60 N, for which estimates of the meridional heat transport are also available from ocean cross-sections. Over the last 35 years turbulent fluxes within the region analysed increase by about 6 W/m2, with the major growth during the 1990s and early 2000s. Decreasing incoming short-wave radiation during the same time (about 1 W/m2) implies an upward change of the ocean surface heat loss by about 7-8 W/m2. We discuss different sources of uncertainties in the computations as well as the potential of applying the analysis concept to longer time series going back to the 1920s.

  4. Modeling flow and transport in fracture networks using graphs

    NASA Astrophysics Data System (ADS)

    Karra, S.; O'Malley, D.; Hyman, J. D.; Viswanathan, H. S.; Srinivasan, G.

    2018-03-01

    Fractures form the main pathways for flow in the subsurface within low-permeability rock. For this reason, accurately predicting flow and transport in fractured systems is vital for improving the performance of subsurface applications. Fracture sizes in these systems can range from millimeters to kilometers. Although modeling flow and transport using the discrete fracture network (DFN) approach is known to be more accurate due to incorporation of the detailed fracture network structure over continuum-based methods, capturing the flow and transport in such a wide range of scales is still computationally intractable. Furthermore, if one has to quantify uncertainty, hundreds of realizations of these DFN models have to be run. To reduce the computational burden, we solve flow and transport on a graph representation of a DFN. We study the accuracy of the graph approach by comparing breakthrough times and tracer particle statistical data between the graph-based and the high-fidelity DFN approaches, for fracture networks with varying number of fractures and degree of heterogeneity. Due to our recent developments in capabilities to perform DFN high-fidelity simulations on fracture networks with large number of fractures, we are in a unique position to perform such a comparison. We show that the graph approach shows a consistent bias with up to an order of magnitude slower breakthrough when compared to the DFN approach. We show that this is due to graph algorithm's underprediction of the pressure gradients across intersections on a given fracture, leading to slower tracer particle speeds between intersections and longer travel times. We present a bias correction methodology to the graph algorithm that reduces the discrepancy between the DFN and graph predictions. We show that with this bias correction, the graph algorithm predictions significantly improve and the results are very accurate. The good accuracy and the low computational cost, with O(10^4) times lower times than the DFN, makes the graph algorithm an ideal technique to incorporate in uncertainty quantification methods.

  5. Modeling flow and transport in fracture networks using graphs.

    PubMed

    Karra, S; O'Malley, D; Hyman, J D; Viswanathan, H S; Srinivasan, G

    2018-03-01

    Fractures form the main pathways for flow in the subsurface within low-permeability rock. For this reason, accurately predicting flow and transport in fractured systems is vital for improving the performance of subsurface applications. Fracture sizes in these systems can range from millimeters to kilometers. Although modeling flow and transport using the discrete fracture network (DFN) approach is known to be more accurate due to incorporation of the detailed fracture network structure over continuum-based methods, capturing the flow and transport in such a wide range of scales is still computationally intractable. Furthermore, if one has to quantify uncertainty, hundreds of realizations of these DFN models have to be run. To reduce the computational burden, we solve flow and transport on a graph representation of a DFN. We study the accuracy of the graph approach by comparing breakthrough times and tracer particle statistical data between the graph-based and the high-fidelity DFN approaches, for fracture networks with varying number of fractures and degree of heterogeneity. Due to our recent developments in capabilities to perform DFN high-fidelity simulations on fracture networks with large number of fractures, we are in a unique position to perform such a comparison. We show that the graph approach shows a consistent bias with up to an order of magnitude slower breakthrough when compared to the DFN approach. We show that this is due to graph algorithm's underprediction of the pressure gradients across intersections on a given fracture, leading to slower tracer particle speeds between intersections and longer travel times. We present a bias correction methodology to the graph algorithm that reduces the discrepancy between the DFN and graph predictions. We show that with this bias correction, the graph algorithm predictions significantly improve and the results are very accurate. The good accuracy and the low computational cost, with O(10^{4}) times lower times than the DFN, makes the graph algorithm an ideal technique to incorporate in uncertainty quantification methods.

  6. Modeling flow and transport in fracture networks using graphs

    DOE PAGES

    Karra, S.; O'Malley, D.; Hyman, J. D.; ...

    2018-03-09

    Fractures form the main pathways for flow in the subsurface within low-permeability rock. For this reason, accurately predicting flow and transport in fractured systems is vital for improving the performance of subsurface applications. Fracture sizes in these systems can range from millimeters to kilometers. Although modeling flow and transport using the discrete fracture network (DFN) approach is known to be more accurate due to incorporation of the detailed fracture network structure over continuum-based methods, capturing the flow and transport in such a wide range of scales is still computationally intractable. Furthermore, if one has to quantify uncertainty, hundreds of realizations of these DFN models have to be run. To reduce the computational burden, we solve flow and transport on a graph representation of a DFN. We study the accuracy of the graph approach by comparing breakthrough times and tracer particle statistical data between the graph-based and the high-fidelity DFN approaches, for fracture networks with varying number of fractures and degree of heterogeneity. Due to our recent developments in capabilities to perform DFN high-fidelity simulations on fracture networks with large number of fractures, we are in a unique position to perform such a comparison. We show that the graph approach shows a consistent bias with up to an order of magnitude slower breakthrough when compared to the DFN approach. We show that this is due to graph algorithm's underprediction of the pressure gradients across intersections on a given fracture, leading to slower tracer particle speeds between intersections and longer travel times. We present a bias correction methodology to the graph algorithm that reduces the discrepancy between the DFN and graph predictions. We show that with this bias correction, the graph algorithm predictions significantly improve and the results are very accurate. In conclusion, the good accuracy and the low computational cost, with O(10^4) times lower times than the DFN, makes the graph algorithm an ideal technique to incorporate in uncertainty quantification methods.

  7. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models in sufficient spatial and temporal resolutions. Geological heterogeneity and uncertainties further increase the challenges in modeling works. Two-phase flow simulations in heterogeneous media usually require much longer computational time than that in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with more than thousands of processors become available in scientific and engineering communities. Such supercomputers may attract attentions from geoscientist and reservoir engineers for solving the large and non-linear models in higher resolutions within a reasonable time. However, for making it a useful tool, it is essential to tackle several practical obstacles to utilize large number of processors effectively for general-purpose reservoir simulators. We have implemented massively-parallel versions of two TOUGH2 family codes (a multi-phase flow simulator TOUGH2 and a chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolutions to simulate the growth of small convective fingers of CO2-dissolved water to larger ones in a reservoir scale. The performance measurement confirmed that the both simulators exhibit excellent scalabilities showing almost linear speedup against number of processors up to over ten thousand cores. Generally this allows us to perform coupled multi-physics (THC) simulations on high resolution geologic models with multi-million grid in a practical time (e.g., less than a second per time step).

  8. Modeling flow and transport in fracture networks using graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karra, S.; O'Malley, D.; Hyman, J. D.

    Fractures form the main pathways for flow in the subsurface within low-permeability rock. For this reason, accurately predicting flow and transport in fractured systems is vital for improving the performance of subsurface applications. Fracture sizes in these systems can range from millimeters to kilometers. Although modeling flow and transport using the discrete fracture network (DFN) approach is known to be more accurate due to incorporation of the detailed fracture network structure over continuum-based methods, capturing the flow and transport in such a wide range of scales is still computationally intractable. Furthermore, if one has to quantify uncertainty, hundreds of realizations of these DFN models have to be run. To reduce the computational burden, we solve flow and transport on a graph representation of a DFN. We study the accuracy of the graph approach by comparing breakthrough times and tracer particle statistical data between the graph-based and the high-fidelity DFN approaches, for fracture networks with varying number of fractures and degree of heterogeneity. Due to our recent developments in capabilities to perform DFN high-fidelity simulations on fracture networks with large number of fractures, we are in a unique position to perform such a comparison. We show that the graph approach shows a consistent bias with up to an order of magnitude slower breakthrough when compared to the DFN approach. We show that this is due to graph algorithm's underprediction of the pressure gradients across intersections on a given fracture, leading to slower tracer particle speeds between intersections and longer travel times. We present a bias correction methodology to the graph algorithm that reduces the discrepancy between the DFN and graph predictions. We show that with this bias correction, the graph algorithm predictions significantly improve and the results are very accurate. In conclusion, the good accuracy and the low computational cost, with O(10^4) times lower times than the DFN, makes the graph algorithm an ideal technique to incorporate in uncertainty quantification methods.
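
    The graph idea in these records can be illustrated with a minimal sketch: treat fracture intersections as graph nodes, give each edge an effective conductance, solve a graph-Laplacian system for node pressures, and derive edge travel times from the resulting flows. The attribute names ("conductance", "volume") and boundary handling below are assumptions for illustration; the authors' actual graph algorithm, its calibration to the DFN, and the bias correction are not reproduced here.

```python
import numpy as np
import networkx as nx

def solve_graph_flow(G, inflow_nodes, outflow_nodes, p_in=1.0, p_out=0.0):
    """Solve steady flow on a graph model of a fracture network.

    Nodes are fracture intersections; the edge attribute 'conductance' is an
    effective transmissivity.  Pressures are fixed at inflow/outflow boundary
    nodes and obtained elsewhere from conservation of mass (a graph-Laplacian
    system).  Generic sketch only, not the authors' algorithm.
    """
    nodes = list(G.nodes)
    idx = {n: i for i, n in enumerate(nodes)}
    L = np.zeros((len(nodes), len(nodes)))
    b = np.zeros(len(nodes))
    fixed = {**{n: p_in for n in inflow_nodes}, **{n: p_out for n in outflow_nodes}}
    for n in nodes:
        i = idx[n]
        if n in fixed:                       # Dirichlet boundary condition
            L[i, i], b[i] = 1.0, fixed[n]
            continue
        for m in G.neighbors(n):
            c = G[n][m]["conductance"]
            L[i, i] += c
            L[i, idx[m]] -= c
    p = np.linalg.solve(L, b)
    # Edge travel time ~ stored volume / flow rate (both illustrative attributes).
    times = {}
    for n, m, data in G.edges(data=True):
        q = data["conductance"] * abs(p[idx[n]] - p[idx[m]])
        times[(n, m)] = data.get("volume", 1.0) / q if q > 0 else np.inf
    return dict(zip(nodes, p)), times
```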

  9. Hormonal therapy followed by chemotherapy or the reverse sequence as first-line treatment of hormone-responsive, human epidermal growth factor receptor-2 negative metastatic breast cancer patients: results of an observational study.

    PubMed

    Bighin, Claudia; Dozin, Beatrice; Poggio, Francesca; Ceppi, Marcello; Bruzzi, Paolo; D'Alonzo, Alessia; Levaggi, Alessia; Giraudi, Sara; Lambertini, Matteo; Miglietta, Loredana; Vaglica, Marina; Fontana, Vincenzo; Iacono, Giuseppina; Pronzato, Paolo; Del Mastro, Lucia

    2017-07-04

    Introduction Although hormonal-therapy is the preferred first-line treatment for hormone-responsive, HER2 negative metastatic breast cancer, no data from clinical trials support the choice between hormonal-therapy and chemotherapy. Methods Patients were divided into two groups according to the treatment: chemotherapy or hormonal-therapy. Outcomes in terms of clinical benefit and median overall survival (OS) were retrospectively evaluated in the two groups. To calculate the time spent in chemotherapy with respect to OS in the two groups, the proportion of patients in chemotherapy relative to those present in either group was computed for every day from the start of therapy. Results From 1999 to 2013, 119 patients received first-line hormonal-therapy (HT-first group) and 100 first-line chemotherapy (CT-first group). Patients in the CT-first group were younger and had poorer prognostic factors as compared to those in the HT-first group. Clinical benefit (77 vs 81%) and median OS (50.7 vs 51.1 months) were similar in the two groups. Time spent in chemotherapy was significantly longer during the first 3 years in the CT-first group (54-34%) as compared to the HT-first group (11-18%). This difference decreased after the third year and overall was 28% in the CT-first group and 18% in the HT-first group. Conclusions The sequence first-line chemotherapy followed by hormonal-therapy, as compared with the opposite sequence, is associated with a longer time of OS spent in chemotherapy. However, despite the poorer prognostic factors, patients in the CT-first group had an OS superimposable on that of the HT-first group.

  10. Long term estimations of low frequency noise levels over water from an off-shore wind farm.

    PubMed

    Bolin, Karl; Almgren, Martin; Ohlsson, Esbjörn; Karasalo, Ilkka

    2014-03-01

    This article focuses on computations of low frequency sound propagation from an off-shore wind farm. Two different methods for sound propagation calculations are combined with meteorological data for every 3 hours in the year 2010 to examine the varying noise levels at a reception point at 13 km distance. It is shown that sound propagation conditions play a vital role in the noise impact from the off-shore wind farm and ordinary assessment methods can become inaccurate at longer propagation distances over water. Therefore, this paper suggests that methodologies to calculate noise immission with realistic sound speed profiles need to be combined with meteorological data over extended time periods to evaluate the impact of low frequency noise from modern off-shore wind farms.

  11. Scaled Runge-Kutta algorithms for handling dense output

    NASA Technical Reports Server (NTRS)

    Horn, M. K.

    1981-01-01

    Low order Runge-Kutta algorithms are developed which determine the solution of a system of ordinary differential equations at any point within a given integration step, as well as at the end of each step. The scaled Runge-Kutta methods are designed to be used with existing Runge-Kutta formulas, using the derivative evaluations of these defining algorithms as the core of the system. For a slight increase in computing time, the solution may be generated within the integration step, improving the efficiency of the Runge-Kutta algorithms, since the step length need no longer be severely reduced to coincide with the desired output point. Scaled Runge-Kutta algorithms are presented for orders 3 through 5, along with accuracy comparisons between the defining algorithms and their scaled versions for a test problem.
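
    The specific scaled Runge-Kutta formulas of this report are not reproduced here; as a generic illustration of dense output, the sketch below reuses the derivative evaluations at the ends of a classical RK4 step to build a cubic Hermite interpolant, so the solution can be reported anywhere inside the step without shrinking the step length. The RK4 core and the interpolant are standard devices, not the report's scaled algorithms.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical RK4 step; returns the new state and the endpoint derivatives."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    y_new = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y_new, k1, f(t + h, y_new)

def dense_output(theta, y0, y1, f0, f1, h):
    """Cubic Hermite interpolant inside a step, for theta in [0, 1].

    Reuses the derivative evaluations at the step endpoints, so the solution can
    be reported at arbitrary interior points without reducing the step length.
    """
    t2, t3 = theta * theta, theta ** 3
    h00 = 2 * t3 - 3 * t2 + 1
    h10 = t3 - 2 * t2 + theta
    h01 = -2 * t3 + 3 * t2
    h11 = t3 - t2
    return h00 * y0 + h10 * h * f0 + h01 * y1 + h11 * h * f1

# Usage: integrate y' = -y and report the solution at a point inside the step.
f = lambda t, y: -y
t, y, h = 0.0, np.array([1.0]), 0.2
y1, f0, f1 = rk4_step(f, t, y, h)
print(dense_output(0.37, y, y1, f0, f1, h), np.exp(-(t + 0.37 * h)))
```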

  12. Establishing an IERS Sub-Center for Ocean Angular Momentum

    NASA Technical Reports Server (NTRS)

    Ponte, Rui M.

    2001-01-01

    With the objective of establishing the Special Bureau for the Oceans (SBO), a new archival center for ocean angular momentum (OAM) products, we have computed and analyzed a number of OAM products from several ocean models, with and without data assimilation. All three components of OAM (axial term related to length of day variations and equatorial terms related to polar motion) have been examined in detail, in comparison to the respective Earth rotation parameters. An 11+ year time series of OAM given at 5-day intervals has been made publicly available. Other OAM products spanning longer periods and with higher temporal resolution, as well as products calculated from ocean model/data assimilation systems, have been prepared and should become part of the SBO archives in the near future.

  13. Coarse-grained molecular dynamics simulations for giant protein-DNA complexes

    NASA Astrophysics Data System (ADS)

    Takada, Shoji

    Biomolecules are highly hierarchic and intrinsically flexible. Thus, computational modeling calls for multi-scale methodologies. We have been developing a coarse-grained biomolecular model where on average 10-20 atoms are grouped into one coarse-grained (CG) particle. Interactions among CG particles are tuned based on atomistic interactions and the fluctuation matching algorithm. CG molecular dynamics methods enable us to simulate much longer time scale motions of much larger molecular systems than fully atomistic models. After broad sampling of structures with CG models, we can easily reconstruct atomistic models, from which one can continue conventional molecular dynamics simulations if desired. Here, we describe our CG modeling methodology for protein-DNA complexes, together with various biological applications, such as the DNA duplication initiation complex, model chromatins, and transcription factor dynamics in a chromatin-like environment.

  14. Big Data Ecosystems Enable Scientific Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Critchlow, Terence J.; Kleese van Dam, Kerstin

    Over the past 5 years, advances in experimental, sensor and computational technologies have driven the exponential growth in the volumes, acquisition rates, variety and complexity of scientific data. As noted by Hey et al. in their 2009 e-book The Fourth Paradigm, this availability of large quantities of scientifically meaningful data has given rise to a new scientific methodology - data intensive science. Data intensive science is the ability to formulate and evaluate hypotheses using data and analysis to extend, complement and, at times, replace experimentation, theory, or simulation. This new approach to science no longer requires scientists to interact directly with the objects of their research; instead they can utilize digitally captured, reduced, calibrated, analyzed, synthesized and visualized results - allowing them to carry out 'experiments' in data.

  15. A probabilistic approach to information retrieval in heterogeneous databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, A.; Segev, A.

    During the past decade, organizations have increased their scope and operations beyond their traditional geographic boundaries. At the same time, they have adopted heterogeneous and incompatible information systems independent of each other without a careful consideration that one day they may need to be integrated. As a result of this diversity, many important business applications today require access to data stored in multiple autonomous databases. This paper examines a problem of inter-database information retrieval in a heterogeneous environment, where conventional techniques are no longer efficient. To solve the problem, broader definitions for the join, union, intersection and selection operators are proposed. Also, a probabilistic method to specify the selectivity of these operators is discussed. An algorithm to compute these probabilities is provided in pseudocode.

  16. The Development of the Administrative Sciences Personal Computer Network Tutorial.

    DTIC Science & Technology

    1987-09-01

    much attention being paid recently to the field of user interface design. No longer is it important to just design systems that meet the market demand… 48  9. The Network Status … 55  10. Applications and Online Help … 62  11. Leavin

  17. On Cloud Nine

    ERIC Educational Resources Information Center

    McCrea, Bridget; Weil, Marty

    2011-01-01

    Across the U.S., innovative collaboration practices are happening in the cloud: Sixth-graders participate in literary salons. Fourth-graders mentor kindergarteners. And teachers use virtual Post-it notes to advise students as they create their own television shows. In other words, cloud computing is no longer just used to manage administrative…

  18. Applications of a general random-walk theory for confined diffusion.

    PubMed

    Calvo-Muñoz, Elisa M; Selvan, Myvizhi Esai; Xiong, Ruichang; Ojha, Madhusudan; Keffer, David J; Nicholson, Donald M; Egami, Takeshi

    2011-01-01

    A general random walk theory for diffusion in the presence of nanoscale confinement is developed and applied. The random-walk theory contains two parameters describing confinement: a cage size and a cage-to-cage hopping probability. The theory captures the correct nonlinear dependence of the mean square displacement (MSD) on observation time for intermediate times. Because of its simplicity, the theory has modest computational requirements and is thus able to simulate systems with very low diffusivities for sufficiently long times to reach the infinite-time-limit regime where the Einstein relation can be used to extract the self-diffusivity. The theory is applied to three practical cases in which the degree of order in confinement varies. The three systems include diffusion of (i) polyatomic molecules in metal organic frameworks, (ii) water in proton exchange membranes, and (iii) liquid and glassy iron. For all three cases, the comparison between theory and the results of molecular dynamics (MD) simulations indicates that the theory can describe the observed diffusion behavior with a small fraction of the computational expense. The confined-random-walk theory fit to the MSDs of very short MD simulations is capable of accurately reproducing the MSDs of much longer MD simulations. Furthermore, the values of the parameter for cage size correspond to the physical dimensions of the systems and the cage-to-cage hopping probability corresponds to the activation barrier for diffusion, indicating that the two parameters in the theory are not simply fitted values but correspond to real properties of the physical system.
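
    A minimal Monte Carlo sketch of the two-parameter confinement picture described above (a cage of L sites and a cage-to-cage hopping probability), not the authors' analytical theory; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def confined_walk_msd(n_walkers=2000, n_steps=2000, L=10, p_hop=0.01):
    """1D lattice walk: each step moves +/-1 inside a cage of L sites;
    moves that would cross a cage boundary succeed only with probability p_hop."""
    cage = np.zeros(n_walkers, dtype=int)        # cage index
    pos = rng.integers(0, L, n_walkers)          # position within cage
    x0 = cage * L + pos
    msd = np.empty(n_steps)
    for t in range(n_steps):
        step = rng.choice((-1, 1), n_walkers)
        new = pos + step
        crossing = (new < 0) | (new >= L)
        allowed = ~crossing | (rng.random(n_walkers) < p_hop)
        # accepted boundary crossings move the walker into the neighboring cage
        cage = np.where(allowed & crossing, cage + step, cage)
        pos = np.where(allowed & crossing, np.where(step > 0, 0, L - 1),
                       np.where(allowed, new, pos))
        x = cage * L + pos
        msd[t] = np.mean((x - x0) ** 2)
    return msd

msd = confined_walk_msd()
# Roughly: diffusive growth at short times, a plateau near the cage size,
# then linear growth again once cage-to-cage hopping dominates.
print(msd[10], msd[200], msd[-1])
```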

  19. Security Implications of Induced Earthquakes

    NASA Astrophysics Data System (ADS)

    Jha, B.; Rao, A.

    2016-12-01

    The increase in earthquakes induced or triggered by human activities motivates us to research how a malicious entity could weaponize earthquakes to cause damage. Specifically, we explore the feasibility of controlling the location, timing and magnitude of an earthquake by activating a fault via injection and production of fluids into the subsurface. Here, we investigate the relationship between the magnitude and trigger time of an induced earthquake and the well-to-fault distance. The relationship between magnitude and distance is important to determine the farthest striking distance from which one could intentionally activate a fault to cause a certain level of damage. We use our novel computational framework to model the coupled multi-physics processes of fluid flow and fault poromechanics. We use synthetic models representative of the New Madrid Seismic Zone and the San Andreas Fault Zone to assess the risk in the continental US. We fix injection and production flow rates of the wells and vary their locations. We simulate injection-induced Coulomb destabilization of faults and the evolution of fault slip under quasi-static deformation. We find that the effect of distance on the magnitude and trigger time is monotonic, nonlinear, and time-dependent. Evolution of the maximum Coulomb stress on the fault provides insights into the effect of the distance on rupture nucleation and propagation. The damage potential of induced earthquakes can be maintained even at longer distances because of the balance between pressure diffusion and poroelastic stress transfer mechanisms. We conclude that computational modeling of induced earthquakes allows us to assess the feasibility of weaponizing earthquakes and to develop effective defense mechanisms against such attacks.

  20. Effects of diversity and procrastination in priority queuing theory: The different power law regimes

    NASA Astrophysics Data System (ADS)

    Saichev, A.; Sornette, D.

    2010-01-01

    Empirical analyses show that after the update of a browser, or the publication of the vulnerability of a software, or the discovery of a cyber worm, the fraction of computers still using the older browser or software version, or not yet patched, or exhibiting worm activity decays as a power law ~1/t^α with 0 < α ≤ 1 over a time scale of years. We present a simple model for this persistence phenomenon, framed within the standard priority queuing theory, of a target task which has the lowest priority compared to all other tasks that flow on the computer of an individual. We identify a “time deficit” control parameter β and a bifurcation to a regime where there is a nonzero probability for the target task to never be completed. The distribution of waiting time T until the completion of the target task has the power law tail ~1/t^(1/2), resulting from a first-passage solution of an equivalent Wiener process. Taking into account a diversity of time deficit parameters in a population of individuals, the power law tail is changed into 1/t^α, with α ∈ (0.5, ∞), including the well-known case 1/t. We also study the effect of “procrastination,” defined as the situation in which the target task may be postponed or delayed even after the individual has solved all other pending tasks. This regime provides an explanation for even slower apparent decay and longer persistence.
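
    A small Monte Carlo sketch (not the authors' analytical solution) of the core mechanism described above: the lowest-priority task completes the first time the backlog of higher-priority work hits zero, i.e. at a first-passage time of a random walk. The `deficit` drift is only a stand-in for the paper's time-deficit parameter; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def completion_time(deficit=0.0, max_t=100_000):
    """First time the backlog of higher-priority work hits zero.
    deficit > 0: backlog drifts upward and the target task may never finish;
    deficit = 0: critical regime with a heavy (~1/t^0.5) waiting-time tail."""
    backlog = 1.0
    for t in range(1, max_t):
        backlog += rng.normal(deficit, 1.0)
        if backlog <= 0:
            return t
    return max_t  # not completed within the simulated horizon

waits = np.array([completion_time(deficit=0.0) for _ in range(1000)])
print("median wait:", np.median(waits),
      "fraction still pending after 10^4 steps:", np.mean(waits >= 10_000))
```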

  1. Predicting Ambulance Time of Arrival to the Emergency Department Using Global Positioning System and Google Maps

    PubMed Central

    Fleischman, Ross J.; Lundquist, Mark; Jui, Jonathan; Newgard, Craig D.; Warden, Craig

    2014-01-01

    Objective To derive and validate a model that accurately predicts ambulance arrival time that could be implemented as a Google Maps web application. Methods This was a retrospective study of all scene transports in Multnomah County, Oregon, from January 1 through December 31, 2008. Scene and destination hospital addresses were converted to coordinates. ArcGIS Network Analyst was used to estimate transport times based on street network speed limits. We then created a linear regression model to improve the accuracy of these street network estimates using weather, patient characteristics, use of lights and sirens, daylight, and rush-hour intervals. The model was derived from a 50% sample and validated on the remainder. Significance of the covariates was determined by p < 0.05 for a t-test of the model coefficients. Accuracy was quantified by the proportion of estimates that were within 5 minutes of the actual transport times recorded by computer-aided dispatch. We then built a Google Maps-based web application to demonstrate application in real-world EMS operations. Results There were 48,308 included transports. Street network estimates of transport time were accurate within 5 minutes of actual transport time less than 16% of the time. Actual transport times were longer during daylight and rush-hour intervals and shorter with use of lights and sirens. Age under 18 years, gender, wet weather, and trauma system entry were not significant predictors of transport time. Our model predicted arrival time within 5 minutes 73% of the time. For lights and sirens transports, accuracy was within 5 minutes 77% of the time. Accuracy was identical in the validation dataset. Lights and sirens saved an average of 3.1 minutes for transports under 8.8 minutes, and 5.3 minutes for longer transports. Conclusions An estimate of transport time based only on a street network significantly underestimated transport times. A simple model incorporating few variables can predict ambulance time of arrival to the emergency department with good accuracy. This model could be linked to global positioning system data and an automated Google Maps web application to optimize emergency department resource use. Use of lights and sirens had a significant effect on transport times. PMID:23865736
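
    A hedged sketch of the modeling step described above: a baseline street-network travel-time estimate is corrected by a linear regression on a few covariates, derived on half the data and validated on the other half. The column names and synthetic data are hypothetical, and scikit-learn stands in for whatever software the authors actually used.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical columns: baseline street-network estimate plus covariates the
# abstract names (lights-and-sirens, daylight, rush hour).
df = pd.DataFrame({
    "network_est_min": np.random.uniform(3, 25, 1000),
    "lights_sirens":   np.random.randint(0, 2, 1000),
    "daylight":        np.random.randint(0, 2, 1000),
    "rush_hour":       np.random.randint(0, 2, 1000),
})
# Fake ground truth just to make the sketch runnable.
df["actual_min"] = (1.2 * df.network_est_min - 2.0 * df.lights_sirens
                    + 1.0 * df.rush_hour + np.random.normal(0, 1.5, 1000))

train, test = df.iloc[:500], df.iloc[500:]            # 50% derivation / 50% validation
X_cols = ["network_est_min", "lights_sirens", "daylight", "rush_hour"]
model = LinearRegression().fit(train[X_cols], train["actual_min"])

pred = model.predict(test[X_cols])
within_5 = np.mean(np.abs(pred - test["actual_min"]) <= 5)
print(f"predictions within 5 minutes: {within_5:.0%}")
```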

  2. Feasibility Study of Needle Placement in Percutaneous Vertebroplasty: Cone-Beam Computed Tomography Guidance Versus Conventional Fluoroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braak, Sicco J., E-mail: sjbraak@gmail.com; Zuurmond, Kirsten, E-mail: kirsten.zuurmond@philips.com; Aerts, Hans C. J., E-mail: hans.cj.aerts@philips.com

    2013-08-01

    Objective To investigate the accuracy, procedure time, fluoroscopy time, and dose area product (DAP) of needle placement during percutaneous vertebroplasty (PVP) using cone-beam computed tomography (CBCT) guidance versus fluoroscopy. Materials and Methods On 4 spine phantoms with 11 vertebrae (Th7-L5), 4 interventional radiologists (two experienced with CBCT guidance and two inexperienced) punctured all vertebrae in a bipedicular fashion. Each side was randomized to either CBCT guidance or fluoroscopy. CBCT guidance is a sophisticated needle guidance technique using CBCT, navigation software, and real-time fluoroscopy. The placement of the needle had to be to a specific target point. After the procedure, CBCT was performed to determine the accuracy, procedure time, fluoroscopy time, and DAP. Analysis of the difference between methods and experience level was performed. Results Mean accuracy using CBCT guidance (2.61 mm) was significantly better compared with fluoroscopy (5.86 mm) (p < 0.0001). Procedure time was in favor of fluoroscopy (7.39 vs. 10.13 min; p = 0.001). Fluoroscopy time during CBCT guidance was lower, but this difference is not significant (71.3 vs. 95.8 s; p = 0.056). DAP values for CBCT guidance and fluoroscopy were 514 and 174 mGy cm², respectively (p < 0.0001). There was a significant difference in favor of experienced CBCT guidance users regarding accuracy for both methods, procedure time of CBCT guidance, and added DAP values for fluoroscopy. Conclusion CBCT guidance allows users to perform PVP more accurately at the cost of higher patient dose and longer procedure time. Because procedural complications (e.g., cement leakage) are related to the accuracy of the needle placement, improvements in accuracy are clinically relevant. Training in CBCT guidance is essential to achieve greater accuracy and decrease procedure time/dose values.

  3. A computational feedforward model predicts categorization of masked emotional body language for longer, but not for shorter, latencies.

    PubMed

    Stienen, Bernard M C; Schindler, Konrad; de Gelder, Beatrice

    2012-07-01

    Given the presence of massive feedback loops in brain networks, it is difficult to disentangle the contribution of feedforward and feedback processing to the recognition of visual stimuli, in this case, of emotional body expressions. The aim of the work presented in this letter is to shed light on how well feedforward processing explains rapid categorization of this important class of stimuli. By means of parametric masking, it may be possible to control the contribution of feedback activity in human participants. A close comparison is presented between human recognition performance and the performance of a computational neural model that exclusively modeled feedforward processing and was engineered to fulfill the computational requirements of recognition. Results show that the longer the stimulus onset asynchrony (SOA), the closer the performance of the human participants was to the values predicted by the model, with an optimum at an SOA of 100 ms. At short SOA latencies, human performance deteriorated, but the categorization of the emotional expressions was still above baseline. The data suggest that, although theoretically, feedback arising from inferotemporal cortex is likely to be blocked when the SOA is 100 ms, human participants still seem to rely on more local visual feedback processing to equal the model's performance.

  4. Combining Fog Computing with Sensor Mote Machine Learning for Industrial IoT.

    PubMed

    Lavassani, Mehrzad; Forsström, Stefan; Jennehag, Ulf; Zhang, Tingting

    2018-05-12

    Digitalization is a global trend becoming ever more important to our connected and sustainable society. This trend also affects industry where the Industrial Internet of Things is an important part, and there is a need to conserve spectrum as well as energy when communicating data to a fog or cloud back-end system. In this paper we investigate the benefits of fog computing by proposing a novel distributed learning model on the sensor device and simulating the data stream in the fog, instead of transmitting all raw sensor values to the cloud back-end. To save energy and to communicate as few packets as possible, the updated parameters of the learned model at the sensor device are communicated in longer time intervals to a fog computing system. The proposed framework is implemented and tested in a real world testbed in order to make quantitative measurements and evaluate the system. Our results show that the proposed model can achieve a 98% decrease in the number of packets sent over the wireless link, and the fog node can still simulate the data stream with an acceptable accuracy of 97%. We also observe an end-to-end delay of 180 ms in our proposed three-layer framework. Hence, the framework shows that a combination of fog and cloud computing with a distributed data modeling at the sensor device for wireless sensor networks can be beneficial for Industrial Internet of Things applications.
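
    A minimal sketch of the general idea described above: the sensor node keeps a small incremental model of its readings and transmits only the model parameters at longer intervals, from which the fog node can regenerate a surrogate stream. The mean/variance model, the reporting interval, and all names are illustrative assumptions, not the paper's actual algorithm.

```python
import random

class SensorModel:
    """Incremental mean/variance (Welford's algorithm) maintained on the sensor node."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
    def update(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
    def params(self):
        var = self.m2 / self.n if self.n > 1 else 0.0
        return {"mean": self.mean, "var": var}

REPORT_EVERY = 100      # send parameters every 100 samples instead of 100 raw packets
model, sent_packets = SensorModel(), 0

for t in range(1000):
    reading = 20.0 + random.gauss(0, 0.5)       # simulated sensor value
    model.update(reading)
    if (t + 1) % REPORT_EVERY == 0:
        fog_params = model.params()             # the only data sent uplink
        sent_packets += 1
        # The fog node could now simulate the stream, e.g. random.gauss(mean, sqrt(var)).

print(f"{sent_packets} parameter packets instead of 1000 raw packets")
```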

  5. Combining Fog Computing with Sensor Mote Machine Learning for Industrial IoT

    PubMed Central

    Lavassani, Mehrzad; Jennehag, Ulf; Zhang, Tingting

    2018-01-01

    Digitalization is a global trend becoming ever more important to our connected and sustainable society. This trend also affects industry where the Industrial Internet of Things is an important part, and there is a need to conserve spectrum as well as energy when communicating data to a fog or cloud back-end system. In this paper we investigate the benefits of fog computing by proposing a novel distributed learning model on the sensor device and simulating the data stream in the fog, instead of transmitting all raw sensor values to the cloud back-end. To save energy and to communicate as few packets as possible, the updated parameters of the learned model at the sensor device are communicated in longer time intervals to a fog computing system. The proposed framework is implemented and tested in a real world testbed in order to make quantitative measurements and evaluate the system. Our results show that the proposed model can achieve a 98% decrease in the number of packets sent over the wireless link, and the fog node can still simulate the data stream with an acceptable accuracy of 97%. We also observe an end-to-end delay of 180 ms in our proposed three-layer framework. Hence, the framework shows that a combination of fog and cloud computing with a distributed data modeling at the sensor device for wireless sensor networks can be beneficial for Industrial Internet of Things applications. PMID:29757227

  6. Business continuity strategies for cyber defence: battling time and information overload.

    PubMed

    Streufert, John

    2010-11-01

    Can the same numbers and letters which are the life blood of modern business and government computer systems be harnessed to protect computers from attack against known information security risks? For the past seven years, Foreign Service officers and technicians of the US Government have sought to maintain diplomatic operations in the face of rising cyber attacks and test the hypothesis that an ounce of prevention is worth a pound of cure. As eight out of ten attacks leverage known computer security vulnerabilities or configuration setting weaknesses, a pound of cure would seem to be easy to come by. Yet modern security tools present an unusually consequential threat to business continuity - too much rather than too little information on cyber problems is presented, harking back to a phenomenon cited by social scientists in the 1960s called 'information overload'. Experience indicates that the longer the most serious cyber problems go untreated, the wider the attack surface adversaries can find. One technique used at the Department of State, called 'risk scoring', resulted in an 89 per cent overall reduction in measured risk over 12 months for the Department of State's servers and personal computers. Later refinements of risk scoring enabled technicians to correct unique security threats with unprecedented speed. This paper explores how the use of metrics, special care in presenting information to technicians and executives alike, as well as tactical use of organisational incentives can result in stronger cyber defences protecting modern organisations.

  7. Computer-mediated and face-to-face communication in metastatic cancer support groups.

    PubMed

    Vilhauer, Ruvanee P

    2014-08-01

    To compare the experiences of women with metastatic breast cancer (MBC) in computer-mediated and face-to-face support groups. Interviews from 18 women with MBC, who were currently in computer-mediated support groups (CMSGs), were examined using interpretative phenomenological analysis. The CMSGs were in an asynchronous mailing list format; women communicated exclusively via email. All the women were also, or had previously been, in a face-to-face support group (FTFG). CMSGs had both advantages and drawbacks, relative to face-to-face groups (FTFGs), for this population. Themes examined included convenience, level of support, intimacy, ease of expression, range of information, and dealing with debilitation and dying. CMSGs may provide a sense of control and a greater level of support. Intimacy may take longer to develop in a CMSG, but women may have more opportunities to get to know each other. CMSGs may be helpful while adjusting to a diagnosis of MBC, because women can receive support without being overwhelmed by physical evidence of disability in others or exposure to discussions about dying before they are ready. However, the absence of nonverbal cues in CMSGs also led to avoidance of topics related to death and dying when women were ready to face them. Agendas for discussion, the presence of a facilitator or more time in CMSGs may attenuate this problem. The findings were discussed in light of prevailing research and theories about computer-mediated communication. They have implications for designing CMSGs for this population.

  8. Prediction of oral disintegration time of fast disintegrating tablets using texture analyzer and computational optimization.

    PubMed

    Szakonyi, G; Zelkó, R

    2013-05-20

    One of the promising approaches to predict the in vivo disintegration time of orally disintegrating tablets (ODT) is the use of a texture analyzer instrument. Once the method is able to provide good in vitro-in vivo correlation (IVIVC) in the case of different tablets, it might be able to predict the oral disintegration time of similar products. However, there are many tablet parameters that influence the in vivo and the in vitro disintegration time of ODT products. Therefore, the measured in vitro and in vivo disintegration times can occasionally differ, even if they coincide in most cases of the investigated products, and the in vivo disintegration times may also change if the aimed patient group is suffering from a special illness. If the method is no longer able to provide good IVIVC, then the modification of a single instrumental parameter may not be successful and the in vitro method must be re-set in a complex manner in order to provide satisfactory results. In the present experiment, an optimization process was developed based on texture analysis measurements using five different tablets in order to predict their in vivo disintegration times, and the optimized texture analysis method was evaluated using independent tablets. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Monitoring The Crab Pulsar

    NASA Technical Reports Server (NTRS)

    Rots, Arnold H.; Swank, Jean (Technical Monitor)

    2001-01-01

    The monitoring of the X-ray pulses from the Crab pulsar is still ongoing at the time of this writing, and we hope to be able to continue the campaign for the life of the XTE mission. We have established beyond all doubt that: (1) the X-ray main pulse leads the radio pulse by approximately 300 microseconds, (2) this phase lag is constant and not influenced by glitches, (3) this lag does not depend on X-ray energy, (4) the relative phase of the two X-ray pulses does not vary, and (5) the spectral indices of primary, secondary, and inter-pulse are distinct and constant. At this time we are investigating whether the radio timing ephemeris can be replaced by an X-ray ephemeris and whether any long-term timing ephemeris can be established. If so, it would enable us to study variations in pulse arrival times on longer time scales. Such a study is easier in X-rays than at radio wavelengths since the dispersion measure plays no role. These results were reported at the 2000 HEAD Meeting in Honolulu, HI. Travel was paid partly out of this grant. The remainder was applied toward the acquisition of a laptop computer that allows independent and fast analysis of all monitoring observations.

  10. Short-term outcome of 1,465 computer-navigated primary total knee replacements 2005–2008

    PubMed Central

    2011-01-01

    Background and purpose Improvement of positioning and alignment by the use of computer-assisted surgery (CAS) might improve longevity and function in total knee replacements, but there is little evidence. In this study, we evaluated the short-term results of computer-navigated knee replacements based on data from the Norwegian Arthroplasty Register. Patients and methods Primary total knee replacements without patella resurfacing, reported to the Norwegian Arthroplasty Register during the years 2005–2008, were evaluated. The 5 most common implants and the 3 most common navigation systems were selected. Cemented, uncemented, and hybrid knees were included. With the risk of revision for any cause as the primary endpoint and intraoperative complications and operating time as secondary outcomes, 1,465 computer-navigated knee replacements (CAS) and 8,214 conventionally operated knee replacements (CON) were compared. Kaplan-Meier survival analysis and Cox regression analysis with adjustment for age, sex, prosthesis brand, fixation method, previous knee surgery, preoperative diagnosis, and ASA category were used. Results Kaplan-Meier estimated survival at 2 years was 98% (95% CI: 97.5–98.3) in the CON group and 96% (95% CI: 95.0–97.8) in the CAS group. The adjusted Cox regression analysis showed a higher risk of revision in the CAS group (RR = 1.7, 95% CI: 1.1–2.5; p = 0.02). The LCS Complete knee had a higher risk of revision with CAS than with CON (RR = 2.1, 95% CI: 1.3–3.4; p = 0.004). The differences were not statistically significant for the other prosthesis brands. Mean operating time was 15 min longer in the CAS group. Interpretation With the introduction of computer-navigated knee replacement surgery in Norway, the short-term risk of revision has increased for computer-navigated replacement with the LCS Complete. The mechanisms of failure of these implantations should be explored in greater depth, and in this study we have not been able to draw conclusions regarding causation. PMID:21504309

  11. SOPHAEROS code development and its application to falcon tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lajtha, G.; Missirlian, M.; Kissane, M.

    1996-12-31

    One of the key issues in source-term evaluation in nuclear reactor severe accidents is determination of the transport behavior of fission products released from the degrading core. The SOPHAEROS computer code is being developed to predict fission product transport in a mechanistic way in light water reactor circuits. These applications of the SOPHAEROS code to the Falcon experiments, among others not presented here, indicate that the numerical scheme of the code is robust, and no convergence problems are encountered. The calculation is also very fast, taking only about three times real time on a Sun SPARC 5 workstation and running typically ~10 times faster than an identical calculation with the VICTORIA code. The study demonstrates that the SOPHAEROS 1.3 code is a suitable tool for prediction of the vapor chemistry and fission product transport with a reasonable level of accuracy. Furthermore, the flexibility of the code material data bank allows improvement of understanding of fission product transport and deposition in the circuit. Performing sensitivity studies with different chemical species or with different properties (saturation pressure, chemical equilibrium constants) is very straightforward.

  12. Development and learning of saccadic eye movements in 7- to 42-month-old children.

    PubMed

    Alahyane, Nadia; Lemoine-Lardennois, Christelle; Tailhefer, Coline; Collins, Thérèse; Fagard, Jacqueline; Doré-Mazars, Karine

    2016-01-01

    From birth, infants move their eyes to explore their environment, interact with it, and progressively develop a multitude of motor and cognitive abilities. The characteristics and development of oculomotor control in early childhood remain poorly understood today. Here, we examined reaction time and amplitude of saccadic eye movements in 93 7- to 42-month-old children while they oriented toward visual animated cartoon characters appearing at unpredictable locations on a computer screen over 140 trials. Results revealed that saccade performance is immature in children compared to a group of adults: Saccade reaction times were longer, and saccade amplitude relative to target location (10° eccentricity) was shorter. Results also indicated that performance is flexible in children. Although saccade reaction time decreased as age increased, suggesting developmental improvements in saccade control, saccade amplitude gradually improved over trials. Moreover, similar to adults, children were able to modify saccade amplitude based on the visual error made in the previous trial. This second set of results suggests that short visual experience and/or rapid sensorimotor learning are functional in children and can also affect saccade performance.

  13. Estimating the volume and age of water stored in global lakes using a geo-statistical approach

    PubMed Central

    Messager, Mathis Loïc; Lehner, Bernhard; Grill, Günther; Nedeva, Irena; Schmitt, Oliver

    2016-01-01

    Lakes are key components of biogeochemical and ecological processes, thus knowledge about their distribution, volume and residence time is crucial in understanding their properties and interactions within the Earth system. However, global information is scarce and inconsistent across spatial scales and regions. Here we develop a geo-statistical model to estimate the volume of global lakes with a surface area of at least 10 ha based on the surrounding terrain information. Our spatially resolved database shows 1.42 million individual polygons of natural lakes with a total surface area of 2.67 × 10⁶ km² (1.8% of global land area), a total shoreline length of 7.2 × 10⁶ km (about four times longer than the world's ocean coastline) and a total volume of 181.9 × 10³ km³ (0.8% of total global non-frozen terrestrial water stocks). We also compute mean and median hydraulic residence times for all lakes to be 1,834 days and 456 days, respectively. PMID:27976671

  14. Time averaging of NMR chemical shifts in the MLF peptide in the solid state.

    PubMed

    De Gortari, Itzam; Portella, Guillem; Salvatella, Xavier; Bajaj, Vikram S; van der Wel, Patrick C A; Yates, Jonathan R; Segall, Matthew D; Pickard, Chris J; Payne, Mike C; Vendruscolo, Michele

    2010-05-05

    Since experimental measurements of NMR chemical shifts provide time and ensemble averaged values, we investigated how these effects should be included when chemical shifts are computed using density functional theory (DFT). We measured the chemical shifts of the N-formyl-L-methionyl-L-leucyl-L-phenylalanine-OMe (MLF) peptide in the solid state, and then used the X-ray structure to calculate the (13)C chemical shifts using the gauge including projector augmented wave (GIPAW) method, which accounts for the periodic nature of the crystal structure, obtaining an overall accuracy of 4.2 ppm. In order to understand the origin of the difference between experimental and calculated chemical shifts, we carried out first-principles molecular dynamics simulations to characterize the molecular motion of the MLF peptide on the picosecond time scale. We found that (13)C chemical shifts experience very rapid fluctuations of more than 20 ppm that are averaged out over less than 200 fs. Taking account of these fluctuations in the calculation of the chemical shifts resulted in an accuracy of 3.3 ppm. To investigate the effects of averaging over longer time scales we sampled the rotameric states populated by the MLF peptides in the solid state by performing a total of 5 μs of classical molecular dynamics simulations. By averaging the chemical shifts over these rotameric states, we increased the accuracy of the chemical shift calculations to 3.0 ppm, with less than 1 ppm error in 10 out of 22 cases. These results suggest that better DFT-based predictions of chemical shifts of peptides and proteins will be achieved by developing improved computational strategies capable of taking into account the averaging process up to the millisecond time scale on which the chemical shift measurements report.
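
    A toy numerical illustration (synthetic numbers, not the paper's data) of the averaging step described above: a shift computed from a single structure can differ noticeably from the average over many snapshots, which is the quantity the experiment actually reports.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-snapshot 13C shifts for one site: fast fluctuations of several
# ppm around a mean, mimicking the large swings averaged out within ~200 fs.
snapshots = 55.0 + 8.0 * rng.standard_normal(5000)   # ppm, illustrative values only

single_structure = snapshots[0]      # shift computed from one static geometry
trajectory_mean = snapshots.mean()   # time-averaged shift to compare with experiment

print(f"single structure: {single_structure:.1f} ppm, "
      f"trajectory average: {trajectory_mean:.1f} ppm, "
      f"fluctuation size: {snapshots.std():.1f} ppm")
```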

  15. Application and Utility of iPads in Pediatric Tele-echocardiography.

    PubMed

    Colombo, Jamie N; Seckeler, Michael D; Barber, Brent J; Krupinski, Elizabeth A; Weinstein, Ronald S; Sisk, David; Lax, Daniela

    2016-05-01

    Telemedicine is used with increasing frequency to improve patient care in remote areas. The interpretation of medical imaging on iPad(®) (Apple, Cupertino, CA) tablets has been reported to be accurate. There are no studies on the use of iPads for interpretation of pediatric echocardiograms. We compared the quality of echo images, diagnostic accuracy, and review time using three different modalities: remote access on an iPad Air (iPad), remote access via a computer (Remote), and direct access on a computer linked through Ethernet to the server, the "gold standard" (Direct). Fifty consecutive archived pediatric echocardiograms were interpreted using the three modalities. Studies were analyzed blindly by three pediatric cardiologists; review time, diagnostic accuracy, and image quality were documented. Diagnostic accuracy was assessed by comparing the study diagnoses with the official diagnosis in the patient's chart. Discrepancies between diagnoses were graded as major (more than one grade difference) or minor (one grade difference in severity of lesion). There were no significant differences in accuracy among the three modalities. There was one major discrepancy (size of patent ductus arteriosus); all others were minor, hemodynamically insignificant. Image quality ratings were better for iPad than Remote; Direct had the highest ratings. Review times (mean [standard deviation] minutes) were longest for iPad (5.89 [3.87]) and then Remote (4.72 [2.69]), with Direct having the shortest times (3.52 [1.42]) (p < 0.0001). Pediatric echocardiograms can be interpreted using convenient, portable devices while preserving accuracy and quality with slightly longer review times (1-2 min). These findings are important in the current era of increasing need for mobile health.

  16. Two-dimensional computational modeling of high-speed transient flow in gun tunnel

    NASA Astrophysics Data System (ADS)

    Mohsen, A. M.; Yusoff, M. Z.; Hasini, H.; Al-Falahi, A.

    2018-03-01

    In this work, an axisymmetric numerical model was developed to investigate the transient flow inside a 7-meter-long free piston gun tunnel. The numerical solution of the gun tunnel was carried out using the commercial solver Fluent. The governing equations of mass, momentum, and energy were discretized using the finite volume method. The dynamic zone of the piston was modeled as a rigid body, and its motion was coupled with the hydrodynamic forces from the flow solution based on the six-degree-of-freedom solver. A comparison of the numerical data with the theoretical calculations and experimental measurements of a ground-based gun tunnel facility showed good agreement. The effects of parameters such as working gases and initial pressure ratio on the test conditions in the facility were examined. The pressure ratio ranged from 10 to 50, and gas combinations of air-air, helium-air, air-nitrogen, and air-CO2 were used. The results showed that steady nozzle reservoir conditions can be maintained for a longer duration when the initial conditions across the diaphragm are adjusted. It was also found that the gas combination of helium-air yielded the highest shock wave strength and speed, but a longer test time was achieved in the test section when using the CO2 test gas.

  17. A second-order accurate finite volume scheme with the discrete maximum principle for solving Richards’ equation on unstructured meshes

    DOE PAGES

    Svyatsky, Daniil; Lipnikov, Konstantin

    2017-03-18

    Richards’ equation describes steady-state or transient flow in a variably saturated medium. For a medium having multiple layers of soils that are not aligned with coordinate axes, a mesh fitted to these layers is no longer orthogonal and the classical two-point flux approximation finite volume scheme is no longer accurate. Here, we propose new second-order accurate nonlinear finite volume (NFV) schemes for the head and pressure formulations of Richards’ equation. We prove that the discrete maximum principles hold for both formulations at steady state, which mimics similar properties of the continuum solution. The second-order accuracy is achieved using high-order upwind algorithms for the relative permeability. Numerical simulations of water infiltration into a dry soil show a significant advantage of the second-order NFV schemes over the first-order NFV schemes even on coarse meshes. Since explicit calculation of the Jacobian matrix becomes prohibitively expensive for high-order schemes due to built-in reconstruction and slope limiting algorithms, we study numerically the preconditioning strategy introduced recently in Lipnikov et al. (2016) that uses a stable approximation of the continuum Jacobian. Lastly, numerical simulations show that the new preconditioner reduces computational cost up to 2–3 times in comparison with the conventional preconditioners.

  18. Simulation of the Beating Heart Based on Physically Modeling a Deformable Balloon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohmer, Damien; Sitek, Arkadiusz; Gullberg, Grant T.

    2006-07-18

    The motion of the beating heart is complex and creates artifacts in SPECT and x-ray CT images. Phantoms such as the Jaszczak Dynamic Cardiac Phantom are used to simulate cardiac motion for evaluation of acquisition and data processing protocols used for cardiac imaging. Two concentric elastic membranes filled with water are connected to tubing and a pump apparatus for creating fluid flow in and out of the inner volume to simulate motion of the heart. In the present report, the movement of two concentric balloons is solved numerically in order to create a computer simulation of the motion of the moving membranes in the Jaszczak Dynamic Cardiac Phantom. A system of differential equations, based on the physical properties, determines the motion. Two methods are tested for solving the system of differential equations. The results of both methods are similar, providing a final shape that does not converge to a trivial circular profile. Finally, a tomographic imaging simulation is performed by acquiring static projections of the moving shape and reconstructing the result to observe motion artifacts. Two cases are taken into account: in one case each projection angle is sampled for a short time interval, and in the other case it is sampled for a longer time interval. The longer sampling acquisition shows a clear improvement in decreasing the tomographic streaking artifacts.

  19. Relaxation oscillations and hierarchy of feedbacks in MAPK signaling

    NASA Astrophysics Data System (ADS)

    Kochańczyk, Marek; Kocieniewski, Paweł; Kozłowska, Emilia; Jaruszewicz-Błońska, Joanna; Sparta, Breanne; Pargett, Michael; Albeck, John G.; Hlavacek, William S.; Lipniacki, Tomasz

    2017-01-01

    We formulated a computational model for a MAPK signaling cascade downstream of the EGF receptor to investigate how interlinked positive and negative feedback loops process EGF signals into ERK pulses of constant amplitude but dose-dependent duration and frequency. A positive feedback loop involving RAS and SOS, which leads to bistability and allows for switch-like responses to inputs, is nested within a negative feedback loop that encompasses RAS and RAF, MEK, and ERK that inhibits SOS via phosphorylation. This negative feedback, operating on a longer time scale, changes switch-like behavior into oscillations having a period of 1 hour or longer. Two auxiliary negative feedback loops, from ERK to MEK and RAF, placed downstream of the positive feedback, shape the temporal ERK activity profile but are dispensable for oscillations. Thus, the positive feedback introduces a hierarchy among negative feedback loops, such that the effect of a negative feedback depends on its position with respect to the positive feedback loop. Furthermore, a combination of the fast positive feedback involving slow-diffusing membrane components with slower negative feedbacks involving faster diffusing cytoplasmic components leads to local excitation/global inhibition dynamics, which allows the MAPK cascade to transmit paracrine EGF signals into spatially non-uniform ERK activity pulses.

  20. Revisiting the synoptic-scale predictability of severe European winter storms using ECMWF ensemble reforecasts

    NASA Astrophysics Data System (ADS)

    Pantillon, Florian; Knippertz, Peter; Corsmeier, Ulrich

    2017-10-01

    New insights into the synoptic-scale predictability of 25 severe European winter storms of the 1995-2015 period are obtained using the homogeneous ensemble reforecast dataset from the European Centre for Medium-Range Weather Forecasts. The predictability of the storms is assessed with different metrics including (a) the track and intensity to investigate the storms' dynamics and (b) the Storm Severity Index to estimate the impact of the associated wind gusts. The storms are well predicted by the whole ensemble up to 2-4 days ahead. At longer lead times, the number of members predicting the observed storms decreases and the ensemble average is not clearly defined for the track and intensity. The Extreme Forecast Index and Shift of Tails are therefore computed from the deviation of the ensemble from the model climate. Based on these indices, the model has some skill in forecasting the area covered by extreme wind gusts up to 10 days, which indicates a clear potential for early warnings. However, large variability is found between the individual storms. The poor predictability of outliers appears related to their physical characteristics such as explosive intensification or small size. Longer datasets with more cases would be needed to further substantiate these points.

  1. Automatic quantification framework to detect cracks in teeth

    PubMed Central

    Shah, Hina; Hernandez, Pablo; Budin, Francois; Chittajallu, Deepak; Vimort, Jean-Baptiste; Walters, Rick; Mol, André; Khan, Asma; Paniagua, Beatriz

    2018-01-01

    Studies show that cracked teeth are the third most common cause for tooth loss in industrialized countries. If detected early and accurately, patients can retain their teeth for a longer time. Most cracks are not detected early because of the discontinuous symptoms and lack of good diagnostic tools. Currently used imaging modalities like Cone Beam Computed Tomography (CBCT) and intraoral radiography often have low sensitivity and do not show cracks clearly. This paper introduces a novel method that can detect, quantify, and localize cracks automatically in high resolution CBCT (hr-CBCT) scans of teeth using steerable wavelets and learning methods. These initial results were created using hr-CBCT scans of a set of healthy teeth and of teeth with simulated longitudinal cracks. The cracks were simulated using multiple orientations. The crack detection was trained on the most significant wavelet coefficients at each scale using a bagged classifier of Support Vector Machines. Our results show high discriminative specificity and sensitivity of this method. The framework aims to be automatic, reproducible, and open-source. Future work will focus on the clinical validation of the proposed techniques on different types of cracks ex-vivo. We believe that this work will ultimately lead to improved tracking and detection of cracks allowing for longer lasting healthy teeth. PMID:29769755
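
    A hedged sketch of the classification step described above: a bagged ensemble of support vector machines trained on wavelet-coefficient features. The random feature matrix is only a placeholder for the steerable-wavelet coefficients, and scikit-learn is assumed rather than the authors' actual toolchain (older scikit-learn versions name the `estimator` argument `base_estimator`).

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per image patch, columns standing in for the
# most significant wavelet coefficients; labels 1 = simulated crack, 0 = healthy.
X = rng.standard_normal((400, 64))
y = (X[:, :4].sum(axis=1) + 0.5 * rng.standard_normal(400) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = BaggingClassifier(estimator=SVC(kernel="rbf", gamma="scale"),
                        n_estimators=25, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```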

  2. A second-order accurate finite volume scheme with the discrete maximum principle for solving Richards’ equation on unstructured meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svyatsky, Daniil; Lipnikov, Konstantin

    Richards’ equation describes steady-state or transient flow in a variably saturated medium. For a medium having multiple layers of soils that are not aligned with coordinate axes, a mesh fitted to these layers is no longer orthogonal and the classical two-point flux approximation finite volume scheme is no longer accurate. Here, we propose new second-order accurate nonlinear finite volume (NFV) schemes for the head and pressure formulations of Richards’ equation. We prove that the discrete maximum principles hold for both formulations at steady state, which mimics similar properties of the continuum solution. The second-order accuracy is achieved using high-order upwind algorithms for the relative permeability. Numerical simulations of water infiltration into a dry soil show a significant advantage of the second-order NFV schemes over the first-order NFV schemes even on coarse meshes. Since explicit calculation of the Jacobian matrix becomes prohibitively expensive for high-order schemes due to built-in reconstruction and slope limiting algorithms, we study numerically the preconditioning strategy introduced recently in Lipnikov et al. (2016) that uses a stable approximation of the continuum Jacobian. Lastly, numerical simulations show that the new preconditioner reduces computational cost up to 2–3 times in comparison with the conventional preconditioners.

  3. Searching on the Run

    ERIC Educational Resources Information Center

    Tenopir, Carol

    2004-01-01

    With wireless connectivity and small laptop computers, people are no longer tied to the desktop for online searching. Handheld personal digital assistants (PDAs) offer even greater portability. So far, the most common uses of PDAs are as calendars and address books, or to interface with a laptop or desktop machine. More advanced PDAs, like…

  4. A Polarization Responsive System for Microwaves

    DTIC Science & Technology

    1980-12-01

    radioastronomy employs much longer wavelengths than optics, but the electromagnetic wave formulation of polarization is the same for optics as it is for… Radioastronomy. New York: McGraw-Hill, 1966. 8. Kuck, D. J., D. Lawrie and A. H. Samek. High Speed Computer and Algorithm Organization. New York: Academic

  5. The Cost Effectiveness of 22 Approaches for Raising Student Achievement

    ERIC Educational Resources Information Center

    Yeh, Stuart S.

    2010-01-01

    Review of cost-effectiveness studies suggests that rapid assessment is more cost effective with regard to student achievement than comprehensive school reform (CSR), cross-age tutoring, computer-assisted instruction, a longer school day, increases in teacher education, teacher experience or teacher salaries, summer school, more rigorous math…

  6. 48 CFR 1252.237-70 - Qualifications of contractor employees.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... investigations according to DOT Order 1630.2B, Personnel Security Management. f. The Contractor shall immediately notify the contracting officer when an employee no longer requires access to DOT computer systems due to transfer, completion of a project, retirement, or termination of employment. g. The Contractor shall include...

  7. Big Data, Models and Tools | Transportation Research | NREL

    Science.gov Websites

    displacement, and greenhouse gas reduction scenarios. … New Tool Accelerates Design of Electric Vehicle Batteries … design better, safer, and longer-lasting lithium-ion batteries for electric-drive vehicles through the Computer-Aided Engineering for Electric Drive Vehicle Batteries (CAEBAT) project. This month, ANSYS

  8. Computer supplies insulation recipe for Cookie Company Roof

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Roofing contractors no longer have to rely on complicated calculations and educated guesses to determine cost-efficient levels of roof insulation. A simple hand-held calculator and printer offers seven different programs for fast figuring insulation thickness based on job type, roof size, tax rates, and heating and cooling cost factors.

  9. Students' Response to Traditional and Computer-Assisted Formative Feedback: A Comparative Case Study

    ERIC Educational Resources Information Center

    Denton, Philip; Madden, Judith; Roberts, Matthew; Rowe, Philip

    2008-01-01

    The national movement towards progress files, incorporating personal development planning and reflective learning, is supported by lecturers providing effective feedback to their students. Recent technological advances mean that higher education tutors are no longer obliged to return comments in the "traditional" manner, by annotating…

  10. A Study on Corporate Security Awareness and Compliance Behavior Intent

    ERIC Educational Resources Information Center

    Clark, Christine Y.

    2013-01-01

    Understanding the drivers to encourage employees' security compliance behavior is increasingly important in today's highly networked environment to protect computer and information assets of the company. The traditional approach for corporations to implement technology-based controls, to prevent security breaches is no longer sufficient.…

  11. Introducing Computer-Based Concept Mapping to Older Adults

    ERIC Educational Resources Information Center

    Calvo, Iñaki; Elorriaga, Jon A.; Arruarte, Ana; Larrañaga, Mikel; Gutiérrez, Julián

    2017-01-01

    The dramatic eruption of information and communication technology has had a remarkable effect on modern life, including the capacity to help older adults improve their quality of life and remain independent longer. However, while technology use is generally widespread, there is an observable underutilization by older people. There is sound…

  12. Nonremission and time to remission among remitters in major depressive disorder: Revisiting STAR*D.

    PubMed

    Mojtabai, Ramin

    2017-12-01

    Some individuals with major depressive disorder do not experience a remission even after one or more adequate treatment trials. In some others who experience remission, it happens at variable times. This study sought to estimate the prevalence of nonremission in a large sample of patients participating in the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial and to identify correlates of nonremission and of time to remission among remitters. Using data from 3,606 participants of STAR*D, the study used cure regression modeling to estimate nonremission and jointly model correlates of nonremission and time to remission among the remitters. Overall, 14.7% of the STAR*D participants were estimated to be nonremitters. Among remitters, the rate of remission declined over time. In multivariable analyses, greater severity, poorer physical health, and poor adherence with treatments were associated with both nonremission and a longer time to remission among the remitters. Unemployment, not having higher education, and longer duration of the current episode were uniquely associated with nonremission, whereas treatment in specialty mental health settings, poorer mental health functioning, and greater impairment in role functioning were uniquely associated with a longer time to remission among remitters. Poor treatment adherence and poor physical health appear to be common risk factors for both nonremission and longer time to remission, highlighting the importance of integrated care models that address both medical and mental healthcare needs and interventions aimed at improving treatment adherence. © 2017 Wiley Periodicals, Inc.

  13. Longer dialysis session length is associated with better intermediate outcomes and survival among patients on in-center three times per week hemodialysis: results from the Dialysis Outcomes and Practice Patterns Study (DOPPS)

    PubMed Central

    Tentori, Francesca; Zhang, Jinyao; Li, Yun; Karaboyas, Angelo; Kerr, Peter; Saran, Rajiv; Bommer, Juergen; Port, Friedrich; Akiba, Takashi; Pisoni, Ronald; Robinson, Bruce

    2012-01-01

    Background Longer dialysis session length (treatment time, TT) has been associated with better survival among hemodialysis (HD) patients. The impact of TT on clinical markers that may contribute to this survival advantage is not well known. Methods Using data from the international Dialysis Outcomes and Practice Patterns Study, we assessed the association of TT with clinical outcomes using both standard regression analyses and instrumental variable approaches. The study included 37 414 patients on in-center HD three times per week with prescribed TT from 120 to 420 min. Results Facility mean TT ranged from 214 min in the USA to 256 min in Australia–New Zealand. Accounting for country effects, mortality risk was lower for patients with longer TT {hazard ratio for every 30 min: all-cause mortality: 0.94 [95% confidence interval (CI): 0.92–0.97], cardiovascular mortality: 0.95 (95% CI: 0.91–0.98) and sudden death: 0.93 (95% CI: 0.88–0.98)}. Patients with longer TT had lower pre- and post-dialysis systolic blood pressure, greater intradialytic weight loss, higher hemoglobin (for the same erythropoietin dose), serum albumin and potassium and lower serum phosphorus and white blood cell counts. Similar associations were found using the instrumental variable approach, although the positive associations of TT with weight loss and potassium were lost. Conclusions Favorable levels of a variety of clinical markers may contribute to the better survival of patients receiving longer TT. These findings support longer TT prescription in the setting of in-center, three times per week HD. PMID:22431708

  14. Length asymmetry of the bovine digits.

    PubMed

    Muggli, E; Sauter-Louis, C; Braun, U; Nuss, K

    2011-06-01

    The lengths of the digital bones of the fore- and hind-limbs obtained post mortem from 40 cattle of different ages were measured using digital radiographs. The lengths of the individual digital bones and the overall length of the digit were determined using computer software. The lateral metacarpal/metatarsal condyle, and lateral P1 and P2 were significantly longer than their medial counterparts, whereas P3 of the medial digit was longer than its lateral partner. Measured from the cannon bone epiphysis to the tip of the pedal bone, the mean increased length of the lateral digit was 0.8 mm in the fore- and 1.5 mm in the hind-limb. When the lengths of the digital bones were summed, the mean length of the lateral digit was 1.8 mm longer in the fore-limb and 2.1 mm longer in the hind-limb. Based on these findings, it can be concluded that the lengths of the paired digits differ in cattle. The majority of cattle have longer lateral digits in the fore- and hind-limbs. This asymmetry might explain why the lateral hind-limb claws are predisposed to sole ulcers on hard surfaces. In the hind-limbs, the impact is transferred from the pelvis directly to the longer lateral digit. In the fore-limb claws, the tenomuscular attachment to the trunk may be involved in a more even weight distribution and in a shift of weight to the medial claw. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. Estimation of T2* Relaxation Time of Breast Cancer: Correlation with Clinical, Imaging and Pathological Features

    PubMed Central

    Seo, Mirinae; Jahng, Geon-Ho; Sohn, Yu-Mee; Rhee, Sun Jung; Oh, Jang-Hoon; Won, Kyu-Yeoun

    2017-01-01

    Objective The purpose of this study was to estimate the T2* relaxation time in breast cancer, and to evaluate the association of the T2* value with clinical-imaging-pathological features of breast cancer. Materials and Methods Between January 2011 and July 2013, 107 consecutive women with 107 breast cancers underwent multi-echo T2*-weighted imaging on a 3T clinical magnetic resonance imaging system. The Student's t test and one-way analysis of variance were used to compare the T2* values of cancer for different groups, based on the clinical-imaging-pathological features. In addition, multiple linear regression analysis was performed to find independent predictive factors associated with the T2* values. Results Of the 107 breast cancers, 92 were invasive and 15 were ductal carcinoma in situ (DCIS). The mean T2* value of invasive cancers was significantly longer than that of DCIS (p = 0.029). Signal intensity on T2-weighted imaging (T2WI) and histologic grade of invasive breast cancers showed significant correlation with T2* relaxation time in univariate and multivariate analysis. Breast cancer groups with higher signal intensity on T2WI showed longer T2* relaxation time (p = 0.005). Cancer groups with higher histologic grade showed longer T2* relaxation time (p = 0.017). Conclusion The T2* value is significantly longer in invasive cancer than in DCIS. In invasive cancers, T2* relaxation time is significantly longer for higher histologic grades and higher signal intensity on T2WI. Based on these preliminary data, quantitative T2* mapping has the potential to be useful in the characterization of breast cancer. PMID:28096732

  16. A biomechanical modeling guided simultaneous motion estimation and image reconstruction technique (SMEIR-Bio) for 4D-CBCT reconstruction

    NASA Astrophysics Data System (ADS)

    Huang, Xiaokun; Zhang, You; Wang, Jing

    2017-03-01

    Four-dimensional (4D) cone-beam computed tomography (CBCT) enables motion tracking of anatomical structures and removes artifacts introduced by motion. However, the imaging time/dose of 4D-CBCT is substantially longer/higher than that of traditional 3D-CBCT. We previously developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm to reconstruct high-quality 4D-CBCT from a limited number of projections and thereby reduce the imaging time/dose. However, the accuracy of SMEIR is limited in reconstructing low-contrast regions with fine structure details. In this study, we incorporate biomechanical modeling into the SMEIR algorithm (SMEIR-Bio) to improve the reconstruction accuracy in low-contrast regions with fine details. The efficacy of SMEIR-Bio is evaluated using 11 lung patient cases and compared to that of the original SMEIR algorithm. Qualitative and quantitative comparisons showed that SMEIR-Bio greatly enhances the accuracy of the reconstructed 4D-CBCT volume in low-contrast regions, which can potentially benefit multiple clinical applications including treatment outcome analysis.

  17. Reversibility in Quantum Models of Stochastic Processes

    NASA Astrophysics Data System (ADS)

    Gier, David; Crutchfield, James; Mahoney, John; James, Ryan

    Natural phenomena such as time series of neural firing, orientation of layers in crystal stacking and successive measurements in spin-systems are inherently probabilistic. The provably minimal classical models of such stochastic processes are ɛ-machines, which consist of internal states, transition probabilities between states and output values. The topological properties of the ɛ-machine for a given process characterize the structure, memory and patterns of that process. However ɛ-machines are often not ideal because their statistical complexity (Cμ) is demonstrably greater than the excess entropy (E) of the processes they represent. Quantum models (q-machines) of the same processes can do better in that their statistical complexity (Cq) obeys the relation Cμ >= Cq >= E. q-machines can be constructed to consider longer lengths of strings, resulting in greater compression. With code-words of sufficiently long length, the statistical complexity becomes time-symmetric - a feature apparently novel to this quantum representation. This result has ramifications for compression of classical information in quantum computing and quantum communication technology.
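
    As a worked illustration of the statistical complexity mentioned above, the sketch below computes Cμ = -Σ π(σ) log2 π(σ) over the stationary distribution of causal states for a simple two-state ɛ-machine (the golden-mean process); the example process and transition probabilities are assumptions chosen for illustration, not taken from the abstract.

      import numpy as np

      # Toy ɛ-machine: the "golden mean" process (no two consecutive 1s).
      # State A emits 0 (stay in A) or 1 (go to B) with probability 1/2 each;
      # state B must emit 0 and return to A.
      T = np.array([[0.5, 0.5],
                    [1.0, 0.0]])          # state-to-state transition probabilities

      # Stationary distribution pi solves pi = pi @ T (left eigenvector for eigenvalue 1).
      evals, evecs = np.linalg.eig(T.T)
      pi = np.real(evecs[:, np.argmax(np.real(evals))])
      pi = pi / pi.sum()

      C_mu = -np.sum(pi * np.log2(pi))    # statistical complexity of the ɛ-machine
      print(f"pi = {pi}, C_mu = {C_mu:.3f} bits")   # ~0.918 bits for this process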

  18. Variable Star Signature Classification using Slotted Symbolic Markov Modeling

    NASA Astrophysics Data System (ADS)

    Johnston, K. B.; Peter, A. M.

    2017-01-01

    With the advent of digital astronomy, new benefits and new challenges have been presented to the modern-day astronomer. No longer can the astronomer rely on manual processing; instead, the profession as a whole has begun to adopt more advanced computational means. This paper focuses on the construction and application of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern classification algorithm for the identification of variable stars. A methodology for the reduction of stellar variable observations (time-domain data) into a novel feature space representation is introduced. The methodology presented will be referred to as Slotted Symbolic Markov Modeling (SSMM) and has a number of advantages which will be demonstrated to be beneficial, specifically for the supervised classification of stellar variables. It will be shown that the methodology outperformed a baseline standard methodology on a standardized set of stellar light curve data. The performance on a set of data derived from the LINEAR dataset will also be shown.
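
    The following is a minimal, illustrative sketch of the kind of slotting-and-symbolization step an SSMM-style feature space relies on; the slot width, symbol alphabet, quantile binning and flattened transition-matrix feature are assumptions for illustration, not the authors' exact implementation.

      import numpy as np

      def slotted_markov_features(times, mags, slot_width=1.0, n_symbols=4):
          """Toy SSMM-style features: slot an irregularly sampled light curve,
          symbolize the slot means by quantile, and return the flattened
          first-order Markov transition matrix of the symbol sequence."""
          # 1) Slotting: average magnitudes falling into fixed-width time slots.
          slots = np.floor((times - times.min()) / slot_width).astype(int)
          means = np.array([mags[slots == s].mean() for s in np.unique(slots)])
          # 2) Symbolization: map slot means to a small alphabet via quantiles.
          edges = np.quantile(means, np.linspace(0, 1, n_symbols + 1)[1:-1])
          symbols = np.digitize(means, edges)
          # 3) Markov model: count symbol-to-symbol transitions, normalize rows.
          counts = np.zeros((n_symbols, n_symbols))
          for a, b in zip(symbols[:-1], symbols[1:]):
              counts[a, b] += 1
          row_sums = counts.sum(axis=1, keepdims=True)
          trans = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
          return trans.ravel()   # feature vector for a supervised classifier

      # Example: a noisy periodic light curve sampled at irregular times.
      rng = np.random.default_rng(1)
      t = np.sort(rng.uniform(0, 50, 300))
      m = 12.0 + 0.3 * np.sin(2 * np.pi * t / 5.0) + 0.05 * rng.normal(size=t.size)
      print(slotted_markov_features(t, m).shape)   # (16,) for a 4-symbol alphabet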

  19. Variable Star Signature Classification using Slotted Symbolic Markov Modeling

    NASA Astrophysics Data System (ADS)

    Johnston, Kyle B.; Peter, Adrian M.

    2016-01-01

    With the advent of digital astronomy, new benefits and new challenges have been presented to the modern-day astronomer. No longer can the astronomer rely on manual processing; instead, the profession as a whole has begun to adopt more advanced computational means. Our research focuses on the construction and application of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern classification algorithm for the identification of variable stars. A methodology for the reduction of stellar variable observations (time-domain data) into a novel feature space representation is introduced. The methodology presented will be referred to as Slotted Symbolic Markov Modeling (SSMM) and has a number of advantages which will be demonstrated to be beneficial, specifically for the supervised classification of stellar variables. It will be shown that the methodology outperformed a baseline standard methodology on a standardized set of stellar light curve data. The performance on a set of data derived from the LINEAR dataset will also be shown.

  20. An efficient model for coupling structural vibrations with acoustic radiation

    NASA Technical Reports Server (NTRS)

    Frendi, Abdelkader; Maestrello, Lucio; Ting, LU

    1993-01-01

    The scattering of an incident wave by a flexible panel is studied. The panel vibration is governed by the nonlinear plate equations while the loading on the panel, which is the pressure difference across the panel, depends on the reflected and transmitted waves. Two models are used to calculate this structural-acoustic interaction problem. One solves the three dimensional nonlinear Euler equations for the flow-field coupled with the plate equations (the fully coupled model). The second uses the linear wave equation for the acoustic field and expresses the load as a double integral involving the panel oscillation (the decoupled model). The panel oscillation governed by a system of integro-differential equations is solved numerically and the acoustic field is then defined by an explicit formula. Numerical results are obtained using the two models for linear and nonlinear panel vibrations. The predictions given by these two models are in good agreement but the computational time needed for the 'fully coupled model' is 60 times longer than that for 'the decoupled model'.

  1. The association between sedentary behaviors during weekdays and weekend with change in body composition in young adults

    PubMed Central

    Drenowatz, Clemens; DeMello, Madison M.; Shook, Robin P.; Hand, Gregory A.; Burgess, Stephanie; Blair, Steven N.

    2016-01-01

    Background High sedentary time has been considered an important chronic disease risk factor but there is only limited information on the association of specific sedentary behaviors on weekdays and weekend-days with body composition. The present study examines the prospective association of total sedentary time and specific sedentary behaviors during weekdays and the weekend with body composition in young adults. Methods A total of 332 adults (50% male; 27.7 ± 3.7 years) were followed over a period of 1 year. Time spent sedentary, excluding sleep (SED), and in physical activity (PA) during weekdays and weekend-days was objectively assessed every 3 months with a multi-sensor device over a period of at least 8 days. In addition, participants reported sitting time, TV time and non-work related time spent at the computer separately for weekdays and the weekend. Fat mass and fat free mass were assessed via dual x-ray absorptiometry and used to calculate percent body fat (%BF). Energy intake was estimated based on TDEE and change in body composition. Results Cross-sectional analyses showed a significant correlation between SED and body composition (0.18 ≤ r ≤ 0.34). Associations between body weight and specific sedentary behaviors were less pronounced and significant during weekdays only (r ≤ 0.16). Nevertheless, decrease in SED during weekends, rather than during weekdays, was significantly associated with subsequent decrease in %BF (β = 0.06, p <0.01). After adjusting for PA and energy intake, results for SED were no longer significant. Only the association between change in sitting time during weekends and subsequent %BF was independent from change in PA or energy intake (β%BF = 0.04, p = 0.01), while there was no significant association between TV or computer time and subsequent body composition. Conclusions The stronger prospective association between sedentary behavior during weekends with subsequent body composition emphasizes the importance of leisure time behavior in weight management. PMID:29546170

  2. The association between sedentary behaviors during weekdays and weekend with change in body composition in young adults.

    PubMed

    Drenowatz, Clemens; DeMello, Madison M; Shook, Robin P; Hand, Gregory A; Burgess, Stephanie; Blair, Steven N

    2016-01-01

    High sedentary time has been considered an important chronic disease risk factor but there is only limited information on the association of specific sedentary behaviors on weekdays and weekend-days with body composition. The present study examines the prospective association of total sedentary time and specific sedentary behaviors during weekdays and the weekend with body composition in young adults. A total of 332 adults (50% male; 27.7 ± 3.7 years) were followed over a period of 1 year. Time spent sedentary, excluding sleep (SED), and in physical activity (PA) during weekdays and weekend-days was objectively assessed every 3 months with a multi-sensor device over a period of at least 8 days. In addition, participants reported sitting time, TV time and non-work related time spent at the computer separately for weekdays and the weekend. Fat mass and fat free mass were assessed via dual x-ray absorptiometry and used to calculate percent body fat (%BF). Energy intake was estimated based on TDEE and change in body composition. Cross-sectional analyses showed a significant correlation between SED and body composition (0.18 ≤ r ≤ 0.34). Associations between body weight and specific sedentary behaviors were less pronounced and significant during weekdays only ( r ≤ 0.16). Nevertheless, decrease in SED during weekends, rather than during weekdays, was significantly associated with subsequent decrease in %BF ( β = 0.06, p <0.01). After adjusting for PA and energy intake, results for SED were no longer significant. Only the association between change in sitting time during weekends and subsequent %BF was independent from change in PA or energy intake (β %BF = 0.04, p = 0.01), while there was no significant association between TV or computer time and subsequent body composition. The stronger prospective association between sedentary behavior during weekends with subsequent body composition emphasizes the importance of leisure time behavior in weight management.

  3. A real-time moment-tensor inversion system (GRiD-MT-3D) using 3-D Green's functions

    NASA Astrophysics Data System (ADS)

    Nagao, A.; Furumura, T.; Tsuruoka, H.

    2016-12-01

    We developed a real-time moment-tensor inversion system using 3-D Green's functions (GRiD-MT-3D) by improving the current system (GRiD-MT; Tsuruoka et al., 2009), which uses 1-D Green's functions for periods longer than 20 s. Our moment-tensor inversion is applied to the real-time monitoring of earthquakes occurring beneath the Kanto basin area. The basin, which consists of thick sediment layers, lies on the complex subduction of the Philippine Sea Plate and the Pacific Plate, which can significantly affect seismic wave propagation. We compute 3-D Green's functions using finite-difference-method (FDM) simulations considering a 3-D velocity model, which is based on the Japan Integrated Velocity Structure Model (Koketsu et al., 2012), that includes crust, mantle, and subducting plates. The 3-D FDM simulations are computed over a volume of 468 km by 432 km by 120 km in the EW, NS, and depth directions, respectively, discretized into 0.25 km grids. Considering that the minimum S wave velocity of the sedimentary layer is 0.5 km/s, the simulations can compute seismograms up to 0.5 Hz. We calculate Green's functions between 24,700 sources, which are distributed every 0.1° in the horizontal direction and every 9 km in the depth direction, and 13 F-net stations. To compute this large number of Green's functions, we used the EIC parallel computer of ERI. The reciprocity theorem, which switches the source and station positions, is used to reduce total computation costs. It took 156 hours to compute all the Green's functions. Results show that at long periods (T > 15 s), only small differences are observed between the 3-D and 1-D Green's functions, as indicated by high correlation coefficients of 0.9 between the waveforms. However, at shorter periods (T < 10 s), the differences become larger and the correlation coefficients drop to 0.5. The 3-D heterogeneous structure especially affects the Green's functions for ray paths that cross complex geological structures, such as the sedimentary basin or the subducting plates. After incorporation of the 3-D Green's functions in the GRiD-MT-3D system, we compare the results to the former GRiD-MT system to demonstrate the effectiveness of the new system in terms of variance reduction and accuracy of the moment-tensor estimation for much smaller events than the current system handles.
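
    As a small illustration of the waveform-similarity measure reported above (correlation coefficients of about 0.9 at long periods and 0.5 at short periods), the sketch below computes a zero-lag normalized correlation coefficient between two synthetic traces; the traces and all parameters are placeholders, not the study's Green's functions.

      import numpy as np

      def waveform_correlation(u1, u2):
          """Zero-lag normalized correlation coefficient between two waveforms,
          the kind of similarity measure used to compare 1-D and 3-D Green's
          functions band-passed to a common period range."""
          u1 = u1 - u1.mean()
          u2 = u2 - u2.mean()
          return float(np.dot(u1, u2) / (np.linalg.norm(u1) * np.linalg.norm(u2)))

      # Toy traces: the "3-D" trace is the "1-D" trace with a small delay and noise.
      rng = np.random.default_rng(7)
      t = np.linspace(0, 100, 2001)
      g1d = np.sin(2 * np.pi * t / 20.0) * np.exp(-t / 60.0)
      g3d = np.sin(2 * np.pi * (t - 1.5) / 20.0) * np.exp(-t / 60.0) + 0.05 * rng.normal(size=t.size)
      print(f"correlation = {waveform_correlation(g1d, g3d):.2f}")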

  4. Office workers' computer use patterns are associated with workplace stressors.

    PubMed

    Eijckelhof, Belinda H W; Huysmans, Maaike A; Blatter, Birgitte M; Leider, Priscilla C; Johnson, Peter W; van Dieën, Jaap H; Dennerlein, Jack T; van der Beek, Allard J

    2014-11-01

    This field study examined associations between workplace stressors and office workers' computer use patterns. We collected keyboard and mouse activities of 93 office workers (68F, 25M) for approximately two work weeks. Linear regression analyses examined the associations between self-reported effort, reward, overcommitment, and perceived stress and software-recorded computer use duration, number of short and long computer breaks, and pace of input device usage. Daily duration of computer use was, on average, 30 min longer for workers with high compared to low levels of overcommitment and perceived stress. The number of short computer breaks (30 s-5 min long) was approximately 20% lower for those with high compared to low effort and for those with low compared to high reward. These outcomes support the hypothesis that office workers' computer use patterns vary across individuals with different levels of workplace stressors. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  5. Simplified methods for computing total sediment discharge with the modified Einstein procedure

    USGS Publications Warehouse

    Colby, Bruce R.; Hubbell, David Wellington

    1961-01-01

    A procedure was presented in 1950 by H. A. Einstein for computing the total discharge of sediment particles of sizes that are in appreciable quantities in the stream bed. This procedure was modified by the U.S. Geological Survey and adapted to computing the total sediment discharge of a stream on the basis of samples of bed sediment, depth-integrated samples of suspended sediment, streamflow measurements, and water temperature. This paper gives simplified methods for computing total sediment discharge by the modified Einstein procedure. Each of four nomographs appreciably simplifies a major step in the computations. Within the stated limitations, use of the nomographs introduces much less error than is present in either the basic data or the theories on which the computations of total sediment discharge are based. The results are nearly as accurate mathematically as those that could be obtained from the longer and more complex arithmetic and algebraic computations of the Einstein procedure.

  6. Life Expectancy and Human Capital Investments: Evidence from Maternal Mortality Declines. NBER Working Paper No. 13947

    ERIC Educational Resources Information Center

    Jayachandran, Seema; Lleras-Muney, Adriana

    2008-01-01

    Longer life expectancy should encourage human capital accumulation, since a longer time horizon increases the value of investments that pay out over time. Previous work has been unable to determine the empirical importance of this life-expectancy effect due to the difficulty of isolating it from other effects of health on education. We examine a…

  7. How does temperature affect forest "fungus breath"? Diurnal non-exponential temperature-respiration relationship, and possible longer-term acclimation in fungal sporocarps

    Treesearch

    Erik A. Lilleskov

    2017-01-01

    Fungal respiration contributes substantially to ecosystem respiration, yet its field temperature response is poorly characterized. I hypothesized that at diurnal time scales, temperature-respiration relationships would be better described by unimodal than exponential models, and at longer time scales both Q10 and mass-specific respiration at 10 °...

  8. Whole-plant capacitance, embolism resistance and slow transpiration rates all contribute to longer desiccation times in woody angiosperms from arid and wet habitats

    USDA-ARS?s Scientific Manuscript database

    Low water potentials in xylem can result in damaging levels of cavitation, yet little is understood about which hydraulic traits have most influence in delaying the onset of hydraulic dysfunction during periods of drought. We examined three traits contributing to longer desiccation times in excised ...

  9. Racial/ethnic disparities in emergency department waiting time for stroke patients in the United States.

    PubMed

    Karve, Sudeep J; Balkrishnan, Rajesh; Mohammad, Yousef M; Levine, Deborah A

    2011-01-01

    Emergency department waiting time (EDWT), the time from arrival at the ED to evaluation by an emergency physician, is a critical component of acute stroke care. We assessed racial/ethnic differences in EDWT in a national sample of patients with ischemic or hemorrhagic stroke. We identified 543 ED visits for ischemic stroke (International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM] codes 433.x1, 434.xx, and 436.xx) and hemorrhagic stroke (ICD-9-CM codes 430.xx, 431.xx, and 432.xx) in persons age ≥ 18 years representing 2.1 million stroke-related ED visits in the United States using the National Hospital Ambulatory Medical Care Survey for years 1997-2000 and 2003-2005. Using linear regression (outcome, log-transformed EDWT) and logistic regression (outcome, EDWT > 10 minutes, based on National Institute of Neurological Disorders and Stroke guidelines), we adjusted associations between EDWT and race/ethnicity (non-Hispanic whites [designated whites herein], non-Hispanic blacks [blacks], and Hispanics) for age, sex, region, mode of transportation, insurance, hospital characteristics, triage status, hospital admission, stroke type, and survey year. Compared with whites, blacks had a longer EDWT in univariate analysis (67% longer, P = .03) and multivariate analysis (62% longer, P = .03), but Hispanics had a similar EDWT in both univariate analysis (31% longer, P = .65) and multivariate analysis (5% longer, P = .91). Longer EDWT was also seen with nonambulance mode of arrival, urban hospitals, or nonemergency triage. Race was significantly associated with EDWT > 10 minutes (whites, 55% [referent]; blacks, 70% [P = .03]; Hispanics, 62% [P = .53]). These differences persisted after adjustment (blacks: odds ratio [OR] = 2.08, 95% confidence interval [CI] = 1.05-4.09; Hispanics: OR = 1.07, 95% CI = 0.52-2.22). Blacks, but not Hispanics, had significantly longer EDWT than whites. The longer EDWT in black stroke patients may lead to treatment delays and sub-optimal stroke care. Published by Elsevier Inc.

  10. Computational science: shifting the focus from tools to models

    PubMed Central

    Hinsen, Konrad

    2014-01-01

    Computational techniques have revolutionized many aspects of scientific research over the last few decades. Experimentalists use computation for data analysis, processing ever bigger data sets. Theoreticians compute predictions from ever more complex models. However, traditional articles do not permit the publication of big data sets or complex models. As a consequence, these crucial pieces of information no longer enter the scientific record. Moreover, they have become prisoners of scientific software: many models exist only as software implementations, and the data are often stored in proprietary formats defined by the software. In this article, I argue that this emphasis on software tools over models and data is detrimental to science in the long term, and I propose a means by which this can be reversed. PMID:25309728

  11. An E-learning System based on Affective Computing

    NASA Astrophysics Data System (ADS)

    Duo, Sun; Song, Lu Xue

    In recent years, e-learning has become a very popular learning system. However, current e-learning systems cannot instruct students effectively because they do not consider the student's emotional state during instruction. The theory of "Affective computing" can address this problem: it allows the computer's intelligence to be more than purely cognitive. In this paper, we construct an emotionally intelligent e-learning system based on affective computing. A dimensional model is put forward to recognize and analyze the student's emotional state, and a virtual teacher avatar is provided to regulate the student's learning psychology, with the teaching style adapted to the student's personality traits. A "man-to-man" learning environment is built within the system to simulate traditional classroom pedagogy.

  12. Finite-data-size study on practical universal blind quantum computation

    NASA Astrophysics Data System (ADS)

    Zhao, Qiang; Li, Qiong

    2018-07-01

    The universal blind quantum computation with weak coherent pulses protocol is a practical scheme that allows a client to delegate a computation to a remote server while keeping the computation hidden. However, in the practical protocol, a finite data size will influence the preparation efficiency in the remote blind qubit state preparation (RBSP). In this paper, a modified RBSP protocol with two decoy states is studied in the finite-data-size regime. The issue of its statistical fluctuations is analyzed thoroughly. The theoretical analysis and simulation results show that the two-decoy-state case with statistical fluctuation is closer to the asymptotic case than the one-decoy-state case with statistical fluctuation. In particular, the two-decoy-state protocol can achieve a longer communication distance than the one-decoy-state case when statistical fluctuations are taken into account.

  13. Personal use of work computers: distraction versus destruction.

    PubMed

    Mastrangelo, Paul M; Everton, Wendi; Jolton, Jeffery A

    2006-12-01

    To explore definitions, frequencies, and motivation for personal use of work computers, we analyzed 329 employees' responses to an online survey, which asked participants to self-report frequencies for 41 computer behaviors at work. This sample (65% female, 74% European ethnicity, mean age of 36 years) was formed by soliciting participants through Internet Usenet groups, emails, and listservs. Results support a distinction between computer use that is counterproductive and that which is merely not productive. Nonproductive Computer Use occurred more when employees were younger (r = -0.31, p < 0.01), had Internet access at work longer (r = +0.16, p < 0.01), and had faster Internet connections at work than at home (r = +0.14, p < 0.01). Counterproductive Computer Use occurred more when Internet access was newer (r = -0.16, p < 0.01) and employees knew others who had been warned about misuse (r = +0.11, p < 0.05). While most employees who engaged in computer counterproductivity also engaged in computer nonproductivity, the inverse was uncommon, suggesting the need to distinguish between the two when establishing computer policies and Internet accessibility.

  14. Daily rainfall forecasting for one year in a single run using Singular Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Unnikrishnan, Poornima; Jothiprakash, V.

    2018-06-01

    Effective modelling and prediction of smaller time step rainfall is reported to be very difficult owing to its highly erratic nature. Accurate forecast of daily rainfall for longer duration (multi time step) may be exceptionally helpful in the efficient planning and management of water resources systems. Identification of inherent patterns in a rainfall time series is also important for an effective water resources planning and management system. In the present study, Singular Spectrum Analysis (SSA) is utilized to forecast the daily rainfall time series pertaining to Koyna watershed in Maharashtra, India, for 365 days after extracting various components of the rainfall time series such as trend, periodic component, noise and cyclic component. In order to forecast the time series for longer time step (365 days-one window length), the signal and noise components of the time series are forecasted separately and then added together. The results of the study show that the method of SSA could extract the various components of the time series effectively and could also forecast the daily rainfall time series for longer duration such as one year in a single run with reasonable accuracy.
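
    A minimal sketch of the decomposition step that SSA relies on, assuming a basic embed-SVD-reconstruct implementation in numpy on a toy daily series; the window length, number of retained components and synthetic data are illustrative only, and the forecasting of each component (which the study performs before recombining the components) is not shown.

      import numpy as np

      def ssa_decompose(x, window, n_comp=5):
          """Basic Singular Spectrum Analysis: embed the series in a trajectory
          (Hankel) matrix, take its SVD, and reconstruct the leading elementary
          components by diagonal averaging."""
          n = len(x)
          k = n - window + 1
          X = np.column_stack([x[i:i + window] for i in range(k)])   # trajectory matrix
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          comps = []
          for i in range(min(n_comp, len(s))):
              Xi = s[i] * np.outer(U[:, i], Vt[i])                   # rank-1 piece
              rec = np.array([Xi[::-1, :].diagonal(j - (window - 1)).mean()
                              for j in range(n)])                    # diagonal averaging
              comps.append(rec)
          return np.array(comps)

      # Toy daily series (a stand-in, not the Koyna record): trend + annual cycle + noise.
      rng = np.random.default_rng(2)
      t = np.arange(3 * 365)
      x = 0.01 * t + 5.0 * np.sin(2 * np.pi * t / 365.0) + rng.normal(0.0, 1.0, t.size)

      comps = ssa_decompose(x, window=365, n_comp=5)
      signal = comps[:3].sum(axis=0)          # leading components ~ trend + periodicity
      residual = x - signal                   # remainder treated as noise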

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreedharan, Priya

    The sudden release of toxic contaminants that reach indoor spaces can be hazardous to building occupants. To respond effectively, the contaminant release must be quickly detected and characterized to determine unobserved parameters, such as release location and strength. Characterizing the release requires solving an inverse problem. Designing a robust real-time sensor system that solves the inverse problem is challenging because the fate and transport of contaminants is complex, sensor information is limited and imperfect, and real-time estimation is computationally constrained. This dissertation uses a system-level approach, based on a Bayes Monte Carlo framework, to develop sensor-system design concepts and methods. I describe three investigations that explore complex relationships among sensors, network architecture, interpretation algorithms, and system performance. The investigations use data obtained from tracer gas experiments conducted in a real building. The influence of individual sensor characteristics on the sensor-system performance for binary-type contaminant sensors is analyzed. Performance tradeoffs among sensor accuracy, threshold level and response time are identified; these attributes could not be inferred without a system-level analysis. For example, more accurate but slower sensors are found to outperform less accurate but faster sensors. Secondly, I investigate how the sensor-system performance can be understood in terms of contaminant transport processes and the model representation that is used to solve the inverse problem. The determination of release location and mass is shown to be related to and constrained by transport and mixing time scales. These time scales explain performance differences among different sensor networks. For example, the effect of longer sensor response times is comparably less for releases with longer mixing time scales. The third investigation explores how information fusion from heterogeneous sensors may improve the sensor-system performance and offset the need for more contaminant sensors. Physics- and algorithm-based frameworks are presented for selecting and fusing information from noncontaminant sensors. The frameworks are demonstrated with door-position sensors, which are found to be more useful in natural airflow conditions, but which cannot compensate for poor placement of contaminant sensors. The concepts and empirical findings have the potential to help in the design of sensor systems for more complex building systems. The research has broader relevance to additional environmental monitoring problems, fault detection and diagnostics, and system design.
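
    A minimal sketch of the Bayes Monte Carlo idea described above: sample candidate release parameters from a prior, weight each sample by the likelihood of the observed sensor readings under a forward model, and summarize the resulting posterior. The simple forward model, sensor positions, readings and noise level are all toy assumptions, not the dissertation's models or data.

      import numpy as np

      rng = np.random.default_rng(3)

      # Toy forward model: predicted concentration at a sensor given a release at
      # position x0 with strength q (a stand-in for a real transport model).
      def predicted_conc(sensor_x, release_x, strength):
          return strength * np.exp(-0.5 * ((sensor_x - release_x) / 3.0) ** 2)

      sensors_x = np.array([2.0, 5.0, 9.0])
      observed = np.array([0.8, 2.9, 0.4])          # hypothetical readings
      sigma = 0.3                                    # assumed measurement noise (std)

      # Bayes Monte Carlo: draw candidate (location, strength) pairs from a prior,
      # weight each by the likelihood of the observations, normalize the weights.
      n = 50_000
      loc = rng.uniform(0.0, 12.0, n)
      strength = rng.uniform(0.0, 5.0, n)
      pred = predicted_conc(sensors_x[:, None], loc[None, :], strength[None, :])
      log_lik = -0.5 * np.sum(((observed[:, None] - pred) / sigma) ** 2, axis=0)
      w = np.exp(log_lik - log_lik.max())
      w /= w.sum()

      print("posterior mean location:", np.sum(w * loc))
      print("posterior mean strength:", np.sum(w * strength))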

  16. Drought impacts on vegetation activity in the Mediterranean region: An assessment using remote sensing data and multi-scale drought indicators

    NASA Astrophysics Data System (ADS)

    Gouveia, C. M.; Trigo, R. M.; Beguería, S.; Vicente-Serrano, S. M.

    2017-04-01

    The present work analyzes the drought impacts on vegetation over the entire Mediterranean basin, with the purpose of determining the vegetation communities, regions and seasons in which vegetation is driven by drought. Our approach is based on the use of remote sensing data and a multi-scalar drought index. Correlation maps between fields of monthly Normalized Difference Vegetation Index (NDVI) and the Standardized Precipitation-Evapotranspiration Index (SPEI) at different time scales (1-24 months) were computed for representative months of winter (Feb), spring (May), summer (Aug) and fall (Nov). Results for the period from 1982 to 2006 show large areas highly controlled by drought, albeit with large spatial and seasonal differences, with a maximum influence in August and a minimum in February. The highest correlation values are observed in February for the 3-month time scale and in May for 6 and 12 months. The higher control of drought on vegetation in February and May is obtained mainly over the drier vegetation communities (Mediterranean Dry and Desertic) at shorter time scales (3 to 9 months). Additionally, in February the impact of drought on vegetation is lower for Temperate Oceanic and Continental vegetation types and takes place at longer time scales (18-24 months). The dependence of the drought time-scale response on water balance, as obtained through a simple difference between precipitation and reference evapotranspiration, varies with vegetation communities. During February and November low water balance values correspond to shorter time scales over dry vegetation communities, whereas high water balance values imply longer time scales over Temperate Oceanic and Continental areas. The strong control of drought on vegetation observed for Mediterranean Dry and Desertic vegetation types located over areas with high negative values of water balance emphasizes the need for an early warning drought system covering the entire Mediterranean basin. We are confident that these results will provide a useful tool for drought management plans and play a relevant role in mitigating the impact of drought episodes.
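
    A small sketch of the correlation-versus-time-scale analysis described above, for a single pixel: a standardized k-month accumulation of the climatic water balance stands in for the multi-scalar drought index (the real SPEI fits a probability distribution), and the NDVI series is synthetic; all values are illustrative assumptions, not the study's data.

      import numpy as np

      rng = np.random.default_rng(4)

      # Toy stand-ins (one pixel): 25 years of August NDVI and a monthly climatic
      # water balance D = P - PET covering the 24 months ending in August of each year.
      years = 25
      D = rng.normal(0, 30, (years, 24))
      ndvi_aug = 0.4 + 0.002 * D[:, -6:].sum(axis=1) + rng.normal(0, 0.02, years)

      best = (None, -np.inf)
      for k in range(1, 25):                           # drought time scales 1..24 months
          index_k = D[:, -k:].sum(axis=1)              # k-month accumulated water balance
          index_k = (index_k - index_k.mean()) / index_k.std()
          r = np.corrcoef(index_k, ndvi_aug)[0, 1]
          if r > best[1]:
              best = (k, r)

      print(f"time scale with max NDVI correlation: {best[0]} months (r = {best[1]:.2f})")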

  17. Factors associated with prolonged time to treatment failure with fulvestrant 500 mg in patients with post-menopausal estrogen receptor-positive advanced breast cancer: a sub-group analysis of the JBCRG-C06 Safari study.

    PubMed

    Kawaguchi, Hidetoshi; Masuda, Norikazu; Nakayama, Takahiro; Aogi, Kenjiro; Anan, Keisei; Ito, Yoshinori; Ohtani, Shoichiro; Sato, Nobuaki; Saji, Shigehira; Takano, Toshimi; Tokunaga, Eriko; Nakamura, Seigo; Hasegawa, Yoshie; Hattori, Masaya; Fujisawa, Tomomi; Morita, Satoshi; Yamaguchi, Miki; Yamashita, Hiroko; Yamashita, Toshinari; Yamamoto, Yutaka; Yotsumoto, Daisuke; Toi, Masakazu; Ohno, Shinji

    2018-01-01

    The JBCRG-C06 Safari study showed that earlier fulvestrant 500 mg (F500) use, a longer time from diagnosis to F500 use, and no prior palliative chemotherapy were associated with significantly longer time to treatment failure (TTF) among Japanese patients with estrogen receptor-positive (ER+) advanced breast cancer (ABC). The objective of this sub-group analysis was to further examine data from the Safari study, focusing on ER+ and human epidermal growth factor receptor-negative (HER2-) cases. The Safari study (UMIN000015168) was a retrospective, multi-center cohort study, conducted in 1,072 patients in Japan taking F500 for ER+ ABC. The sub-analysis included only patients administered F500 as second-line or later therapy (n = 960). Of these, 828 patients were HER2-. Multivariate analysis showed that advanced age (≥65 years; p = .035), longer time (≥3 years) from ABC diagnosis to F500 use (p < .001), no prior chemotherapy (p < .001), and F500 treatment line (p < .001) were correlated with prolonged TTF (median = 5.39 months). In ER+/HER2- patients receiving F500 as a second-line or later therapy, treatment line, advanced age, no prior palliative chemotherapy use, and a longer period from ABC diagnosis to F500 use were associated with longer TTF.

  18. Form and function of topologically associating genomic domains in budding yeast.

    PubMed

    Eser, Umut; Chandler-Brown, Devon; Ay, Ferhat; Straight, Aaron F; Duan, Zhijun; Noble, William Stafford; Skotheim, Jan M

    2017-04-11

    The genome of metazoan cells is organized into topologically associating domains (TADs) that have similar histone modifications, transcription level, and DNA replication timing. Although similar structures appear to be conserved in fission yeast, computational modeling and analysis of high-throughput chromosome conformation capture (Hi-C) data have been used to argue that the small, highly constrained budding yeast chromosomes could not have these structures. In contrast, herein we analyze Hi-C data for budding yeast and identify 200-kb scale TADs, whose boundaries are enriched for transcriptional activity. Furthermore, these boundaries separate regions of similarly timed replication origins connecting the long-known effect of genomic context on replication timing to genome architecture. To investigate the molecular basis of TAD formation, we performed Hi-C experiments on cells depleted for the Forkhead transcription factors, Fkh1 and Fkh2, previously associated with replication timing. Forkhead factors do not regulate TAD formation, but do promote longer-range genomic interactions and control interactions between origins near the centromere. Thus, our work defines spatial organization within the budding yeast nucleus, demonstrates the conserved role of genome architecture in regulating DNA replication, and identifies a molecular mechanism specifically regulating interactions between pericentric origins.

  19. Characteristic coupling time between axial and transverse energy modes for anti-hydrogen in magnetostatic traps

    NASA Astrophysics Data System (ADS)

    Zhong, Mike; Fajans, Joel

    2016-10-01

    For upcoming ALPHA collaboration laser spectroscopy and gravity experiments, the nature of the chaotic trajectories of individual antihydrogen atoms trapped in the octupole Ioffe magnetic trap is of importance. Of particular interest for experimental design is the coupling time between the axial and transverse modes of energy for the antihydrogen atoms. Using Monte Carlo simulations of semiclassical dynamics of antihydrogen trajectories, we quantify this characteristic coupling time between axial and transverse modes of energy. There appear to be two classes of trajectories: for orbits whose axial energy is higher than 10% of the total energy, the axial energy varies chaotically on the order of 1-10 seconds, whereas for orbits whose axial energy is around 10% of the total energy, the axial energy remains nearly constant on the order of 1000 seconds or longer. Furthermore, we search through parameter space to find parameters of the magnetic trap that minimize and maximize this characteristic coupling time. This work was supported by the UC Berkeley Summer Undergraduate Research Fellowship, the Berkeley Research Computing program, the Department of Energy contract DE-FG02-06ER54904, and the National Science Foundation Grant 1500538-PHY.

  20. Low-dose x-ray tomography through a deep convolutional neural network

    DOE PAGES

    Yang, Xiaogang; De Andrade, Vincent; Scullin, William; ...

    2018-02-07

    Synchrotron-based X-ray tomography offers the potential of rapid large-scale reconstructions of the interiors of materials and biological tissue at fine resolution. However, for radiation-sensitive samples, there remain fundamental trade-offs between damaging samples during longer acquisition times and reducing signals with shorter acquisition times. We present a deep convolutional neural network (CNN) method that increases the acquired X-ray tomographic signal by at least a factor of 10 during low-dose fast acquisition by improving the quality of recorded projections. Short-exposure-time projections enhanced with the CNN show signal-to-noise ratios similar to those of long-exposure-time projections, and much lower noise and more structural information than low-dose fast acquisition without the CNN. We optimized this approach using simulated samples and further validated it on experimental nano-computed tomography data of radiation-sensitive mouse brains acquired with a transmission X-ray microscope. We demonstrate that automated algorithms can reliably trace brain structures in datasets collected with low-dose CNN. As a result, this method can be applied to other tomographic or scanning based X-ray imaging techniques and has great potential for studying faster dynamics in specimens.
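
    As a rough illustration of the approach (not the authors' network), the sketch below defines a small convolutional denoiser in PyTorch that maps short-exposure projections to estimates of their long-exposure counterparts and trains it with a mean-squared-error loss on placeholder tensors; the architecture, sizes and training details are assumptions.

      import torch
      import torch.nn as nn

      class ProjectionDenoiser(nn.Module):
          """Minimal convolutional denoiser: maps a noisy short-exposure projection
          to an estimate of its long-exposure counterpart (illustrative only)."""
          def __init__(self, width=32, depth=5):
              super().__init__()
              layers = [nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True)]
              for _ in range(depth - 2):
                  layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
              layers += [nn.Conv2d(width, 1, 3, padding=1)]
              self.net = nn.Sequential(*layers)

          def forward(self, x):
              return self.net(x)

      # Training-loop sketch on placeholder (short-exposure, long-exposure) pairs.
      model = ProjectionDenoiser()
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.MSELoss()
      noisy = torch.randn(4, 1, 128, 128)      # placeholder low-dose projections
      clean = torch.randn(4, 1, 128, 128)      # placeholder long-exposure projections
      for _ in range(10):
          opt.zero_grad()
          loss = loss_fn(model(noisy), clean)
          loss.backward()
          opt.step()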

  1. Sintering of viscous droplets under surface tension

    NASA Astrophysics Data System (ADS)

    Wadsworth, Fabian B.; Vasseur, Jérémie; Llewellin, Edward W.; Schauroth, Jenny; Dobson, Katherine J.; Scheu, Bettina; Dingwell, Donald B.

    2016-04-01

    We conduct experiments to investigate the sintering of high-viscosity liquid droplets. Free-standing cylinders of spherical glass beads are heated above their glass transition temperature, causing them to densify under surface tension. We determine the evolving volume of the bead pack at high spatial and temporal resolution. We use these data to test a range of existing models. We extend the models to account for the time-dependent droplet viscosity that results from non-isothermal conditions, and to account for non-zero final porosity. We also present a method to account for the initial distribution of radii of the pores interstitial to the liquid spheres, which allows the models to be used with no fitting parameters. We find a good agreement between the models and the data for times less than the capillary relaxation timescale. For longer times, we find an increasing discrepancy between the data and the model as the Darcy outgassing time-scale approaches the sintering timescale. We conclude that the decreasing permeability of the sintering system inhibits late-stage densification. Finally, we determine the residual, trapped gas volume fraction at equilibrium using X-ray computed tomography and compare this with theoretical values for the critical gas volume fraction in systems of overlapping spheres.

  2. Low-dose x-ray tomography through a deep convolutional neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaogang; De Andrade, Vincent; Scullin, William

    Synchrotron-based X-ray tomography offers the potential of rapid large-scale reconstructions of the interiors of materials and biological tissue at fine resolution. However, for radiation-sensitive samples, there remain fundamental trade-offs between damaging samples during longer acquisition times and reducing signals with shorter acquisition times. We present a deep convolutional neural network (CNN) method that increases the acquired X-ray tomographic signal by at least a factor of 10 during low-dose fast acquisition by improving the quality of recorded projections. Short-exposure-time projections enhanced with the CNN show signal-to-noise ratios similar to those of long-exposure-time projections, and much lower noise and more structural information than low-dose fast acquisition without the CNN. We optimized this approach using simulated samples and further validated it on experimental nano-computed tomography data of radiation-sensitive mouse brains acquired with a transmission X-ray microscope. We demonstrate that automated algorithms can reliably trace brain structures in datasets collected with low-dose CNN. As a result, this method can be applied to other tomographic or scanning based X-ray imaging techniques and has great potential for studying faster dynamics in specimens.

  3. Two dimensional kinetic analysis of electrostatic harmonic plasma waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca-Pongutá, E. C.; Ziebell, L. F.; Gaelzer, R.

    2016-06-15

    Electrostatic harmonic Langmuir waves are virtual modes excited in weakly turbulent plasmas, first observed in early laboratory beam-plasma experiments as well as in rocket-borne active experiments in space. However, their unequivocal presence was confirmed through computer simulated experiments and subsequently theoretically explained. The peculiarity of harmonic Langmuir waves is that while their existence requires a nonlinear response, their excitation mechanism and subsequent early time evolution are governed by an essentially linear process. One of the unresolved theoretical issues regards the role of the nonlinear wave-particle interaction process over longer evolution time periods. Another outstanding issue is that existing theories for these modes are limited to one-dimensional space. The present paper carries out a two-dimensional theoretical analysis of fundamental and (first) harmonic Langmuir waves for the first time. The result shows that the harmonic Langmuir wave is essentially governed by a (quasi)linear process and that nonlinear wave-particle interaction plays no significant role in the time evolution of the wave spectrum. The numerical solutions of the two-dimensional wave spectra for fundamental and harmonic Langmuir waves are also found to be consistent with those obtained by the direct particle-in-cell simulation method reported in the literature.

  4. Variation of Time Domain Failure Probabilities of Jack-up with Wave Return Periods

    NASA Astrophysics Data System (ADS)

    Idris, Ahmad; Harahap, Indra S. H.; Ali, Montassir Osman Ahmed

    2018-04-01

    This study evaluated failure probabilities of jack-up units in the framework of time-dependent reliability analysis, using uncertainty from different sea states representing different return periods of the design wave. The surface elevation for each sea state was represented by the Karhunen-Loeve expansion method using the eigenfunctions of prolate spheroidal wave functions in order to obtain the wave load. The stochastic wave load was propagated on a simplified jack-up model developed in commercial software to obtain the structural response due to the wave loading. Analysis of the stochastic response to determine the failure probability of excessive deck displacement in the framework of time-dependent reliability analysis was performed by developing Matlab code on a personal computer. Results from the study indicated that the failure probability increases with increasing severity of the sea state representing a longer return period. Although the results obtained are in agreement with the results of a study of a similar jack-up model using a time-independent method at higher values of maximum allowable deck displacement, they contrast at lower values of the criterion, where that study reported that failure probability decreases with increasing severity of the sea state.
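
    A generic Karhunen-Loeve sketch, assuming a discretized covariance and its eigendecomposition, that synthesizes random surface-elevation records; the Gaussian covariance, record length and truncation rule are placeholders, and the study itself expands the surface elevation in prolate spheroidal wave functions rather than in covariance eigenvectors.

      import numpy as np

      rng = np.random.default_rng(5)

      # Discretize time and build a (placeholder) covariance of surface elevation.
      t = np.linspace(0.0, 100.0, 512)
      tau = np.abs(t[:, None] - t[None, :])
      cov = 1.5**2 * np.exp(-(tau / 8.0) ** 2)          # variance * Gaussian correlation

      # Karhunen-Loeve expansion: eta(t) ~ sum_k sqrt(lambda_k) * xi_k * phi_k(t),
      # with xi_k independent standard normal variables.
      lam, phi = np.linalg.eigh(cov)
      lam = np.clip(lam, 0.0, None)                     # guard against tiny negatives
      lam, phi = lam[::-1], phi[:, ::-1]                # sort eigenvalues descending
      m = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99) + 1   # keep 99% of variance

      xi = rng.standard_normal(m)
      eta = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)        # one synthetic elevation record
      print(f"retained {m} KL modes; std of realization = {eta.std():.2f}")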

  5. Ovarian tissue cryopreservation by stepped vitrification and monitored by X-ray computed tomography.

    PubMed

    Corral, Ariadna; Clavero, Macarena; Gallardo, Miguel; Balcerzyk, Marcin; Amorim, Christiani A; Parrado-Gallego, Ángel; Dolmans, Marie-Madeleine; Paulini, Fernanda; Morris, John; Risco, Ramón

    2018-04-01

    Ovarian tissue cryopreservation is, in most cases, the only fertility preservation option available for female patients soon to undergo gonadotoxic treatment. To date, cryopreservation of ovarian tissue has been carried out by both the traditional slow freezing method and vitrification, but even with the best techniques, there is still a considerable loss of follicle viability. In this report, we investigated a stepped cryopreservation procedure which combines features of slow cooling and vitrification (hereafter called stepped vitrification). Bovine ovarian tissue was used as a tissue model. Stepwise increments of the Me2SO concentration coupled with stepwise drops in temperature, in a device specifically designed for this purpose, were combined with X-ray computed tomography to investigate loading times at each step, by monitoring the attenuation of the radiation, which is proportional to Me2SO permeation. Viability analysis was performed in warmed tissues by immunohistochemistry. Although further viability tests should be conducted after transplantation, preliminary results are very promising. Four protocols were explored. Two of them showed a poor permeation of the vitrification solution (P1 and P2). The other two (P3 and P4), with higher permeation, were studied in greater detail. Of these two protocols, P4, with a longer permeation time at -40 °C, showed the same histological integrity after warming as fresh controls. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Streamflow characteristics at hydrologic bench-mark stations

    USGS Publications Warehouse

    Lawrence, C.L.

    1987-01-01

    The Hydrologic Bench-Mark Network was established in the 1960's. Its objectives were to document the hydrologic characteristics of representative undeveloped watersheds nationwide and to provide a comparative base for studying the effects of man on the hydrologic environment. The network, which consists of 57 streamflow gaging stations and one lake-stage station in 39 States, is planned for permanent operation. This interim report describes streamflow characteristics at each bench-mark site and identifies time trends in annual streamflow that have occurred during the data-collection period. The streamflow characteristics presented for each streamflow station are (1) flood and low-flow frequencies, (2) flow duration, (3) annual mean flow, and (4) the serial correlation coefficient for annual mean discharge. In addition, Kendall's tau is computed as an indicator of time trend in annual discharges. The period of record for most stations was 13 to 17 years, although several stations had longer periods of record. The longest period was 65 years for Merced River near Yosemite, Calif. Records of flow at 6 of 57 streamflow sites in the network showed a statistically significant change in annual mean discharge over the period of record, based on computations of Kendall's tau. The values of Kendall's tau ranged from -0.533 to 0.648. An examination of climatological records showed that changes in precipitation were most likely the cause for the change in annual mean discharge.
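
    As an illustration of the trend indicator used above, the sketch below computes Kendall's tau between year and annual mean discharge with scipy; the discharge values are hypothetical, not data from a bench-mark station.

      import numpy as np
      from scipy.stats import kendalltau

      # Hypothetical annual mean discharges (cfs) for a 15-year record.
      years = np.arange(1970, 1985)
      annual_q = np.array([112, 98, 105, 120, 93, 88, 101, 97, 85, 90,
                           82, 95, 78, 88, 76], dtype=float)

      tau, p_value = kendalltau(years, annual_q)
      print(f"Kendall's tau = {tau:.3f}, p = {p_value:.3f}")
      # A significantly negative tau would indicate a downward trend in annual flow.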

  7. The video watermarking container: efficient real-time transaction watermarking

    NASA Astrophysics Data System (ADS)

    Wolf, Patrick; Hauer, Enrico; Steinebach, Martin

    2008-02-01

    When transaction watermarking is used to secure sales in online shops by embedding transaction-specific watermarks, the major challenge is embedding efficiency: maximum speed with minimal workload. This is true for all types of media. Video transaction watermarking presents a double challenge. Not only are video files larger than, for example, music files of the same playback time, but video watermarking algorithms also have a higher complexity than algorithms for other types of media. Therefore, online shops that want to protect their videos by transaction watermarking are faced with the problem that their servers need to work harder and longer for every sold medium in comparison to audio sales. In the past, many algorithms responded to this challenge by reducing their complexity. But this usually results in a loss of either robustness or transparency. This paper presents a different approach. The container technology separates watermark embedding into two stages: a preparation stage and a finalization stage. In the preparation stage, the video is divided into embedding segments. For each segment, one copy marked with "0" and another one marked with "1" is created. This stage is computationally expensive but only needs to be done once. In the finalization stage, the watermarked video is assembled from the embedding segments according to the watermark message. This stage is very fast and involves no complex computations. It thus allows efficient creation of individually watermarked video files.
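
    A minimal sketch of the finalization stage described above: given pre-marked copies of every embedding segment, the per-transaction video is assembled by selecting segments according to the watermark bits. Segment contents, sizes and the byte-level concatenation are illustrative assumptions; a real container would respect the video bitstream format.

      # Two-stage "container" idea: the expensive preparation stage produces, for
      # every embedding segment, one copy carrying bit 0 and one carrying bit 1;
      # the cheap finalization stage just picks segments per transaction.
      def finalize_watermarked_video(segments_bit0, segments_bit1, watermark_bits):
          """Assemble a per-transaction video from pre-marked segments."""
          assert len(segments_bit0) == len(segments_bit1) == len(watermark_bits)
          out = bytearray()
          for seg0, seg1, bit in zip(segments_bit0, segments_bit1, watermark_bits):
              out += seg1 if bit else seg0      # copy only, no signal processing needed
          return bytes(out)

      # Example: 8 segments, hypothetical transaction id 0xB2 embedded as 8 bits.
      segments_bit0 = [bytes([0x00]) * 1024 for _ in range(8)]   # placeholder segment data
      segments_bit1 = [bytes([0x01]) * 1024 for _ in range(8)]
      tx_bits = [int(b) for b in format(0xB2, "08b")]
      video = finalize_watermarked_video(segments_bit0, segments_bit1, tx_bits)
      print(len(video), "bytes assembled for this transaction")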

  8. Autonomic Computing: Freedom or a Threat?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fink, Glenn A.; Frincke, Deb

    2007-12-01

    No longer is the question whether autonomic computing will gain general acceptance but when. Experts expect autonomic computing to be widely used within 10 years. When it does become mainstream, how will autonomics change system administration and corporations, and will the change be for better or worse? The answer depends on how well we anticipate the limitations of what autonomic systems are suited to do, whether we can collectively address the vulnerabilities of autonomic approaches as we draw upon the advantages, and whether administrators, companies, partners, and users are prepared for the transition. This article presents some design considerations to address the first two issues and some suggested survival techniques for the third.

  9. An infrastructure with a unified control plane to integrate IP into optical metro networks to provide flexible and intelligent bandwidth on demand for cloud computing

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Hall, Trevor

    2012-12-01

    The Internet is entering an era of cloud computing to provide more cost-effective, eco-friendly and reliable services to consumer and business users, and the nature of Internet traffic will undergo a fundamental transformation. Consequently, the current Internet will no longer suffice for serving cloud traffic in metro areas. This work proposes an infrastructure with a unified control plane that integrates simple packet aggregation technology with optical express through the interoperation between IP routers and electrical traffic controllers in optical metro networks. The proposed infrastructure provides flexible, intelligent, and eco-friendly bandwidth on demand for cloud computing in metro areas.

  10. Visuospatial and Attentional Abilities Predict Driving Simulator Performance Among Older HIV-infected Adults

    PubMed Central

    Foley, J. M.; Gooding, A. L.; Thames, A. D.; Ettenhofer, M. L.; Kim, M. S.; Castellon, S. A.; Marcotte, T. D.; Sadek, J. R.; Heaton, R. K.; van Gorp, W. G.; Hinkin, C. H.

    2013-01-01

    Objectives To examine the effects of aging and neuropsychological (NP) impairment on driving simulator performance within a human immunodeficiency virus (HIV)-infected cohort. Methods Participants included 79 HIV-infected adults (n = 58 > age 50, n = 21 ≤ 40) who completed an NP battery and a personal computer-based driving simulator task. Outcome variables included total completion time (time) and number of city blocks to complete the task (blocks). Results Compared to the younger group, the older group was less efficient in their route finding (blocks over optimum: 25.9 [20.1] vs 14.4 [16.9]; P = .02) and took longer to complete the task (time: 1297.6 [577.6] vs 804.4 [458.5] seconds; P = .001). Regression models within the older adult group indicated that visuospatial abilities (blocks: b = –0.40, P < .001; time: b = –0.40, P = .001) and attention (blocks: b = –0.49, P = .001; time: b = –0.42, P = .006) independently predicted simulator performance. The NP-impaired group performed more poorly on both time and blocks, compared to the NP normal group. Conclusions Older HIV-infected adults may be at risk of driving-related functional compromise secondary to HIV-associated neurocognitive decline. PMID:23314403

  11. A fast algorithm for computer aided collimation gamma camera (CACAO)

    NASA Astrophysics Data System (ADS)

    Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Franck, D.; Pihet, P.; Ballongue, P.

    2000-08-01

    The computer aided collimation gamma camera (CACAO) is aimed at breaking down the resolution-sensitivity trade-off of the conventional parallel-hole collimator. It uses larger and longer holes, with an added linear movement during the acquisition sequence. A dedicated algorithm including shift and sum, deconvolution, parabolic filtering and rotation is described. Examples of reconstruction are given. This work shows that a simple and fast algorithm, based on a diagonally dominant approximation of the problem, can be derived. It gives a practical solution to the CACAO reconstruction problem.
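
    A toy, one-dimensional illustration of the "shift and sum" step named above, assuming the collimator displacement at each acquisition position is known; the deconvolution, parabolic filtering and rotation steps of the actual algorithm are not shown, and all data are synthetic.

      import numpy as np

      def shift_and_sum(projections, shifts):
          """Toy 'shift and sum' step: align detector readings acquired at the
          successive collimator positions and accumulate them (1-D illustration)."""
          n = projections.shape[1]
          acc = np.zeros(n)
          for row, s in zip(projections, shifts):
              acc += np.roll(row, -s)           # undo the known linear displacement
          return acc / len(shifts)

      # Hypothetical acquisition: a point source seen through a moving collimator.
      rng = np.random.default_rng(6)
      true = np.zeros(64)
      true[30] = 100.0
      shifts = np.arange(8)                      # collimator positions (pixels)
      projs = np.array([np.roll(true, s) + rng.poisson(2.0, 64) for s in shifts])
      print(int(np.argmax(shift_and_sum(projs, shifts))))   # ~30, the source location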

  12. Reconsidering Return-to-Play Times: A Broader Perspective on Concussion Recovery

    PubMed Central

    D’Lauro, Christopher; Johnson, Brian R.; McGinty, Gerald; Allred, C. Dain; Campbell, Darren E.; Jackson, Jonathan C.

    2018-01-01

    Background: Return-to-play protocols describe stepwise, graduated recoveries for safe return from concussion; however, studies that comprehensively track return-to-play time are expensive to administer and heavily sampled from elite male contact-sport athletes. Purpose: To retrospectively assess probable recovery time for collegiate patients to return to play after concussion, especially for understudied populations, such as women and nonelite athletes. Study Design: Cohort study; Level of evidence, 3. Methods: Medical staff at a military academy logged a total of 512 concussion medical records over 38 months. Of these, 414 records included complete return-to-play protocols with return-to-play time, sex, athletic status, cause, and other data. Results: Overall mean return to play was 29.4 days. Sex and athletic status both affected return-to-play time. Men showed significantly shorter return to play than women, taking 24.7 days (SEM, 1.5 days) versus 35.5 days (SEM, 2.7 days) (P < .001). Intercollegiate athletes also reported quicker return-to-play times than nonintercollegiate athletes: 25.4 days (SEM, 2.6 days) versus 34.7 days (SEM, 1.6 days) (P = .002). These variables did not significantly interact. Conclusion: Mean recovery time across all groups (29.4 days) showed considerably longer return to play than the most commonly cited concussion recovery time window (7-10 days) for collegiate athletes. Understudied groups, such as women and nonelite athletes, demonstrated notably longer recovery times. The diversity of this sample population was associated with longer return-to-play times; it is unclear how other population-specific factors may have contributed. These inclusive return-to-play windows may indicate longer recovery times outside the population of elite athletes. PMID:29568786

  13. Electromechanical response times in the knee muscles in young and old women.

    PubMed

    Szpala, Agnieszka; Rutkowska-Kucharska, Alicja

    2017-12-01

    The aim of the study was to compare electromechanical response times [total reaction time (TRT), pre-motor time (PMT), and electromechanical delay] in the knee muscles in groups of young and older women during release of peak torque (PT). Fifty women (one group approximately 20 years of age and the other approximately 60 years of age) participated in the study. PT and electromyographic activity were measured for flexors and extensors of the right and left knee in static conditions in response to a visual stimulus. Significantly longer TRTs (P = 0.05) and PMTs (P = 0.05) were found in the group of older women compared with the younger participants. Asymmetry was found between the older and the younger group of women in PT of knee flexors. The significantly longer TRT and PMT phases in the group of older women suggest a longer time for information processing in the central nervous system in older people. Muscle Nerve 56: E147-E153, 2017. © 2017 Wiley Periodicals, Inc.

  14. Measuring Sizes & Shapes of Galaxies

    NASA Astrophysics Data System (ADS)

    Kusmic, Samir; Holwerda, Benne Willem

    2018-01-01

    Galaxy morphometrics are computed with software, which cuts down the time needed to categorize galaxies. However, surveys coming in the next decade are expected to count upwards of a thousand times more galaxies than current surveys, so the time spent simply processing data will grow accordingly. In this research, we looked into how to reduce the time it takes to obtain morphometric parameters for classifying galaxies, and how precise the results are compared with other findings. The software of choice is Source Extractor, known for its short run times and recently updated to compute morphometric parameters. The test runs CANDELS data, five fields in the J and H filters, through Source Extractor and then cross-correlates the resulting catalog with a GALFIT catalog obtained from van der Wel et al. 2014, and with spectroscopic redshift data. With Source Extractor, we examine how many galaxies are counted, how precise the computation is, how to classify the morphometry, and how the results compare with other findings. The run time was approximately 10 hours when cross-correlated with GALFIT and approximately 8 hours with the spectroscopic redshifts; these were expected times, as Source Extractor is already faster than GALFIT by a large factor. Source Extractor's recovery was also large: 79.24% of GALFIT's count. However, the precision is highly variable. We created two thresholds to address this and settled on an unbiased isophotal area threshold as the better choice. Even with that threshold the spread was relatively wide, although comparing the parameters with redshift showed qualitative agreement, if not agreement in the numerical values. From these results, we see Source Extractor as a good first look, to be followed up by other software.
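
    The cross-correlation of catalogs described above is essentially a positional match between detection lists. The sketch below is not the authors' pipeline but a minimal astropy example of that kind of match; the file names, column names, and the 1-arcsecond match radius are assumptions made for illustration.

    ```python
    from astropy import units as u
    from astropy.coordinates import SkyCoord
    from astropy.table import Table

    # Hypothetical file and column names; the real CANDELS catalogs would supply their own.
    se_cat = Table.read("sextractor_catalog.fits")
    galfit_cat = Table.read("galfit_vanderwel2014.fits")

    se_coords = SkyCoord(ra=se_cat["ALPHA_J2000"] * u.deg, dec=se_cat["DELTA_J2000"] * u.deg)
    gf_coords = SkyCoord(ra=galfit_cat["RA"] * u.deg, dec=galfit_cat["DEC"] * u.deg)

    # For each GALFIT source, find the nearest Source Extractor detection.
    idx, sep2d, _ = gf_coords.match_to_catalog_sky(se_coords)
    recovered = sep2d < 1.0 * u.arcsec        # assumed match radius of 1 arcsecond
    print(f"{recovered.sum()} of {len(gf_coords)} GALFIT sources recovered "
          f"({100.0 * recovered.sum() / len(gf_coords):.2f}%)")
    ```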

  15. Hybrid quantum computing with ancillas

    NASA Astrophysics Data System (ADS)

    Proctor, Timothy J.; Kendon, Viv

    2016-10-01

    In the quest to build a practical quantum computer, it is important to use efficient schemes for enacting the elementary quantum operations from which quantum computer programs are constructed. The opposing requirements of well-protected quantum data and fast quantum operations must be balanced to maintain the integrity of the quantum information throughout the computation. One important approach to quantum operations is to use an extra quantum system - an ancilla - to interact with the quantum data register. Ancillas can mediate interactions between separated quantum registers, and by using fresh ancillas for each quantum operation, data integrity can be preserved for longer. This review provides an overview of the basic concepts of the gate model quantum computer architecture, including the different possible forms of information encodings - from base two up to continuous variables - and a more detailed description of how the main types of ancilla-mediated quantum operations provide efficient quantum gates.
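
    As a concrete, minimal illustration of an ancilla-mediated operation (a standard construction, not one taken from this review), the numpy sketch below shows that a CNOT between two data qubits can be built entirely from interactions between each data qubit and a third, ancilla qubit, with the ancilla returned to |0⟩ at the end so a fresh ancilla can be used for the next operation.

    ```python
    import numpy as np

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    P0 = np.array([[1, 0], [0, 0]])   # |0><0|
    P1 = np.array([[0, 0], [0, 1]])   # |1><1|

    def kron(*ops):
        out = np.array([[1.0]])
        for op in ops:
            out = np.kron(out, op)
        return out

    def cnot(control, target, n=3):
        # CNOT on n qubits with the given control and target indices.
        ops0 = [I] * n; ops0[control] = P0
        ops1 = [I] * n; ops1[control] = P1; ops1[target] = X
        return kron(*ops0) + kron(*ops1)

    # Data-ancilla interactions only: data qubit 0 <-> ancilla (2), ancilla <-> data qubit 1.
    mediated = cnot(0, 2) @ cnot(2, 1) @ cnot(0, 2)
    direct = cnot(0, 1)                     # the direct data-data CNOT, for comparison

    # With the ancilla prepared in |0>, both circuits act identically on the data qubits,
    # and the mediated version leaves the ancilla back in |0>.
    for x0 in (0, 1):
        for x1 in (0, 1):
            state = np.zeros(8); state[4 * x0 + 2 * x1] = 1.0   # |x0 x1 0>
            assert np.allclose(mediated @ state, direct @ state)
    print("ancilla-mediated CNOT matches the direct CNOT on all data basis states")
    ```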

  16. Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.

    PubMed

    Kriegeskorte, Nikolaus

    2015-11-24

    Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.
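
    For readers unfamiliar with the architecture class discussed above, the sketch below is a generic, minimal convolutional feedforward classifier written with PyTorch; it is not a model from this paper, and the layer sizes and 10-class output are purely illustrative choices.

    ```python
    import torch
    from torch import nn

    class TinyConvNet(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 32x32 -> 16x16
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 16x16 -> 8x8
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 8 * 8, num_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    # One forward pass on a batch of random 32x32 RGB images.
    model = TinyConvNet()
    logits = model(torch.randn(4, 3, 32, 32))
    print(logits.shape)   # torch.Size([4, 10])
    ```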

  17. What factors are associated with myopia in young adults? A survey study in Taiwan Military Conscripts.

    PubMed

    Lee, Yin-Yang; Lo, Chung-Ting; Sheu, Shwu-Jiuan; Lin, Julia L

    2013-02-05

    We investigated the independent impact of potential risk factors on myopia in young adults. A survey study was conducted with male military conscripts aged 18 to 24 years between February 2010 and March 2011 in Taiwan. The participants were examined using non-cycloplegic autorefraction and biometry. The participants provided data about potential risk factors, including age, parental myopia, education, near work, outdoor activity, and urbanization. Myopia was defined as a mean spherical equivalent of the right eye of ≤ -0.5 diopters (D). Among 5145 eligible participants, 5048 (98.11%) had refraction and questionnaire data available; 2316 (45.88%) of these received axial length examination. The prevalence of myopia was 86.1% with a mean refractive error of -3.66 D (SD = 2.73) and an axial length of 25.40 mm (SD = 1.38). Older age, having myopic parents, higher education level, more time spent reading, nearer reading distance, less outdoor activity, and higher urbanization level were associated with myopia and longer axial length. More computer use was related to longer axial length. All risk factors associated with myopia were also predictors of high myopia (≤ -6.0 D), with the exception of outdoor activity. Finally, an interaction analysis showed that shorter axial length was associated with more time spent outdoors only at a high urbanization level. Older age, parental myopia, higher education level, more near work, less outdoor activity, and higher urbanization level were independent predictors of myopia. These data provide evidence of the multifactorial nature of myopia in young men in Taiwan.

  18. Optimizing Fire Department Operations Through Work Schedule Analysis, Alternative Staffing, and Nonproductive Time Reduction

    DTIC Science & Technology

    2014-09-01

    hour work shift. A longer shift offers more time off between shifts, which can improve the employee's family life and personal emotional stress. On... "Enhancing Work/Life Balance," Conn. L. Rev. 42 (2010): 1081–1527. 19 Nicole Jansen et al., "Need for Recovery from Work: Evaluating Short-Term Effects...24-hour work shift.

  19. Is Intra-Articular Steroid Injection to the Temporomandibular Joint for Juvenile Idiopathic Arthritis More Effective and Efficient When Performed With Image Guidance?

    PubMed

    Resnick, Cory M; Vakilian, Pouya M; Kaban, Leonard B; Peacock, Zachary S

    2017-04-01

    To compare short-term outcomes and procedure times for intra-articular steroid injection (IASI) to the temporomandibular joint (TMJ) with and without the use of intraoperative image guidance for patients with juvenile idiopathic arthritis (JIA). This is a retrospective study of children with JIA who underwent TMJ IASI at Boston Children's Hospital (Boston, MA). Patients were divided into groups according to IASI technique: 1) "landmark" group if performed by an oral and maxillofacial surgeon using an anatomic landmark technique with no intraoperative image guidance or 2) "image-guided" group if performed by an interventional radiologist using intraoperative ultrasound and computed tomography. Predictor variables included IASI technique (landmark vs image guided), age, gender, JIA subtype, category of medications for arthritis, and presence of family history of autoimmune disease. Outcome variables were changes in patient-reported pain, maximal incisal opening (MIO), synovial enhancement ratio (ER), and total procedure time. Forty-five patients with 71 injected TMJs were included. Twenty-two patients with 36 injected TMJs were in the landmark group and 23 patients with 35 injected joints were in the image-guided group. There were no relevant differences in age, gender, family history of rheumatologic disease, or disease subtype between groups. There were no differences in resolution of pain (P = 1.00), increase in MIO (P = .975), or decrease in ER (P = .492) between groups, but procedure times averaged 49 minutes longer for the image-guided group (P < .008). There were no statistical differences in short-term outcomes, but procedure times were longer for the image-guided group. Although specific indications for the use of image guidance might exist, routine use of this procedure cannot be justified. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  20. [Body proportions of healthy and short stature adolescent girls].

    PubMed

    Milde, Katarzyna; Tomaszewski, Paweł; Majcher, Anna; Pyrżak, Beata; Stupnicki, Romuald

    2011-01-01

    Regularly conducted assessment of body proportions is important, as early detection of possible growth disorders and immediate preventive action may allow a child to reach the optimum, genetically conditioned level of development. The aim was to assess body proportions of adolescent girls, healthy or with growth deficiency. Three groups were studied: 104 healthy, short-statured girls (body height below the 10th percentile), 84 girls with Turner's syndrome (ZT) and 263 healthy girls of normal stature (between the 25th and 75th percentiles), all aged 11-15 years. The following measurements were conducted according to common anthropometric standards: body height, sitting body height, shoulder width, upper extremity length and lower extremity length, the last computed as the difference between standing and sitting body heights. All measurements were converted to logarithms and allometric linear regressions vs log body height were computed. The Turner girls proved to have allometrically shorter legs (p<0.001) and wider shoulders (p<0.001) compared with both groups of healthy girls, and longer upper extremities (p<0.001) compared with the girls of normal stature. Healthy, short-statured girls had longer lower extremities (p<0.001) compared with the other groups; they also had wider shoulders (p<0.001) and longer upper extremities (p<0.001) compared with healthy girls of normal height. Allometric relations of anthropometric measurements enable a deeper insight into body proportions, especially in the growth period. The presented discrimination of Turner girls may serve as a screening test and a recommendation for further clinical treatment.
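
    The allometric regression mentioned above amounts to a linear fit between log-transformed variables, whose slope is the allometric exponent. The sketch below is not the authors' analysis; it uses numpy with made-up measurement values purely to illustrate the computation.

    ```python
    import numpy as np

    # Illustrative (fabricated) measurements in centimetres.
    body_height = np.array([142.0, 148.5, 153.0, 158.2, 161.7, 165.3])
    leg_length  = np.array([ 68.0,  72.1,  75.0,  78.4,  80.2,  82.5])

    # Allometric model: leg_length = a * body_height**b, fit as a line in log-log space.
    slope, intercept = np.polyfit(np.log(body_height), np.log(leg_length), 1)
    print(f"allometric exponent b: {slope:.2f}")
    print(f"scaling constant a: {np.exp(intercept):.3f}")
    # Predicted leg length for a given height: np.exp(intercept) * height**slope
    ```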
