Sample records for decrease computation time

  1. On computational methods for crashworthiness

    NASA Technical Reports Server (NTRS)

    Belytschko, T.

    1992-01-01

    The evolution of computational methods for crashworthiness and related fields is described and linked with the decreasing cost of computational resources and with improvements in computation methodologies. The latter includes more effective time integration procedures and more efficient elements. Some recent developments in methodologies and future trends are also summarized. These include multi-time step integration (or subcycling), further improvements in elements, adaptive meshes, and the exploitation of parallel computers.

  2. Fast parallel algorithms that compute transitive closure of a fuzzy relation

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik YA.

    1993-01-01

    The notion of a transitive closure of a fuzzy relation is very useful for clustering in pattern recognition, for fuzzy databases, etc. The original algorithm proposed by L. Zadeh (1971) requires the computation time O(n^4), where n is the number of elements in the relation. In 1974, J. C. Dunn proposed an O(n^2) algorithm. Since we must compute n(n-1)/2 different values s(a, b) (a not equal to b) that represent the fuzzy relation, and we need at least one computational step to compute each of these values, we cannot compute all of them in less than O(n^2) steps. So, Dunn's algorithm is in this sense optimal. For small n, it is acceptable. However, for big n (e.g., for big databases), it is still a lot, so it would be desirable to decrease the computation time (this problem was formulated by J. Bezdek). Since this decrease cannot be achieved on a sequential computer, the only way to do it is to use a computer with several processors working in parallel. We show that on a parallel computer, transitive closure can be computed in time O((log_2 n)^2).
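
    The record describes the algorithms only at the level of their complexity. As an illustration (not the parallel O((log_2 n)^2) scheme above, and with a toy relation made up for this sketch), the code below computes a fuzzy transitive closure by repeated max-min composition with NumPy:

```python
import numpy as np

def maxmin_compose(r, s):
    # (r o s)[i, j] = max_k min(r[i, k], s[k, j])
    return np.max(np.minimum(r[:, :, None], s[None, :, :]), axis=1)

def transitive_closure(r, tol=1e-12):
    # repeatedly replace R with R union (R o R) until it stops changing
    closure = r.copy()
    while True:
        composed = np.maximum(closure, maxmin_compose(closure, closure))
        if np.max(np.abs(composed - closure)) < tol:
            return composed
        closure = composed

# toy reflexive, symmetric fuzzy similarity relation on three elements
r = np.array([[1.0, 0.8, 0.4],
              [0.8, 1.0, 0.5],
              [0.4, 0.5, 1.0]])
print(transitive_closure(r))   # entry (0, 2) rises to 0.5 via element 1
```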

  3. The Overdominance of Computers

    ERIC Educational Resources Information Center

    Monke, Lowell W.

    2006-01-01

    Most schools are unwilling to consider decreasing computer use at school because they fear that without screen time, students will not be prepared for the demands of a high-tech 21st century. Monke argues that having young children spend a significant amount of time on computers in school is harmful, particularly when children spend so much…

  4. An Empirical Measure of Computer Security Strength for Vulnerability Remediation

    ERIC Educational Resources Information Center

    Villegas, Rafael

    2010-01-01

    Remediating all vulnerabilities on computer systems in a timely and cost effective manner is difficult given that the window of time between the announcement of a new vulnerability and an automated attack has decreased. Hence, organizations need to prioritize the vulnerability remediation process on their computer systems. The goal of this…

  5. Efficiency analysis of numerical integrations for finite element substructure in real-time hybrid simulation

    NASA Astrophysics Data System (ADS)

    Wang, Jinting; Lu, Liqiao; Zhu, Fei

    2018-01-01

    The finite element (FE) method is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases so that the scale of the numerical substructure model can increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes more pronounced as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even with a large time step and a large time delay.
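
    As a concrete illustration of the kind of explicit scheme being compared, the sketch below steps a single-degree-of-freedom system with the central difference method; the matrices, loading and the 1-DOF check are assumptions for illustration, not the authors' RTHS substructure code.

```python
import numpy as np

def central_difference(m, c, k, f, dt, u0, v0):
    """Explicit central-difference time stepping for M u'' + C u' + K u = f(t).
    m, c, k are (n, n) arrays; f is (steps, n); a diagonal C keeps each step cheap."""
    steps, n = f.shape
    u = np.zeros((steps, n))
    a0 = np.linalg.solve(m, f[0] - c @ v0 - k @ u0)       # initial acceleration
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0              # fictitious u_{-1}
    u[0] = u0
    lhs = m / dt**2 + c / (2 * dt)                        # effective "mass" matrix
    for i in range(steps - 1):
        rhs = (f[i] - (k - 2 * m / dt**2) @ u[i]
               - (m / dt**2 - c / (2 * dt)) @ u_prev)
        u_next = np.linalg.solve(lhs, rhs)
        u_prev, u[i + 1] = u[i], u_next
    return u

# 1-DOF check: undamped oscillator, unit mass, k = (2*pi)^2 -> 1 Hz free vibration
m = np.eye(1); c = np.zeros((1, 1)); k = (2 * np.pi) ** 2 * np.eye(1)
dt, steps = 0.001, 2000
f = np.zeros((steps, 1))
u = central_difference(m, c, k, f, dt, np.array([1.0]), np.array([0.0]))
print(u[1000])   # approximately [1.0] again after one full period (t = 1 s)
```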

  6. Bifurcated method and apparatus for floating point addition with decreased latency time

    DOEpatents

    Farmwald, Paul M.

    1987-01-01

    Apparatus for decreasing the latency time associated with floating point addition and subtraction in a computer, using a novel bifurcated, pre-normalization/post-normalization approach that distinguishes between differences of floating point exponents.

  7. Sources of spurious force oscillations from an immersed boundary method for moving-body problems

    NASA Astrophysics Data System (ADS)

    Lee, Jongho; Kim, Jungwoo; Choi, Haecheon; Yang, Kyung-Soo

    2011-04-01

    When a discrete-forcing immersed boundary method is applied to moving-body problems, it produces spurious force oscillations on a solid body. In the present study, we identify two sources of these force oscillations. One source is the spatial discontinuity in the pressure across the immersed boundary when a grid point located inside a solid body becomes a fluid point due to the body motion. The addition of a mass source/sink together with momentum forcing proposed by Kim et al. [J. Kim, D. Kim, H. Choi, An immersed-boundary finite volume method for simulations of flow in complex geometries, Journal of Computational Physics 171 (2001) 132-150] reduces the spurious force oscillations by alleviating this pressure discontinuity. The other source is the temporal discontinuity in the velocity at grid points where fluid becomes solid due to the body motion. The magnitude of this velocity discontinuity decreases as the grid spacing near the immersed boundary decreases. Four moving-body problems are simulated by varying the grid spacing at a fixed computational time step and at a constant CFL number, respectively. It is found that the spurious force oscillations decrease with decreasing grid spacing and increasing computational time step size, but they depend more on the grid spacing than on the computational time step size.

  8. WINCADRE (COMPUTER-AIDED DATA REVIEW AND EVALUATION)

    EPA Science Inventory

    WinCADRE (Computer-Aided Data Review and Evaluation) is a Windows -based program designed for computer-assisted data validation. WinCADRE is a powerful tool which significantly decreases data validation turnaround time. The electronic-data-deliverable format has been designed ...

  9. Body image dissatisfaction, physical activity and screen-time in Spanish adolescents.

    PubMed

    Añez, Elizabeth; Fornieles-Deu, Albert; Fauquet-Ars, Jordi; López-Guimerà, Gemma; Puntí-Vidal, Joaquim; Sánchez-Carracedo, David

    2018-01-01

    This cross-sectional study contributes to the literature on whether body dissatisfaction is a barrier or facilitator to engaging in physical activity, and investigates the impact of mass-media messages, via computer-time, on body dissatisfaction. High-school students (N = 1501) reported their physical activity, computer-time (homework/leisure) and body dissatisfaction. Researchers measured students' weight and height. Analyses revealed that body dissatisfaction was negatively associated with physical activity in both genders, whereas computer-time was associated only with girls' body dissatisfaction. Specifically, as computer-homework increased, body dissatisfaction decreased; as computer-leisure increased, body dissatisfaction increased. Weight-related interventions should improve body image and physical activity simultaneously, while interventions on critical consumption of mass media should include a computer component.

  10. WINCADRE INORGANIC (WINDOWS COMPUTER-AIDED DATA REVIEW AND EVALUATION)

    EPA Science Inventory

    WinCADRE (Computer-Aided Data Review and Evaluation) is a Windows -based program designed for computer-assisted data validation. WinCADRE is a powerful tool which significantly decreases data validation turnaround time. The electronic-data-deliverable format has been designed in...

  11. Symplectic molecular dynamics simulations on specially designed parallel computers.

    PubMed

    Borstnik, Urban; Janezic, Dusanka

    2005-01-01

    We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches means that less time is required and fewer steps are needed and so enables fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
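
    The record only names the method, so the sketch below illustrates the general split-integration idea on a one-dimensional toy oscillator: the stiff harmonic part is propagated analytically and only the slow anharmonic force is applied as numerical half-kicks, allowing a longer step. The Hamiltonian and parameters are assumptions, not the SISM molecular dynamics code.

```python
import numpy as np

def split_step(x, p, dt, m=1.0, k=100.0, eps=0.5):
    """One step for H = p^2/(2m) + (k/2) x^2 + (eps/4) x^4:
    numeric half-kick (slow quartic force), analytic harmonic rotation, numeric half-kick."""
    slow_force = lambda q: -eps * q**3
    p = p + 0.5 * dt * slow_force(x)
    w = np.sqrt(k / m)                     # stiff harmonic part solved analytically
    x, p = (x * np.cos(w * dt) + p / (m * w) * np.sin(w * dt),
            p * np.cos(w * dt) - m * w * x * np.sin(w * dt))
    p = p + 0.5 * dt * slow_force(x)
    return x, p

x, p, dt = 1.0, 0.0, 0.25            # dt exceeds the ~2/w limit of a fully numeric explicit step
energy = lambda x, p: p**2 / 2 + 50.0 * x**2 + 0.125 * x**4
e0 = energy(x, p)
for _ in range(10000):
    x, p = split_step(x, p, dt)
print(abs(energy(x, p) - e0) / abs(e0))   # relative energy error, a sanity check on the splitting
```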

  12. The viability of ADVANTG deterministic method for synthetic radiography generation

    NASA Astrophysics Data System (ADS)

    Bingham, Andrew; Lee, Hyoung K.

    2018-07-01

    Fast simulation techniques to generate synthetic radiographic images of high resolution are helpful when new radiation imaging systems are designed. However, the standard stochastic approach requires lengthy run times, with poorer statistics at higher resolution. The viability of a deterministic approach to synthetic radiography image generation was therefore explored. The aim was to analyze the computational time decrease over the stochastic method. ADVANTG was compared to MCNP in multiple scenarios, including a small radiography system prototype, to simulate high resolution radiography images. By using the ADVANTG deterministic code to simulate radiography images, the computational time was found to decrease by a factor of 10 to 13 compared to the MCNP stochastic approach while retaining image quality.

  13. Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Brown, Douglas L.

    1994-01-01

    In order to decrease the overall computational time requirements of a spatially-marching parabolized Navier-Stokes finite-difference computer code when applied to turbulent fluid flow, a wall-function methodology, originally proposed by R. Barnwell, was implemented. This numerical effort increases computational speed and calculates reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is analytically determined from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar-viscous sublayer. Consequently, a substantially increased computational integration step size is achieved, resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat plate test cases from Mach 2 to Mach 8. These test cases are analytically verified employing: (1) Eckert reference method solutions, (2) experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar-viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.

  14. Parallel approach in RDF query processing

    NASA Astrophysics Data System (ADS)

    Vajgl, Marek; Parenica, Jan

    2017-07-01

    The parallel approach is nowadays a very cheap way to increase computational power, owing to the availability of multithreaded computational units. Such hardware has become a typical part of today's personal computers and notebooks and is widely spread. This contribution deals with experiments on how the evaluation of a computationally complex algorithm for inference over RDF data can be parallelized on graphics cards to decrease the computation time.

  15. Computer-mediated support group intervention for parents.

    PubMed

    Bragadóttir, Helga

    2008-01-01

    The purpose of this study was to evaluate the feasibility of a computer-mediated support group (CMSG) intervention for parents whose children had been diagnosed with cancer. An evaluative one-group, before-and-after research design. A CMSG, an unstructured listserve group where participants used their E-mail for communication, was conducted over a 4-month period. Participation in the CMSG was offered to parents in Iceland whose children had completed cancer treatment in the past 5 years. Outcome measures were done: before the intervention (Time 1), after 2 months of intervention (Time 2) and after 4 months of intervention (Time 3) when the project ended. Measures included: demographic and background variables; health related vulnerability factors of parents: anxiety, depression, somatization, and stress; perceived mutual support; and use of the CMSG. Data were collected from November 2002 to June 2003. Twenty-one of 58 eligible parents participated in the study, with 71% retention rate for both post-tests. Mothers' depression decreased significantly from Time 2 to Time 3 (p<.03). Fathers' anxiety decreased significantly from Time 1 to Time 3 (p<.01). Fathers' stress decreased significantly from Time 2 to Time 3 (p<.02). To some extent, mothers and fathers perceived mutual support from participating in the CMSG. Both mothers and fathers used the CMSG by reading messages. Messages were primarily written by mothers. Study findings support further development of CMSGs for parents whose children have been diagnosed with cancer. Using computer technology for support is particularly useful for dispersed populations and groups that have restrictions on their time. Computer-mediated support groups have been shown to be a valuable addition to, or substitute for, a traditional face-to-face mutual support group and might suit both genders equally.

  16. Influence of computer work under time pressure on cardiac activity.

    PubMed

    Shi, Ping; Hu, Sijung; Yu, Hongliu

    2015-03-01

    Computer users are often under stress when required to complete computer work within a required time. Work stress has repeatedly been associated with an increased risk for cardiovascular disease. The present study examined the effects of time pressure workload during computer tasks on cardiac activity in 20 healthy subjects. Heart rate, time domain and frequency domain indices of heart rate variability (HRV) and Poincaré plot parameters were compared among five computer tasks and two rest periods. Faster heart rate and decreased standard deviation of R-R interval were noted in response to computer tasks under time pressure. The Poincaré plot parameters showed significant differences between different levels of time pressure workload during computer tasks, and between computer tasks and the rest periods. In contrast, no significant differences were identified for the frequency domain indices of HRV. The results suggest that the quantitative Poincaré plot analysis used in this study was able to reveal the intrinsic nonlinear nature of the autonomically regulated cardiac rhythm. Specifically, heightened vagal tone occurred during the relaxation computer tasks without time pressure. In contrast, the stressful computer tasks with added time pressure stimulated cardiac sympathetic activity. Copyright © 2015 Elsevier Ltd. All rights reserved.
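
    The abstract does not spell out its Poincaré descriptors, but the standard ones are SD1 and SD2; the sketch below computes them from a short, made-up RR-interval series as an illustration of the kind of quantitative Poincaré analysis referred to.

```python
import numpy as np

def poincare_sd(rr_ms):
    """SD1/SD2 of the Poincaré plot built from successive RR intervals (ms).
    SD1 reflects short-term (vagally mediated) variability, SD2 longer-term variability."""
    x, y = np.asarray(rr_ms[:-1], float), np.asarray(rr_ms[1:], float)
    sd1 = np.std((y - x) / np.sqrt(2), ddof=1)   # spread across the identity line
    sd2 = np.std((y + x) / np.sqrt(2), ddof=1)   # spread along the identity line
    return sd1, sd2

rr = [812, 790, 805, 770, 801, 795, 760, 788]    # hypothetical RR series in ms
print(poincare_sd(rr))
```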

  17. Computer usage and task-switching during resident's working day: Disruptive or not?

    PubMed

    Méan, Marie; Garnier, Antoine; Wenger, Nathalie; Castioni, Julien; Waeber, Gérard; Marques-Vidal, Pedro

    2017-01-01

    Recent implementation of electronic health records (EHR) has dramatically changed medical ward organization. While residents in general internal medicine use EHR systems half of their working time, whether computer usage impacts residents' workflow remains uncertain. We aimed to observe the frequency of task-switches occurring during resident's work and to assess whether computer usage was associated with task-switching. In a large Swiss academic university hospital, we conducted, between May 26 and July 24, 2015 a time-motion study to assess how residents in general internal medicine organize their working day. We observed 49 day and 17 evening shifts of 36 residents, amounting to 697 working hours. During day shifts, residents spent 5.4 hours using a computer (mean total working time: 11.6 hours per day). On average, residents switched 15 times per hour from a task to another. Task-switching peaked between 8:00-9:00 and 16:00-17:00. Task-switching was not associated with resident's characteristics and no association was found between task-switching and extra hours (Spearman r = 0.220, p = 0.137 for day and r = 0.483, p = 0.058 for evening shifts). Computer usage occurred more frequently at the beginning or ends of day shifts and was associated with decreased overall task-switching. Task-switching occurs very frequently during resident's working day. Despite the fact that residents used a computer half of their working time, computer usage was associated with decreased task-switching. Whether frequent task-switches and computer usage impact the quality of patient care and resident's work must be evaluated in further studies.

  18. Approximate Bayesian Computation in the estimation of the parameters of the Forbush decrease model

    NASA Astrophysics Data System (ADS)

    Wawrzynczak, A.; Kopka, P.

    2017-12-01

    Realistic modeling of a complicated phenomenon such as the Forbush decrease of the galactic cosmic ray intensity is quite a challenging task. One aspect is the numerical solution of the Fokker-Planck equation in five-dimensional space (three spatial variables, the time and the particle energy). The second difficulty arises from a lack of detailed knowledge about the spatial and time profiles of the parameters responsible for the creation of the Forbush decrease. Among these parameters, the diffusion coefficient plays the central role. Assessment of the correctness of the proposed model can be done only by comparison of the model output with the experimental observations of the galactic cosmic ray intensity. We apply the Approximate Bayesian Computation (ABC) methodology to match the Forbush decrease model to experimental data. The ABC method is becoming increasingly exploited for dynamic complex problems in which the likelihood function is costly to compute. The main idea of all ABC methods is to accept a sample as an approximate posterior draw if its associated modeled data are close enough to the observed data. In this paper, we present an application of the Sequential Monte Carlo Approximate Bayesian Computation algorithm scanning the space of the diffusion coefficient parameters. The proposed algorithm is adopted to create the model of the Forbush decrease observed by the neutron monitors at the Earth in March 2002. The model of the Forbush decrease is based on the stochastic approach to the solution of the Fokker-Planck equation.
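
    As an illustration of the core ABC idea stated above (accept a parameter draw when the simulated data are close enough to the observations), the sketch below is a plain rejection-ABC loop on a toy Gaussian forward model; it is not the Fokker-Planck Forbush model nor the Sequential Monte Carlo variant used in the record.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=200):
    # toy forward model standing in for the expensive Forbush-decrease simulation
    return rng.normal(theta, 1.0, size=n)

observed = rng.normal(1.5, 1.0, size=200)
obs_summary = observed.mean()

def abc_rejection(n_draws=5000, eps=0.05):
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5, 5)                  # draw from the prior
        summary = simulate(theta).mean()            # summary statistic of simulated data
        if abs(summary - obs_summary) < eps:        # accept if "close enough"
            accepted.append(theta)
    return np.array(accepted)

post = abc_rejection()
print(post.mean(), post.std())   # approximate posterior for theta, centered near 1.5
```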

  19. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    PubMed Central

    Zhang, Yeqing; Wang, Meiling; Li, Yafeng

    2018-01-01

    For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully. PMID:29495301
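
    The acquisition metric described above (ratio of the highest to the second-highest correlation peak) can be sketched with a toy binary code and FFT-based circular correlation over code phase; the code length, noise level and threshold logic here are assumptions, not the paper's L2C processing chain.

```python
import numpy as np

rng = np.random.default_rng(1)

def circular_correlate(received, replica):
    # correlation over every code phase at once via the FFT (circular correlation)
    return np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(replica))).real

def acquisition_metric(received, replica):
    corr = np.abs(circular_correlate(received, replica))
    peaks = np.sort(corr)[::-1]
    return peaks[0] / peaks[1], int(np.argmax(corr))   # peak ratio, code-phase estimate

code = rng.choice([-1.0, 1.0], size=1023)               # toy PRN-like spreading code
true_shift = 317
received = np.roll(code, true_shift) + 0.5 * rng.normal(size=code.size)
ratio, shift = acquisition_metric(received, code)
print(ratio, shift)   # a large ratio suggests acquisition; shift should be 317
```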

  20. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time.

    PubMed

    Zhang, Yeqing; Wang, Meiling; Li, Yafeng

    2018-02-24

    For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90-94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7-5.6% per millisecond, with most satellites acquired successfully.

  1. Relationship between movement time and hip moment impulse in the sagittal plane during sit-to-stand movement: a combined experimental and computer simulation study.

    PubMed

    Inai, Takuma; Takabayashi, Tomoya; Edama, Mutsuaki; Kubo, Masayoshi

    2018-04-27

    The association between repetitive hip moment impulse and the progression of hip osteoarthritis is a recently recognized area of study. A sit-to-stand movement is essential for daily life and requires hip extension moment. Although a change in the sit-to-stand movement time may influence the hip moment impulse in the sagittal plane, this effect has not been examined. The purpose of this study was to clarify the relationship between sit-to-stand movement time and hip moment impulse in the sagittal plane. Twenty subjects performed the sit-to-stand movement at a self-selected natural speed. The hip, knee, and ankle joint angles obtained from experimental trials were used to perform two computer simulations. In the first simulation, the actual sit-to-stand movement time obtained from the experiment was entered. In the second simulation, sit-to-stand movement times ranging from 0.5 to 4.0 s at intervals of 0.25 s were entered. Hip joint moments and hip moment impulses in the sagittal plane during sit-to-stand movements were calculated for both computer simulations. The reliability of the simulation model was confirmed, as indicated by the similarities in the hip joint moment waveforms (r = 0.99) and the hip moment impulses in the sagittal plane between the first computer simulation and the experiment. In the second computer simulation, the hip moment impulse in the sagittal plane decreased with a decrease in the sit-to-stand movement time, although the peak hip extension moment increased with a decrease in the movement time. These findings clarify the association between the sit-to-stand movement time and hip moment impulse in the sagittal plane and may contribute to the prevention of the progression of hip osteoarthritis.

  2. Video and Computer Games in the '90s: Children's Time Commitment and Game Preference.

    ERIC Educational Resources Information Center

    Buchman, Debra D.; Funk, Jeanne B.

    1996-01-01

    Examined electronic game-playing habits of 900 children. Found that time commitment to game-playing decreased from fourth to eighth grade. Boys played more than girls. Preference for general entertainment games increased across grades while educational games preference decreased. Violent game popularity remained consistent; fantasy violence was…

  3. Application of multi-grid method on the simulation of incremental forging processes

    NASA Astrophysics Data System (ADS)

    Ramadan, Mohamad; Khaled, Mahmoud; Fourment, Lionel

    2016-10-01

    Numerical simulation has become essential in manufacturing large parts by incremental forging processes. It is a splendid tool for revealing physical phenomena; behind the scenes, however, an expensive bill must be paid, namely the computational time. That is why many techniques have been developed to decrease the computational time of numerical simulation. The Multi-Grid method is a numerical procedure that reduces the computational time of a numerical calculation by performing the resolution of the system of equations on several meshes of decreasing size, which smooths the low frequencies of the solution, as well as its high frequencies, more quickly. In this paper a Multi-Grid method is applied to the cogging process in the software Forge 3. The study is carried out using an increasing number of degrees of freedom. The results show that the calculation time is divided by two for a mesh of 39,000 nodes. The method is promising, especially if coupled with the Multi-Mesh method.
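
    To make the coarse-grid idea concrete, the sketch below runs a two-grid cycle (damped-Jacobi smoothing plus a coarse-grid correction) on a 1D Poisson model problem. It is a generic textbook-style illustration under assumed data, not the Forge 3 forging implementation.

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, w=2/3):
    # damped Jacobi smoothing for -u'' = f with homogeneous Dirichlet boundaries
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def two_grid_cycle(u, f, h):
    u = jacobi(u, f, h)                                  # pre-smooth (damps high frequencies)
    r = residual(u, f, h)
    n_c = u.size // 2 + 1
    r_c = np.zeros(n_c)                                  # full-weighting restriction to coarse grid
    r_c[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    # solve the coarse error equation directly (small system, spacing 2h)
    A_c = (np.diag(2 * np.ones(n_c - 2)) - np.diag(np.ones(n_c - 3), 1)
           - np.diag(np.ones(n_c - 3), -1)) / (2 * h) ** 2
    e_c = np.zeros(n_c)
    e_c[1:-1] = np.linalg.solve(A_c, r_c[1:-1])
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), e_c)  # prolong by linear interpolation
    u += e                                               # coarse-grid correction
    return jacobi(u, f, h)                               # post-smooth

n = 129                                                  # fine-grid points (including boundaries)
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
f = np.pi**2 * np.sin(np.pi * x)                         # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))             # error down to discretization level
```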

  4. Accelerating sino-atrium computer simulations with graphic processing units.

    PubMed

    Zhang, Hong; Xiao, Zheng; Lin, Shien-fong

    2015-01-01

    Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphic Processing Units (GPUs) is proposed in a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partition. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased 62% with respect to a serial program running on CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations.

  5. Radiotherapy Monte Carlo simulation using cloud computing technology.

    PubMed

    Poole, C M; Cornelius, I; Trapp, J V; Langton, C M

    2012-12-01

    Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows for Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware, as a proof of principle.
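
    A tiny numeric sketch of the scaling statement above, assuming simple per-machine billing rounded up to whole hours (an assumption; the paper's exact cost model is not reproduced here):

```python
import math

def completion_time(total_cpu_hours, n_machines):
    return total_cpu_hours / n_machines            # ideal 1/n scaling

def relative_cost(total_cpu_hours, n_machines):
    # each machine is billed whole hours, so cost is lowest when n divides the total
    billed_hours_per_machine = math.ceil(total_cpu_hours / n_machines)
    return n_machines * billed_hours_per_machine / total_cpu_hours

total = 12  # hours of serial simulation
for n in (3, 4, 5, 6, 7, 12):
    print(n, completion_time(total, n), round(relative_cost(total, n), 2))
# n = 5 or 7 wastes part of a billed hour on every machine; n = 3, 4, 6, 12 do not
```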

  6. Reverse time migration by Krylov subspace reduced order modeling

    NASA Astrophysics Data System (ADS)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

    Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations associated with the performance cost of reverse time migration are the intensive computation of the forward and backward simulations, time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we propose an algorithm that can address all of the aforementioned factors. Our proposed method benefits from the Krylov subspace method to compute certain mode shapes of the velocity model as an orthogonal basis for the reduced order modeling. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude, compared with reverse time migration by the finite element method.

  7. Computer numerical control grinding of spiral bevel gears

    NASA Technical Reports Server (NTRS)

    Scott, H. Wayne

    1991-01-01

    The development of Computer Numerical Control (CNC) spiral bevel gear grinding has paved the way for major improvement in the production of precision spiral bevel gears. The object of the program was to decrease the setup, maintenance of setup, and pattern development time by 50 percent of the time required on conventional spiral bevel gear grinders. Details of the process are explained.

  8. Robust Duplication with Comparison Methods in Microcontrollers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather Marie; Baker, Zachary Kent; Fairbanks, Thomas D.

    Commercial microprocessors could be useful computational platforms in space systems, as long as the risk is bounded. Many spacecraft are computationally constrained because all of the computation is done on a single radiation-hardened microprocessor. It is possible that a commercial microprocessor could be used for configuration, monitoring and background tasks that are not mission critical. Most commercial microprocessors are affected by radiation, including single-event effects (SEEs) that could be destructive to the component or corrupt the data. Part screening can help designers avoid components with destructive failure modes, and mitigation can suppress data corruption. We have been experimenting with a method for masking radiation-induced faults through the software executing on the microprocessor. While triple-modular redundancy (TMR) techniques are very effective at masking faults in software, the increased amount of execution time to complete the computation is not desirable. In this article we present a technique for combining duplication with compare (DWC) with TMR that decreases observable errors by as much as 145 times with only a 2.35-times decrease in performance.

  9. Robust Duplication with Comparison Methods in Microcontrollers

    DOE PAGES

    Quinn, Heather Marie; Baker, Zachary Kent; Fairbanks, Thomas D.; ...

    2016-01-01

    Commercial microprocessors could be useful computational platforms in space systems, as long as the risk is bounded. Many spacecraft are computationally constrained because all of the computation is done on a single radiation-hardened microprocessor. It is possible that a commercial microprocessor could be used for configuration, monitoring and background tasks that are not mission critical. Most commercial microprocessors are affected by radiation, including single-event effects (SEEs) that could be destructive to the component or corrupt the data. Part screening can help designers avoid components with destructive failure modes, and mitigation can suppress data corruption. We have been experimenting with a method for masking radiation-induced faults through the software executing on the microprocessor. While triple-modular redundancy (TMR) techniques are very effective at masking faults in software, the increased amount of execution time to complete the computation is not desirable. In this article we present a technique for combining duplication with compare (DWC) with TMR that decreases observable errors by as much as 145 times with only a 2.35-times decrease in performance.
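
    The control flow being described, duplication with compare as the cheap default and TMR-style re-execution with voting only on a mismatch, can be sketched as follows; the function names and workload are hypothetical, not the authors' microcontroller software.

```python
from collections import Counter

def dwc_with_tmr_fallback(compute, *args):
    """Duplication-with-compare (cheap) plus TMR-style voting as a fallback on mismatch.
    `compute` is any deterministic function whose transient faults we want to mask."""
    a, b = compute(*args), compute(*args)
    if a == b:                      # common case: two runs agree, pay ~2x instead of 3x
        return a
    # rare case: a fault corrupted one run; re-execute three times and take the majority
    votes = Counter(compute(*args) for _ in range(3))
    value, count = votes.most_common(1)[0]
    if count >= 2:
        return value
    raise RuntimeError("no majority: uncorrectable fault")

# usage sketch with a trivially deterministic workload
print(dwc_with_tmr_fallback(lambda x: x * x, 12))
```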

  10. Computational Methods for Stability and Control (COMSAC): The Time Has Come

    NASA Technical Reports Server (NTRS)

    Hall, Robert M.; Biedron, Robert T.; Ball, Douglas N.; Bogue, David R.; Chung, James; Green, Bradford E.; Grismer, Matthew J.; Brooks, Gregory P.; Chambers, Joseph R.

    2005-01-01

    Powerful computational fluid dynamics (CFD) tools have emerged that appear to offer significant benefits as an adjunct to the experimental methods used by the stability and control community to predict aerodynamic parameters. The decreasing cost and increasing availability of computing hours are making these applications increasingly viable. This paper summarizes the efforts of four organizations to utilize high-end CFD tools to address the challenges of the stability and control arena. General motivation and the backdrop for these efforts will be summarized, as well as examples of current applications.

  11. Lanczos eigensolution method for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1991-01-01

    The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as: the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplications. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time for the panel problem executed on a CONVEX computer of 181.6 seconds was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem with 17,000 degrees of freedom was on the Cray-YMP using an average of 3.63 processors.
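
    For readers unfamiliar with the algorithm, the sketch below runs a plain Lanczos iteration on a random symmetric matrix, showing why the matrix-vector product is the step worth optimizing. It is a minimal illustration, not the factorization-based structural buckling/vibration eigensolver described in the record.

```python
import numpy as np

def lanczos(A, k, rng=np.random.default_rng(0)):
    """k steps of Lanczos: returns the tridiagonal T whose extreme eigenvalues
    approximate the extreme eigenvalues of the symmetric matrix A."""
    n = A.shape[0]
    q = rng.normal(size=n)
    q /= np.linalg.norm(q)
    alphas, betas = [], []
    q_prev, beta = np.zeros(n), 0.0
    for _ in range(k):
        w = A @ q - beta * q_prev          # matrix-vector product dominates the cost
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        q_prev, q = q, w / beta
    return np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)

A = np.random.default_rng(1).normal(size=(200, 200))
A = (A + A.T) / 2                          # symmetrize
T = lanczos(A, 30)
print(np.max(np.linalg.eigvalsh(T)), np.max(np.linalg.eigvalsh(A)))  # close after 30 steps
```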

  12. Reservoir computing with a single time-delay autonomous Boolean node

    NASA Astrophysics Data System (ADS)

    Haynes, Nicholas D.; Soriano, Miguel C.; Rosin, David P.; Fischer, Ingo; Gauthier, Daniel J.

    2015-02-01

    We demonstrate reservoir computing with a physical system using a single autonomous Boolean logic element with time-delay feedback. The system generates a chaotic transient with a window of consistency lasting between 30 and 300 ns, which we show is sufficient for reservoir computing. We then characterize the dependence of computational performance on system parameters to find the best operating point of the reservoir. When the best parameters are chosen, the reservoir is able to classify short input patterns with performance that decreases over time. In particular, we show that four distinct input patterns can be classified for 70 ns, even though the inputs are only provided to the reservoir for 7.5 ns.

  13. SLMRACE: a noise-free RACE implementation with reduced computational time

    NASA Astrophysics Data System (ADS)

    Chauvin, Juliet; Provenzi, Edoardo

    2017-05-01

    We present a faster and noise-free implementation of the RACE algorithm. RACE has mixed characteristics between the famous Retinex model of Land and McCann and the automatic color equalization (ACE) color-correction algorithm. The original random spray-based RACE implementation suffers from two main problems: its computational time and the presence of noise. Here, we will show that it is possible to adapt two techniques recently proposed by Banić et al. to the RACE framework in order to drastically decrease the computational time and noise generation. The implementation will be called smart-light-memory-RACE (SLMRACE).

  14. High-efficiency photorealistic computer-generated holograms based on the backward ray-tracing technique

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin

    2018-03-01

    Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including the depth perception. However, it often takes a long computation time to produce traditional computer-generated holograms (CGHs) without more complex and photorealistic rendering. The backward ray-tracing technique is able to render photorealistic high-quality images and noticeably reduces the computation time owing to its high degree of parallelism. Here, a high-efficiency photorealistic computer-generated hologram method is presented based on the ray-tracing technique. Rays are launched and traced in parallel under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.

  15. SU-E-J-91: FFT Based Medical Image Registration Using a Graphics Processing Unit (GPU).

    PubMed

    Luce, J; Hoggarth, M; Lin, J; Block, A; Roeske, J

    2012-06-01

    To evaluate the efficiency gains obtained from using a Graphics Processing Unit (GPU) to perform a Fourier Transform (FT) based image registration. Fourier-based image registration involves obtaining the FT of the component images and analyzing them in Fourier space to determine the translations and rotations of one image set relative to another. An important property of FT registration is that by enlarging the images (adding additional pixels), one can obtain translations and rotations with sub-pixel resolution. The expense, however, is an increased computational time. GPUs may decrease the computational time associated with FT image registration by taking advantage of their parallel architecture to perform matrix computations much more efficiently than a Central Processing Unit (CPU). In order to evaluate the computational gains produced by a GPU, images with known translational shifts were utilized. A program was written in the Interactive Data Language (IDL; Exelis, Boulder, CO) to perform CPU-based calculations. Subsequently, the program was modified using GPU bindings (Tech-X, Boulder, CO) to perform GPU-based computation on the same system. Multiple image sizes were used, ranging from 256×256 to 2304×2304. The times required to complete the full algorithm on the CPU and GPU were benchmarked, and the speed increase was defined as the ratio of the CPU-to-GPU computational time. The ratio of the CPU-to-GPU time was greater than 1.0 for all images, which indicates the GPU performed the algorithm faster than the CPU. The smallest improvement, a 1.21 ratio, was found with the smallest image size of 256×256, and the largest speedup, a 4.25 ratio, was observed with the largest image size of 2304×2304. GPU programming resulted in a significant decrease in the computational time associated with a FT image registration algorithm. The inclusion of the GPU may provide near real-time, sub-pixel registration capability. © 2012 American Association of Physicists in Medicine.
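
    A CPU-only NumPy sketch of the Fourier-space idea (here, phase correlation recovering a known integer translation); the GPU/IDL implementation, the rotation estimation and the sub-pixel enlargement trick from the abstract are not reproduced.

```python
import numpy as np

def translation_by_phase_correlation(ref, moved):
    """Estimate the integer (dy, dx) shift of `moved` relative to `ref`
    from the normalized cross-power spectrum (phase correlation)."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(moved)
    cross_power = np.conj(F1) * F2
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:        # map wrap-around indices to signed shifts
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((256, 256))
moved = np.roll(ref, shift=(7, -12), axis=(0, 1))     # apply a known translation
print(translation_by_phase_correlation(ref, moved))    # expect (7, -12)
```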

  16. Fast maximum likelihood estimation using continuous-time neural point process models.

    PubMed

    Lepage, Kyle Q; MacDonald, Christopher J

    2015-06-01

    A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time-bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np^2) to O(qp^2). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and the numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
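
    The saving comes from replacing a fine time discretization of the integral term in the point-process log-likelihood, sum_i log lambda(t_i) - integral_0^T lambda(t) dt, with a low-order Gauss-Legendre rule. The sketch below compares the two on a smooth toy intensity; the intensity and spike times are made up, and the paper's hippocampal models are not reproduced.

```python
import numpy as np

def log_likelihood_quad(spike_times, rate_fn, T, quad_order=60):
    """Point-process log-likelihood with the integral term evaluated by
    Gauss-Legendre quadrature: O(q) evaluations instead of O(n) time bins."""
    nodes, weights = np.polynomial.legendre.leggauss(quad_order)
    t = 0.5 * T * (nodes + 1.0)                     # map [-1, 1] onto [0, T]
    integral = 0.5 * T * np.sum(weights * rate_fn(t))
    return np.sum(np.log(rate_fn(np.asarray(spike_times)))) - integral

rate = lambda t: 5.0 + 3.0 * np.sin(2 * np.pi * 0.3 * t)   # toy intensity (spikes/s)
T = 10.0
spikes = [0.4, 1.1, 2.8, 3.0, 4.7, 6.2, 8.9]                # hypothetical spike times

ll_quad = log_likelihood_quad(spikes, rate, T)
# reference: brute-force midpoint rule with a million bins (the discrete-time analogue)
n_bins = 1_000_000
centers = (np.arange(n_bins) + 0.5) * (T / n_bins)
ll_binned = np.sum(np.log(rate(np.asarray(spikes)))) - np.sum(rate(centers)) * (T / n_bins)
print(ll_quad, ll_binned)    # the two values agree closely
```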

  17. [Economic efficiency of computer monitoring of health].

    PubMed

    Il'icheva, N P; Stazhadze, L L

    2001-01-01

    Presents a method of computer monitoring of health, based on the utilization of modern information technologies in public health. The method helps organize the preventive activities of an outpatient clinic at a high level and substantially decrease losses of time and money. The efficiency of such preventive measures and the increasing number of computer and Internet users suggest that such methods are promising and that further studies in this field are needed.

  18. Encapsulating model complexity and landscape-scale analyses of state-and-transition simulation models: an application of ecoinformatics and juniper encroachment in sagebrush steppe ecosystems

    USGS Publications Warehouse

    O'Donnell, Michael

    2015-01-01

    State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need for modeling larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher spatial resolution vegetation products, and more complex models. Using a case study and five different computing environments, the top result (high-throughput computing versus serial computations) indicated an approximate 96.6% decrease in computing time. With a single, multicore compute node (bottom result), the computing time indicated an 81.8% decrease relative to using serial computations. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.

  19. GPU-accelerated computing for Lagrangian coherent structures of multi-body gravitational regimes

    NASA Astrophysics Data System (ADS)

    Lin, Mingpei; Xu, Ming; Fu, Xiaoyu

    2017-04-01

    Based on a well-established theoretical foundation, Lagrangian Coherent Structures (LCSs) have elicited widespread research on the intrinsic structures of dynamical systems in many fields, including the field of astrodynamics. Although the application of LCSs in dynamical problems seems straightforward theoretically, its associated computational cost is prohibitive. We propose a block decomposition algorithm developed on Compute Unified Device Architecture (CUDA) platform for the computation of the LCSs of multi-body gravitational regimes. In order to take advantage of GPU's outstanding computing properties, such as Shared Memory, Constant Memory, and Zero-Copy, the algorithm utilizes a block decomposition strategy to facilitate computation of finite-time Lyapunov exponent (FTLE) fields of arbitrary size and timespan. Simulation results demonstrate that this GPU-based algorithm can satisfy double-precision accuracy requirements and greatly decrease the time needed to calculate final results, increasing speed by approximately 13 times. Additionally, this algorithm can be generalized to various large-scale computing problems, such as particle filters, constellation design, and Monte-Carlo simulation.

  20. Machining fixture layout optimization using particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Dou, Jianping; Wang, Xingsong; Wang, Lei

    2011-05-01

    Optimization of the fixture layout (locator and clamp locations) is critical to reduce the geometric error of the workpiece during the machining process. In this paper, the application of the particle swarm optimization (PSO) algorithm is presented to minimize the workpiece deformation in the machining region. A PSO-based approach is developed to optimize the fixture layout by integrating the ANSYS parametric design language (APDL) of finite element analysis to compute the objective function for a given fixture layout. A particle library approach is used to decrease the total computation time. The computational experiment on a 2D case shows that the number of function evaluations is decreased by about 96%. A case study illustrates the effectiveness and efficiency of the PSO-based optimization approach.
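
    A generic PSO loop with a memoization dictionary standing in for the "particle library" idea of reusing expensive objective (FEA) evaluations; the toy objective, bounds and PSO constants are assumptions, not the ANSYS/APDL-coupled fixture model.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(xy):
    # stand-in for the expensive FEA call that would evaluate workpiece deformation
    x, y = xy
    return (x - 3.0) ** 2 + (y + 1.0) ** 2 + 0.5 * np.sin(5 * x) ** 2

cache = {}
def cached_objective(xy, decimals=3):
    key = tuple(np.round(xy, decimals))       # "particle library": reuse repeated evaluations
    if key not in cache:
        cache[key] = objective(xy)
    return cache[key]

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-10, 10)):
    dim = 2
    pos = rng.uniform(*bounds, size=(n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([cached_objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, *bounds)
        vals = np.array([cached_objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, np.min(pbest_val), len(cache)

best, best_val, evaluations = pso()
print(best, best_val, evaluations)   # minimum near (3, -1); len(cache) counts distinct evaluations
```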

  1. Towards Probablistic Assessment of Hypobaric Decompression Sickness Treatment

    NASA Technical Reports Server (NTRS)

    Conkin, J.; Abercromby, A. F.; Feiveson, A. H.; Gernhardt, M. L.; Norcross, J. R.; Ploutz-Snyder, R.; Wessel, J. H., III

    2013-01-01

    INTRODUCTION: Pressure, oxygen (O2), and time are the pillars of effective treatment of decompression sickness (DCS). The NASA DCS Treatment Model links a decrease in computed bubble volume to the resolution of a symptom. The decrease in volume is realized in two stages: a) during the Boyle's Law compression and b) during subsequent dissolution of the gas phase by the O2 window. METHODS: The cumulative distribution of 154 symptoms that resolved during repressurization was described with a log-logistic density function of the pressure difference (deltaP as psid) associated with symptom resolution and two other explanatory variables. The 154 symptoms originated from 119 cases of DCS during 969 exposures in 47 different altitude tests. RESULTS: The probability of symptom resolution [P(symptom resolution)] = 1 / (1+exp(- (ln(deltaP) - 1.682 + 1.089×AMB - 0.00395×SYMPTOM TIME) / 0.633)), where AMB is 1 when the subject ambulated as part of the altitude exposure and 0 otherwise, and SYMPTOM TIME is the elapsed time in min from the start of the altitude exposure to recognition of a DCS symptom. The P(symptom resolution) was estimated from the deltaP computed by the Tissue Bubble Dynamics Model based on the "effective" Boyle's Law change: P2 - P1 (deltaP, psid) = P1×V1/V2 - P1, where V1 is the computed volume of a spherical bubble in a unit volume of tissue at the low pressure P1 and V2 is the computed volume after a change to a higher pressure P2. V2 continues to decrease through time at P2, at a faster rate if 100% ground-level O2 is breathed. The computed deltaP is the effective treatment pressure at any point in time, as if the entire deltaP came just from Boyle's Law compression. DISCUSSION: Given the low probability of DCS during extravehicular activity and the prompt treatment of a symptom with options through the model, it is likely that the symptom and gas phase will resolve with minimum resources and minimal impact on astronaut health, safety, and productivity.
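
    For convenience, the fitted logistic expression quoted above can be evaluated directly; the example inputs below are hypothetical.

```python
import math

def p_symptom_resolution(delta_p_psid, ambulated, symptom_time_min):
    """Logistic model quoted in the record: deltaP is the effective treatment pressure (psid),
    ambulated is 1 if the subject ambulated during the exposure, symptom time is in minutes."""
    z = (math.log(delta_p_psid) - 1.682 + 1.089 * ambulated
         - 0.00395 * symptom_time_min) / 0.633
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical example: 3 psid effective compression, ambulatory subject, symptom at 90 min
print(p_symptom_resolution(3.0, 1, 90))
```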

  2. Teaching Electronic Health Record Communication Skills.

    PubMed

    Palumbo, Mary Val; Sandoval, Marie; Hart, Vicki; Drill, Clarissa

    2016-06-01

    This pilot study investigated nurse practitioner students' communication skills when utilizing the electronic health record during history taking. The nurse practitioner students (n = 16) were videotaped utilizing the electronic health record while taking health histories with standardized patients. The students were videotaped during two separate sessions during one semester. Two observers recorded the time spent (1) typing and talking, (2) typing only, and (3) looking at the computer without talking. Total history taking time, computer placement, and communication skills were also recorded. During the formative session, mean history taking time was 11.4 minutes, with 3.5 minutes engaged with the computer (30.6% of visit). During the evaluative session, mean history taking time was 12.4 minutes, with 2.95 minutes engaged with the computer (24% of visit). The percentage of time individuals spent changed over the two visits: typing and talking, -3.1% (P = .3); typing only, +12.8% (P = .038); and looking at the computer, -9.6% (P = .039). This study demonstrated that time spent engaged with the computer during a patient encounter does decrease with student practice and education. Therefore, students benefit from instruction on electronic health record-specific communication skills, and use of a simple mnemonic to reinforce this is suggested.

  3. A configurable distributed high-performance computing framework for satellite's TDI-CCD imaging simulation

    NASA Astrophysics Data System (ADS)

    Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang

    2010-11-01

    This paper presents a configurable distributed high performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to accommodate multiple algorithms, so the framework helps to decrease the simulation time at low expense. Imaging simulation for a TDI-CCD mounted on a satellite contains four processes: 1) degradation due to the atmosphere, 2) degradation due to the optical system, 3) degradation due to the electronic system of the TDI-CCD and the re-sampling process, 4) data integration. Processes 1) to 3) use diverse data-intensive algorithms such as FFT, convolution and Lagrange interpolation, which require a powerful CPU. Even using an Intel Xeon X5550 processor, the regular serial processing method takes more than 30 hours for a simulation whose resulting image size is 1500 * 1462. A literature study shows that there is no mature distributed HPC framework in this field. Here we developed a distributed computing framework for TDI-CCD imaging simulation, which is based on WCF [1], uses a client/server (C/S) layer and invokes the free CPU resources in the LAN. The server pushes the tasks of processes 1) to 3) to this free computing capacity. Ultimately we realized HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, this framework reduced the simulation time by about 74%. Adding more asymmetric nodes to the computing network decreased the time accordingly. In conclusion, this framework can provide practically unlimited computation capacity provided that the network and task-management server are affordable. This is a brand-new HPC solution for TDI-CCD imaging simulation and similar applications.

  4. Does a Computer Have an Arrow of Time?

    NASA Astrophysics Data System (ADS)

    Maroney, Owen J. E.

    2010-02-01

    Schulman (Entropy 7(4):221-233, 2005) has argued that Boltzmann’s intuition, that the psychological arrow of time is necessarily aligned with the thermodynamic arrow, is correct. Schulman gives an explicit physical mechanism for this connection, based on the brain being representable as a computer, together with certain thermodynamic properties of computational processes. Hawking (Physical Origins of Time Asymmetry, Cambridge University Press, Cambridge, 1994) presents similar, if briefer, arguments. The purpose of this paper is to critically examine the support for the link between thermodynamics and an arrow of time for computers. The principal arguments put forward by Schulman and Hawking will be shown to fail. It will be shown that any computational process that can take place in an entropy increasing universe, can equally take place in an entropy decreasing universe. This conclusion does not automatically imply a psychological arrow can run counter to the thermodynamic arrow. Some alternative possible explanations for the alignment of the two arrows will be briefly discussed.

  5. Prospective comparison of the usage of conventional film and PACS based computed radiography for portable chest x-ray imaging in a medical intensive care unit

    NASA Astrophysics Data System (ADS)

    Kundel, Harold L.; Seshadri, Sridhar B.; Langlotz, Curtis P.; Lanken, Paul N.; Horii, Steven C.; Polansky, Marcia; Kishore, Sheel; Finegold, Eric; Brikman, Inna; Bozzo, Mary T.; Redfern, Regina O.

    1995-05-01

    The purpose of this study was to compare the efficiency of image delivery, the effectiveness of image information transfer, and the timeliness of clinical actions in a medical intensive care unit (MICU) using either conventional screen-film imaging (SF-HC), computed radiography (CR-HC) or a CR based PACS. When the CR based PACS was in use, images could be viewed in the MICU on a digital workstation (CR-WS) or in the radiology department as laser-printed hard copy (CR-HC). Data were collected by daily interviews with the house staff, by monitoring computer log-ons and other time-stamped activities, and by observing film viewing times in the radiology department with surveillance cameras. The time at which image information was made available to the MICU physicians decreased during the CR-PACS period as compared with either the SF-HC or the CR-HC periods, but the image information was not accessed more quickly by the clinical staff. However, the time required to perform image-related clinical actions for pulmonary and pleural problems decreased when images were viewed on the workstation.

  6. Review of Real-Time Simulator and the Steps Involved for Implementation of a Model from MATLAB/SIMULINK to Real-Time

    NASA Astrophysics Data System (ADS)

    Mikkili, Suresh; Panda, Anup Kumar; Prattipati, Jayanthi

    2015-06-01

    Nowadays, researchers want to develop their models in a real-time environment. Simulation tools have been widely used for the design and improvement of electrical systems since the mid twentieth century. The evolution of simulation tools has progressed in step with the evolution of computing technologies. In recent years, computing technologies have improved dramatically in performance and become widely available at a steadily decreasing cost. Consequently, simulation tools have also seen dramatic performance gains and steady cost decreases. Researchers and engineers now have access to affordable, high-performance simulation tools that were previously cost prohibitive for all but the largest manufacturers. This work introduces a specific class of digital simulator known as a real-time simulator by answering the questions "what is real-time simulation", "why is it needed" and "how does it work". The latest trend in real-time simulation consists of exporting simulation models to FPGAs. In this article, the steps involved in implementing a model from MATLAB/SIMULINK in real-time are described in detail.

  7. Unobtrusive measurement of daily computer use to detect mild cognitive impairment

    PubMed Central

    Kaye, Jeffrey; Mattek, Nora; Dodge, Hiroko H; Campbell, Ian; Hayes, Tamara; Austin, Daniel; Hatt, William; Wild, Katherine; Jimison, Holly; Pavel, Michael

    2013-01-01

    Background Mild disturbances of higher order activities of daily living are present in people diagnosed with mild cognitive impairment (MCI). These deficits may be difficult to detect among those still living independently. Unobtrusive continuous assessment of a complex activity such as home computer use may detect mild functional changes and identify MCI. We sought to determine whether long-term changes in remotely monitored computer use differ in persons with MCI in comparison to cognitively intact volunteers. Methods Participants enrolled in a longitudinal cohort study of unobtrusive in-home technologies to detect cognitive and motor decline in independently living seniors were assessed for computer usage (number of days with use, mean daily usage and coefficient of variation of use) measured by remotely monitoring computer session start and end times. Results Over 230,000 computer sessions from 113 computer users (mean age, 85; 38 with MCI) were acquired during a mean of 36 months. In mixed effects models there was no difference in computer usage at baseline between MCI and intact participants controlling for age, sex, education, race and computer experience. However, over time, between MCI and intact participants, there was a significant decrease in number of days with use (p=0.01), mean daily usage (~1% greater decrease/month; p=0.009) and an increase in day-to-day use variability (p=0.002). Conclusions Computer use change can be unobtrusively monitored and indicate individuals with MCI. With 79% of those 55–64 years old now online, this may be an ecologically valid and efficient approach to track subtle clinically meaningful change with aging. PMID:23688576

  8. Ray Next-Event Estimator Transport of Primary and Secondary Gamma Rays

    DTIC Science & Technology

    2011-03-01

    ...time-energy bins. Any performance enhancements (for example, parallel searching) to the search routines decrease estimator computational time.

  9. Feasibility of a special-purpose computer to solve the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Gritton, E. C.; King, W. S.; Sutherland, I.; Gaines, R. S.; Gazley, C., Jr.; Grosch, C.; Juncosa, M.; Petersen, H.

    1978-01-01

    Orders-of-magnitude improvements in computer performance can be realized with a parallel array of thousands of fast microprocessors. In this architecture, wiring congestion is minimized by limiting processor communication to nearest neighbors. When certain standard algorithms are applied to a viscous flow problem and existing LSI technology is used, performance estimates of this conceptual design show a dramatic decrease in computational time when compared to the CDC 7600.

  10. Development of efficient time-evolution method based on three-term recurrence relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akama, Tomoko, E-mail: a.tomo---s-b-l-r@suou.waseda.jp; Kobayashi, Osamu; Nanbu, Shinkoh, E-mail: shinkoh.nanbu@sophia.ac.jp

    The advantage of the real-time (RT) propagation method is a direct solution of the time-dependent Schrödinger equation, which describes frequency properties as well as all dynamics of a molecular system composed of electrons and nuclei in quantum physics and chemistry. Its applications have been limited by computational feasibility, as the evaluation of the time-evolution operator is computationally demanding. In this article, a new efficient time-evolution method based on the three-term recurrence relation (3TRR) was proposed to reduce the time-consuming numerical procedure. The basic formula of this approach was derived by introducing a transformation of the operator using the arcsine function. Since this operator transformation causes a transformation of time, we derived the relation between the original and transformed time. The formula was adapted to assess the performance of the RT time-dependent Hartree-Fock (RT-TDHF) method and time-dependent density functional theory. Compared to the commonly used fourth-order Runge-Kutta method, our new approach decreased the computational time of the RT-TDHF calculation by about a factor of four, showing the 3TRR formula to be an efficient time-evolution method for reducing computational cost.
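
    The arcsine-based 3TRR formula itself is not given in this record. As a rough illustration of how a three-term recurrence can drive time evolution, the closely related Chebyshev expansion of exp(-iHt) is sketched below for a small Hermitian Hamiltonian matrix; this is an assumed stand-in, not the authors' method:

      import numpy as np
      from scipy.special import jv  # Bessel functions of the first kind

      def chebyshev_propagate(H, psi0, t, n_terms=64):
          """Approximate exp(-i*H*t) @ psi0 via the Chebyshev three-term recurrence.

          n_terms should exceed a*t (see below) for the expansion to converge.
          """
          e = np.linalg.eigvalsh(H)                 # spectral bounds (fine for a demo)
          a, b = (e[-1] - e[0]) / 2.0, (e[-1] + e[0]) / 2.0
          H_norm = (H - b * np.eye(len(H))) / a     # spectrum mapped into [-1, 1]

          phi_prev = psi0.astype(complex)           # T_0(H_norm) psi0
          phi_curr = H_norm @ phi_prev              # T_1(H_norm) psi0
          psi_t = jv(0, a * t) * phi_prev + 2.0 * (-1j) * jv(1, a * t) * phi_curr
          for k in range(2, n_terms):
              phi_next = 2.0 * (H_norm @ phi_curr) - phi_prev   # three-term recurrence
              psi_t += 2.0 * (-1j) ** k * jv(k, a * t) * phi_next
              phi_prev, phi_curr = phi_curr, phi_next
          return np.exp(-1j * b * t) * psi_t        # undo the spectral shift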

  11. Computational model of retinal photocoagulation and rupture

    NASA Astrophysics Data System (ADS)

    Sramek, Christopher; Paulus, Yannis M.; Nomoto, Hiroyuki; Huie, Phil; Palanker, Daniel

    2009-02-01

    In patterned scanning laser photocoagulation, shorter duration (< 20 ms) pulses help reduce thermal damage beyond the photoreceptor layer, decrease treatment time and minimize pain. However, the safe therapeutic window (defined as the ratio of the rupture threshold power to that of light coagulation) decreases for shorter exposures. To quantify the extent of thermal damage in the retina, and maximize the therapeutic window, we developed a computational model of retinal photocoagulation and rupture. Model parameters were adjusted to match measured thresholds of vaporization, coagulation, and retinal pigment epithelial (RPE) damage. Computed lesion width agreed with histological measurements over a wide range of pulse durations and power. Application of a ring-shaped beam profile was predicted to double the width of the therapeutic window for exposures in the range of 1 - 10 ms.

  12. Neural Underpinnings of Impaired Predictive Motor Timing in Children with Developmental Coordination Disorder

    ERIC Educational Resources Information Center

    Debrabant, Julie; Gheysen, Freja; Caeyenberghs, Karen; Van Waelvelde, Hilde; Vingerhoets, Guy

    2013-01-01

    A dysfunction in predictive motor timing is put forward to underlie DCD-related motor problems. Predictive timing allows for the pre-selection of motor programmes in order to decrease processing load and facilitate reactions. Using functional magnetic resonance imaging (fMRI), this study investigated the neural…

  13. A post-processing algorithm for time domain pitch trackers

    NASA Astrophysics Data System (ADS)

    Specker, P.

    1983-01-01

    This paper describes a powerful post-processing algorithm for time-domain pitch trackers. On two successive passes, the post-processing algorithm eliminates errors produced during a first pass by a time-domain pitch tracker. During the second pass, incorrect pitch values are detected as outliers by computing the distribution of values over a sliding 80 msec window. During the third pass (based on artificial intelligence techniques), remaining pitch pulses are used as anchor points to reconstruct the pitch train from the original waveform. The algorithm produced a decrease in the error rate from 21% obtained with the original time domain pitch tracker to 2% for isolated words and sentences produced in an office environment by 3 male and 3 female talkers. In a noisy computer room errors decreased from 52% to 2.9% for the same stimuli produced by 2 male talkers. The algorithm is efficient, accurate, and resistant to noise. The fundamental frequency micro-structure is tracked sufficiently well to be used in extracting phonetic features in a feature-based recognition system.
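
    The second-pass outlier test is described only as a distribution check over a sliding 80 msec window. A rough sketch of that idea, using a robust median/MAD criterion with an assumed window length and threshold (not the paper's exact test), could be:

      import numpy as np

      def flag_pitch_outliers(pitch, half_window=8, k=3.0):
          """Mark pitch values that deviate strongly from their local distribution."""
          pitch = np.asarray(pitch, dtype=float)
          cleaned = pitch.copy()
          for i in range(len(pitch)):
              lo, hi = max(0, i - half_window), min(len(pitch), i + half_window + 1)
              window = pitch[lo:hi]
              med = np.median(window)
              mad = np.median(np.abs(window - med)) + 1e-9   # robust spread estimate
              if abs(pitch[i] - med) > k * mad:
                  cleaned[i] = np.nan   # outlier; the third pass would reconstruct it
          return cleaned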

  14. Dynamics of threading dislocations in porous heteroepitaxial GaN films

    NASA Astrophysics Data System (ADS)

    Gutkin, M. Yu.; Rzhavtsev, E. A.

    2017-12-01

    The behavior of threading dislocations in porous heteroepitaxial gallium nitride (GaN) films has been studied using computer simulation with a two-dimensional discrete dislocation dynamics approach. A computational scheme is used in which pores are modeled as cross sections of cylindrical cavities that elastically interact with unidirectional parallel edge dislocations imitating threading dislocations. Time dependences of the coordinates and velocities of each dislocation in the investigated dislocation ensembles are obtained. The current structure of the dislocation ensemble is visualized as a map of dislocation locations at any time. It is shown that the density of the resulting dislocation structures depends significantly on the ratio of the pore cross-sectional area to the area of the simulation region. In particular, increasing the fraction of pore surface on the layer surface to 2% should lead to roughly a 1.5-fold decrease in the final density of threading dislocations, and increasing this fraction to 15% should lead to approximately a 4.5-fold decrease.

  15. Dynamic remapping of parallel computations with varying resource demands

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.; Saltz, J. H.

    1986-01-01

    A large class of computational problems is characterized by frequent synchronization, and computational requirements which change as a function of time. When such a problem must be solved on a message passing multiprocessor machine, the combination of these characteristics leads to system performance which decreases over time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm, and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggests that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
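
    As a minimal sketch of the decision rule described above (remap when the running statistic W(n) passes its single minimum), with the per-step degradation measurements and the remapping cost treated as given inputs:

      def steps_until_remap(degradation_per_step, remap_cost):
          """Return the step count n that minimizes W(n) = (sum of degradation + remap cost) / n.

          Relies on the property cited above that E[W(n)] has at most one minimum:
          once W(n) starts to rise, the minimum has been passed and it is time to remap.
          """
          total, w_prev = 0.0, float("inf")
          for n, d in enumerate(degradation_per_step, start=1):
              total += d
              w = (total + remap_cost) / n
              if w > w_prev:
                  return n - 1          # remap after the step that minimized W
              w_prev = w
          return None                   # minimum not reached within the observed steps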

  16. The effects of an educational meeting and subsequent computer reminders on the ordering of laboratory tests by rheumatologists: an interrupted time series analysis.

    PubMed

    Lesuis, Nienke; den Broeder, Nathan; Boers, Nadine; Piek, Ester; Teerenstra, Steven; Hulscher, Marlies; van Vollenhoven, Ronald; den Broeder, Alfons A

    2017-01-01

    To examine the effects of an educational meeting and subsequent computer reminders on the number of ordered laboratory tests. Using interrupted time series analysis we assessed whether trends in the number of laboratory tests ordered by rheumatologists between September 2012 and September 2015 at the Sint Maartenskliniek (the Netherlands) changed following an educational meeting (September 2013) and the introduction of computer reminders into the Computerised Physician Order Entry System (July 2014). The analyses were done for the set of tests on which both interventions had focussed (intervention tests; complement, cryoglobulins, immunoglobins, myeloma protein) and a set of control tests unrelated to the interventions (alanine transferase, anti-cyclic citrullinated peptide, C-reactive protein, creatine, haemoglobin, leukocytes, mean corpuscular volume, rheumatoid factor and thrombocytes). At the start of the study, 101 intervention tests and 7660 control tests were ordered per month by the rheumatologists. After the educational meeting, both the level and trend of ordered intervention and control tests did not change significantly. After implementation of the reminders, the level of ordered intervention tests decreased by 85.0 tests (95%-CI -133.3 to -36.8, p<0.01), the level of control tests did not change following the introduction of reminders. In summary, an educational meeting alone was not effective in decreasing the number of ordered intervention tests, but the combination with computer reminders did result in a large decrease of those tests. Therefore, we recommend using computer reminders in addition to education if reduction of inappropriate test use is aimed for.
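
    For readers unfamiliar with the design, an interrupted time series (segmented regression) model of this kind fits a baseline trend plus a level change and a slope change at each intervention. A minimal sketch with hypothetical monthly counts and intervention months (not the authors' data or exact model) follows:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical monthly counts of ordered tests over three years.
      df = pd.DataFrame({"month": np.arange(36),
                         "tests": np.random.poisson(100, 36)})
      df["post_edu"] = (df["month"] >= 12).astype(int)          # educational meeting
      df["post_rem"] = (df["month"] >= 22).astype(int)          # computer reminders
      df["time_since_edu"] = np.maximum(0, df["month"] - 12)
      df["time_since_rem"] = np.maximum(0, df["month"] - 22)

      # Baseline trend plus level and slope changes at each intervention.
      model = smf.ols("tests ~ month + post_edu + time_since_edu"
                      " + post_rem + time_since_rem", data=df).fit()
      print(model.params)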

  17. Predicting Intracerebral Hemorrhage Growth With the Spot Sign: The Effect of Onset-to-Scan Time.

    PubMed

    Dowlatshahi, Dar; Brouwers, H Bart; Demchuk, Andrew M; Hill, Michael D; Aviv, Richard I; Ufholz, Lee-Anne; Reaume, Michael; Wintermark, Max; Hemphill, J Claude; Murai, Yasuo; Wang, Yongjun; Zhao, Xingquan; Wang, Yilong; Li, Na; Sorimachi, Takatoshi; Matsumae, Mitsunori; Steiner, Thorsten; Rizos, Timolaos; Greenberg, Steven M; Romero, Javier M; Rosand, Jonathan; Goldstein, Joshua N; Sharma, Mukul

    2016-03-01

    Hematoma expansion after acute intracerebral hemorrhage is common and is associated with early deterioration and poor clinical outcome. The computed tomographic angiography (CTA) spot sign is a promising predictor of expansion; however, frequency and predictive values are variable across studies, possibly because of differences in onset-to-CTA time. We performed a patient-level meta-analysis to define the relationship between onset-to-CTA time and frequency and predictive ability of the spot sign. We completed a systematic review for studies of CTA spot sign and hematoma expansion. We subsequently pooled patient-level data on the frequency and predictive values for significant hematoma expansion according to 5 predefined categorized onset-to-CTA times. We calculated spot-sign frequency both as raw and frequency-adjusted rates. Among 2051 studies identified, 12 met our inclusion criteria. Baseline hematoma volume, spot-sign status, and time-to-CTA were available for 1176 patients, and 1039 patients had follow-up computed tomographies for hematoma expansion analysis. The overall spot sign frequency was 26%, decreasing from 39% within 2 hours of onset to 13% beyond 8 hours (P<0.001). There was a significant decrease in hematoma expansion in spot-positive patients as onset-to-CTA time increased (P=0.004), with positive predictive values decreasing from 53% to 33%. The frequency of the CTA spot sign is inversely related to intracerebral hemorrhage onset-to-CTA time. Furthermore, the positive predictive value of the spot sign for significant hematoma expansion decreases as time-to-CTA increases. Our results offer more precise risk stratification for patients with acute intracerebral hemorrhage and will help refine clinical prediction rules for intracerebral hemorrhage expansion. © 2016 American Heart Association, Inc.

  18. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction

    DOE PAGES

    Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir

    2016-10-20

    Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. Furthermore, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be ‘matched’ to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
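
    The central distinction (Galerkin projects the residual onto the reduced basis, while LSPG minimizes the time-discrete residual) can be illustrated for a generic ODE dx/dt = f(x) with reduced basis V and a backward-Euler step. This is an illustrative sketch under those assumptions, not the GNAT implementation:

      import numpy as np
      from scipy.optimize import least_squares

      def residual(x, x_prev, dt, f):
          """Backward-Euler time-discrete residual r(x) = x - x_prev - dt * f(x)."""
          return x - x_prev - dt * f(x)

      def galerkin_step(V, q_guess, x_prev, dt, f):
          """Galerkin ROM step: enforce V^T r(V q) = 0 for the reduced coordinates q."""
          fun = lambda q: V.T @ residual(V @ q, x_prev, dt, f)
          return least_squares(fun, q_guess).x     # square nonlinear system

      def lspg_step(V, q_guess, x_prev, dt, f):
          """LSPG ROM step: minimize || r(V q) ||_2 over the reduced coordinates q."""
          fun = lambda q: residual(V @ q, x_prev, dt, f)
          return least_squares(fun, q_guess).x

    The abstract's point is that the two steps agree only under particular conditions; in general, and especially for larger time steps, they produce different reduced solutions.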

  19. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir

    Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. Furthermore, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be ‘matched’ to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.

  20. Unobtrusive measurement of daily computer use to detect mild cognitive impairment.

    PubMed

    Kaye, Jeffrey; Mattek, Nora; Dodge, Hiroko H; Campbell, Ian; Hayes, Tamara; Austin, Daniel; Hatt, William; Wild, Katherine; Jimison, Holly; Pavel, Michael

    2014-01-01

    Mild disturbances of higher order activities of daily living are present in people diagnosed with mild cognitive impairment (MCI). These deficits may be difficult to detect among those still living independently. Unobtrusive continuous assessment of a complex activity such as home computer use may detect mild functional changes and identify MCI. We sought to determine whether long-term changes in remotely monitored computer use differ in persons with MCI in comparison with cognitively intact volunteers. Participants enrolled in a longitudinal cohort study of unobtrusive in-home technologies to detect cognitive and motor decline in independently living seniors were assessed for computer use (number of days with use, mean daily use, and coefficient of variation of use) measured by remotely monitoring computer session start and end times. More than 230,000 computer sessions from 113 computer users (mean age, 85 years; 38 with MCI) were acquired during a mean of 36 months. In mixed-effects models, there was no difference in computer use at baseline between MCI and intact participants controlling for age, sex, education, race, and computer experience. However, over time, between MCI and intact participants, there was a significant decrease in number of days with use (P = .01), mean daily use (∼1% greater decrease/month; P = .009), and an increase in day-to-day use variability (P = .002). Computer use change can be monitored unobtrusively and indicates individuals with MCI. With 79% of those 55 to 64 years old now online, this may be an ecologically valid and efficient approach to track subtle, clinically meaningful change with aging. Copyright © 2014 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  1. GATE Monte Carlo simulation in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53 minute long simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and become more accessible, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
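
    The inverse power relation between cluster size and runtime can be fitted directly in log space. In the sketch below, the one-node and 20-node runtimes are taken from the record, while the intermediate points are placeholders standing in for the other measured cluster sizes:

      import numpy as np

      nodes = np.array([1, 2, 5, 10, 20])
      runtime = np.array([53.0, 27.5, 11.8, 6.2, 3.11])   # minutes; middle values are placeholders

      # Fit T(n) = a * n**p by least squares on log-log values (p should come out negative).
      p, log_a = np.polyfit(np.log(nodes), np.log(runtime), 1)
      a = np.exp(log_a)
      print(f"T(n) ~= {a:.1f} * n**({p:.2f}) minutes")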

  2. The Effects of Subjective Time Pressure and Individual Differences on Hypotheses Generation and Action Prioritization in Police Investigations

    ERIC Educational Resources Information Center

    Alison, Laurence; Doran, Bernadette; Long, Matthew L.; Power, Nicola; Humphrey, Amy

    2013-01-01

    When individuals perceive time pressure, they decrease the generation of diagnostic hypotheses and prioritize information. This article examines whether individual differences in (a) internal time urgency, (b) experience, and (c) fluid mental ability can moderate these effects. Police officers worked through a computer-based rape investigative…

  3. Validity of questionnaire self‐reports on computer, mouse and keyboard usage during a four‐week period

    PubMed Central

    Mikkelsen, Sigurd; Vilstrup, Imogen; Lassen, Christina Funch; Kryger, Ann Isabel; Thomsen, Jane Frølund; Andersen, Johan Hviid

    2007-01-01

    Objective To examine the validity and potential biases in self‐reports of computer, mouse and keyboard usage times, compared with objective recordings. Methods A study population of 1211 people was asked in a questionnaire to estimate the average time they had worked with computer, mouse and keyboard during the past four working weeks. During the same period, a software program recorded these activities objectively. The study was part of a one‐year follow‐up study from 2000–1 of musculoskeletal outcomes among Danish computer workers. Results Self‐reports on computer, mouse and keyboard usage times were positively associated with objectively measured activity, but the validity was low. Self‐reports explained only between a quarter and a third of the variance of objectively measured activity, and were even lower for one measure (keyboard time). Self‐reports overestimated usage times. Overestimation was large at low levels and declined with increasing levels of objectively measured activity. Mouse usage time proportion was an exception with a near 1:1 relation. Variability in objectively measured activity, arm pain, gender and age influenced self‐reports in a systematic way, but the effects were modest and sometimes in different directions. Conclusion Self‐reported durations of computer activities are positively associated with objective measures but they are quite inaccurate. Studies using self‐reports to establish relations between computer work times and musculoskeletal pain could be biased and lead to falsely increased or decreased risk estimates. PMID:17387136

  4. Validity of questionnaire self-reports on computer, mouse and keyboard usage during a four-week period.

    PubMed

    Mikkelsen, Sigurd; Vilstrup, Imogen; Lassen, Christina Funch; Kryger, Ann Isabel; Thomsen, Jane Frølund; Andersen, Johan Hviid

    2007-08-01

    To examine the validity and potential biases in self-reports of computer, mouse and keyboard usage times, compared with objective recordings. A study population of 1211 people was asked in a questionnaire to estimate the average time they had worked with computer, mouse and keyboard during the past four working weeks. During the same period, a software program recorded these activities objectively. The study was part of a one-year follow-up study from 2000-1 of musculoskeletal outcomes among Danish computer workers. Self-reports on computer, mouse and keyboard usage times were positively associated with objectively measured activity, but the validity was low. Self-reports explained only between a quarter and a third of the variance of objectively measured activity, and were even lower for one measure (keyboard time). Self-reports overestimated usage times. Overestimation was large at low levels and declined with increasing levels of objectively measured activity. Mouse usage time proportion was an exception with a near 1:1 relation. Variability in objectively measured activity, arm pain, gender and age influenced self-reports in a systematic way, but the effects were modest and sometimes in different directions. Self-reported durations of computer activities are positively associated with objective measures but they are quite inaccurate. Studies using self-reports to establish relations between computer work times and musculoskeletal pain could be biased and lead to falsely increased or decreased risk estimates.

  5. [Electronic medical records: Evolution of physician-patient relationship in the Primary Care clinic].

    PubMed

    Pérez-Santonja, T; Gómez-Paredes, L; Álvarez-Montero, S; Cabello-Ballesteros, L; Mombiela-Muruzabal, M T

    2017-04-01

    The introduction of electronic medical records and computer media into clinics has influenced the physician-patient relationship. These changes have many advantages, but there is concern that the computer has become too important, going from a working tool to the centre of our attention during the clinical interview and decreasing the doctor's interaction with the patient. The objective of the study was to estimate the percentage of time that family physicians spend on computer media compared with interpersonal communication with the patient, and whether this time varies with different variables such as the doctor's age or the reason for the consultation. An observational, descriptive study was conducted over 10 weeks in 2 healthcare centres. The researchers attended all doctor-patient interviews, recording the time each patient entered and left the consultation. Each time the doctor fixed his or her gaze on computer media, the time was clocked. A total of 436 consultations were collected. The doctors looked at the computer media for a median of 38.33% of the total duration of an interview. Doctors aged 45 years and older spent more time with their eyes fixed on computer media (P<.05). Family physicians spent almost 40% of the consultation time looking at computer media, and this proportion depends on the physician's age, the number of queries, and the number of medical appointments. Copyright © 2016 Sociedad Española de Médicos de Atención Primaria (SEMERGEN). Publicado por Elsevier España, S.L.U. All rights reserved.

  6. Nursing benefits of using an automated injection system for ictal brain single photon emission computed tomography.

    PubMed

    Vonhofen, Geraldine; Evangelista, Tonya; Lordeon, Patricia

    2012-04-01

    The traditional method of administering radioactive isotopes to pediatric patients undergoing ictal brain single photon emission computed tomography testing has been by manual injections. This method presents certain challenges for nursing, including time requirements and safety risks. This quality improvement project discusses the implementation of an automated injection system for isotope administration and its impact on staffing, safety, and nursing satisfaction. It was conducted in an epilepsy monitoring unit at a large urban pediatric facility. Results of this project showed a decrease in the number of nurses exposed to radiation and improved nursing satisfaction with the use of the automated injection system. In addition, there was a decrease in the number of nursing hours required during ictal brain single photon emission computed tomography testing.

  7. Neural network approach to proximity effect corrections in electron-beam lithography

    NASA Astrophysics Data System (ADS)

    Frye, Robert C.; Cummings, Kevin D.; Rietman, Edward A.

    1990-05-01

    The proximity effect, caused by electron beam backscattering during resist exposure, is an important concern in writing submicron features. It can be compensated by appropriate local changes in the incident beam dose, but computation of the optimal correction usually requires a prohibitively long time. We present an example of such a computation on a small test pattern, which we performed by an iterative method. We then used this solution as a training set for an adaptive neural network. After training, the network computed the same correction as the iterative method, but in a much shorter time. Correcting the image with a software based neural network resulted in a decrease in the computation time by a factor of 30, and a hardware based network enhanced the computation speed by more than a factor of 1000. Both methods had an acceptably small error of 0.5% compared to the results of the iterative computation. Additionally, we verified that the neural network correctly generalized the solution of the problem to include patterns not contained in its training set.
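
    The essence of the approach, fit a small regression network to corrections produced by the slow iterative solver and then reuse it for fast prediction, can be sketched with scikit-learn on synthetic pattern features; the feature choice and data below are assumptions, and the original work used its own software and hardware networks:

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Placeholder training set: local pattern-density features per exposure cell
      # (inputs) and dose-correction factors from the iterative solution (targets).
      rng = np.random.default_rng(0)
      X = rng.random((500, 9))                       # e.g. 3x3 neighbourhood densities
      y = 1.0 / (1.0 + 0.5 * X.mean(axis=1))         # synthetic stand-in for the targets

      net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      net.fit(X, y)

      # Once trained, predicting corrections for new patterns is far cheaper
      # than re-running the iterative optimisation for each one.
      corrections = net.predict(rng.random((5, 9)))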

  8. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing.

    PubMed

    Xu, Jason; Minin, Vladimir N

    2015-07-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.

  9. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    PubMed Central

    Xu, Jason; Minin, Vladimir N.

    2016-01-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377

  10. Estimating the expected value of partial perfect information in health economic evaluations using integrated nested Laplace approximation.

    PubMed

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2016-10-15

    The Expected Value of Partial Perfect Information (EVPPI) is a decision-theoretic measure of the 'cost' of parametric uncertainty in decision making used principally in health economic decision making. Despite this decision-theoretic grounding, the uptake of EVPPI calculations in practice has been slow. This is in part due to the prohibitive computational time required to estimate the EVPPI via Monte Carlo simulations. However, recent developments have demonstrated that the EVPPI can be estimated by non-parametric regression methods, which have significantly decreased the computation time required to approximate the EVPPI. Under certain circumstances, high-dimensional Gaussian Process (GP) regression is suggested, but this can still be prohibitively expensive. Applying fast computation methods developed in spatial statistics using Integrated Nested Laplace Approximations (INLA) and projecting from a high-dimensional into a low-dimensional input space allows us to decrease the computation time for fitting these high-dimensional GPs, often substantially. We demonstrate that the EVPPI calculated using our method for GP regression is in line with the standard GP regression method and that despite the apparent methodological complexity of this new method, R functions are available in the package BCEA to implement it simply and efficiently. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
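
    The regression idea behind such EVPPI estimators (here implemented with INLA-accelerated GPs in the R package BCEA) can be sketched generically: regress each option's simulated net benefit on the focal parameter, then compare the average of the per-simulation maxima of the fitted values with the maximum of the overall means. The data and the default GP regressor below are illustrative assumptions only:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor

      # Placeholder probabilistic sensitivity analysis output: net benefit of two
      # decision options simulated jointly with the focal parameter phi.
      rng = np.random.default_rng(1)
      phi = rng.normal(size=(400, 1))
      nb = np.column_stack([2000.0 * phi[:, 0] + rng.normal(0, 500, 400),   # depends on phi
                            1000.0 + rng.normal(0, 500, 400)])              # does not

      # Fitted values approximate E[net benefit | phi] for each option.
      fitted = np.column_stack([
          GaussianProcessRegressor(normalize_y=True).fit(phi, nb[:, d]).predict(phi)
          for d in range(nb.shape[1])
      ])
      evppi = fitted.max(axis=1).mean() - nb.mean(axis=0).max()
      print(f"estimated EVPPI ~= {evppi:.1f}")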

  11. Numerical solutions of 3-dimensional Navier-Stokes equations for closed bluff-bodies

    NASA Technical Reports Server (NTRS)

    Abolhassani, J. S.; Tiwari, S. N.

    1985-01-01

    The Navier-Stokes equations are solved numerically. These equations are unsteady, compressible, viscous, and three-dimensional, without neglecting any terms. The time dependency of the governing equations allows the solution to progress naturally from an arbitrary initial guess to an asymptotic steady state, if one exists. The equations are transformed from physical coordinates to computational coordinates, allowing the solution of the governing equations in a rectangular parallelepiped domain. The equations are solved by the MacCormack time-split technique, which is vectorized and programmed to run on the CDC VPS-32 computer. The codes are written in 32-bit (half-word) FORTRAN, which provides an approximate factor-of-two decrease in computational time and doubles the available memory compared to the 64-bit word size.

  12. Intercomparison of Recent Anomaly Time-Series of OLR as Observed by CERES and Computed Using AIRS Products

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Molnar, Gyula; Iredell, Lena; Loeb, Norman G.

    2011-01-01

    This paper compares recent spatial and temporal anomaly time series of OLR as observed by CERES and computed based on AIRS retrieved surface and atmospheric geophysical parameters over the 7 year time period September 2002 through February 2010. This time period is marked by a substantial decrease of OLR, on the order of +/-0.1 W/sq m/yr, averaged over the globe, and very large spatial variations of changes in OLR in the tropics, with local values ranging from -2.8 W/sq m/yr to +3.1 W/sq m/yr. Global and Tropical OLR both began to decrease significantly at the onset of a strong La Niña in mid-2007. Late 2009 is characterized by a strong El Niño, with a corresponding change in sign of both Tropical and Global OLR anomalies. The spatial patterns of the 7 year short term changes in AIRS and CERES OLR have a spatial correlation of 0.97 and slopes of the linear least squares fits of anomaly time series averaged over different spatial regions agree on the order of +/-0.01 W/sq m/yr. This essentially perfect agreement of OLR anomaly time series derived from observations by two different instruments, determined in totally independent and different manners, implies that both sets of results must be highly stable. This agreement also validates the anomaly time series of the AIRS derived products used to compute OLR and furthermore indicates that anomaly time series of AIRS derived products can be used to explain the factors contributing to anomaly time series of OLR.

  13. Hiding the Disk and Network Latency of Out-of-Core Visualization

    NASA Technical Reports Server (NTRS)

    Ellsworth, David

    2001-01-01

    This paper describes an algorithm that improves the performance of application-controlled demand paging for out-of-core visualization by hiding the latency of reading data from both local disks or disks on remote servers. The performance improvements come from better overlapping the computation with the page reading process, and by performing multiple page reads in parallel. The paper includes measurements that show that the new multithreaded paging algorithm decreases the time needed to compute visualizations by one third when using one processor and reading data from local disk. The time needed when using one processor and reading data from remote disk decreased by two thirds. Visualization runs using data from remote disk actually ran faster than ones using data from local disk because the remote runs were able to make use of the remote server's high performance disk array.
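
    A compact sketch of the overlap idea, keeping a few page reads in flight while the previously fetched page is processed, is given below; the function names and prefetch depth are assumptions rather than the paper's implementation (which also hides remote-server latency):

      from concurrent.futures import ThreadPoolExecutor

      def visualize_out_of_core(page_ids, read_page, process_page, prefetch=4):
          """Overlap page reads with computation for a sequence of distinct page ids."""
          results = []
          with ThreadPoolExecutor(max_workers=prefetch) as pool:
              pending = {pid: pool.submit(read_page, pid) for pid in page_ids[:prefetch]}
              for i, pid in enumerate(page_ids):
                  data = pending.pop(pid).result()        # blocks only if the read lags
                  nxt = i + prefetch
                  if nxt < len(page_ids):                 # keep the read queue full
                      pending[page_ids[nxt]] = pool.submit(read_page, page_ids[nxt])
                  results.append(process_page(data))      # compute while reads continue
          return results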

  14. Update of patient-specific maxillofacial implant.

    PubMed

    Owusu, James A; Boahene, Kofi

    2015-08-01

    Patient-specific implant (PSI) is a personalized approach to reconstructive and esthetic surgery. This is particularly useful in maxillofacial surgery in which restoring the complex three-dimensional (3D) contour can be quite challenging. In certain situations, the best results can only be achieved with implants custom-made to fit a particular need. Significant progress has been made over the past decade in the design and manufacture of maxillofacial PSIs. Computer-aided design (CAD)/computer-aided manufacturing (CAM) technology is rapidly advancing and has provided new options for fabrication of PSIs with better precision. Maxillofacial PSIs can now be designed using preoperative imaging data as input into CAD software. The designed implant is then fabricated using a CAM technique such as 3D printing. This approach increases precision and decreases or completely eliminates the need for intraoperative modification of implants. The use of CAD/CAM-produced PSIs for maxillofacial reconstruction and augmentation can significantly improve contour outcomes and decrease operating time. CAD/CAM technology allows timely and precise fabrication of maxillofacial PSIs. This approach is gaining increasing popularity in maxillofacial reconstructive surgery. Continued advances in CAD technology and 3D printing are bound to improve the cost-effectiveness and decrease the production time of maxillofacial PSIs.

  15. Incomplete Spontaneous Recovery from Airway Obstruction During Inhaled Anesthesia Induction: A Computational Simulation.

    PubMed

    Kuo, Alexander S; Vijjeswarapu, Mary A; Philip, James H

    2016-03-01

    Inhaled induction with spontaneous respiration is a technique used for difficult airways. One of the proposed advantages is if airway patency is lost, the anesthetic agent will spontaneously redistribute until anesthetic depth is reduced and airway patency can be recovered. There are little and conflicting clinical or experimental data regarding the kinetics of this anesthetic technique. We used computer simulation to investigate this situation. We used GasMan, a computer simulation of inhaled anesthetic kinetics. For each simulation, alveolar ventilation was initiated with a set anesthetic induction concentration. When the vessel-rich group level reached the simulation specified airway obstruction threshold, alveolar ventilation was set at 0 to simulate complete airway obstruction. The time until the vessel-rich group anesthetic level decreased below the airway obstruction threshold was designated time to spontaneous recovery. We varied the parameters for each simulation, exploring the use of sevoflurane and halothane, airway obstruction threshold from 0.5 to 2 minimum alveolar concentration (MAC), anesthetic induction concentration 2 to 4 MAC sevoflurane and 4 to 6 MAC halothane, cardiac output 2.5 to 10 L/min, functional residual capacity 1.5 to 3.5 L, and relative vessel-rich group perfusion 67% to 85%. In each simulation, there were 3 general phases: anesthetic wash-in, obstruction and overshoot, and then slow redistribution. During the first 2 phases, there was a large gradient between the alveolar and vessel-rich group. Alveolar do not reflect vessel-rich group anesthetic levels until the late third phase. Time to spontaneous recovery varied between 35 and 749 seconds for sevoflurane and 13 and 222 seconds for halothane depending on the simulation parameters. Halothane had a faster time to spontaneous recovery because of the lower alveolar gradient and less overshoot of the vessel-rich group, not faster redistribution. Higher airway obstruction thresholds, decreased anesthetic induction, and higher cardiac output reduced time to spontaneous recovery. To a lesser effect, decreased functional residual capacity and the decreased relative vessel-rich groups' perfusion also reduced the time to spontaneous recovery. Spontaneous recovery after complete airway obstruction during inhaled induction is plausible, but the recovery time is highly variable and depends on the clinical and physiologic situation. These results emphasize that induction is a non-steady-state situation, thus effect-site anesthetic levels should be modeled in future research, not alveolar concentration. Finally, this study provides an example of using computer simulation to explore situations that are difficult to investigate clinically.

  16. Effect of chronic right ventricular apical pacing on left ventricular function.

    PubMed

    O'Keefe, James H; Abuissa, Hussam; Jones, Philip G; Thompson, Randall C; Bateman, Timothy M; McGhie, A Iain; Ramza, Brian M; Steinhaus, David M

    2005-03-15

    The determinants of change in left ventricular (LV) ejection fraction (EF) over time in patients with impaired LV function at baseline have not been clearly established. Using a nuclear database to assess changes in LV function over time, we included patients with a baseline LVEF of 25% to 40% on a gated single-photon emission computed tomographic study at rest, and only if a second gated single-photon emission computed tomographic study, performed approximately 18 months after the initial study, showed an improvement in LVEF at rest of > or =10 points or a decrease in LVEF at rest of > or =7 points. In all, 148 patients qualified for the EF increase group and 59 patients for the EF decrease group. LVEF on average increased from 33 +/- 4% to 51 +/- 8% in the EF increase group and decreased from 35 +/- 4% to 25 +/- 5% in the EF decrease group. The strongest multivariable predictor of improvement of LVEF was beta-blocker therapy (odds ratio 3.9, p = 0.002). The strongest independent predictor of LVEF decrease was the presence of a permanent right ventricular apical pacemaker (odds ratio 6.6, p = 0.002). Thus, this study identified beta-blocker therapy as the major independent predictor for improvement in LVEF of > or =10 points, whereas a permanent pacemaker (right ventricular apical pacing) was the strongest predictor of a LVEF decrease of > or =7 points.

  17. Multi-GPGPU Tsunami simulation at Toyama-bay

    NASA Astrophysics Data System (ADS)

    Furuyama, Shoichi; Ueda, Yuki

    2017-07-01

    Accelerated multi-General Purpose Graphics Processing Unit (GPGPU) computation of a Tsunami run-up simulation over a wide area (the whole of Toyama Bay, Japan) was achieved with a faster computation technique. Toyama Bay has active faults in the sea bed, so there is a high probability of earthquakes, and of Tsunami waves in the case of a huge earthquake; predicting the Tsunami run-up area is therefore important for reducing the damage the disaster would cause to residents. However, the simulation is a very hard task because of the computer resources it requires. A high-resolution calculation on the order of several metres is required for the run-up simulation because artificial structures on the ground, such as roads, buildings, and houses, are very small, while at the same time a huge area must be simulated; in the Toyama Bay case the area is 42 [km] × 15 [km]. When 5 [m] × 5 [m] computational cells are used for the simulation, over 26,000,000 computational cells are generated. A normal CPU desktop computer took about 10 hours for this calculation. Reducing this calculation time is an important problem for an immediate Tsunami run-up prediction system, and doing so will help protect the many residents of the coastal region. This study reduced the calculation time by using a multi-GPGPU system equipped with six NVIDIA TESLA K20X cards, with InfiniBand connections between the computer nodes and the MVAPICH library. As a result, the calculation was 5.16 times faster on six GPUs than on one GPU, corresponding to 86% parallel efficiency relative to linear speed-up.
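
    The quoted 86% figure is simply the measured speed-up divided by the number of GPUs, i.e. the fraction of ideal linear scaling achieved:

      speedup = 5.16                     # one-GPU runtime divided by six-GPU runtime
      gpus = 6
      print(f"parallel efficiency = {speedup / gpus:.0%}")   # ~86% of linear speed-up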

  18. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    PubMed

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
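
    The tutorial's examples use MATLAB and R; the same embarrassingly parallel pattern, independent replications farmed out to worker processes, looks like this in Python (the replication body is a placeholder model):

      import multiprocessing as mp
      import random

      def one_replication(seed):
          """One independent simulation replication; replications share no state,
          so the loop over seeds is embarrassingly parallel."""
          rng = random.Random(seed)
          return sum(rng.random() for _ in range(100_000))   # placeholder model

      if __name__ == "__main__":
          with mp.Pool() as pool:                 # defaults to one worker per core
              results = pool.map(one_replication, range(1000))
          print(len(results), "replications completed")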

  19. The Case for Modular Redundancy in Large-Scale High Performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L

    2009-01-01

    Recent investigations into resilience of large-scale high-performance computing (HPC) systems showed a continuous trend of decreasing reliability and availability. Newly installed systems have a lower mean-time to failure (MTTF) and a higher mean-time to recover (MTTR) than their predecessors. Modular redundancy is being used in many mission critical systems today to provide for resilience, such as for aerospace and command & control systems. The primary argument against modular redundancy for resilience in HPC has always been that the capability of a HPC system, and the respective return on investment, would be significantly reduced. We argue that modular redundancy can significantly increase compute node availability as it removes the impact of scale from single compute node MTTR. We further argue that single compute nodes can be much less reliable, and therefore less expensive, and still be highly available, if their MTTR/MTTF ratio is maintained.
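
    The availability argument can be made concrete with the steady-state relation A = MTTF / (MTTF + MTTR): replicating each compute node changes the probability that a logical node is unavailable from (1 - A) to roughly (1 - A)^r, so job availability degrades far more slowly with node count. The numbers below are illustrative, not the authors':

      def node_availability(mttf_hours, mttr_hours):
          """Steady-state availability of a single compute node."""
          return mttf_hours / (mttf_hours + mttr_hours)

      def job_availability(a_node, n_nodes, replicas=1):
          """Probability that every logical node has at least one healthy replica."""
          a_logical = 1.0 - (1.0 - a_node) ** replicas
          return a_logical ** n_nodes

      a = node_availability(mttf_hours=5000.0, mttr_hours=5.0)   # illustrative numbers
      print(job_availability(a, n_nodes=100_000, replicas=1))    # single nodes at scale: ~0
      print(job_availability(a, n_nodes=100_000, replicas=2))    # dual redundancy: ~0.9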

  20. Integration of active pauses and pattern of muscular activity during computer work.

    PubMed

    St-Onge, Nancy; Samani, Afshin; Madeleine, Pascal

    2017-09-01

    Submaximal isometric muscle contractions have been reported to increase variability of muscle activation during computer work; however, other types of active contractions may be more beneficial. Our objective was to determine which type of active pause vs. rest is more efficient in changing muscle activity pattern during a computer task. Asymptomatic regular computer users performed a standardised 20-min computer task four times, integrating a different type of pause: sub-maximal isometric contraction, dynamic contraction, postural exercise and rest. Surface electromyographic (SEMG) activity was recorded bilaterally from five neck/shoulder muscles. Root-mean-square decreased with isometric pauses in the cervical paraspinals, upper trapezius and middle trapezius, whereas it increased with rest. Variability in the pattern of muscular activity was not affected by any type of pause. Overall, no detrimental effects on the level of SEMG during active pauses were found suggesting that they could be implemented without a cost on activation level or variability. Practitioner Summary: We aimed to determine which type of active pause vs. rest is best in changing muscle activity pattern during a computer task. Asymptomatic computer users performed a standardised computer task integrating different types of pauses. Muscle activation decreased with isometric pauses in neck/shoulder muscles, suggesting their implementation during computer work.

  1. Improvements in floating point addition/subtraction operations

    DOEpatents

    Farmwald, P.M.

    1984-02-24

    Apparatus is described for decreasing the latency time associated with floating point addition and subtraction in a computer, using a novel bifurcated, pre-normalization/post-normalization approach that distinguishes between differences of floating point exponents.

  2. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO.

    PubMed

    Hernandez-Vicen, Juan; Martinez, Santiago; Garcia-Haro, Juan Miguel; Balaguer, Carlos

    2018-03-25

    New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perceptions. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortions on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing image-inherent errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a Fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tried out experimentally in robot transportation tasks in the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid.

  3. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO

    PubMed Central

    2018-01-01

    New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perceptions. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortions on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing image-inherent errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a Fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tried out experimentally in robot transportation tasks in the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid. PMID:29587392

  4. Kinetic modeling of cellulosic biomass to ethanol via simultaneous saccharification and fermentation: Part I. Accommodation of intermittent feeding and analysis of staged reactors.

    PubMed

    Shao, Xiongjun; Lynd, Lee; Wyman, Charles; Bakker, André

    2009-01-01

    The model of South et al. [South et al. (1995) Enzyme Microb Technol 17(9): 797-803] for simultaneous saccharification and fermentation of cellulosic biomass is extended and modified to accommodate intermittent feeding of substrate and enzyme, cascade reactor configurations, and to be more computationally efficient. A dynamic enzyme adsorption model is found to be much more computationally efficient than the equilibrium model used previously, thus increasing the feasibility of incorporating the kinetic model in a computational fluid dynamics framework in the future. For continuous or discretely fed reactors, it is necessary to use particle conversion in conversion-dependent hydrolysis rate laws rather than reactor conversion. Whereas reactor conversion decreases due to both reaction and exit of particles from the reactor, particle conversion decreases due to reaction only. Using the modified models, it is predicted that cellulose conversion increases with decreasing feeding frequency (feedings per residence time, f). A computationally efficient strategy for modeling cascade reactors involving a modified rate constant is shown to give results equivalent to an exhaustive approach considering the distribution of particles in each successive fermenter.

  5. Human short-term exposure to electromagnetic fields emitted by mobile phones decreases computer-assisted visual reaction time.

    PubMed

    Mortazavi, S M J; Rouintan, M S; Taeb, S; Dehghan, N; Ghaffarpanah, A A; Sadeghi, Z; Ghafouri, F

    2012-06-01

    The worldwide dramatic increase in mobile phone use has generated great concerns about the detrimental effects of microwave radiations emitted by these communication devices. Reaction time plays a critical role in performing tasks necessary to avoid hazards. As far as we know, this study is the first survey that reports decreased reaction time after exposure to electromagnetic fields generated by a high specific absorption rate mobile phone. It is also the first study in which previous history of mobile phone use is taken into account. The aim of this study was to assess both the acute and chronic effects of electromagnetic fields emitted by mobile phones on reaction time in university students. Visual reaction time (VRT) of young university students was recorded with a simple blind computer-assisted VRT test, before and after a 10 min real/sham exposure to electromagnetic fields of mobile phones. Participants were 160 right-handed university students aged 18-31. To assess the effect of chronic exposure, the reaction time in the sham-exposed phases was compared among low-level, moderate, and frequent users of mobile phones. The mean ± SD reaction times after real exposure and sham exposure were 286.78 ± 31.35 ms and 295.86 ± 32.17 ms (P < 0.001), respectively. The age of students did not significantly alter the reaction time in either talk or standby mode. The reaction time in both talk and standby mode was shorter in male students. The students' VRT was significantly affected by exposure to electromagnetic fields emitted by a mobile phone. It can be concluded that these exposures cause decreased reaction time, which may lead to a better response to different hazards. In this light, this phenomenon might decrease the chances of human errors and fatal accidents.

  6. Electromagnetic Navigational Bronchoscopy Reduces the Time Required for Localization and Resection of Lung Nodules.

    PubMed

    Bolton, William David; Cochran, Thomas; Ben-Or, Sharon; Stephenson, James E; Ellis, William; Hale, Allyson L; Binks, Andrew P

    The aims of the study were to evaluate electromagnetic navigational bronchoscopy (ENB) and computed tomography-guided placement as localization techniques for minimally invasive resection of small pulmonary nodules and determine whether electromagnetic navigational bronchoscopy is a safer and more effective method than computed tomography-guided localization. We performed a retrospective review of our thoracic surgery database to identify patients who underwent minimally invasive resection for a pulmonary mass and used either electromagnetic navigational bronchoscopy or computed tomography-guided localization techniques between July 2011 and May 2015. Three hundred eighty-three patients had a minimally invasive resection during our study period, 117 of whom underwent electromagnetic navigational bronchoscopy or computed tomography localization (electromagnetic navigational bronchoscopy = 81; computed tomography = 36). There was no significant difference between computed tomography and electromagnetic navigational bronchoscopy patient groups with regard to age, sex, race, pathology, nodule size, or location. Both computed tomography and electromagnetic navigational bronchoscopy were 100% successful at localizing the mass, and there was no difference in the type of definitive surgical resection (wedge, segmentectomy, or lobectomy) (P = 0.320). Postoperative complications occurred in 36% of all patients, but there were no complications related to the localization procedures. In terms of localization time and surgical time, there was no difference between groups. However, the down/wait time between localization and resection differed significantly (computed tomography = 189 minutes; electromagnetic navigational bronchoscopy = 27 minutes); this explains why the difference in total time (sum of localization, down, and surgery time) was significant (P < 0.001). We found electromagnetic navigational bronchoscopy to be as safe and effective as computed tomography-guided wire placement and to provide a significantly decreased down time between localization and surgical resection.

  7. Documentation of a numerical code for the simulation of variable density ground-water flow in three dimensions

    USGS Publications Warehouse

    Kuiper, L.K.

    1985-01-01

    A numerical code is documented for the simulation of variable density time dependent groundwater flow in three dimensions. The groundwater density, although variable with distance, is assumed to be constant in time. The Integrated Finite Difference grid elements in the code follow the geologic strata in the modeled area. If appropriate, the determination of hydraulic head in confining beds can be deleted to decrease computation time. The strongly implicit procedure (SIP), successive over-relaxation (SOR), and eight different preconditioned conjugate gradient (PCG) methods are used to solve the approximating equations. The use of the computer program that performs the calculations in the numerical code is emphasized. Detailed instructions are given for using the computer program, including input data formats. An example simulation and the Fortran listing of the program are included. (USGS)

  8. Prolonged Screen Viewing Times and Sociodemographic Factors among Pregnant Women: A Cross-Sectional Survey in China

    PubMed Central

    Liu, Dengyuan; Rao, Yunshuang; Zeng, Huan; Zhang, Fan; Wang, Lu; Xie, Yaojie; Sharma, Manoj; Zhao, Yong

    2018-01-01

    Objectives: This study aimed to assess the prevalence of prolonged television, computer, and mobile phone viewing times and to examine related sociodemographic factors among Chinese pregnant women. Methods: In this study, a cross-sectional survey was implemented among 2400 Chinese pregnant women in 16 hospitals of 5 provinces from June to August 2015, with a response rate of 97.76%. We excluded women with serious complications and cognitive disorders. The women were asked about their television, computer, and mobile phone viewing during pregnancy. Prolonged television or computer viewing was defined as spending more than two hours per day on television or computer viewing. Prolonged mobile phone viewing was defined as spending more than one hour per day on a mobile phone. Results: Among 2345 pregnant women, about 25.1% reported prolonged television viewing, 20.6% reported prolonged computer viewing, and 62.6% reported prolonged mobile phone viewing. Pregnant women with long mobile phone viewing times were likely to have long TV (Estimate = 0.080, Standard Error (SE) = 0.016, p < 0.001) and computer viewing times (Estimate = 0.053, SE = 0.022, p = 0.015). Pregnant women with long TV (Estimate = 0.134, SE = 0.027, p < 0.001) and long computer viewing times (Estimate = 0.049, SE = 0.020, p = 0.015) were likely to have long mobile phone viewing times. Pregnant women with long TV viewing times were less likely to have long computer viewing times (Estimate = −0.032, SE = 0.015, p = 0.035), and pregnant women with long computer viewing times were less likely to have long TV viewing times (Estimate = −0.059, SE = 0.028, p = 0.035). Pregnant women in their second pregnancy were less likely to have prolonged computer viewing times than those in their first pregnancy (Odds Ratio (OR) 0.56, 95% Confidence Interval (CI) 0.42–0.74). Pregnant women in their second pregnancy were more likely to have prolonged mobile phone viewing times than those in their first pregnancy (OR 1.25, 95% CI 1.01–1.55). Conclusions: Prolonged TV, computer, and mobile phone viewing times were common among pregnant women in both their first and second pregnancies. This study preliminarily explored the relationship between sociodemographic factors and prolonged screen time to provide some indication for future interventions aimed at decreasing screen-viewing times during pregnancy in China. PMID:29495439

  9. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  10. A computational model for telomere-dependent cell-replicative aging.

    PubMed

    Portugal, R D; Land, M G P; Svaiter, B F

    2008-01-01

    Telomere shortening provides a molecular basis for the Hayflick limit. Recent data suggest that telomere shortening also influences the mitotic rate. We propose a stochastic growth model of this phenomenon, assuming that cell division in each time interval is a random process whose probability decreases linearly with telomere shortening. Computer simulations of the proposed stochastic telomere-regulated model provide a good approximation of the qualitative growth of cultured human mesenchymal stem cells.
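
    A minimal computational sketch of the kind of stochastic rule described above (all parameter values here are illustrative assumptions, not those fitted by the authors): each cell divides in a time interval with a probability that falls linearly as its telomere shortens, so growth slows as the population approaches the Hayflick limit.

        import random

        def simulate_population(n0=10, steps=30, telomere0=10, p_max=0.3, seed=1):
            # Toy telomere-regulated growth (illustrative parameters only):
            # a cell divides in a step with probability proportional to its
            # remaining telomere length; both daughters carry a shortened telomere.
            rng = random.Random(seed)
            cells = [telomere0] * n0
            history = [len(cells)]
            for _ in range(steps):
                next_gen = []
                for t in cells:
                    if t > 0 and rng.random() < p_max * t / telomere0:
                        next_gen.extend([t - 1, t - 1])  # division with telomere loss
                    else:
                        next_gen.append(t)               # no division this interval
                cells = next_gen
                history.append(len(cells))
            return history

        print(simulate_population())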

  11. Longitudinal changes in physical activity and sedentary time in adults around retirement age: what is the moderating role of retirement status, gender and educational level?

    PubMed

    Van Dyck, Delfien; Cardon, Greet; De Bourdeaudhuij, Ilse

    2016-10-28

    The start of retirement is an important stage in an (older) adult's life and can affect physical activity (PA) and/or sedentary behaviors, making it an ideal period to implement health interventions. To identify the optimal timing of such interventions it is important to determine how PA and sedentary behaviors change not only when making the transition to retirement, but also during the first years of retirement. The main study aim was to examine whether PA and sedentary behaviors change differently in retiring adults compared with recently retired adults. A second aim was to examine potential moderating effects of gender and educational level. A longitudinal study was conducted in Ghent, Belgium. Baseline measurements took place in 2012-2013 and follow-up data were collected 2 years later. In total, 446 adults provided complete data at both time points. Of the participants, 105 adults were not retired at baseline but retired between baseline and follow-up (i.e. retiring) and 341 were already retired at baseline (i.e. recently retired). All participants completed a questionnaire on PA, sedentary behaviors, socio-demographic factors and physical functioning. Repeated measures MANOVAs were conducted in SPSS 22.0 to analyze the data. Leisure-time cycling increased over time in retiring adults, but decreased in recently retired adults (p < 0.01). (Voluntary) work-related walking and moderate-to-vigorous PA decreased strongly in retiring adults, while slight increases were found in recently retired adults (p < 0.001 and p < 0.01). Passive transport decreased more strongly in recently retired than in retiring adults (p < 0.05), and computer use increased more in retiring adults than in the recently retired group (p < 0.001). Low-educated recently retired adults had the strongest decrease in walking for transport (p < 0.05) and the strongest increases in TV viewing time (p < 0.01) and computer use (p < 0.10). For gender, almost no moderating effects were found. Future interventions should focus on PA and/or specific sedentary behaviors in retiring adults, but should definitely include long-term follow-up, as recently retired adults seem to be prone to lapse into an unhealthy lifestyle. Specific attention should be paid to low-educated adults as they are particularly susceptible to a decrease in PA and increases in TV viewing time and computer use.

  12. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.

  13. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  14. Real-time dispatching modelling for trucks with different capacities in open pit mines / Modelowanie w czasie rzeczywistym przewozów ciężarówek o różnej ładowności w kopalni odkrywkowej

    NASA Astrophysics Data System (ADS)

    Ahangaran, Daryoush Kaveh; Yasrebi, Amir Bijan; Wetherelt, Andy; Foster, Patrick

    2012-10-01

    The application of fully automated systems for truck dispatching plays a major role in decreasing transportation costs, which often represent the majority of the costs of open pit mining. Consequently, the application of a truck dispatching system has become fundamentally important in most of the world's open pit mines. Recent experience indicates that, by decreasing a truck's travelling time and the waiting time of its associated shovel, the application of a truck dispatching system considerably improves the rate of production. Computer-based truck dispatching systems using advanced algorithms and accurate software are examples of these innovations. Developing an algorithm for a computer-based program appropriate to a specific mine's conditions is considered one of the most important activities in connection with computer-based dispatching in open pit mines. In this paper the changing trend of programming and dispatching control algorithms and automation conditions will be discussed. Furthermore, since the transportation fleet of most mines uses trucks with different capacities, innovative methods, operational optimisation techniques and the best possible methods for developing the required algorithm for real-time dispatching are selected by conducting research on mathematical planning methods. Finally, a real-time dispatching model compatible with the requirements of trucks with different capacities is developed by using two techniques: flow networks and integer programming.

  15. Improved safety of retinal photocoagulation with a shaped beam and modulated pulse

    NASA Astrophysics Data System (ADS)

    Sramek, Christopher; Brown, Jefferson; Paulus, Yannis M.; Nomoto, Hiroyuki; Palanker, Daniel

    2010-02-01

    Shorter pulse durations help confine thermal damage during retinal photocoagulation, decrease treatment time and minimize pain. However, the safe therapeutic window (the ratio of threshold powers for rupture and mild coagulation) decreases with shorter exposures. A ring-shaped beam enables safer photocoagulation than conventional beams by reducing the maximum temperature in the center of the spot. Similarly, temporal modulation of the pulse that decreases its power over time improves safety by maintaining a constant temperature for a significant portion of the pulse. Optimization of the beam and pulse shapes was performed using a computational model. In vivo experiments were performed to verify the predicted improvement. With each of these approaches, the pulse duration can be decreased by a factor of two, from 20 ms down to 10 ms, while maintaining the same therapeutic window.

  16. Two-Dimensional Sequential and Concurrent Finite Element Analysis of Unstiffened and Stiffened Aluminum and Composite Panels with Hole

    NASA Technical Reports Server (NTRS)

    Razzaq, Zia; Prasad, Venkatesh

    1988-01-01

    The results of a detailed investigation of the distribution of stresses in aluminum and composite panels subjected to uniform end shortening are presented. The focus problem is a rectangular panel with two longitudinal stiffeners, and an inner stiffener discontinuous at a central hole in the panel. The influence of the stiffeners on the stresses is evaluated through a two-dimensional global finite element analysis in the absence or presence of the hole. Contrary to physical intuition, it is found that the maximum stresses from the global analysis for both stiffened aluminum and composite panels are greater than the corresponding stresses for the unstiffened panels. The inner discontinuous stiffener causes a greater increase in stresses than the reduction provided by the two outer stiffeners. A detailed layer-by-layer study of stresses around the hole is also presented for both unstiffened and stiffened composite panels. A parallel equation solver is used for the global system of equations since the computational time is far less than that of a sequential scheme. A parallel Choleski method with up to 16 processors is used on the Flex/32 Multicomputer at NASA Langley Research Center. The parallel computing results are summarized and include the computational times, speedups, bandwidths, and their inter-relationships for the panel problems. It is found that the computational time for the Choleski method decreases with a decrease in bandwidth, and better speedups result as the bandwidth increases.

  17. Probability of survival during accidental immersion in cold water.

    PubMed

    Wissler, Eugene H

    2003-01-01

    Estimating the probability of survival during accidental immersion in cold water presents formidable challenges for both theoreticians and empiricists. A number of theoretical models have been developed assuming that death occurs when the central body temperature, computed using a mathematical model, falls to a certain level. This paper describes a different theoretical approach to estimating the probability of survival. The human thermal model developed by Wissler is used to compute the central temperature during immersion in cold water. Simultaneously, a survival probability function is computed by solving a differential equation that defines how the probability of survival decreases with increasing time. The survival equation assumes that the probability of occurrence of a fatal event increases as the victim's central temperature decreases. Generally accepted views of the medical consequences of hypothermia and published reports of various accidents provide information useful for defining a "fatality function" that increases exponentially with decreasing central temperature. The particular function suggested in this paper yields a relationship between the immersion time for a 10% probability of survival and water temperature that agrees very well with Molnar's empirical observations based on World War II data. The method presented in this paper circumvents a serious difficulty with most previous models: that one's ability to survive immersion in cold water is determined almost exclusively by the ability to maintain a high level of shivering metabolism.
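
    A minimal numerical sketch of the coupling described above, with a purely hypothetical fatality-rate function and cooling curve (the Wissler thermal model and its fitted constants are not reproduced here): the survival probability P(t) obeys dP/dt = -h(Tc(t)) P, where the fatality rate h grows exponentially as the central temperature Tc falls.

        import math

        def survival_probability(hours=6.0, dt_h=0.01,
                                 t_core0=37.0, cooling_rate=1.5,
                                 h0=1e-4, k=0.6, t_ref=35.0):
            # Hypothetical illustration: core temperature falls linearly at
            # `cooling_rate` deg C per hour, and the fatality rate h(Tc) rises
            # exponentially as Tc drops below t_ref.  dP/dt = -h(Tc) * P.
            p, t = 1.0, 0.0
            while t < hours:
                t_core = max(t_core0 - cooling_rate * t, 20.0)
                h = h0 * math.exp(k * max(t_ref - t_core, 0.0))
                p *= math.exp(-h * dt_h)   # exact step for piecewise-constant h
                t += dt_h
            return p

        print(round(survival_probability(), 3))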

  18. Computational cognitive modeling of the temporal dynamics of fatigue from sleep loss.

    PubMed

    Walsh, Matthew M; Gunzelmann, Glenn; Van Dongen, Hans P A

    2017-12-01

    Computational models have become common tools in psychology. They provide quantitative instantiations of theories that seek to explain the functioning of the human mind. In this paper, we focus on identifying deep theoretical similarities between two very different models. Both models are concerned with how fatigue from sleep loss impacts cognitive processing. The first is based on the diffusion model and posits that fatigue decreases the drift rate of the diffusion process. The second is based on the Adaptive Control of Thought - Rational (ACT-R) cognitive architecture and posits that fatigue decreases the utility of candidate actions leading to microlapses in cognitive processing. A biomathematical model of fatigue is used to control drift rate in the first account and utility in the second. We investigated the predicted response time distributions of these two integrated computational cognitive models for performance on a psychomotor vigilance test under conditions of total sleep deprivation, simulated shift work, and sustained sleep restriction. The models generated equivalent predictions of response time distributions with excellent goodness-of-fit to the human data. More importantly, although the accounts involve different modeling approaches and levels of abstraction, they represent the effects of fatigue in a functionally equivalent way: in both, fatigue decreases the signal-to-noise ratio in decision processes and decreases response inhibition. This convergence suggests that sleep loss impairs psychomotor vigilance performance through degradation of the quality of cognitive processing, which provides a foundation for systematic investigation of the effects of sleep loss on other aspects of cognition. Our findings illustrate the value of treating different modeling formalisms as vehicles for discovery.
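
    A minimal sketch of the first account's key idea, with made-up parameter values: fatigue is represented as a reduction in the drift rate of a diffusion (noisy evidence accumulation) process, which lengthens and spreads the simulated response times.

        import random

        def simulate_rt(drift, noise=1.0, threshold=1.0, dt=0.001, max_t=5.0, rng=None):
            # One trial of a one-boundary diffusion process: evidence accumulates
            # with the given drift plus Gaussian noise until it crosses the threshold.
            rng = rng or random.Random()
            x, t = 0.0, 0.0
            while x < threshold and t < max_t:
                x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
                t += dt
            return t

        rng = random.Random(0)
        rested = [simulate_rt(drift=2.0, rng=rng) for _ in range(200)]
        fatigued = [simulate_rt(drift=0.8, rng=rng) for _ in range(200)]  # lower drift
        print(sum(rested) / len(rested), sum(fatigued) / len(fatigued))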

  19. Fourth order scheme for wavelet based solution of Black-Scholes equation

    NASA Astrophysics Data System (ADS)

    Finěk, Václav

    2017-12-01

    The present paper is devoted to the numerical solution of the Black-Scholes equation for pricing European options. We apply the Crank-Nicolson scheme with Richardson extrapolation for time discretization and Hermite cubic spline wavelets with four vanishing moments for space discretization. This scheme is fourth-order accurate in both time and space. Computational results indicate that the Crank-Nicolson scheme with Richardson extrapolation significantly decreases the amount of computational work. We also show numerically that the optimal convergence rate for this scheme is obtained without a startup procedure, despite the data irregularities in the model.
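
    A minimal sketch of Richardson extrapolation in time on a model problem (the scalar test equation y' = lambda*y, not the authors' wavelet discretization): a second-order Crank-Nicolson step of size dt is combined with two steps of size dt/2 as (4*y_half - y_full)/3, which cancels the leading error term and raises the observed temporal order from two to four.

        import math

        def cn_step(y, lam, dt):
            # One Crank-Nicolson (trapezoidal) step for y' = lam * y.
            return y * (1 + 0.5 * lam * dt) / (1 - 0.5 * lam * dt)

        def cn_richardson(y0, lam, t_end, n_steps):
            # Per step: combine one full step and two half steps,
            # (4*y_half - y_full) / 3, to cancel the O(dt^2) error of Crank-Nicolson.
            dt = t_end / n_steps
            y = y0
            for _ in range(n_steps):
                y_full = cn_step(y, lam, dt)
                y_half = cn_step(cn_step(y, lam, dt / 2), lam, dt / 2)
                y = (4.0 * y_half - y_full) / 3.0
            return y

        exact = math.exp(-1.0)
        for n in (10, 20, 40):
            print(n, abs(cn_richardson(1.0, -1.0, 1.0, n) - exact))
            # errors shrink by roughly 16x per doubling of n (fourth order)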

  20. Computation and projection of spiral wave trajectories during atrial fibrillation: a computational study.

    PubMed

    Pashaei, Ali; Bayer, Jason; Meillet, Valentin; Dubois, Rémi; Vigmond, Edward

    2015-03-01

    To show how atrial fibrillation rotor activity on the heart surface manifests as phase on the torso, fibrillation was induced on a geometrically accurate computer model of the human atria. The Hilbert transform, time embedding, and filament detection were compared. Electrical activity on the epicardium was used to compute potentials on different surfaces from the atria to the torso. The Hilbert transform produces erroneous phase when pacing for longer than the action potential duration. The number of phase singularities, frequency content, and the dominant frequency decreased with distance from the heart, except for the convex hull. Copyright © 2015 Elsevier Inc. All rights reserved.
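
    A minimal sketch of how instantaneous phase is commonly extracted with the Hilbert transform (a toy signal only; the paper's rotor-mapping and filament-detection pipeline is far more involved): the analytic signal of a zero-mean trace gives a phase angle, and phase singularities can then be searched for in the resulting phase maps.

        import numpy as np
        from scipy.signal import hilbert

        fs = 1000.0                          # sampling rate in Hz (assumed)
        t = np.arange(0, 2.0, 1.0 / fs)
        # Toy "electrogram": a 6 Hz oscillation with a little noise.
        v = np.sin(2 * np.pi * 6.0 * t) + 0.05 * np.random.randn(t.size)

        v_zero_mean = v - v.mean()           # remove the DC offset first
        analytic = hilbert(v_zero_mean)      # analytic signal: v + i * H{v}
        phase = np.angle(analytic)           # instantaneous phase in (-pi, pi]
        print(phase[:5])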

  1. Application-oriented offloading in heterogeneous networks for mobile cloud computing

    NASA Astrophysics Data System (ADS)

    Tseng, Fan-Hsun; Cho, Hsin-Hung; Chang, Kai-Di; Li, Jheng-Cong; Shih, Timothy K.

    2018-04-01

    Nowadays, Internet applications have become so complicated that a mobile device needs more computing resources to achieve shorter execution times, but it is restricted by its limited battery capacity. Mobile cloud computing (MCC) has emerged to tackle the finite resource problem of mobile devices. MCC offloads the tasks and jobs of mobile devices to cloud and fog environments by using an offloading scheme. It is vital for MCC to decide which tasks should be offloaded and how to offload them efficiently. In this paper, we formulate the offloading problem between the mobile device and the cloud data center and propose two application-oriented algorithms for minimum execution time, i.e. the Minimum Offloading Time for Mobile device (MOTM) algorithm and the Minimum Execution Time for Cloud data center (METC) algorithm. The MOTM algorithm minimizes offloading time by selecting appropriate offloading links based on application categories. The METC algorithm minimizes execution time in the cloud data center by selecting virtual and physical machines matching the resource requirements of the applications. Simulation results show that the proposed mechanism not only minimizes total execution time for mobile devices but also decreases their energy consumption.
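
    A minimal sketch of the general offloading decision (a generic illustration under assumed link parameters, not the MOTM or METC algorithms themselves): estimate, for each candidate link, the time to ship the task's data plus the remote execution time, and offload only if the best candidate beats local execution.

        from dataclasses import dataclass

        @dataclass
        class Link:
            name: str
            bandwidth_mbps: float   # uplink bandwidth
            rtt_s: float            # round-trip latency

        def best_offload(data_mb, remote_exec_s, links):
            # Transfer time + latency + remote execution time for each link.
            return min((8 * data_mb / l.bandwidth_mbps + l.rtt_s + remote_exec_s, l.name)
                       for l in links)

        links = [Link("wifi", 40.0, 0.02), Link("lte", 12.0, 0.06)]
        local_exec_s = 4.0
        offload_s, link = best_offload(data_mb=5.0, remote_exec_s=0.8, links=links)
        decision = f"offload via {link}" if offload_s < local_exec_s else "run locally"
        print(decision, round(offload_s, 2))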

  2. Memory as Perception of the Past: Compressed Time in Mind and Brain.

    PubMed

    Howard, Marc W

    2018-02-01

    In the visual system retinal space is compressed such that acuity decreases further from the fovea. Different forms of memory may rely on a compressed representation of time, manifested as decreased accuracy for events that happened further in the past. Neurophysiologically, "time cells" show receptive fields in time. Analogous to the compression of visual space, time cells show less acuity for events further in the past. Behavioral evidence suggests memory can be accessed by scanning a compressed temporal representation, analogous to visual search. This suggests a common computational language for visual attention and memory retrieval. In this view, time functions like a scaffolding that organizes memories in much the same way that retinal space functions like a scaffolding for visual perception. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. [A fast iterative algorithm for adaptive histogram equalization].

    PubMed

    Cao, X; Liu, X; Deng, Z; Jiang, D; Zheng, C

    1997-01-01

    In this paper, we propose an iterative algorithm called FAHE, which is based on the relationship between the current local histogram and the one before the sliding window moves. Compared with basic AHE, the computing time of FAHE is decreased from 5 hours to 4 minutes on a 486dx/33-compatible computer when using a 65 x 65 sliding window on a 512 x 512 image with an 8-bit gray-level range.
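
    A minimal sketch of the incremental idea behind sliding-window histogram methods (an illustration of the general technique rather than the exact FAHE update rule): when the window slides one column to the right, only the leaving and entering columns change the local histogram, so it does not have to be rebuilt from scratch at every pixel.

        import numpy as np

        def sliding_histograms(img, win):
            # Yield (row, col, hist) for each window position along a row,
            # updating the 256-bin histogram incrementally as the window slides.
            h, w = img.shape
            r = win // 2
            for row in range(r, h - r):
                hist = np.zeros(256, dtype=np.int32)
                for v in img[row - r:row + r + 1, 0:win].ravel():
                    hist[v] += 1                 # build the first window in full
                yield row, r, hist.copy()
                for col in range(r + 1, w - r):  # then slide and update
                    for v in img[row - r:row + r + 1, col - r - 1]:
                        hist[v] -= 1             # column leaving the window
                    for v in img[row - r:row + r + 1, col + r]:
                        hist[v] += 1             # column entering the window
                    yield row, col, hist.copy()

        img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
        row, col, hist = next(sliding_histograms(img, 9))
        assert hist.sum() == 9 * 9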

  4. Emerging technology in surgical education: combining real-time augmented reality and wearable computing devices.

    PubMed

    Ponce, Brent A; Menendez, Mariano E; Oladeji, Lasun O; Fryberger, Charles T; Dantuluri, Phani K

    2014-11-01

    The authors describe the first surgical case adopting the combination of real-time augmented reality and wearable computing devices such as Google Glass (Google Inc, Mountain View, California). A 66-year-old man presented to their institution for a total shoulder replacement after 5 years of progressive right shoulder pain and decreased range of motion. Throughout the surgical procedure, Google Glass was integrated with the Virtual Interactive Presence and Augmented Reality system (University of Alabama at Birmingham, Birmingham, Alabama), enabling the local surgeon to interact with the remote surgeon within the local surgical field. Surgery was well tolerated by the patient and early surgical results were encouraging, with an improvement of shoulder pain and greater range of motion. The combination of real-time augmented reality and wearable computing devices such as Google Glass holds much promise in the field of surgery. Copyright 2014, SLACK Incorporated.

  5. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.

    Cloud computing is a promising technology for managing and improving the utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving, there is no need to operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on in heavy load cases to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is the significant server setup cost and activation time. For better energy efficiency, a cloud computing system should not react to an instantaneous increase or instantaneous decrease of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows a number of performance measures to be estimated.
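
    A minimal sketch of threshold-based scaling with hysteresis (the thresholds are illustrative placeholders; the paper's queueing analysis is not reproduced): a server is added only when the observed queue exceeds a high-water mark and removed only when it falls below a lower one, so brief load spikes or dips do not trigger switching.

        def scale_with_hysteresis(queue_lengths, up_threshold=20, down_threshold=5,
                                  servers=1, min_servers=1, max_servers=8):
            # Return the number of active servers after each observation.
            history = []
            for q in queue_lengths:
                if q > up_threshold and servers < max_servers:
                    servers += 1        # heavy load: switch one more server on
                elif q < down_threshold and servers > min_servers:
                    servers -= 1        # light load: switch one server off
                # between the thresholds nothing changes; that gap is the hysteresis
                history.append(servers)
            return history

        print(scale_with_hysteresis([3, 12, 25, 30, 18, 9, 4, 2]))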

  6. Aerodynamic shape optimization using preconditioned conjugate gradient methods

    NASA Technical Reports Server (NTRS)

    Burgreen, Greg W.; Baysal, Oktay

    1993-01-01

    In an effort to further improve upon the latest advancements made in aerodynamic shape optimization procedures, a systematic study is performed to examine several current solution methodologies as applied to various aspects of the optimization procedure. It is demonstrated that preconditioned conjugate gradient-like methodologies dramatically decrease the computational efforts required for such procedures. The design problem investigated is the shape optimization of the upper and lower surfaces of an initially symmetric (NACA-012) airfoil in inviscid transonic flow and at zero degree angle-of-attack. The complete surface shape is represented using a Bezier-Bernstein polynomial. The present optimization method then automatically obtains supercritical airfoil shapes over a variety of freestream Mach numbers. Furthermore, the best optimization strategy examined resulted in a factor of 8 decrease in computational time as well as a factor of 4 decrease in memory over the most efficient strategies in current use.

  7. Computer-assisted navigation in orthopedic surgery.

    PubMed

    Mavrogenis, Andreas F; Savvidou, Olga D; Mimidis, George; Papanastasiou, John; Koulalis, Dimitrios; Demertzis, Nikolaos; Papagelopoulos, Panayiotis J

    2013-08-01

    Computer-assisted navigation has a role in some orthopedic procedures. It allows the surgeons to obtain real-time feedback and offers the potential to decrease intra-operative errors and optimize the surgical result. Computer-assisted navigation systems can be active or passive. Active navigation systems can either perform surgical tasks or prohibit the surgeon from moving past a predefined zone. Passive navigation systems provide intraoperative information, which is displayed on a monitor, but the surgeon is free to make any decisions he or she deems necessary. This article reviews the available types of computer-assisted navigation, summarizes the clinical applications and reviews the results of related series using navigation, and informs surgeons of the disadvantages and pitfalls of computer-assisted navigation in orthopedic surgery. Copyright 2013, SLACK Incorporated.

  8. Optical signal processing using photonic reservoir computing

    NASA Astrophysics Data System (ADS)

    Salehi, Mohammad Reza; Dehyadegari, Louiza

    2014-10-01

    As a new approach to recognition and classification problems, photonic reservoir computing has such advantages as parallel information processing, power efficiency and high speed. In this paper, a photonic structure is proposed for reservoir computing and investigated using a simple, yet non-partial, noisy time series prediction task. This study includes the application of a suitable topology with self-feedback in a network of SOAs - which lends the system a strong memory - and leads to the adjustment of adequate parameters, resulting in perfect recognition accuracy (100%) for noise-free time series, a 3% improvement over previous results. For the classification of noisy time series, the accuracy showed a 4% increase and amounted to 96%. Furthermore, an analytical approach to solving the rate equations is suggested, which leads to a substantial decrease in simulation time - an important parameter in the classification of large signals such as speech - and yields better results than previous works.

  9. Image Registration of Cone-Beam Computer Tomography and Preprocedural Computer Tomography Aids in Localization of Adrenal Veins and Decreasing Radiation Dose in Adrenal Vein Sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busser, Wendy M. H., E-mail: wendy.busser@radboudumc.nl; Arntz, Mark J.; Jenniskens, Sjoerd F. M.

    2015-08-15

    Purpose: We assessed whether image registration of cone-beam computed tomography (CBCT) and contrast-enhanced CT (CE-CT) images indicating the locations of the adrenal veins can aid in increasing the success rate of first-attempt adrenal vein sampling (AVS) and therefore decreasing patient radiation dose. Materials and Methods: CBCT scans were acquired in the interventional suite (Philips Allura Xper FD20) and rigidly registered to the vertebra in previously acquired CE-CT. Adrenal vein locations were marked on the CT image and superimposed with live fluoroscopy and digital-subtraction angiography (DSA) to guide the AVS. Seventeen first attempts at AVS were performed with image registration and retrospectively compared with 15 first attempts without image registration performed earlier by the same 2 interventional radiologists. First-attempt AVS was considered successful when both adrenal vein samples showed representative cortisol levels. Sampling time, dose-area product (DAP), number of DSA runs, fluoroscopy time, and skin dose were recorded. Results: Without image registration, the first attempt at sampling was successful in 8 of 15 procedures, indicating a success rate of 53.3 %. This increased to 76.5 % (13 of 17) by adding CBCT and CE-CT image registration to AVS procedures (p = 0.266). DAP values (p = 0.001) and DSA runs (p = 0.026) decreased significantly by adding image registration guidance. Sampling and fluoroscopy times and skin dose showed no significant changes. Conclusion: Guidance based on registration of CBCT and previously acquired diagnostic CE-CT can aid in enhancing localization of the adrenal veins, thereby increasing the success rate of first-attempt AVS with a significant decrease in the number of DSA runs used and, consequently, the radiation dose required.

  10. Optimization and large scale computation of an entropy-based moment closure

    NASA Astrophysics Data System (ADS)

    Kristopher Garrett, C.; Hauck, Cory; Hill, Judith

    2015-12-01

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  11. Optimization and large scale computation of an entropy-based moment closure

    DOE PAGES

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

    We present computational advances and results in the implementation of an entropy-based moment closure, M N, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as P N, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. Lastly, these results show, in particular, load balancing issues in scaling the M N algorithm that do not appear for the P N algorithm. We also observe that in weak scaling tests, the ratio in time to solution of M N to P N decreases.

  12. Conditionally Active Min-Max Limit Regulators

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay (Inventor); May, Ryan D. (Inventor)

    2017-01-01

    A conditionally active limit regulator may be used to regulate the performance of engines or other limit regulated systems. A computing system may determine whether a variable to be limited is within a predetermined range of a limit value as a first condition. The computing system may also determine whether a current rate of increase or decrease of the variable to be limited is great enough that the variable will reach the limit within a predetermined period of time with no other changes as a second condition. When both conditions are true, the computing system may activate a simulated or physical limit regulator.
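
    A minimal sketch of the two activation conditions described above (the margin and look-ahead horizon are placeholder values): the regulator is armed only when the limited variable is already close to its limit and its current rate of change would carry it to the limit within the look-ahead time.

        def limit_regulator_active(x, x_dot, limit, margin=5.0, horizon_s=2.0,
                                   upper_limit=True):
            # Condition 1: the variable is within `margin` of the limit value.
            near_limit = abs(limit - x) <= margin
            # Condition 2: at the current rate, the variable reaches the limit
            # within `horizon_s` seconds (only rates moving toward the limit count).
            if upper_limit:
                will_reach = x_dot > 0 and (limit - x) / x_dot <= horizon_s
            else:
                will_reach = x_dot < 0 and (limit - x) / x_dot <= horizon_s
            return near_limit and will_reach

        print(limit_regulator_active(x=96.0, x_dot=4.0, limit=100.0))  # True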

  13. A multi-GPU real-time dose simulation software framework for lung radiotherapy.

    PubMed

    Santhanam, A P; Min, Y; Neelakkantan, H; Papp, N; Meeks, S L; Kupelian, P A

    2012-09-01

    Medical simulation frameworks facilitate both the preoperative and postoperative analysis of the patient's pathophysical condition. Of particular importance is the simulation of radiation dose delivery for real-time radiotherapy monitoring and retrospective analyses of the patient's treatment. In this paper, a software framework tailored for the development of simulation-based real-time radiation dose monitoring medical applications is discussed. A multi-GPU-based computational framework coupled with inter-process communication methods is introduced for simulating the radiation dose delivery on a deformable 3D volumetric lung model and its real-time visualization. The model deformation and the corresponding dose calculation are allocated among the GPUs in a task-specific manner and are performed in a pipelined manner. Radiation dose calculations are computed on two different GPU hardware architectures. The integration of this computational framework with a front-end software layer and back-end patient database repository is also discussed. Real-time simulation of the dose delivered is achieved once every 120 ms using the proposed framework. With a linear increase in the number of GPU cores, the computational time of the simulation decreased linearly. The inter-process communication time also improved with an increase in the hardware memory. Variations in the delivered dose and computational speedup for variations in the data dimensions are investigated using D70 and D90 as well as gEUD as metrics for a set of 14 patients. Computational speed-up increased with an increase in the beam dimensions when compared with CPU-based commercial software, while the error in the dose calculation was <1%. Our analyses show that the framework applied to deformable lung model-based radiotherapy is an effective tool for performing both real-time and retrospective analyses.

  14. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    PubMed Central

    Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
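
    A minimal sketch of the standard recursive (serial) computation that the hardware algorithms above decompose for row-parallel evaluation: each integral-image entry is formed from one image pixel and three previously computed neighbours, after which any rectangular sum costs only four lookups.

        import numpy as np

        def integral_image(img):
            # ii(y, x) = img(y, x) + ii(y-1, x) + ii(y, x-1) - ii(y-1, x-1)
            h, w = img.shape
            ii = np.zeros((h, w), dtype=np.int64)
            for y in range(h):
                for x in range(w):
                    ii[y, x] = (int(img[y, x])
                                + (ii[y - 1, x] if y > 0 else 0)
                                + (ii[y, x - 1] if x > 0 else 0)
                                - (ii[y - 1, x - 1] if y > 0 and x > 0 else 0))
            return ii

        def rect_sum(ii, top, left, bottom, right):
            # Sum over img[top:bottom+1, left:right+1] using four lookups.
            a = ii[top - 1, left - 1] if top > 0 and left > 0 else 0
            b = ii[top - 1, right] if top > 0 else 0
            c = ii[bottom, left - 1] if left > 0 else 0
            return ii[bottom, right] - b - c + a

        img = np.arange(16, dtype=np.uint8).reshape(4, 4)
        assert rect_sum(integral_image(img), 1, 1, 3, 3) == img[1:4, 1:4].sum()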

  15. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    PubMed

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  16. The semantic distance task: Quantifying semantic distance with semantic network path length.

    PubMed

    Kenett, Yoed N; Levi, Effi; Anaki, David; Faust, Miriam

    2017-09-01

    Semantic distance is a determining factor in cognitive processes, such as semantic priming, operating upon semantic memory. The main computational approach to computing semantic distance is through latent semantic analysis (LSA). However, objections have been raised against this approach, mainly concerning its failure to predict semantic priming. We propose a novel approach to computing semantic distance, based on network science methodology. Path length in a semantic network represents the number of steps needed to traverse from one word in the network to another. We examine whether path length can be used as a measure of semantic distance, by investigating how path length affects performance in a semantic relatedness judgment task and recall from memory. Our results show a differential effect on performance: for word pairs separated by up to 4 steps, participants exhibit an increase in reaction time (RT) and a decrease in the percentage of word pairs judged as related. From 4 steps onward, participants exhibit a significant decrease in RT and the word pairs are predominantly judged as unrelated. Furthermore, we show that as the path length between word pairs increases, success in free and cued recall decreases. Finally, we demonstrate how our measure outperforms computational methods for measuring semantic distance (LSA and positive pointwise mutual information) in predicting participants' RT and subjective judgments of semantic strength. Thus, we provide a computational alternative for computing semantic distance. Furthermore, this approach addresses key issues in cognitive theory, namely the breadth of the spreading activation process and the effect of semantic distance on memory retrieval. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
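
    A minimal sketch of the path-length measure on a tiny hypothetical word network (the study's semantic network, built from association norms, is not reproduced here): the semantic distance between two words is taken as the number of edges on the shortest path connecting them.

        import networkx as nx

        # Tiny hypothetical undirected semantic network.
        edges = [("cat", "dog"), ("dog", "bone"), ("bone", "skeleton"),
                 ("skeleton", "halloween"), ("cat", "milk"), ("milk", "cow")]
        g = nx.Graph(edges)

        def semantic_distance(a, b):
            # Number of steps needed to traverse from one word to the other.
            return nx.shortest_path_length(g, a, b)

        print(semantic_distance("cat", "skeleton"))   # 3 steps
        print(semantic_distance("cow", "halloween"))  # 6 steps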

  17. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce considerable performance gains while extracting only slightly less energy than the standard algorithm. When the utmost accuracy must be achieved, the modified algorithm extracts atoms more conservatively but still exhibits computational gains over classical MPD. The MPD++ algorithm was demonstrated using an over-complete dictionary on real-life data. Computational times were reduced by factors of 1.9 and 44 for the emphases of accuracy and performance, respectively. The modified algorithm extracted similar amounts of energy compared to classical MPD. The degree of the improvement in computational time depends on the complexity of the data, the initialization parameters, and the breadth of the dictionary. The results of the research confirm that the three modifications successfully improved the scalability and computational efficiency of the MPD algorithm. Correlation Thresholding decreased the time complexity by reducing the dictionary size. Multiple Atom Extraction also reduced the time complexity by decreasing the number of iterations required for a stopping criterion to be reached. The Coarse-Fine Grids technique enabled complicated atoms with numerous variable parameters to be effectively represented in the dictionary. Due to the nature of the three proposed modifications, they are capable of being stacked and have cumulative effects on the reduction of the time complexity.
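
    A minimal sketch of classical matching pursuit over a small finite dictionary (a generic illustration; the MPD++ refinements of correlation thresholding, coarse-fine grids and multiple-atom extraction are not shown): at each iteration the atom most correlated with the residual is selected, its contribution is subtracted, and the process repeats until the residual energy falls below a tolerance.

        import numpy as np

        def matching_pursuit(signal, dictionary, max_iters=50, tol=1e-3):
            # `dictionary` holds unit-norm atoms as columns.  Returns the list of
            # (atom_index, coefficient) pairs and the final residual.
            residual = signal.astype(float).copy()
            atoms = []
            for _ in range(max_iters):
                correlations = dictionary.T @ residual          # cross-correlation step
                k = int(np.argmax(np.abs(correlations)))        # best-fit atom
                coeff = correlations[k]
                residual = residual - coeff * dictionary[:, k]  # subtract its contribution
                atoms.append((k, coeff))
                if np.linalg.norm(residual) < tol * np.linalg.norm(signal):
                    break
            return atoms, residual

        n = 64
        t = np.arange(n)
        dictionary = np.stack([np.cos(2 * np.pi * f * t / n) for f in range(1, 9)], axis=1)
        dictionary /= np.linalg.norm(dictionary, axis=0)        # unit-norm atoms
        signal = 3.0 * dictionary[:, 2] + 0.5 * dictionary[:, 5]
        atoms, residual = matching_pursuit(signal, dictionary)
        print(atoms[:2], np.linalg.norm(residual))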

  18. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Mattmann, C. A.; Waliser, D. E.; Kim, J.; Loikith, P.; Lee, H.; McGibbney, L. J.; Whitehall, K. D.

    2014-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We are developing a lightning-fast Big Data technology called SciSpark based on Apache Spark. Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache Hadoop by 100x in memory and by 10x on disk, and makes iterative algorithms feasible. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 100 to 1000 compute nodes. This 2nd generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning (ML) based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. The goals of SciSpark are to: (1) Decrease the time to compute comparison statistics and plots from minutes to seconds; (2) Allow for interactive exploration of time-series properties over seasons and years; (3) Decrease the time for satellite data ingestion into RCMES to hours; (4) Allow for Level-2 comparisons with higher-order statistics or PDF's in minutes to hours; and (5) Move RCMES into a near real time decision-making platform. We will report on: the architecture and design of SciSpark, our efforts to integrate climate science algorithms in Python and Scala, parallel ingest and partitioning (sharding) of A-Train satellite observations from HDF files and model grids from netCDF files, first parallel runs to compute comparison statistics and PDF's, and first metrics quantifying parallel speedups and memory & disk usage.

  19. Evaluating the impact of computer-generated rounding reports on physician workflow in the nursing home: a feasibility time-motion study.

    PubMed

    Thorpe-Jamison, Patrice T; Culley, Colleen M; Perera, Subashan; Handler, Steven M

    2013-05-01

    To determine the feasibility and impact of a computer-generated rounding report on physician rounding time and perceived barriers to providing clinical care in the nursing home (NH) setting. Three NHs located in Pittsburgh, PA. Ten attending NH physicians. Time-motion method to record the time taken to gather data (pre-rounding), to evaluate patients (rounding), and document their findings/develop an assessment and plan (post-rounding). Additionally, surveys were used to determine the physicians' perception of barriers to providing optimal clinical care, as well as physician satisfaction before and after the use of a computer-generated rounding report. Ten physicians were observed during half-day sessions both before and 4 weeks after they were introduced to a computer-generated rounding report. A total of 69 distinct patients were evaluated during the 20 physician observation sessions. Each physician evaluated, on average, four patients before implementation and three patients after implementation. The observations showed a significant increase (P = .03) in the pre-rounding time, and no significant difference in the rounding (P = .09) or post-rounding times (P = .29). Physicians reported that information was more accessible (P = .03) following the implementation of the computer-generated rounding report. Most (80%) physicians stated that they would prefer to use the computer-generated rounding report rather than the paper-based process. The present study provides preliminary data suggesting that the use of a computer-generated rounding report can decrease some perceived barriers to providing optimal care in the NH. Although the rounding report did not improve rounding time efficiency, most NH physicians would prefer to use the computer-generated report rather than the current paper-based process. Improving the accuracy and harmonization of medication information with the electronic medication administration record and rounding reports, as well as improving facility network speeds might improve the effectiveness of this technology. Copyright © 2013 American Medical Directors Association, Inc. Published by Elsevier Inc. All rights reserved.

  20. Computed tomography-based tissue-engineered scaffolds in craniomaxillofacial surgery.

    PubMed

    Smith, M H; Flanagan, C L; Kemppainen, J M; Sack, J A; Chung, H; Das, S; Hollister, S J; Feinberg, S E

    2007-09-01

    Tissue engineering provides an alternative modality allowing for decreased morbidity of donor site grafting and decreased rejection of less compatible alloplastic tissues. Using image-based design and computer software, a precisely sized and shaped scaffold for osseous tissue regeneration can be created via selective laser sintering. Polycaprolactone has been used to create a condylar ramus unit (CRU) scaffold for application in temporomandibular joint reconstruction in a Yucatan minipig animal model. Following sacrifice, micro-computed tomography and histology were used to demonstrate the efficacy of this particular scaffold design. A proof-of-concept surgery has demonstrated cartilaginous tissue regeneration along the articulating surface with exuberant osseous tissue formation. Bone volumes and tissue mineral density at both the 1- and 3-month time points demonstrated significant new bone growth interior and exterior to the scaffold. Computationally designed scaffolds can support masticatory function in a large animal model as well as both osseous and cartilage regeneration. Our group is continuing to evaluate multiple implant designs in both young and mature Yucatan minipig animals. 2007 John Wiley & Sons, Ltd.

  1. Decreasing excessive media usage while increasing physical activity: a single-subject research study.

    PubMed

    Larwin, Karen H; Larwin, David A

    2008-11-01

    The Kaiser Family Foundation released a report entitled Kids and Media Use in the United States that concluded that children's use of media--including television, computers, Internet, video games, and phones--may be one of the primary contributors to the poor fitness and obesity of many of today's adolescents. The present study examines the potential of increasing physical activity and decreasing media usage in a 14-year-old adolescent female by making time spent on the Internet and/or cell phone contingent on physical activity. Results of this investigation indicate that requiring the participant to earn her media-usage time did correspond with an increase in physical activity and a decrease in media-usage time relative to baseline measures. Five weeks after cessation of the intervention, the participant's new level of physical activity was still being maintained. One year after the study, the participant's level of physical activity continued to increase.

  2. The interplay of attention economics and computer-aided detection marks in screening mammography

    NASA Astrophysics Data System (ADS)

    Schwartz, Tayler M.; Sridharan, Radhika; Wei, Wei; Lukyanchenko, Olga; Geiser, William; Whitman, Gary J.; Haygood, Tamara Miner

    2016-03-01

    Introduction: According to attention economists, overabundant information leads to decreased attention for individual pieces of information. Computer-aided detection (CAD) alerts radiologists to findings potentially associated with breast cancer but is notorious for creating an abundance of false-positive marks. We suspected that increased CAD marks do not lengthen mammogram interpretation time, as radiologists will selectively disregard these marks when present in larger numbers. We explore the relevance of attention economics in mammography by examining how the number of CAD marks affects interpretation time. Methods: We performed a retrospective review of bilateral digital screening mammograms obtained between January 1, 2011 and February 28, 2014, using only weekend interpretations to decrease distractions and the likelihood of trainee participation. We stratified data according to reader and used ANOVA to assess the relationship between number of CAD marks and interpretation time. Results: Ten radiologists, with a median experience after residency of 12.5 years (range, 6 to 24), interpreted 1849 mammograms. When accounting for the number of images, Breast Imaging Reporting and Data System category, and breast density, an increasing number of CAD marks was correlated with longer interpretation time only for the three radiologists with the fewest years of experience (median 7 years). Conclusion: For the 7 most experienced readers, increasing CAD marks did not lengthen interpretation time. We surmise that as CAD marks increase, the attention given to individual marks decreases. Experienced radiologists may rapidly dismiss larger numbers of CAD marks as false-positive, having learned that devoting extra attention to such marks does not improve clinical detection.

  3. Accelerating Demand Paging for Local and Remote Out-of-Core Visualization

    NASA Technical Reports Server (NTRS)

    Ellsworth, David

    2001-01-01

    This paper describes a new algorithm that improves the performance of application-controlled demand paging for the out-of-core visualization of data sets that are on either local disks or disks on remote servers. The performance improvements come from better overlapping the computation with the page reading process, and by performing multiple page reads in parallel. The new algorithm can be applied to many different visualization algorithms since application-controlled demand paging is not specific to any visualization algorithm. The paper includes measurements that show that the new multi-threaded paging algorithm decreases the time needed to compute visualizations by one third when using one processor and reading data from local disk. The time needed when using one processor and reading data from remote disk decreased by up to 60%. Visualization runs using data from remote disk ran about as fast as ones using data from local disk because the remote runs were able to make use of the remote server's high performance disk array.
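
    A minimal Python sketch of application-controlled demand paging with overlapped, parallel page reads; the page size, latencies, and prefetch depth are made-up placeholders rather than the paper's implementation.

    ```python
    import time
    from concurrent.futures import ThreadPoolExecutor

    def read_page(page_id):
        """Stand-in for reading one page of out-of-core data from local or remote disk."""
        time.sleep(0.05)                                  # simulated I/O latency
        return bytes(4096)

    def visualize(page_id, data):
        time.sleep(0.01)                                  # simulated per-page computation
        return len(data)

    def demand_paging(page_ids, prefetch_depth=4):
        """Overlap computation with page reads by keeping several reads in flight."""
        results = []
        with ThreadPoolExecutor(max_workers=prefetch_depth) as pool:
            futures = {p: pool.submit(read_page, p) for p in page_ids[:prefetch_depth]}
            for i, pid in enumerate(page_ids):
                data = futures.pop(pid).result()          # block only on the page needed now
                nxt = i + prefetch_depth
                if nxt < len(page_ids):                   # keep the read pipeline full
                    futures[page_ids[nxt]] = pool.submit(read_page, page_ids[nxt])
                results.append(visualize(pid, data))
        return results

    print(sum(demand_paging(list(range(32)))))
    ```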

  4. Investigating the Relationship between the Half-Life Decay of the Height and the Coefficient of Restitution of Bouncing Balls Using a Microcomputer-Based Laboratory

    ERIC Educational Resources Information Center

    Amrani, D.

    2010-01-01

    This pedagogical activity is aimed at students using a computer-learning environment with advanced tools for data analysis. It investigates the relationship between the coefficient of restitution and the way the heights of different bouncing balls decrease in a number of bounces with time. The time between successive ball bounces, or…
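
    For context, the relationship being investigated can be sketched numerically: with coefficient of restitution e, the rebound height follows h_n = h0·e^(2n), a geometric (half-life-like) decay that also sets the time between bounces. The values below are arbitrary illustrative numbers.

    ```python
    import numpy as np

    g = 9.81          # m/s^2
    e = 0.85          # assumed coefficient of restitution
    h0 = 1.0          # initial drop height, m

    n = np.arange(10)
    heights = h0 * e ** (2 * n)                    # height after the n-th bounce
    intervals = 2.0 * np.sqrt(2.0 * heights / g)   # time between successive bounces

    # Recover e from "measured" heights: log(h_n) decreases linearly with n, slope = 2*ln(e).
    slope, _ = np.polyfit(n, np.log(heights), 1)
    print("estimated e:", np.exp(slope / 2.0))     # ~0.85
    ```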

  5. High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs

    NASA Technical Reports Server (NTRS)

    Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.

    2014-01-01

    This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) time-stepping Runge-Kutta (RK) schemes which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain a significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burgers' equation and the 2D advection equation, respectively. The method uses two different summation-by-parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to the 1D Burgers' equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solving methods rather than the standard iterative methods currently used.
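
    The implicit-explicit splitting idea can be illustrated with a much simpler scheme than the 6-stage additive RK method used in the paper: a first-order IMEX Euler step for viscous Burgers' equation on a periodic grid, treating diffusion implicitly and convection explicitly. The grid, viscosity, and step size below are arbitrary.

    ```python
    import numpy as np

    def imex_euler_burgers(u, dt, dx, nu, n_steps):
        """First-order IMEX Euler sketch: implicit (stiff) diffusion, explicit convection."""
        n = u.size
        # Dense implicit operator (I - dt*nu*D2) with periodic second differences.
        A = np.diag((1.0 + 2.0 * dt * nu / dx**2) * np.ones(n))
        off = -dt * nu / dx**2
        for i in range(n):
            A[i, (i + 1) % n] += off
            A[i, (i - 1) % n] += off
        for _ in range(n_steps):
            conv = -u * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)  # explicit convection
            u = np.linalg.solve(A, u + dt * conv)                      # implicit diffusion
        return u

    x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
    u0 = np.sin(x)
    print(imex_euler_burgers(u0, dt=0.01, dx=x[1] - x[0], nu=0.1, n_steps=100).max())
    ```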

  6. Using reactive transport codes to provide mechanistic biogeochemistry representations in global land surface models: CLM-PFLOTRAN 1.0

    DOE PAGES

    Tang, G.; Yuan, F.; Bisht, G.; ...

    2015-12-17

    We explore coupling to a configurable subsurface reactive transport code as a flexible and extensible approach to biogeochemistry in land surface models; our goal is to facilitate testing of alternative models and incorporation of new understanding. A reaction network with the CLM-CN decomposition, nitrification, denitrification, and plant uptake is used as an example. We implement the reactions in the open-source PFLOTRAN code, coupled with the Community Land Model (CLM), and test at Arctic, temperate, and tropical sites. To make the reaction network designed for use in explicit time stepping in CLM compatible with the implicit time stepping used in PFLOTRAN, the Monod substrate rate-limiting function with a residual concentration is used to represent the limitation of nitrogen availability on plant uptake and immobilization. To achieve accurate, efficient, and robust numerical solutions, care needs to be taken to use scaling, clipping, or log transformation to avoid negative concentrations during the Newton iterations. With a tight relative update tolerance to avoid false convergence, an accurate solution can be achieved with about 50% more computing time than CLM in point-mode site simulations using either the scaling or clipping methods. The log transformation method takes 60-100% more computing time than CLM. The computing time increases slightly for clipping and scaling; it increases substantially for log transformation as the half saturation decreases from 10⁻³ to 10⁻⁹ mol m⁻³, which normally results in decreasing nitrogen concentrations. The frequent occurrence of very low concentrations (e.g. below nanomolar) can increase the computing time for clipping or scaling by about 20%; computing time can be doubled for log transformation. Caution needs to be taken in choosing the appropriate scaling factor because a small value caused by a negative update to a small concentration may diminish the update and result in false convergence even with a very tight relative update tolerance. As some biogeochemical processes (e.g., methane and nitrous oxide production and consumption) involve very low half saturation and threshold concentrations, this work provides insights for addressing nonphysical negativity issues and facilitates the representation of a mechanistic biogeochemical description in earth system models to reduce climate prediction uncertainty.
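
    A small Python sketch of the two ingredients discussed above, a Monod rate-limiting function with a residual concentration and clipping of negative Newton updates in a backward-Euler step; the rate constant, half saturation, and time step are made-up values, not those used in CLM-PFLOTRAN.

    ```python
    def monod_limit(c, half_sat, residual=1.0e-15):
        """Monod limiting factor that shuts the rate off as c approaches the residual."""
        c_eff = max(c - residual, 0.0)
        return c_eff / (half_sat + c_eff)

    def implicit_uptake_step(c, rate_max, half_sat, dt, tol=1.0e-12, max_iter=50):
        """Backward Euler for dc/dt = -rate_max*monod(c), Newton iteration with clipping."""
        c_new = c
        for _ in range(max_iter):
            f = c_new - c + dt * rate_max * monod_limit(c_new, half_sat)
            eps = 1.0e-8 * max(abs(c_new), 1.0)           # numerical Jacobian is fine here
            dfdc = (c_new + eps - c + dt * rate_max * monod_limit(c_new + eps, half_sat) - f) / eps
            update = -f / dfdc
            c_new = max(c_new + update, 0.0)              # clipping keeps concentration >= 0
            if abs(update) < tol * max(abs(c_new), 1.0):  # tight relative update tolerance
                break
        return c_new

    print(implicit_uptake_step(c=1.0e-3, rate_max=1.0e-4, half_sat=1.0e-6, dt=1800.0))
    ```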

  7. Improving the visualization of 3D ultrasound data with 3D filtering

    NASA Astrophysics Data System (ADS)

    Shamdasani, Vijay; Bae, Unmin; Managuli, Ravi; Kim, Yongmin

    2005-04-01

    3D ultrasound imaging is quickly gaining widespread clinical acceptance as a visualization tool that allows clinicians to obtain unique views not available with traditional 2D ultrasound imaging and an accurate understanding of patient anatomy. The ability to acquire, manipulate and interact with the 3D data in real time is an important feature of 3D ultrasound imaging. Volume rendering is often used to transform the 3D volume into 2D images for visualization. Unlike computed tomography (CT) and magnetic resonance imaging (MRI), volume rendering of 3D ultrasound data creates noisy images in which surfaces cannot be readily discerned due to speckles and low signal-to-noise ratio. The degrading effect of speckles is especially severe when gradient shading is performed to add depth cues to the image. Several researchers have reported that smoothing the pre-rendered volume with a 3D convolution kernel, such as 5x5x5, can significantly improve the image quality, but at the cost of decreased resolution. In this paper, we have analyzed the reasons for the improvement in image quality with 3D filtering and determined that the improvement is due to two effects. The filtering reduces speckles in the volume data, which leads to (1) more accurate gradient computation and better shading and (2) decreased noise during compositing. We have found that applying a moderate-size smoothing kernel (e.g., 7x7x7) to the volume data before gradient computation combined with some smoothing of the volume data (e.g., with a 3x3x3 lowpass filter) before compositing yielded images with good depth perception and no appreciable loss in resolution. Providing the clinician with the flexibility to control both of these effects (i.e., shading and compositing) independently could improve the visualization of the 3D ultrasound data. Introducing this flexibility into the ultrasound machine requires 3D filtering to be performed twice on the volume data, once before gradient computation and again before compositing. 3D filtering of an ultrasound volume containing millions of voxels requires a large amount of computation, and doing it twice decreases the number of frames that can be visualized per second. To address this, we have developed several techniques to make computation efficient. For example, we have used the moving average method to filter a 128x128x128 volume with a 3x3x3 boxcar kernel in 17 ms on a single MAP processor running at 400 MHz. The same methods reduced the computing time on a Pentium 4 running at 3 GHz from 110 ms to 62 ms. We believe that our proposed method can improve 3D ultrasound visualization without sacrificing resolution and incurring an excessive computing time.
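
    The moving-average trick mentioned above can be written compactly in Python: a separable boxcar filter whose per-voxel cost is independent of the kernel width, applied once with a larger kernel before gradient computation and once with a smaller kernel before compositing. Kernel sizes and the random test volume are illustrative only.

    ```python
    import numpy as np

    def boxcar_1d(a, k, axis):
        """Moving-average (boxcar) filter of odd width k along one axis using cumulative sums."""
        pad = k // 2
        a = np.moveaxis(a, axis, 0)
        padded = np.concatenate([a[:1].repeat(pad, 0), a, a[-1:].repeat(pad, 0)], axis=0)
        csum = np.cumsum(padded, axis=0, dtype=np.float64)
        csum = np.concatenate([np.zeros_like(csum[:1]), csum], axis=0)
        return np.moveaxis((csum[k:] - csum[:-k]) / k, 0, axis)

    def boxcar_3d(volume, k):
        """Separable k x k x k boxcar smoothing of a volume."""
        out = volume.astype(np.float64)
        for axis in range(3):
            out = boxcar_1d(out, k, axis)
        return out

    volume = np.random.rand(128, 128, 128).astype(np.float32)
    for_shading = boxcar_3d(volume, k=7)      # heavier smoothing before gradient/shading
    for_composite = boxcar_3d(volume, k=3)    # lighter smoothing before compositing
    ```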

  8. Resource Sharing in Times of Retrenchment.

    ERIC Educational Resources Information Center

    Sloan, Bernard G.

    1992-01-01

    Discusses the impact of decreases in revenues on the resource-sharing activities of ILLINET Online and the Illinois Library Computer Systems Organization (ILCSO). Strategies for successfully coping with fiscal crises are suggested, including reducing levels of service and initiating user fees for interlibrary loans and faxing photocopied journal…

  9. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1992-01-01

    The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  10. Computational Study of Scenarios Regarding Explosion Risk Mitigation

    NASA Astrophysics Data System (ADS)

    Vlasin, Nicolae-Ioan; Mihai Pasculescu, Vlad; Florea, Gheorghe-Daniel; Cornel Suvar, Marius

    2016-10-01

    Exploration to discover new deposits of natural gas, upgraded techniques to exploit these resources, and new ways to convert the heat capacity of these gases into industrially usable energy are research areas of great interest around the globe. But all activities involving the handling of natural gas (exploitation, transport, combustion) are subject to the same type of risk: the risk of explosion. Physical experiments carried out to determine ways to reduce this risk can be extremely costly, requiring suitable premises, equipment and apparatus, manpower and time and, not least, presenting a risk of personnel injury. Taking into account the above, the present paper deals with the possibility of studying scenarios of gas explosion type events in the virtual domain, exemplified by performing a computer simulation of a stoichiometric air-methane explosion (methane is the main component of natural gas). The advantages of computer-assisted simulation include the possibility of using complex virtual geometries of any form as the area in which the phenomenon unfolds, the use of the same geometry for an infinite number of settings of initial parameters as input, total elimination of the risk of personnel injury, decreased execution time, etc. Although computer simulations consume hardware resources and require personnel specialized in CFD (Computational Fluid Dynamics) techniques, the costs and risks associated with these methods are greatly diminished while presenting, at the same time, a major benefit in terms of execution time.

  11. Use of a Tracing Task to Assess Visuomotor Performance: Effects of Age, Sex, and Handedness

    PubMed Central

    2013-01-01

    Background. Visuomotor abnormalities are common in aging and age-related disease, yet difficult to quantify. This study investigated the effects of healthy aging, sex, and handedness on the performance of a tracing task. Participants (n = 150, aged 21–95 years, 75 females) used a stylus to follow a moving target around a circle on a tablet computer with their dominant and nondominant hands. Participants also performed the Trail Making Test (a measure of executive function). Methods. Deviations from the circular path were computed to derive an “error” time series. For each time series, absolute mean, variance, and complexity index (a proposed measure of system functionality and adaptability) were calculated. Using the moving target and stylus coordinates, the percentage of task time within the target region and the cumulative micropause duration (a measure of motion continuity) were computed. Results. All measures showed significant effects of aging (p < .0005). Post hoc age group comparisons showed that with increasing age, the absolute mean and variance of the error increased, complexity index decreased, percentage of time within the target region decreased, and cumulative micropause duration increased. Only complexity index showed a significant difference between dominant versus nondominant hands within each age group (p < .0005). All measures showed relationships to the Trail Making Test (p < .05). Conclusions. Measures derived from a tracing task identified performance differences in healthy individuals as a function of age, sex, and handedness. Studies in populations with specific neuromotor syndromes are warranted to test the utility of measures based on the dynamics of tracking a target as a clinical assessment tool. PMID:23388876

  12. Email notification combined with off site signing substantially reduces resident approval to faculty verification time.

    PubMed

    Deitte, Lori A; Moser, Patricia P; Geller, Brian S; Sistrom, Chris L

    2011-06-01

    Attending radiologist signature time (AST) is a variable and modifiable component of overall report turnaround time. Delays in finalized reports have potential to undermine radiologists' value as consultants and adversely affect patient care. This study was performed to evaluate the impact of notebook computer distribution and daily automated e-mail notification on reducing AST. Two simultaneous interventions were initiated in the authors' radiology department in February 2010. These included the distribution of a notebook computer with preloaded software for each attending radiologist to sign radiology reports and daily automated e-mail notifications for unsigned reports. The digital dictation system archive and the radiology information system were queried for all radiology reports produced from January 2009 through August 2010. The time between resident approval and attending radiologist signature before and after the intervention was analyzed. Potential unintended "side effects" of the intervention were also studied. Resident-authored reports were signed, on average, 2.53 hours sooner after the intervention. This represented a highly significant (P = .003) decrease in AST with all else held equal. Postintervention reports were authored by residents at the same rate (about 70%). An unintended "side effect" was that attending radiologists were less likely to make changes to resident-authored reports after the intervention. E-mail notification combined with offsite signing can reduce AST substantially. Notebook computers with preloaded software streamline the process of accessing, editing, and signing reports. The observed decrease in AST reflects a positive change in the timeliness of report signature. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.

  13. A heuristic re-mapping algorithm reducing inter-level communication in SAMR applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steensland, Johan; Ray, Jaideep

    2003-07-01

    This paper aims at decreasing execution time for large-scale structured adaptive mesh refinement (SAMR) applications by proposing a new heuristic re-mapping algorithm and experimentally showing its effectiveness in reducing inter-level communication. Tests were done for five different SAMR applications. The overall goal is to engineer a dynamically adaptive meta-partitioner capable of selecting and configuring the most appropriate partitioning strategy at run-time based on current system and application state. Such a meta-partitioner can significantly reduce execution times for general SAMR applications. Computer simulations of physical phenomena are becoming increasingly popular as they constitute an important complement to real-life testing. In many cases, such simulations are based on solving partial differential equations by numerical methods. Adaptive methods are crucial to efficiently utilize computer resources such as memory and CPU. But even with adaptation, the simulations are computationally demanding and yield huge data sets. Thus parallelization and the efficient partitioning of data become issues of utmost importance. Adaptation causes the workload to change dynamically, calling for dynamic (re-)partitioning to maintain efficient resource utilization. The proposed heuristic algorithm reduced inter-level communication substantially. Since the complexity of the proposed algorithm is low, this decrease comes at a relatively low cost. As a consequence, we draw the conclusion that the proposed re-mapping algorithm would be useful to lower overall execution times for many large SAMR applications. Due to its usefulness and its parameterization, the proposed algorithm would constitute a natural and important component of the meta-partitioner.

  14. Use of models to map potential capture of surface water

    USGS Publications Warehouse

    Leake, Stanley A.

    2006-01-01

    The effects of ground-water withdrawals on surface-water resources and riparian vegetation have become important considerations in water-availability studies. Ground water withdrawn by a well initially comes from storage around the well, but with time can eventually increase inflow to the aquifer and (or) decrease natural outflow from the aquifer. This increased inflow and decreased outflow is referred to as “capture.” For a given time, capture can be expressed as a fraction of withdrawal rate that is accounted for as increased rates of inflow and decreased rates of outflow. The time frames over which capture might occur at different locations commonly are not well understood by resource managers. A ground-water model, however, can be used to map potential capture for areas and times of interest. The maps can help managers visualize the possible timing of capture over large regions. The first step in the procedure to map potential capture is to run a ground-water model in steady-state mode without withdrawals to establish baseline total flow rates at all sources and sinks. The next step is to select a time frame and appropriate withdrawal rate for computing capture. For regional aquifers, time frames of decades to centuries may be appropriate. The model is then run repeatedly in transient mode, each run with one well in a different model cell in an area of interest. Differences in inflow and outflow rates from the baseline conditions for each model run are computed and saved. The differences in individual components are summed and divided by the withdrawal rate to obtain a single capture fraction for each cell. Values are contoured to depict capture fractions for the time of interest. Considerations in carrying out the analysis include use of realistic physical boundaries in the model, understanding the degree of linearity of the model, selection of an appropriate time frame and withdrawal rate, and minimizing error in the global mass balance of the model.
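
    The capture-fraction bookkeeping described above amounts to simple arithmetic on the model's water budget; the flow values below are hypothetical model outputs used only to illustrate the computation.

    ```python
    # Baseline (no withdrawal) and pumped-scenario budget terms, in consistent flow units.
    baseline  = {"stream_leakage_in": 120.0, "spring_outflow": 80.0, "et_outflow": 40.0}
    with_well = {"stream_leakage_in": 128.0, "spring_outflow": 74.0, "et_outflow": 37.0}
    withdrawal = 20.0   # pumping rate of the hypothetical well

    increased_inflow = with_well["stream_leakage_in"] - baseline["stream_leakage_in"]      # +8
    decreased_outflow = (baseline["spring_outflow"] - with_well["spring_outflow"]) \
                      + (baseline["et_outflow"] - with_well["et_outflow"])                 # +6 +3
    capture_fraction = (increased_inflow + decreased_outflow) / withdrawal
    print(capture_fraction)   # 0.85: the rest of the withdrawal still comes from storage
    ```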

  15. SAPIENS: Spreading Activation Processor for Information Encoded in Network Structures. Technical Report No. 296.

    ERIC Educational Resources Information Center

    Ortony, Andrew; Radin, Dean I.

    The product of researchers' efforts to develop a computer processor which distinguishes between relevant and irrelevant information in the database, Spreading Activation Processor for Information Encoded in Network Structures (SAPIENS) exhibits (1) context sensitivity, (2) efficiency, (3) decreasing activation over time, (4) summation of…

  16. Computer vision syndrome (CVS) - Thermographic Analysis

    NASA Astrophysics Data System (ADS)

    Llamosa-Rincón, L. E.; Jaime-Díaz, J. M.; Ruiz-Cardona, D. F.

    2017-01-01

    The use of computers has grown exponentially in the last decades; the possibility of carrying out several tasks for both professional and leisure purposes has contributed to their wide acceptance by users. The consequences and impact of uninterrupted work with computer screens or displays on visual health have attracted researchers' attention. When spending long periods of time in front of a computer screen, human eyes are subjected to great effort, which in turn triggers a set of symptoms known as Computer Vision Syndrome (CVS). The most common of these are blurred vision, visual fatigue and Dry Eye Syndrome (DES) due to inadequate lubrication of the ocular surface when blinking decreases. An experimental protocol was designed and implemented to perform thermographic studies on healthy human eyes during exposure to computer displays, with the main purpose of comparing the existing differences in temperature variations of healthy ocular surfaces.

  17. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    PubMed Central

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223

  18. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.

    PubMed

    Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.

  19. Development of small scale cluster computer for numerical analysis

    NASA Astrophysics Data System (ADS)

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two personal computers were successfully networked together to form a small-scale cluster. Each of the processors involved is a multicore processor with four cores, giving the cluster eight processor cores in total. The cluster runs the Ubuntu 14.04 LINUX environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test was done to make sure that the computers are able to pass the required information without any problem and was carried out using a simple MPI "Hello" program written in C. Additionally, a performance test was done to show that the cluster's computing performance is much better than that of a single-CPU computer. In this performance test, the same code was run four times, using a single node, 2 processors, 4 processors, and 8 processors. The results show that with additional processors, the time required to solve the problem decreases; the calculation time is roughly halved when the number of processors is doubled. To conclude, we successfully developed a small-scale cluster computer using common hardware that is capable of higher computing power than a single-CPU computer, which can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics analysis.
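
    A strong-scaling test of the kind described can be written in a few lines with mpi4py (the MPICH2 cluster above used C with MPI; this Python sketch and its workload are illustrative). Run it with, e.g., mpiexec -n 8 python scaling_test.py and compare wall times for 1, 2, 4 and 8 processes.

    ```python
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 8_000_000                              # total work, split evenly across processes
    local = np.arange(rank, n, size, dtype=np.float64)

    comm.Barrier()
    t0 = MPI.Wtime()
    local_sum = np.sum(np.sin(local))          # stand-in for a numerical kernel
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    comm.Barrier()
    if rank == 0:
        print(f"{size} processes: {MPI.Wtime() - t0:.3f} s, sum = {total:.3f}")
    ```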

  20. Fast calculation of the `ILC norm' in iterative learning control

    NASA Astrophysics Data System (ADS)

    Rice, Justin K.; van Wingerden, Jan-Willem

    2013-06-01

    In this paper, we discuss and demonstrate a method for the exploitation of matrix structure in computations for iterative learning control (ILC). In Barton, Bristow, and Alleyne [International Journal of Control, 83(2), 1-8 (2010)], a special insight into the structure of the lifted convolution matrices involved in ILC is used along with a modified Lanczos method to achieve very fast computational bounds on the learning convergence, by calculating the 'ILC norm' in ? computational complexity. In this paper, we show how their method is equivalent to a special instance of the sequentially semi-separable (SSS) matrix arithmetic, and thus can be extended to many other computations in ILC, and specialised in some cases to even faster methods. Our SSS-based methodology will be demonstrated on two examples: a linear time-varying example resulting in the same ? complexity as in Barton et al., and a linear time-invariant example where our approach reduces the computational complexity to ?, thus decreasing the computation time for an example from the literature by a factor of almost 100. This improvement is achieved by transforming the norm computation via a linear matrix inequality into a check of positive definiteness - which allows us to further exploit the almost-Toeplitz properties of the matrix, and additionally provides explicit upper and lower bounds on the norm of the matrix, instead of the indirect Ritz estimate. These methods are now implemented in a MATLAB toolbox, freely available on the Internet.

  1. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.

  2. Precise and fast spatial-frequency analysis using the iterative local Fourier transform.

    PubMed

    Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook

    2016-09-19

    The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2¹⁰ times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
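
    The zoom-in idea behind the ilFT can be sketched as follows: evaluate the DFT only on a refined local frequency grid around the strongest coarse-FFT peak and shrink the band each pass. This is a simplified illustration under assumed parameters, not the published algorithm.

    ```python
    import numpy as np

    def local_dft(x, freqs, fs):
        """Evaluate the discrete-time Fourier transform of x only at the given frequencies (Hz)."""
        n = np.arange(x.size)
        return np.exp(-2j * np.pi * np.outer(freqs / fs, n)) @ x

    def iterative_local_ft(x, fs, n_iter=8, n_local=32):
        """Iteratively zoom the spectrum into a narrowing band around the dominant peak."""
        spectrum = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
        center = freqs[np.argmax(np.abs(spectrum))]
        width = fs / x.size                               # start from one coarse FFT bin
        for _ in range(n_iter):
            local_freqs = np.linspace(center - width, center + width, n_local)
            center = local_freqs[np.argmax(np.abs(local_dft(x, local_freqs, fs)))]
            width /= n_local / 4                          # shrink the search band each pass
        return center

    fs = 1000.0
    t = np.arange(2048) / fs
    x = np.sin(2 * np.pi * 123.4567 * t)
    print(iterative_local_ft(x, fs))                      # close to 123.4567 Hz
    ```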

  3. GRAMPS: An Automated Ambulatory Geriatric Record

    PubMed Central

    Hammond, Kenric W.; King, Carol A.; Date, Vishvanath V.; Prather, Robert J.; Loo, Lawrence; Siddiqui, Khwaja

    1988-01-01

    GRAMPS (Geriatric Record and Multidisciplinary Planning System) is an interactive MUMPS system developed for VA outpatient use. It allows physicians to effectively document care in a problem-oriented format with structured narrative and free text, eliminating handwritten input. We evaluated the system in a one-year controlled cohort study. When the computer was used, appointment times averaged 8.2 minutes longer (32.6 vs. 24.4 minutes) compared to control visits with the same physicians. Computer use was associated with better quality of care as measured in the management of a common problem, hypertension, as well as decreased overall costs of care. When a faster computer was installed, data entry times improved, suggesting that slower processing had accounted for a substantial portion of the observed difference in appointment lengths. The GRAMPS system was well accepted by providers. The modular design used in GRAMPS has been extended to medical-care applications in Nursing and Mental Health.

  4. Tomographic analysis of reactive flow induced pore structure changes in column experiments

    NASA Astrophysics Data System (ADS)

    Cai, Rong; Lindquist, W. Brent; Um, Wooyong; Jones, Keith W.

    2009-09-01

    We utilize synchrotron X-ray computed micro-tomography to capture and quantify snapshots in time of dissolution and secondary precipitation in the microstructure of Hanford sediments exposed to simulated caustic waste in flow-column experiments. The experiment is complicated somewhat as logistics dictated that the column spent significant amounts of time in a sealed state (acting as a batch reactor). Changes accompanying a net reduction in porosity of 4% were quantified including: (1) a 25% net decrease in pores resulting from a 38% loss in the number of pores <10⁻⁴ mm³ in volume and a 13% increase in the number of pores of larger size; and (2) a 38% decrease in the number of throats. The loss of throats resulted in decreased coordination number for pores of all sizes and significant reduction in the number of pore pathways.

  5. The study on the parallel processing based time series correlation analysis of RBC membrane flickering in quantitative phase imaging

    NASA Astrophysics Data System (ADS)

    Lee, Minsuk; Won, Youngjae; Park, Byungjun; Lee, Seungrag

    2017-02-01

    Not only static characteristics but also dynamic characteristics of the red blood cell (RBC) contain useful information for blood diagnosis. Quantitative phase imaging (QPI) can capture sample images with subnanometer-scale depth resolution and millisecond-scale temporal resolution. Many studies have used QPI for RBC diagnosis, and recently several approaches have been developed to decrease the processing time of RBC information extraction from QPI by parallel computing; however, previous studies focused on static parameters such as the morphology of the cells or simple dynamic parameters such as the root mean square (RMS) of the membrane fluctuations. Previously, we presented a practical blood test method using time series correlation analysis of RBC membrane flickering with QPI. However, this method has limited clinical applicability because of its long computation time. In this study, we present an accelerated time series correlation analysis of RBC membrane flickering using a parallel computing algorithm. This method showed fractal scaling exponent results for the surrounding medium and normal RBCs consistent with our previous research.

  6. Computer simulations of structural transitions in large ferrofluid aggregates

    NASA Astrophysics Data System (ADS)

    Yoon, Mina; Tomanek, David

    2003-03-01

    We have developed a quaternion molecular dynamics formalism to study structural transitions in systems of ferrofluid particles in colloidal suspensions. Our approach takes advantage of the viscous damping provided by the surrounding liquid and enables us to study the time evolution of these systems over millisecond time periods as a function of the number of particles, initial geometry, and an externally applied magnetic field. Our computer simulations for aggregates containing tens to hundreds of ferrofluid particles suggest that these systems relax to the global optimum structure in a step-wise manner. During the relaxation process, the potential energy decreases by two mechanisms, which occur on different time scales. Short time periods associated with structural relaxations within a given morphology are followed by much slower processes that generally lead to a simpler morphology. We discuss possible applications of these externally driven structural transitions for targeted medication delivery.

  7. Project Energise: Using participatory approaches and real time computer prompts to reduce occupational sitting and increase work time physical activity in office workers.

    PubMed

    Gilson, Nicholas D; Ng, Norman; Pavey, Toby G; Ryde, Gemma C; Straker, Leon; Brown, Wendy J

    2016-11-01

    This efficacy study assessed the added impact real time computer prompts had on a participatory approach to reduce occupational sedentary exposure and increase physical activity. Quasi-experimental. 57 Australian office workers (mean [SD]; age=47 [11] years; BMI=28 [5] kg/m²; 46 men) generated a menu of 20 occupational 'sit less and move more' strategies through participatory workshops, and were then tasked with implementing strategies for five months (July-November 2014). During implementation, a sub-sample of workers (n=24) used a chair sensor/software package (Sitting Pad) that gave real time prompts to interrupt desk sitting. Baseline and intervention sedentary behaviour and physical activity (GENEActiv accelerometer; mean work time percentages), and minutes spent sitting at desks (Sitting Pad; mean total time and longest bout) were compared between non-prompt and prompt workers using a two-way ANOVA. Workers spent close to three quarters of their work time sedentary, mostly sitting at desks (mean [SD]; total desk sitting time=371 [71] min/day; longest bout spent desk sitting=104 [43] min/day). Intervention effects were four times greater in workers who used real time computer prompts (8% decrease in work time sedentary behaviour and increase in light intensity physical activity; p<0.01). Respective mean differences between baseline and intervention total time spent sitting at desks, and the longest bout spent desk sitting, were 23 and 32 min/day lower in prompt than in non-prompt workers (p<0.01). In this sample of office workers, real time computer prompts facilitated the impact of a participatory approach on reductions in occupational sedentary exposure, and increases in physical activity. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  8. Simulation of the Velocity and Temperature Distribution of Inhalation Thermal Injury in a Human Upper Airway Model by Application of Computational Fluid Dynamics.

    PubMed

    Chang, Yang; Zhao, Xiao-zhuo; Wang, Cheng; Ning, Fang-gang; Zhang, Guo-an

    2015-01-01

    Inhalation injury is an important cause of death after thermal burns. This study was designed to simulate the velocity and temperature distribution of inhalation thermal injury in the upper airway in humans using computational fluid dynamics. Cervical computed tomography images of three Chinese adults were imported to Mimics software to produce three-dimensional models. After grids were established and boundary conditions were defined, the simulation time was set at 1 minute and the gas temperature was set to 80 to 320°C using ANSYS software (ANSYS, Canonsburg, PA) to simulate the velocity and temperature distribution of inhalation thermal injury. Cross-sections were cut at 2-mm intervals, and maximum airway temperature and velocity were recorded for each cross-section. The maximum velocity peaked in the lower part of the nasal cavity and then decreased with air flow. The velocities in the epiglottis and glottis were higher than those in the surrounding areas. Further, the maximum airway temperature decreased from the nasal cavity to the trachea. Computational fluid dynamics technology can be used to simulate the velocity and temperature distribution of inhaled heated air.

  9. Evolution of complexity following a quantum quench in free field theory

    NASA Astrophysics Data System (ADS)

    Alves, Daniel W. F.; Camilo, Giancarlo

    2018-06-01

    Using a recent proposal of circuit complexity in quantum field theories introduced by Jefferson and Myers, we compute the time evolution of the complexity following a smooth mass quench characterized by a time scale δt in a free scalar field theory. We show that the dynamics has two distinct phases, namely an early regime of approximately linear evolution followed by a saturation phase characterized by oscillations around a mean value. The behavior is similar to previous conjectures for the complexity growth in chaotic and holographic systems, although here we have found that the complexity may grow or decrease depending on whether the quench increases or decreases the mass, and also that the time scale for saturation of the complexity is of order δt (not parametrically larger).

  10. Force-reflection and shared compliant control in operating telemanipulators with time delay

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Hannaford, Blake; Bejczy, Antal K.

    1992-01-01

    The performance of an advanced telemanipulation system in the presence of a wide range of time delays between a master control station and a slave robot is quantified. The contemplated applications include multiple satellite links to LEO, geosynchronous operation, spacecraft local area networks, and general-purpose computer-based short-distance designs. The results of high-precision peg-in-hole tasks performed by six test operators indicate that task performance decreased linearly with introduced time delays for both kinesthetic force feedback (KFF) and shared compliant control (SCC). The rate of this decrease was substantially improved with SCC compared to KFF. Task performance at delays above 1 s was not possible using KFF. SCC enabled task performance for such delays, which are realistic values for ground-controlled remote manipulation of telerobots in space.

  11. Reliability Growth in Space Life Support Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2014-01-01

    A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
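
    One common mathematical reliability growth model is the power-law (Crow-AMSAA/Duane) model, in which the expected cumulative number of failures is N(t) = λ·t^β and β < 1 indicates a decreasing failure intensity. The sketch below fits it to hypothetical failure times; the data are invented for illustration.

    ```python
    import numpy as np

    # Hypothetical cumulative failure times (operating hours) for a maintained system.
    failure_times = np.array([30.0, 90.0, 200.0, 380.0, 650.0, 1000.0, 1500.0, 2200.0])
    n_failures = np.arange(1, failure_times.size + 1)

    # Power-law growth model: N(t) = lam * t**beta, failure intensity = lam*beta*t**(beta-1).
    beta, log_lam = np.polyfit(np.log(failure_times), np.log(n_failures), 1)
    lam = np.exp(log_lam)
    intensity_now = lam * beta * failure_times[-1] ** (beta - 1.0)
    print(f"beta = {beta:.2f} (reliability growth if < 1), "
          f"current intensity = {intensity_now:.4f} failures/hour")
    ```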

  12. Long term statistics (1845-2014) of daily runoff maxima, monthly rainfall and runoff in the Adda basin (Italian Alps) under natural and anthropogenic changes.

    NASA Astrophysics Data System (ADS)

    Ranzi, Roberto; Goatelli, Federica; Castioni, Camilla; Tomirotti, Massimo; Crespi, Alice; Mattea, Enrico; Brunetti, Michele; Maugeri, Maurizio

    2017-04-01

    A new time series of daily runoff reconstructed at the inflow to Lake Como in the Italian Alps is presented. The time series covers a 170-year period and includes the two largest floods ever recorded for the region: the 1868 and 1987 events. Statistics of the annual maxima show a decrease that is not statistically significant, whereas annual runoff shows a statistically significant decrease. To investigate the possible reasons for these changes, monthly temperature and precipitation are analysed. The decrease in runoff peaks can be explained by the increase in reservoir storage volumes. Evapotranspiration indexes based on monthly temperature indicate an increase in evapotranspiration losses as a possible cause of the runoff decrease. Secular precipitation series for the Adda basin are then computed by a methodology projecting observational data onto a high-resolution grid (30 arc-second, DEM GTOPO30). It is based on the assumption that the spatio-temporal behaviour of a meteorological variable over a given area can be described by superimposing two fields: the climatological normals over a reference period, i.e. the climatologies, and the departures from them, i.e. the anomalies. The two fields can be reconstructed independently and are based on different datasets. To compute the precipitation climatologies, all the available stations within the Adda basin are considered while, for the anomalies, only the longest and most homogeneous records are selected. To this aim, a great effort was made to extend these series as far into the past as possible, also by digitising the historical records available from hardcopy archives. The climatological values at each DEM cell of the Adda basin are obtained by a local weighted linear regression of precipitation versus elevation (LWLR), taking into account the closest stations with geographical characteristics similar to those of the cell itself. The anomaly field is obtained by a weighted average of the anomalies of neighbouring stations, considering both the distance and the elevation differences between the stations and the considered cell. Finally, the secular precipitation records at each DEM cell of the Adda basin are computed by multiplying the local estimated anomalies by the corresponding climatological values. A statistically significant decreasing trend of precipitation results from the Mann-Kendall and Theil-Sen tests.
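
    The trend tests named at the end of the abstract can be reproduced in a few lines; the sketch below implements a basic Mann-Kendall test (ignoring ties) and the Theil-Sen slope on a synthetic runoff series, which stands in for the real data.

    ```python
    import numpy as np
    from scipy.stats import norm

    def mann_kendall(y):
        """Mann-Kendall trend test (no tie correction): returns S, Z and two-sided p-value."""
        y = np.asarray(y, dtype=float)
        n = y.size
        s = sum(np.sign(y[i + 1:] - y[i]).sum() for i in range(n - 1))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
        return s, z, 2.0 * (1.0 - norm.cdf(abs(z)))

    def theil_sen_slope(t, y):
        """Median of all pairwise slopes (robust trend estimate)."""
        slopes = [(y[j] - y[i]) / (t[j] - t[i])
                  for i in range(len(t) - 1) for j in range(i + 1, len(t))]
        return np.median(slopes)

    # Synthetic annual runoff series with a weak downward trend plus noise (illustrative only).
    years = np.arange(1845, 2015)
    runoff = 160.0 - 0.05 * (years - 1845) + np.random.default_rng(0).normal(0.0, 8.0, years.size)
    print(mann_kendall(runoff), theil_sen_slope(years, runoff))
    ```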

  13. The prediction of three-dimensional liquid-propellant rocket nozzle admittances

    NASA Technical Reports Server (NTRS)

    Bell, W. A.; Zinn, B. T.

    1973-01-01

    Crocco's three-dimensional nozzle admittance theory is extended to be applicable when the amplitudes of the combustor and nozzle oscillations increase or decrease with time. An analytical procedure and a computer program for determining nozzle admittance values from the extended theory are presented and used to compute the admittances of a family of liquid-propellant rocket nozzles. The calculated results indicate that the nozzle geometry, entrance Mach number, and temporal decay coefficient significantly affect the nozzle admittance values. The theoretical predictions are shown to be in good agreement with available experimental data.

  14. Applying mathematical modeling to create job rotation schedules for minimizing occupational noise exposure.

    PubMed

    Tharmmaphornphilas, Wipawee; Green, Benjamin; Carnahan, Brian J; Norman, Bryan A

    2003-01-01

    This research developed worker schedules by using administrative controls and a computer programming model to reduce the likelihood of worker hearing loss. By rotating the workers through different jobs during the day it was possible to reduce their exposure to hazardous noise levels. Computer simulations were made based on data collected in a real setting. Worker schedules currently used at the site are compared with proposed worker schedules from the computer simulations. For the worker assignment plans found by the computer model, the authors calculate a significant decrease in time-weighted average (TWA) sound level exposure. The maximum daily dose that any worker is exposed to is reduced by 58.8%, and the maximum TWA value for the workers is reduced by 3.8 dB from the current schedule.
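
    The dose and TWA arithmetic behind such schedules follows the standard OSHA-style formulas (90 dBA criterion, 5-dB exchange rate); the sketch below compares a fixed assignment with a hypothetical rotation, using invented exposure levels rather than the study's data.

    ```python
    import math

    def allowed_hours(level_dba, criterion=90.0, exchange=5.0):
        """Permissible exposure time at a given sound level (8 h at the criterion level)."""
        return 8.0 / 2.0 ** ((level_dba - criterion) / exchange)

    def daily_dose(schedule):
        """schedule: list of (hours, dBA) pairs making up one worker's day."""
        return 100.0 * sum(hours / allowed_hours(level) for hours, level in schedule)

    def twa(schedule):
        """8-hour time-weighted average sound level corresponding to the daily dose."""
        return 16.61 * math.log10(daily_dose(schedule) / 100.0) + 90.0

    fixed = [(8.0, 95.0)]                    # whole shift at a 95-dBA workstation
    rotated = [(4.0, 95.0), (4.0, 85.0)]     # rotation splits the shift with a quieter job
    print(f"fixed:   dose = {daily_dose(fixed):.0f}%, TWA = {twa(fixed):.1f} dBA")
    print(f"rotated: dose = {daily_dose(rotated):.0f}%, TWA = {twa(rotated):.1f} dBA")
    ```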

  15. Generalization of Posture Training to Computer Workstations in an Applied Setting

    ERIC Educational Resources Information Center

    Sigurdsson, Sigurdur O.; Ring, Brandon M.; Needham, Mick; Boscoe, James H.; Silverman, Kenneth

    2011-01-01

    Improving employees' posture may decrease the risk of musculoskeletal disorders. The current paper is a systematic replication and extension of Sigurdsson and Austin (2008), who found that an intervention consisting of information, real-time feedback, and self-monitoring improved participant posture at mock workstations. In the current study,…

  16. Decreasing the temporal complexity for nonlinear, implicit reduced-order models by forecasting

    DOE PAGES

    Carlberg, Kevin; Ray, Jaideep; van Bloemen Waanders, Bart

    2015-02-14

    Implicit numerical integration of nonlinear ODEs requires solving a system of nonlinear algebraic equations at each time step. Each of these systems is often solved by a Newton-like method, which incurs a sequence of linear-system solves. Most model-reduction techniques for nonlinear ODEs exploit knowledge of a system's spatial behavior to reduce the computational complexity of each linear-system solve. However, the number of linear-system solves for the reduced-order simulation often remains roughly the same as that for the full-order simulation. We propose exploiting knowledge of the model's temporal behavior to (1) forecast the unknown variable of the reduced-order system of nonlinear equations at future time steps, and (2) use this forecast as an initial guess for the Newton-like solver during the reduced-order-model simulation. To compute the forecast, we propose using the Gappy POD technique. As a result, the goal is to generate an accurate initial guess so that the Newton solver requires many fewer iterations to converge, thereby decreasing the number of linear-system solves in the reduced-order-model simulation.

  17. Multiple-relaxation-time lattice Boltzmann study of the magnetic field effects on natural convection of non-Newtonian fluids

    NASA Astrophysics Data System (ADS)

    Yang, Xuguang; Wang, Lei

    In this paper, the magnetic field effects on natural convection of power-law non-Newtonian fluids in rectangular enclosures are numerically studied by the multiple-relaxation-time (MRT) lattice Boltzmann method (LBM). To maintain the locality of the LBM, a local computing scheme for shear rate is used. Thus, all simulations can be easily performed on the Graphics Processing Unit (GPU) using NVIDIA’s CUDA, and high computational efficiency can be achieved. The numerical simulations presented here span a wide range of thermal Rayleigh number (10^4 ≤ Ra ≤ 10^6), Hartmann number (0 ≤ Ha ≤ 20), power-law index (0.5 ≤ n ≤ 1.5) and aspect ratio (0.25 ≤ AR ≤ 4.0) to identify the different flow patterns and temperature distributions. The results show that the heat transfer rate is increased with the increase of thermal Rayleigh number, while it is decreased with the increase of Hartmann number, and the average Nusselt number is found to decrease with an increase in the power-law index. Moreover, the effects of aspect ratio have also been investigated in detail.

  18. Metal surface corrosion grade estimation from single image

    NASA Astrophysics Data System (ADS)

    Chen, Yijun; Qi, Lin; Sun, Huyuan; Fan, Hao; Dong, Junyu

    2018-04-01

    Metal corrosion can cause many problems; how to quickly and effectively assess the grade of metal corrosion and remediate it in a timely manner is a very important issue. Typically, this is done by trained surveyors at great cost. Assisting them in the inspection process by computer vision and artificial intelligence would decrease the inspection cost. In this paper, we propose a dataset of metal surface corrosion used for computer vision detection and present a comparison between standard computer vision techniques using OpenCV and a deep learning method for automatic metal surface corrosion grade estimation from a single image on this dataset. The test has been performed by classifying images and calculating the accuracy for the two different approaches.

  19. Atmospheric changes caused by galactic cosmic rays over the period 1960-2010

    NASA Astrophysics Data System (ADS)

    Jackman, Charles H.; Marsh, Daniel R.; Kinnison, Douglas E.; Mertens, Christopher J.; Fleming, Eric L.

    2016-05-01

    The Specified Dynamics version of the Whole Atmosphere Community Climate Model (SD-WACCM) and the Goddard Space Flight Center two-dimensional (GSFC 2-D) models are used to investigate the effect of galactic cosmic rays (GCRs) on the atmosphere over the 1960-2010 time period. The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) computation of the GCR-caused ionization rates is used in these simulations. GCR-caused maximum NOx increases of 4-15 % are computed in the Southern polar troposphere with associated ozone increases of 1-2 %. NOx increases of ~1-6 % are calculated for the lower stratosphere with associated ozone decreases of 0.2-1 %. The primary impact of GCRs on ozone was due to their production of NOx. The impact of GCRs varies with the atmospheric chlorine loading, sulfate aerosol loading, and solar cycle variation. Because of the interference between the NOx and ClOx ozone loss cycles (e.g., the ClO + NO2 + M → ClONO2 + M reaction) and the change in the importance of ClOx in the ozone budget, GCRs cause larger atmospheric impacts with less chlorine loading. GCRs also cause larger atmospheric impacts with less sulfate aerosol loading and for years closer to solar minimum. GCR-caused decreases of annual average global total ozone (AAGTO) were computed to be 0.2 % or less with GCR-caused column ozone increases between 1000 and 100 hPa of 0.08 % or less and GCR-caused column ozone decreases between 100 and 1 hPa of 0.23 % or less. Although these computed ozone impacts are small, GCRs provide a natural influence on ozone and need to be quantified over long time periods. This result serves as a lower limit because of the use of the ionization model NAIRAS/HZETRN which underestimates the ion production by neglecting electromagnetic and muon branches of the cosmic ray induced cascade. This will be corrected in future works.

  20. 18F-fluorodeoxyglucose positron emission tomography/computed tomography enables the detection of recurrent same-site deep vein thrombosis by illuminating recently formed, neutrophil-rich thrombus.

    PubMed

    Hara, Tetsuya; Truelove, Jessica; Tawakol, Ahmed; Wojtkiewicz, Gregory R; Hucker, William J; MacNabb, Megan H; Brownell, Anna-Liisa; Jokivarsi, Kimmo; Kessinger, Chase W; Jaff, Michael R; Henke, Peter K; Weissleder, Ralph; Jaffer, Farouc A

    2014-09-23

    Accurate detection of recurrent same-site deep vein thrombosis (DVT) is a challenging clinical problem. Because DVT formation and resolution are associated with a preponderance of inflammatory cells, we investigated whether noninvasive (18)F-fluorodeoxyglucose (FDG)-positron emission tomography (PET) imaging could identify inflamed, recently formed thrombi and thereby improve the diagnosis of recurrent DVT. We established a stasis-induced DVT model in murine jugular veins and also a novel model of recurrent stasis DVT in mice. C57BL/6 mice (n=35) underwent ligation of the jugular vein to induce stasis DVT. FDG-PET/computed tomography (CT) was performed at DVT time points of day 2, 4, 7, 14, or 2+16 (same-site recurrent DVT at day 2 overlying a primary DVT at day 16). Antibody-based neutrophil depletion was performed in a subset of mice before DVT formation and FDG-PET/CT. In a clinical study, 38 patients with lower extremity DVT or controls undergoing FDG-PET were analyzed. Stasis DVT demonstrated that the highest FDG signal occurred at day 2, followed by a time-dependent decrease (P<0.05). Histological analyses demonstrated that thrombus neutrophils (P<0.01), but not macrophages, correlated with thrombus PET signal intensity. Neutrophil depletion decreased FDG signals in day 2 DVT in comparison with controls (P=0.03). Recurrent DVT demonstrated significantly higher FDG uptake than organized day 14 DVT (P=0.03). The FDG DVT signal in patients also exhibited a time-dependent decrease (P<0.01). Noninvasive FDG-PET/CT identifies neutrophil-dependent thrombus inflammation in murine DVT, and demonstrates a time-dependent signal decrease in both murine and clinical DVT. FDG-PET/CT may offer a molecular imaging strategy to accurately diagnose recurrent DVT. © 2014 American Heart Association, Inc.

  1. Effect of the time window on the heat-conduction information filtering model

    NASA Astrophysics Data System (ADS)

    Guo, Qiang; Song, Wen-Jun; Hou, Lei; Zhang, Yi-Lu; Liu, Jian-Guo

    2014-05-01

    Recommendation systems have been proposed to filter out the potential tastes and preferences of users online; however, the physics of the time window effect on performance is missing, which is critical for saving memory and decreasing computational complexity. In this paper, by gradually expanding the time window, we investigate the impact of the time window on the heat-conduction information filtering model with ten similarity measures. The experimental results on the benchmark dataset Netflix indicate that by using only approximately 11.11% of recent rating records, the accuracy could be improved by an average of 33.16% and the diversity could be improved by 30.62%. In addition, the recommendation performance on the dataset MovieLens could be preserved by considering only approximately 10.91% of recent records. While improving the recommendation performance, our findings possess significant practical value by largely reducing the computational time and shortening the required data storage space.
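
    A minimal sketch of the time-window step, assuming timestamped rating records and the roughly 11% window reported for Netflix; the data below are synthetic and the downstream heat-conduction model is not implemented here.

```python
# Sketch of the time-window idea: build the filtering model only from the most
# recent fraction of rating records. The record layout and the 11.11% fraction
# follow the abstract; the data themselves are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
ratings = np.stack([rng.integers(0, 1000, n),        # user id
                    rng.integers(0, 500, n),         # item id
                    rng.integers(1, 6, n),           # rating value
                    rng.integers(0, 10_000_000, n)], # timestamp
                   axis=1)

def recent_window(records, fraction):
    """Keep only the most recent `fraction` of records by timestamp."""
    cutoff = np.quantile(records[:, 3], 1.0 - fraction)
    return records[records[:, 3] >= cutoff]

window = recent_window(ratings, fraction=0.1111)
print(f"records kept: {len(window)} of {len(ratings)} "
      f"({100 * len(window) / len(ratings):.1f}%)")
# The heat-conduction (or any other) filtering model is then built from
# `window` rather than the full history, cutting memory and computation
# roughly in proportion to the discarded records.
```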

  2. The role played by self-orientational properties in nematics of colloids with molecules axially symmetric.

    PubMed

    Alarcón-Waess, O

    2010-04-14

    The self-orientational structure factor as well as the short-time self-orientational diffusion coefficient is computed for colloids composed of nonspherical molecules. To compute the short-time dynamics, the hydrodynamic interactions are not taken into account. The hard molecules with at least one symmetry axis considered are: rods, spherocylinders, and tetragonal parallelepipeds. Because both orientational properties under study are written in terms of the second and fourth order parameters, they automatically inherit the features of the order parameters. That is, they present a discontinuity for first order transitions, determining in this way the spinodal line. In order to analyze the nematic phase only, we choose the appropriate values for the representative quantities that characterize the molecules. Different formalisms are used to compute the structural properties: de Gennes-Landau approach, Smoluchowski equation and computer simulations. Some of the necessary inputs are taken from the literature. Our results show that the self-orientational properties play an important role in the characterization and the localization of axially symmetric phases. While the self-structure decreases throughout the nematics, the short-time self-diffusion does not decrease but rather increases. We study the evolution of the second and fourth order parameters; we find different responses for axial and biaxial nematics, predicting the possibility of a biaxial nematic phase in tetragonal parallelepiped molecules. By considering the second order in the axial-biaxial phase transition, with the support of the self-orientational structure factor, we are able to propose the density at which this occurs. The short-time dynamics is able to predict a different value in the axial and the biaxial phases. Because of the different behavior of the fourth order parameter, the diffusion coefficient is lower for a biaxial phase than for an axial one. Therefore the self-structure factor is able to localize continuous phase transitions involving axially symmetric phases and the short-time self-orientational diffusion is able to distinguish the ordered phase by considering the degree of alignment, that is, axial or biaxial.

  3. Effect of yoga on musculoskeletal discomfort and motor functions in professional computer users.

    PubMed

    Telles, Shirley; Dash, Manoj; Naveen, K V

    2009-01-01

    The self-rated musculoskeletal discomfort, hand grip strength, tapping speed, and low back and hamstring flexibility (based on a sit and reach task) were assessed in 291 professional computer users. They were then randomized as Yoga (YG; n=146) and Wait-list control (WL; n=145) groups. Follow-up assessments for both groups were after 60 days during which the YG group practiced yoga for 60 minutes daily, for 5 days in a week. The WL group spent the same time in their usual recreational activities. At the end of 60 days, the YG group (n=62) showed a significant decrease in the frequency, intensity and degree of interference due to musculoskeletal discomfort, an increase in bilateral hand grip strength, the right hand tapping speed, and low back and hamstring flexibility (repeated measures ANOVA and post hoc analysis with Bonferroni adjustment). In contrast, the WL group (n=56) showed an increase in musculoskeletal discomfort and a decrease in left hand tapping speed. The results suggest that yoga practice is a useful addition to the routine of professional computer users.

  4. Statistics of Advective Stretching in Three-dimensional Incompressible Flows

    NASA Astrophysics Data System (ADS)

    Subramanian, Natarajan; Kellogg, Louise H.; Turcotte, Donald L.

    2009-09-01

    We present a method to quantify kinematic stretching in incompressible, unsteady, isoviscous, three-dimensional flows. We extend the method of Kellogg and Turcotte (J. Geophys. Res. 95:421-432, 1990) to compute the axial stretching/thinning experienced by infinitesimal ellipsoidal strain markers in arbitrary three-dimensional incompressible flows and discuss the differences between our method and the computation of Finite Time Lyapunov Exponent (FTLE). We use the cellular flow model developed in Solomon and Mezic (Nature 425:376-380, 2003) to study the statistics of stretching in a three-dimensional unsteady cellular flow. We find that the probability density function of the logarithm of normalised cumulative stretching (log S) for a globally chaotic flow, with spatially heterogeneous stretching behavior, is not Gaussian and that the coefficient of variation of the Gaussian distribution does not decrease with time as t^{-1/2} . However, it is observed that stretching becomes exponential log S˜ t and the probability density function of log S becomes Gaussian when the time dependence of the flow and its three-dimensionality are increased to make the stretching behaviour of the flow more spatially uniform. We term these behaviors weak and strong chaotic mixing respectively. We find that for strongly chaotic mixing, the coefficient of variation of the Gaussian distribution decreases with time as t^{-1/2} . This behavior is consistent with a random multiplicative stretching process.

  5. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, D.; Alfonsi, A.; Talbot, P.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution that is being evaluated to assist the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
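
    The sketch below illustrates the surrogate-model idea in its simplest form: fit a cheap approximation to a few expensive runs and query the approximation thereafter. The polynomial surrogate and the stand-in "simulation" are assumptions for illustration, not the RISMC tools.

```python
# Minimal surrogate-modeling sketch: a handful of expensive runs train a cheap
# approximation that can then be evaluated millions of times.
import numpy as np

def expensive_simulation(x):
    # Stand-in for an hours-long physics run; here just an analytic response.
    return np.sin(3.0 * x) + 0.5 * x

# Small design of experiments: only a handful of true runs are affordable.
x_train = np.linspace(0.0, 2.0, 8)
y_train = expensive_simulation(x_train)

# Cheap surrogate: a least-squares polynomial fit (Gaussian processes or
# neural networks are common alternatives).
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=5))

# The surrogate can now be queried very many times in the risk analysis.
x_query = np.linspace(0.0, 2.0, 1_000_000)
y_query = surrogate(x_query)                     # microseconds per point
err = np.max(np.abs(y_query - expensive_simulation(x_query)))
print(f"maximum surrogate error on the query grid: {err:.3e}")
```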

  6. Runway Scheduling Using Generalized Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Montoya, Justin; Wood, Zachary; Rathinam, Sivakumar

    2011-01-01

    A generalized dynamic programming method for finding a set of Pareto optimal solutions for a runway scheduling problem is introduced. The algorithm generates a set of runway flight sequences that are optimal for both runway throughput and delay. Realistic time-based operational constraints are considered, including miles-in-trail separation, runway crossings, and wake vortex separation. The authors also model divergent runway takeoff operations to allow for reduced wake vortex separation. A modeled Dallas/Fort Worth International airport and three baseline heuristics are used to illustrate preliminary benefits of using the generalized dynamic programming method. Simulated traffic levels ranged from 10 aircraft to 30 aircraft with each test case spanning 15 minutes. The optimal solution shows a 40-70 percent decrease in the expected delay per aircraft over the baseline schedulers. Computational results suggest that the algorithm is promising for real-time application with an average computation time of 4.5 seconds. For even faster computation times, two heuristics are developed. As compared to the optimal, the heuristics are within 5% of the expected delay per aircraft and 1% of the expected number of runway operations per hour and can be 100x faster.
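
    As an illustration of exact sequencing by dynamic programming, the sketch below solves a tiny single-runway instance with wake-vortex separation only; the aircraft, release times and separation values are hypothetical, and the published method's additional constraints and Pareto treatment are omitted.

```python
# Simplified runway sequencing by dynamic programming over subsets of aircraft
# (Held-Karp style), minimizing total delay subject to wake separations.
from itertools import combinations

# Hypothetical single-runway instance: earliest available times (s) and wake
# classes (H = heavy, S = small) for five departures.
release = {"A1": 0, "A2": 30, "A3": 60, "A4": 70, "A5": 120}
wake    = {"A1": "H", "A2": "S", "A3": "S", "A4": "H", "A5": "S"}
# Required leader/follower separation in seconds (illustrative values).
sep = {("H", "H"): 96, ("H", "S"): 157, ("S", "H"): 60, ("S", "S"): 60}
aircraft = list(release)

def schedule():
    # State: (set of sequenced aircraft, last aircraft) ->
    #        (total delay, time of last operation, predecessor state)
    best = {(frozenset([a]), a): (0, release[a], None) for a in aircraft}
    for size in range(2, len(aircraft) + 1):
        for subset in combinations(aircraft, size):
            fs = frozenset(subset)
            for last in subset:
                prev_fs = fs - {last}
                cand = None
                for prev in prev_fs:
                    delay, t_prev, _ = best[(prev_fs, prev)]
                    t = max(release[last], t_prev + sep[(wake[prev], wake[last])])
                    total = delay + (t - release[last])
                    if cand is None or total < cand[0]:
                        cand = (total, t, (prev_fs, prev))
                best[(fs, last)] = cand
    full = frozenset(aircraft)
    key = min(((full, a) for a in aircraft), key=lambda k: best[k][0])
    seq, total_delay = [], best[key][0]
    while key is not None:                 # walk predecessors to recover order
        seq.append(key[1])
        key = best[key][2]
    return list(reversed(seq)), total_delay

order, delay = schedule()
print("optimal sequence:", order, "- total delay:", delay, "s")
```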

  7. PCA-based artifact removal algorithm for stroke detection using UWB radar imaging.

    PubMed

    Ricci, Elisa; di Domenico, Simone; Cianca, Ernestina; Rossi, Tommaso; Diomedi, Marina

    2017-06-01

    Stroke patients should be dispatched at the highest level of care available in the shortest time. In this context, a transportable system in specialized ambulances, able to evaluate the presence of an acute brain lesion in a short time interval (i.e., a few minutes), could shorten the delay to treatment. UWB radar imaging is an emerging diagnostic branch that has great potential for the implementation of a transportable and low-cost device. Transportability, low cost and short response time pose challenges to the signal processing algorithms of the backscattered signals as they should guarantee good performance with a reasonably low number of antennas and low computational complexity, tightly related to the response time of the device. The paper shows that a PCA-based preprocessing algorithm can: (1) achieve good performance already with a computationally simple beamforming algorithm; (2) outperform state-of-the-art preprocessing algorithms; (3) enable a further improvement in the performance (and/or decrease in the number of antennas) by using a multistatic approach with just a modest increase in computational complexity. This is an important result toward the implementation of such a diagnostic device that could play an important role in emergency scenarios.
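
    A minimal sketch of the PCA preprocessing idea on simulated multistatic data: the channel-correlated early reflection dominates the leading principal component, so removing it exposes the weak target response. Signal shapes and dimensions are assumptions for illustration.

```python
# PCA-based artifact removal sketch: subtract the leading principal component
# of the channel ensemble, which captures the strong common background.
import numpy as np

def pca_remove(X, n_components=1):
    """X: (n_channels, n_samples) backscattered signals; remove the first
    principal components of the channel ensemble (the common artifact)."""
    Xc = X - X.mean(axis=0, keepdims=True)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    artifact = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components, :]
    return Xc - artifact

rng = np.random.default_rng(1)
n_ch, n_s = 16, 512
t = np.arange(n_s)
background = np.exp(-((t - 60) / 15.0) ** 2)        # strong early reflection
target = 0.05 * np.exp(-((t - 300) / 8.0) ** 2)     # weak lesion response
X = (np.outer(0.9 + 0.2 * rng.random(n_ch), background)  # near-identical per channel
     + np.outer(rng.random(n_ch), target)
     + 0.01 * rng.standard_normal((n_ch, n_s)))

cleaned = pca_remove(X, n_components=1)
print(f"energy in the artifact window before/after: "
      f"{np.sum(X[:, 40:80] ** 2):.2f} / {np.sum(cleaned[:, 40:80] ** 2):.4f}")
```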

  8. Interpretation of the auto-mutual information rate of decrease in the context of biomedical signal analysis. Application to electroencephalogram recordings.

    PubMed

    Escudero, Javier; Hornero, Roberto; Abásolo, Daniel

    2009-02-01

    The mutual information (MI) is a measure of both linear and nonlinear dependences. It can be applied to a time series and a time-delayed version of the same sequence to compute the auto-mutual information function (AMIF). Moreover, the AMIF rate of decrease (AMIFRD) with increasing time delay in a signal is correlated with its entropy and has been used to characterize biomedical data. In this paper, we aimed at gaining insight into the dependence of the AMIFRD on several signal processing concepts and at illustrating its application to biomedical time series analysis. Thus, we have analysed a set of synthetic sequences with the AMIFRD. The results show that the AMIF decreases more quickly as bandwidth increases and that the AMIFRD becomes more negative as there is more white noise contaminating the time series. Additionally, this metric detected changes in the nonlinear dynamics of a signal. Finally, in order to illustrate the analysis of real biomedical signals with the AMIFRD, this metric was applied to electroencephalogram (EEG) signals acquired with eyes open and closed and to ictal and non-ictal intracranial EEG recordings.
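
    The sketch below computes an AMIF with a simple histogram estimator and takes the slope over the first few lags as the rate of decrease; the bin count, lag range and test signals are assumptions, but the qualitative behaviour (a more negative rate with more white noise) matches the description above.

```python
# Sketch of the auto-mutual information function (AMIF) and its initial rate
# of decrease (AMIFRD), using a histogram estimator of mutual information.
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

def amif(signal, max_lag=20, bins=16):
    """Auto-mutual information between the signal and its delayed copy."""
    return np.array([mutual_information(signal[:-lag], signal[lag:], bins)
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(2)
t = np.arange(5000)
narrowband = np.sin(2 * np.pi * 0.01 * t) + 0.1 * rng.standard_normal(t.size)
noisy      = np.sin(2 * np.pi * 0.01 * t) + 1.0 * rng.standard_normal(t.size)

for name, sig in [("narrowband", narrowband), ("noisy", noisy)]:
    a = amif(sig)
    # AMIF rate of decrease: slope of a straight-line fit over the first lags.
    rate = np.polyfit(np.arange(1, 6), a[:5], 1)[0]
    print(f"{name:10s} AMIFRD ≈ {rate:+.4f} nats/lag")
```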

  9. Using Discrete Event Simulation to predict KPI's at a Projected Emergency Room.

    PubMed

    Concha, Pablo; Neriz, Liliana; Parada, Danilo; Ramis, Francisco

    2015-01-01

    Discrete Event Simulation (DES) is a powerful tool in the design of clinical facilities. DES enables facilities to be built or adapted to achieve the expected Key Performance Indicators (KPI's) such as average waiting times according to acuity, average stay times and others. Our computational model was built and validated using expert judgment and supporting statistical data. One scenario studied resulted in a 50% decrease in the average cycle time of patients compared to the original model, mainly by modifying the patient's attention model.

  10. Association of unipedal standing time and bone mineral density in community-dwelling Japanese women.

    PubMed

    Sakai, A; Toba, N; Takeda, M; Suzuki, M; Abe, Y; Aoyagi, K; Nakamura, T

    2009-05-01

    Bone mineral density (BMD) and physical performance of the lower extremities decrease with age. In community-dwelling Japanese women, unipedal standing time, timed up and go test, and age are associated with BMD while in women aged 70 years and over, unipedal standing time is associated with BMD. The aim of this study was to clarify whether unipedal standing time is significantly associated with BMD in community-dwelling women. The subjects were 90 community-dwelling Japanese women aged 54.7 years. BMD of the second metacarpal bone was measured by computed X-ray densitometry. We measured unipedal standing time as well as timed up and go test to assess physical performance of the lower extremities. Unipedal standing time decreased with increased age. Timed up and go test significantly correlated with age. Low BMD was significantly associated with old age, short unipedal standing time, and long timed up and go test. Stepwise regression analysis revealed that age, unipedal standing time, and timed up and go test were significant factors associated with BMD. In 21 participants aged 70 years and over, body weight and unipedal standing time, but not age, were significantly associated with BMD. BMD and physical performance of the lower extremities decrease with older age. Unipedal standing time, timed up and go test, and age are associated with BMD in community-dwelling Japanese women. In women aged 70 years and over, unipedal standing time is significantly associated with BMD.

  11. Evaluating Computer Screen Time and Its Possible Link to Psychopathology in the Context of Age: A Cross-Sectional Study of Parents and Children

    PubMed Central

    Ross, Sharon; Silman, Zmira; Maoz, Hagai; Bloch, Yuval

    2015-01-01

    Background Several studies have suggested that high levels of computer use are linked to psychopathology. However, there is ambiguity about what should be considered normal or over-use of computers. Furthermore, the nature of the link between computer usage and psychopathology is controversial. The current study utilized the context of age to address these questions. Our hypothesis was that the context of age will be paramount for differentiating normal from excessive use, and that this context will allow a better understanding of the link to psychopathology. Methods In a cross-sectional study, 185 parents and children aged 3–18 years were recruited in clinical and community settings. They were asked to fill out questionnaires regarding demographics, functional and academic variables, computer use as well as psychiatric screening questionnaires. Using a regression model, we identified 3 groups of normal-use, over-use and under-use and examined known factors as putative differentiators between the over-users and the other groups. Results After modeling computer screen time according to age, factors linked to over-use were: decreased socialization (OR 3.24, Confidence interval [CI] 1.23–8.55, p = 0.018), difficulty to disengage from the computer (OR 1.56, CI 1.07–2.28, p = 0.022) and age, though borderline-significant (OR 1.1 each year, CI 0.99–1.22, p = 0.058). While psychopathology was not linked to over-use, post-hoc analysis revealed that the link between increased computer screen time and psychopathology was age-dependent and solidified as age progressed (p = 0.007). Unlike computer usage, the use of small-screens and smartphones was not associated with psychopathology. Conclusions The results suggest that computer screen time follows an age-based course. We conclude that differentiating normal from over-use as well as defining over-use as a possible marker for psychiatric difficulties must be performed within the context of age. If verified by additional studies, future research should integrate those views in order to better understand the intricacies of computer over-use. PMID:26536037

  12. Evaluating Computer Screen Time and Its Possible Link to Psychopathology in the Context of Age: A Cross-Sectional Study of Parents and Children.

    PubMed

    Segev, Aviv; Mimouni-Bloch, Aviva; Ross, Sharon; Silman, Zmira; Maoz, Hagai; Bloch, Yuval

    2015-01-01

    Several studies have suggested that high levels of computer use are linked to psychopathology. However, there is ambiguity about what should be considered normal or over-use of computers. Furthermore, the nature of the link between computer usage and psychopathology is controversial. The current study utilized the context of age to address these questions. Our hypothesis was that the context of age will be paramount for differentiating normal from excessive use, and that this context will allow a better understanding of the link to psychopathology. In a cross-sectional study, 185 parents and children aged 3-18 years were recruited in clinical and community settings. They were asked to fill out questionnaires regarding demographics, functional and academic variables, computer use as well as psychiatric screening questionnaires. Using a regression model, we identified 3 groups of normal-use, over-use and under-use and examined known factors as putative differentiators between the over-users and the other groups. After modeling computer screen time according to age, factors linked to over-use were: decreased socialization (OR 3.24, Confidence interval [CI] 1.23-8.55, p = 0.018), difficulty to disengage from the computer (OR 1.56, CI 1.07-2.28, p = 0.022) and age, though borderline-significant (OR 1.1 each year, CI 0.99-1.22, p = 0.058). While psychopathology was not linked to over-use, post-hoc analysis revealed that the link between increased computer screen time and psychopathology was age-dependent and solidified as age progressed (p = 0.007). Unlike computer usage, the use of small-screens and smartphones was not associated with psychopathology. The results suggest that computer screen time follows an age-based course. We conclude that differentiating normal from over-use as well as defining over-use as a possible marker for psychiatric difficulties must be performed within the context of age. If verified by additional studies, future research should integrate those views in order to better understand the intricacies of computer over-use.

  13. Fast matrix multiplication and its algebraic neighbourhood

    NASA Astrophysics Data System (ADS)

    Pan, V. Ya.

    2017-11-01

    Matrix multiplication is among the most fundamental operations of modern computations. By 1969 it was still commonly believed that the classical algorithm was optimal, although the experts already knew that this was not so. Worldwide interest in matrix multiplication instantly exploded in 1969, when Strassen decreased the exponent 3 of cubic time to 2.807. Then everyone expected to see matrix multiplication performed in quadratic or nearly quadratic time very soon. Further progress, however, turned out to be capricious. It was at stalemate for almost a decade, then a combination of surprising techniques (completely independent of Strassen's original ones and much more advanced) enabled a new decrease of the exponent in 1978-1981 and then again in 1986, to 2.376. By 2017 the exponent has still not passed through the barrier of 2.373, but most disturbing was the curse of recursion — even the decrease of exponents below 2.7733 required numerous recursive steps, and each of them squared the problem size. As a result, all algorithms supporting such exponents supersede the classical algorithm only for inputs of immense sizes, far beyond any potential interest for the user. We survey the long study of fast matrix multiplication, focusing on neglected algorithms for feasible matrix multiplication. We comment on their design, the techniques involved, implementation issues, the impact of their study on the modern theory and practice of Algebraic Computations, and perspectives for fast matrix multiplication. Bibliography: 163 titles.
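
    For reference, a compact recursive implementation of Strassen's seven-product scheme (exponent log2 7 ≈ 2.807) is sketched below; the cutoff that hands small blocks to the classical product is arbitrary, and the sizes are restricted to powers of two.

```python
# Illustrative recursive Strassen multiplication for square matrices whose
# size is a power of two; small blocks fall back to the classical product.
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:
        return A @ B                     # classical product for small blocks
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Strassen's seven products instead of the classical eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty((n, n), dtype=A.dtype)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
print("max deviation from the classical product:",
      np.max(np.abs(strassen(A, B) - A @ B)))
```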

  14. Random-access technique for modular bathymetry data storage in a continental shelf wave refraction program

    NASA Technical Reports Server (NTRS)

    Poole, L. R.

    1974-01-01

    A study was conducted of an alternate method for storage and use of bathymetry data in the Langley Research Center and Virginia Institute of Marine Science mid-Atlantic continental-shelf wave-refraction computer program. The regional bathymetry array was divided into 105 indexed modules which can be read individually into memory in a nonsequential manner from a peripheral file using special random-access subroutines. In running a sample refraction case, a 75-percent decrease in program field length was achieved by using the random-access storage method in comparison with the conventional method of total regional array storage. This field-length decrease was accompanied by a comparative 5-percent increase in central processing time and a 477-percent increase in the number of operating-system calls. A comparative Langley Research Center computer system cost savings of 68 percent was achieved by using the random-access storage method.
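
    A modern sketch of the same random-access idea: store the grid as fixed-size modules in one binary file and seek directly to the module covering a requested point. The grid dimensions, module size and file name are hypothetical.

```python
# Modular, random-access storage of a bathymetry grid: 105 fixed-size modules
# written back to back; only the module containing the query point is read.
import numpy as np

ROWS, COLS, TILE = 1500, 700, 100            # 15 x 7 = 105 modules of 100x100

def write_tiled(path, bathymetry):
    with open(path, "wb") as f:
        for r in range(0, ROWS, TILE):
            for c in range(0, COLS, TILE):
                bathymetry[r:r + TILE, c:c + TILE].astype(np.float32).tofile(f)

def read_module(path, row, col):
    """Read only the module containing grid point (row, col)."""
    tile_index = (row // TILE) * (COLS // TILE) + (col // TILE)
    offset = tile_index * TILE * TILE * 4          # float32 bytes per module
    with open(path, "rb") as f:
        f.seek(offset)
        data = np.fromfile(f, dtype=np.float32, count=TILE * TILE)
    return data.reshape(TILE, TILE)

depths = np.random.rand(ROWS, COLS).astype(np.float32) * 200.0
write_tiled("bathymetry.bin", depths)
module = read_module("bathymetry.bin", row=642, col=305)
assert np.allclose(module, depths[600:700, 300:400])
print("depth at (642, 305):", module[42, 5])
```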

  15. Ascent velocity and dynamics of the Fiumicino mud eruption, Rome, Italy

    NASA Astrophysics Data System (ADS)

    Vona, A.; Giordano, G.; De Benedetti, A. A.; D'Ambrosio, R.; Romano, C.; Manga, M.

    2015-08-01

    In August 2013 drilling triggered the eruption of mud near the international airport of Fiumicino (Rome, Italy). We monitored the evolution of the eruption and collected samples for laboratory characterization of physicochemical and rheological properties. Over time, muds show a progressive dilution with water; the rheology is typical of pseudoplastic fluids, with a small yield stress that decreases as mud density decreases. The eruption, while not naturally triggered, shares several similarities with natural mud volcanoes, including mud componentry, grain-size distribution, gas discharge, and mud rheology. We use the size of large ballistic fragments ejected from the vent along with mud rheology to compute a minimum ascent velocity of the mud. Computed values are consistent with in situ measurements of gas phase velocities, confirming that the stratigraphic record of mud eruptions can be quantitatively used to infer eruption history and ascent rates and hence to assess (or reassess) mud eruption hazards.

  16. Impact of computer-assisted data collection, evaluation and management on the cancer genetic counselor's time providing patient care.

    PubMed

    Cohen, Stephanie A; McIlvried, Dawn E

    2011-06-01

    Cancer genetic counseling sessions traditionally encompass collecting medical and family history information, evaluating that information for the likelihood of a genetic predisposition for a hereditary cancer syndrome, conveying that information to the patient, offering genetic testing when appropriate, obtaining consent and subsequently documenting the encounter with a clinic note and pedigree. Software programs exist to collect family and medical history information electronically, intending to improve efficiency and simplicity of collecting, managing and storing this data. This study compares the genetic counselor's time spent in cancer genetic counseling tasks in a traditional model and one using computer-assisted data collection, which is then used to generate a pedigree, risk assessment and consult note. Genetic counselor time spent collecting family and medical history and providing face-to-face counseling for a new patient session decreased from an average of 85 min to 69 min when using the computer-assisted data collection. However, there was no statistically significant change in overall genetic counselor time on all aspects of the genetic counseling process, due to an increased amount of time spent generating an electronic pedigree and consult note. Improvements in the computer program's technical design would potentially minimize data manipulation. Certain aspects of this program, such as electronic collection of family history and risk assessment, appear effective in improving cancer genetic counseling efficiency while others, such as generating an electronic pedigree and consult note, do not.

  17. The vectorization of a ray tracing program for image generation

    NASA Technical Reports Server (NTRS)

    Plunkett, D. J.; Cychosz, J. M.; Bailey, M. J.

    1984-01-01

    Ray tracing is a widely used method for producing realistic computer generated images. Ray tracing involves firing an imaginary ray from a view point, through a point on an image plane, into a three dimensional scene. The intersections of the ray with the objects in the scene determine what is visible at the point on the image plane. This process must be repeated many times, once for each point (commonly called a pixel) in the image plane. A typical image contains more than a million pixels, making this process computationally expensive. A traditional ray tracing program processes one ray at a time. In such a serial approach, as much as ninety percent of the execution time is spent computing the intersection of a ray with the surfaces in the scene. With the CYBER 205, many rays can be intersected with all the bodies in the scene with a single series of vector operations. Vectorization of this intersection process results in large decreases in computation time. The CADLAB's interest in ray tracing stems from the need to produce realistic images of mechanical parts. A high quality image of a part during the design process can increase the productivity of the designer by helping him visualize the results of his work. To be useful in the design process, these images must be produced in a reasonable amount of time. This discussion will explain how the ray tracing process was vectorized and gives examples of the images obtained.
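
    The sketch below expresses the vectorization idea in NumPy terms, intersecting one ray per pixel with a sphere in a single batch of array operations rather than in a per-ray loop; the scene and image size are hypothetical.

```python
# Batched ray/sphere intersection: every pixel's ray is processed in one set
# of vectorized array operations instead of one ray at a time.
import numpy as np

def intersect_sphere(origins, directions, center, radius):
    """Return hit distances for all rays at once (inf = miss)."""
    oc = origins - center                       # (N, 3)
    b = np.einsum("ij,ij->i", oc, directions)   # per-ray dot products
    c = np.einsum("ij,ij->i", oc, oc) - radius ** 2
    disc = b * b - c                            # directions assumed unit length
    t = -b - np.sqrt(np.maximum(disc, 0.0))
    return np.where((disc >= 0.0) & (t > 0.0), t, np.inf)

# One ray per pixel of a 512x512 image plane, all fired at once.
u, v = np.meshgrid(np.linspace(-1, 1, 512), np.linspace(-1, 1, 512))
directions = np.stack([u.ravel(), v.ravel(), np.ones(u.size)], axis=1)
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
origins = np.zeros_like(directions)

t_hit = intersect_sphere(origins, directions,
                         center=np.array([0.0, 0.0, 4.0]), radius=1.0)
print("pixels hitting the sphere:", np.isfinite(t_hit).sum(), "of", t_hit.size)
```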

  18. Differences by Sex in Association of Mental Health With Video Gaming or Other Nonacademic Computer Use Among US Adolescents

    PubMed Central

    Sung, Jung Hye; Lee, Ji-Young; Lee, Jae Eun

    2017-01-01

    Introduction Although numerous studies have examined the association between playing video games and cognitive skills, aggression, and depression, few studies have examined how these associations differ by sex. The objective of our study was to determine differences by sex in association between video gaming or other nonacademic computer use and depressive symptoms, suicidal behavior, and being bullied among adolescents in the United States. Methods We used data from the 2015 Youth Risk Behavior Survey on 15,624 US high school students. Rao–Scott χ2 tests, which were adjusted for the complex sampling design, were conducted to assess differences by sex in the association of mental health with video gaming or other nonacademic computer use. Results Approximately one-fifth (19.4%) of adolescents spent 5 or more hours daily on video gaming or other nonacademic computer use, and 17.9% did not spend any time in those activities. A greater percentage of female adolescents than male adolescents reported spending no time (22.1% and 14.0%, respectively) or 5 hours or more (21.3% and 17.5%, respectively) in gaming and other nonacademic computer use (P < .001). The association between mental problems and video gaming or other nonacademic computer use differed by sex. Among female adolescents, prevalence of mental problems increased steadily in association with increased time spent, whereas the pattern for male adolescents followed a J-shaped curve, decreasing initially, increasing slowly, and then increasing rapidly beginning at 4 hours or more. Conclusion Female adolescents were more likely to have all 3 mental health problems than male adolescents were. Spending no time or 5 hours or more daily on video gaming or other nonacademic computer use was associated with increased mental problems among both sexes. As suggested by the J-shaped relationship, 1 hour or less spent on video gaming or other nonacademic computer use may reduce depressive symptoms, suicidal behavior, and being bullied compared with no use or excessive use. PMID:29166250

  19. Differences by Sex in Association of Mental Health With Video Gaming or Other Nonacademic Computer Use Among US Adolescents.

    PubMed

    Lee, Hogan H; Sung, Jung Hye; Lee, Ji-Young; Lee, Jae Eun

    2017-11-22

    Although numerous studies have examined the association between playing video games and cognitive skills, aggression, and depression, few studies have examined how these associations differ by sex. The objective of our study was to determine differences by sex in association between video gaming or other nonacademic computer use and depressive symptoms, suicidal behavior, and being bullied among adolescents in the United States. We used data from the 2015 Youth Risk Behavior Survey on 15,624 US high school students. Rao-Scott χ 2 tests, which were adjusted for the complex sampling design, were conducted to assess differences by sex in the association of mental health with video gaming or other nonacademic computer use. Approximately one-fifth (19.4%) of adolescents spent 5 or more hours daily on video gaming or other nonacademic computer use, and 17.9% did not spend any time in those activities. A greater percentage of female adolescents than male adolescents reported spending no time (22.1% and 14.0%, respectively) or 5 hours or more (21.3% and 17.5%, respectively) in gaming and other nonacademic computer use (P < .001). The association between mental problems and video gaming or other nonacademic computer use differed by sex. Among female adolescents, prevalence of mental problems increased steadily in association with increased time spent, whereas the pattern for male adolescents followed a J-shaped curve, decreasing initially, increasing slowly, and then increasing rapidly beginning at 4 hours or more. Female adolescents were more likely to have all 3 mental health problems than male adolescents were. Spending no time or 5 hours or more daily on video gaming or other nonacademic computer use was associated with increased mental problems among both sexes. As suggested by the J-shaped relationship, 1 hour or less spent on video gaming or other nonacademic computer use may reduce depressive symptoms, suicidal behavior, and being bullied compared with no use or excessive use.

  20. On localization attacks against cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Ge, Linqiang; Yu, Wei; Sistani, Mohammad Ali

    2013-05-01

    One of the key characteristics of cloud computing is the device and location independence that enables the user to access systems regardless of their location. Because cloud computing is heavily based on sharing resources, it is vulnerable to cyber attacks. In this paper, we investigate a localization attack that enables the adversary to leverage central processing unit (CPU) resources to localize the physical location of the server used by victims. By increasing and reducing CPU usage through the malicious virtual machine (VM), the response time from the victim VM will increase and decrease correspondingly. In this way, by embedding the probing signal into the CPU usage and correlating the same pattern in the response time from the victim VM, the adversary can find the location of the victim VM. To determine attack accuracy, we investigate features in both the time and frequency domains. We conduct both theoretical and experimental studies to demonstrate the effectiveness of such an attack.
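
    A minimal sketch of the correlation step described above, using simulated CPU-probe and response-time traces; the signal model and numbers are assumptions, intended only to show how co-location would raise the correlation.

```python
# The adversary embeds an on/off pattern in CPU usage and looks for the same
# pattern in the victim VM's response times. All signals here are simulated.
import numpy as np

rng = np.random.default_rng(3)
probe = rng.integers(0, 2, 200).astype(float)         # 1 = CPU load on, 0 = off

def response_times(co_located):
    """Simulated victim response times; contention adds delay if co-located."""
    base = 5.0 + 0.5 * rng.standard_normal(probe.size)
    return base + (2.0 * probe if co_located else 0.0)

def normalized_correlation(x, y):
    x, y = x - x.mean(), y - y.mean()
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

print("same physical host:     ",
      round(normalized_correlation(probe, response_times(True)), 3))
print("different physical host:",
      round(normalized_correlation(probe, response_times(False)), 3))
```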

  1. Predictive modeling of multicellular structure formation by using Cellular Particle Dynamics simulations

    NASA Astrophysics Data System (ADS)

    McCune, Matthew; Shafiee, Ashkan; Forgacs, Gabor; Kosztin, Ioan

    2014-03-01

    Cellular Particle Dynamics (CPD) is an effective computational method for describing and predicting the time evolution of biomechanical relaxation processes of multicellular systems. A typical example is the fusion of spheroidal bioink particles during post-bioprinting structure formation. In CPD cells are modeled as an ensemble of cellular particles (CPs) that interact via short-range contact interactions, characterized by an attractive (adhesive interaction) and a repulsive (excluded volume interaction) component. The time evolution of the spatial conformation of the multicellular system is determined by following the trajectories of all CPs through integration of their equations of motion. CPD was successfully applied to describe and predict the fusion of 3D tissue constructs involving identical spherical aggregates. Here, we demonstrate that CPD can also predict tissue formation involving uneven spherical aggregates whose volumes decrease during the fusion process. Work supported by NSF [PHY-0957914]. Computer time provided by the University of Missouri Bioinformatics Consortium.

  2. [Cost analysis for navigation in knee endoprosthetics].

    PubMed

    Cerha, O; Kirschner, S; Günther, K-P; Lützner, J

    2009-12-01

    Total knee arthroplasty (TKA) is one of the most frequent procedures in orthopaedic surgery. The outcome depends on a range of factors including alignment of the leg and the positioning of the implant in addition to patient-associated factors. Computer-assisted navigation systems can improve the restoration of a neutral leg alignment. This procedure has been established especially in Europe and North America. The additional expenses are not reimbursed in the German DRG system (Diagnosis Related Groups). In the present study a cost analysis of computer-assisted TKA compared to the conventional technique was performed. The acquisition expenses of various navigation systems (5 and 10 year depreciation), annual costs for maintenance and software updates as well as the accompanying costs per operation (consumables, additional operating time) were considered. The additional operating time was determined on the basis of a meta-analysis according to the current literature. Situations with 25, 50, 100, 200 and 500 computer-assisted TKAs per year were simulated. The amount of the incremental costs of the computer-assisted TKA depends mainly on the annual volume and the additional operating time. A relevant decrease of the incremental costs was detected between 50 and 100 procedures per year. In a model with 100 computer-assisted TKAs per year an additional operating time of 14 mins and a 10 year depreciation of the investment costs, the incremental expenses amount to 300-395 depending on the navigation system. Computer-assisted TKA is associated with additional costs. From an economical point of view an amount of more than 50 procedures per year appears to be favourable. The cost-effectiveness could be estimated if long-term results will show a reduction of revisions or a better clinical outcome.

  3. A UWB Radar Signal Processing Platform for Real-Time Human Respiratory Feature Extraction Based on Four-Segment Linear Waveform Model.

    PubMed

    Hsieh, Chi-Hsuan; Chiu, Yu-Fang; Shen, Yi-Hsiang; Chu, Ta-Shun; Huang, Yuan-Hao

    2016-02-01

    This paper presents an ultra-wideband (UWB) impulse-radio radar signal processing platform used to analyze human respiratory features. Conventional radar systems used in human detection only analyze human respiration rates or the response of a target. However, additional respiratory signal information is available that has not been explored using radar detection. The authors previously proposed a modified raised cosine waveform (MRCW) respiration model and an iterative correlation search algorithm that could acquire additional respiratory features such as the inspiration and expiration speeds, respiration intensity, and respiration holding ratio. To realize real-time respiratory feature extraction by using the proposed UWB signal processing platform, this paper proposes a new four-segment linear waveform (FSLW) respiration model. This model offers a superior fit to the measured respiration signal compared with the MRCW model and decreases the computational complexity of feature extraction. In addition, an early-terminated iterative correlation search algorithm is presented, substantially decreasing the computational complexity and yielding negligible performance degradation. These extracted features can be considered the compressed signals used to decrease the amount of data storage required for use in long-term medical monitoring systems and can also be used in clinical diagnosis. The proposed respiratory feature extraction algorithm was designed and implemented using the proposed UWB radar signal processing platform including a radar front-end chip and an FPGA chip. The proposed radar system can detect human respiration rates at 0.1 to 1 Hz and facilitates the real-time analysis of the respiratory features of each respiration period.

  4. An Algebraic Approach to Guarantee Harmonic Balance Method Using Gröbner Base

    NASA Astrophysics Data System (ADS)

    Yagi, Masakazu; Hisakado, Takashi; Okumura, Kohshi

    The harmonic balance (HB) method is a well-known principle for analyzing periodic oscillations in nonlinear networks and systems. Because the HB method has a truncation error, approximate solutions have been guaranteed by error bounds. However, its numerical computation is very time-consuming compared with solving the HB equation. This paper proposes an algebraic representation of the error bound using Gröbner base. The algebraic representation makes it possible to decrease the computational cost of the error bound considerably. Moreover, using singular points of the algebraic representation, we can obtain accurate break points of the error bound by collisions.

  5. On the tip of the tongue: learning typing and pointing with an intra-oral computer interface.

    PubMed

    Caltenco, Héctor A; Breidegard, Björn; Struijk, Lotte N S Andreasen

    2014-07-01

    To evaluate typing and pointing performance and improvement over time of four able-bodied participants using an intra-oral tongue-computer interface for computer control. A physically disabled individual may lack the ability to efficiently control standard computer input devices. There have been several efforts to produce and evaluate interfaces that provide individuals with physical disabilities the possibility to control personal computers. Training with the intra-oral tongue-computer interface was performed by playing games over 18 sessions. Skill improvement was measured through typing and pointing exercises at the end of each training session. Typing throughput improved from averages of 2.36 to 5.43 correct words per minute. Pointing throughput improved from averages of 0.47 to 0.85 bits/s. Target tracking performance, measured as relative time on target, improved from averages of 36% to 47%. Path following throughput improved from averages of 0.31 to 0.83 bits/s and decreased to 0.53 bits/s with more difficult tasks. Learning curves support the notion that the tongue can rapidly learn novel motor tasks. Typing and pointing performance of the tongue-computer interface is comparable to performances of other proficient assistive devices, which makes the tongue a feasible input organ for computer control. Intra-oral computer interfaces could provide individuals with severe upper-limb mobility impairments the opportunity to control computers and automatic equipment. Typing and pointing performance of the tongue-computer interface is comparable to performances of other proficient assistive devices, but does not cause fatigue easily and might be invisible to other people, which is highly prioritized by assistive device users. Combination of visual and auditory feedback is vital for a good performance of an intra-oral computer interface and helps to reduce involuntary or erroneous activations.

  6. Arabidopsis plants perform arithmetic division to prevent starvation at night

    PubMed Central

    Scialdone, Antonio; Mugford, Sam T; Feike, Doreen; Skeffington, Alastair; Borrill, Philippa; Graf, Alexander; Smith, Alison M; Howard, Martin

    2013-01-01

    Photosynthetic starch reserves that accumulate in Arabidopsis leaves during the day decrease approximately linearly with time at night to support metabolism and growth. We find that the rate of decrease is adjusted to accommodate variation in the time of onset of darkness and starch content, such that reserves last almost precisely until dawn. Generation of these dynamics therefore requires an arithmetic division computation between the starch content and expected time to dawn. We introduce two novel chemical kinetic models capable of implementing analog arithmetic division. Predictions from the models are successfully tested in plants perturbed by a night-time light period or by mutations in starch degradation pathways. Our experiments indicate which components of the starch degradation apparatus may be important for appropriate arithmetic division. Our results are potentially relevant for any biological system dependent on a food reserve for survival over a predictable time period. DOI: http://dx.doi.org/10.7554/eLife.00669.001 PMID:23805380
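
    The division computation described above can be sketched in a few lines: the night-time degradation rate is set to the starch content divided by the expected time to dawn, so the reserve runs out almost exactly at dawn regardless of when darkness begins. The numbers below are illustrative, not measured values.

```python
# Sketch of the arithmetic-division strategy: rate = S(dusk) / T(dusk-to-dawn),
# giving a linear decline that reaches zero at dawn for any night length.
import numpy as np

def starch_trajectory(s_at_dusk, hours_to_dawn, dt=0.1):
    """Linear night-time starch decline at rate S(dusk) / T(dusk-to-dawn)."""
    rate = s_at_dusk / hours_to_dawn          # the arithmetic division
    t = np.arange(0.0, hours_to_dawn + dt, dt)
    return t, np.maximum(s_at_dusk - rate * t, 0.0)

# An early dusk (long night) and a late dusk (short night) with the same
# starch content both exhaust the reserve almost exactly at dawn.
for label, night_len in [("long night, 16 h", 16.0), ("short night, 8 h", 8.0)]:
    t, s = starch_trajectory(s_at_dusk=100.0, hours_to_dawn=night_len)
    one_hour_left = s[np.searchsorted(t, night_len - 1.0)]
    print(f"{label}: starch 1 h before dawn = {one_hour_left:.1f}, "
          f"at dawn = {s[-1]:.1f}")
```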

  7. Development of a small-scale computer cluster

    NASA Astrophysics Data System (ADS)

    Wilhelm, Jay; Smith, Justin T.; Smith, James E.

    2008-04-01

    An increase in demand for computing power in academia has created a need for high-performance machines. The computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits to its performance, a cluster of computers can, with the proper software, multiply the performance of a single computer. Cluster computing has therefore become a much sought after technology. Typical desktop computers could be used for cluster computing, but are not intended for constant full speed operation and take up more space than rack mount servers. Specialty computers that are designed to be used in clusters meet high availability and space requirements, but can be costly. A market segment exists where custom built desktop computers can be arranged in a rack mount situation, gaining the space saving of traditional rack mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components, multiplying the performance of a single desktop machine while minimizing occupied space and still remaining cost effective.

  8. Explicit integration with GPU acceleration for large kinetic networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brock, Benjamin; Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37830; Belt, Andrew

    2015-12-01

    We demonstrate the first implementation of recently-developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve on the order of 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. This orders-of-magnitude decrease in computation time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.

  9. Efficient calculation of luminance variation of a luminaire that uses LED light sources

    NASA Astrophysics Data System (ADS)

    Goldstein, Peter

    2007-09-01

    Many luminaires have an array of LEDs that illuminate a lenslet-array diffuser in order to create the appearance of a single, extended source with a smooth luminance distribution. Designing such a system is challenging because luminance calculations for a lenslet array generally involve tracing millions of rays per LED, which is computationally intensive and time-consuming. This paper presents a technique for calculating an on-axis luminance distribution by tracing only one ray per LED per lenslet. A multiple-LED system is simulated with this method, and with Monte Carlo ray-tracing software for comparison. Accuracy improves, and computation time decreases by at least five orders of magnitude with this technique, which has applications in LED-based signage, displays, and general illumination.

  10. Graphical processors for HEP trigger systems

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.

    2017-02-01

    General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to employ GPUs as accelerators in offline computations. With the steady decrease of GPU latencies and the increase in link and memory throughputs, time is ripe for real-time applications using GPUs in high-energy physics data acquisition and trigger systems. We will discuss the use of online parallel computing on GPUs for synchronous low level trigger systems, focusing on tests performed on the trigger of the CERN NA62 experiment. Latencies of all components need analysing, networking being the most critical. To keep it under control, we envisioned NaNet, an FPGA-based PCIe Network Interface Card (NIC) enabling GPUDirect connection. Moreover, we discuss how specific trigger algorithms can be parallelised and thus benefit from a GPU implementation, in terms of increased execution speed. Such improvements are particularly relevant for the foreseen LHC luminosity upgrade where highly selective algorithms will be crucial to maintain sustainable trigger rates with very high pileup.

  11. Computer program for diagnostic X-ray exposure conversion.

    PubMed

    Lewis, S

    1984-01-01

    Presented is a computer program designed to convert any given set of exposure factors sequentially into another, yielding either an equivalent photographic density or one increased or decreased by a specifiable proportion. In addition to containing the wherewithal with which to manipulate a set of exposure factors, the facility to print hard (paper) copy is included enabling the results to be pasted into a notebook and used at any time. This program was originally written as an investigative exercise into examining the potential use of computers for practical radiographic purposes as conventionally encountered. At the same time, its possible use as an educational tool was borne in mind. To these ends, the current version of this program may be used as a means whereby exposure factors used in a diagnostic department may be altered to suit a particular requirement or may be used in the school as a mathematical model to describe the behaviour of exposure factors under manipulation without patient exposure.

  12. Consequences of Base Time for Redundant Signals Experiments

    PubMed Central

    Townsend, James T.; Honey, Christopher

    2007-01-01

    We report analytical and computational investigations into the effects of base time on the diagnosticity of two popular theoretical tools in the redundant signals literature: (1) the race model inequality and (2) the capacity coefficient. We show analytically and without distributional assumptions that the presence of base time decreases the sensitivity of both of these measures to model violations. We further use simulations to investigate the statistical power of model selection tools based on the race model inequality, both with and without base time. Base time decreases statistical power, and biases the race model test toward conservatism. The magnitude of this biasing effect increases as we increase the proportion of total reaction time variance contributed by base time. We marshal empirical evidence to suggest that the proportion of reaction time variance contributed by base time is relatively small, and that the effects of base time on the diagnosticity of our model-selection tools are therefore likely to be minor. However, uncertainty remains concerning the magnitude and even the definition of base time. Experimentalists should continue to be alert to situations in which base time may contribute a large proportion of the total reaction time variance. PMID:18670591
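
    For concreteness, the sketch below runs the race model inequality check (F_AB(t) ≤ F_A(t) + F_B(t)) on simulated reaction times that include an additive base-time component; the distributions and parameters are assumptions chosen only to illustrate the test.

```python
# Race model inequality check on simulated reaction times with an additive
# base-time component shared by all conditions.
import numpy as np

rng = np.random.default_rng(4)
n = 5000

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated on `grid`."""
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

base = rng.normal(200.0, 40.0, n)                     # base/residual time
rt_a = base + rng.exponential(150.0, n)               # single-channel A trials
rt_b = base + rng.exponential(150.0, n)               # single-channel B trials
rt_ab = base + np.minimum(rng.exponential(150.0, n),  # redundant trials: a race
                          rng.exponential(150.0, n))

grid = np.linspace(150.0, 600.0, 46)
violation = ecdf(rt_ab, grid) - (ecdf(rt_a, grid) + ecdf(rt_b, grid))
print(f"max violation of the race bound: {violation.max():+.3f}")
# For a pure race (as simulated) the bound holds up to sampling noise; a large
# base-time variance blurs the fast tail, which is the loss of diagnosticity
# discussed above.
```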

  13. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of the distribution of historical flight records, and the mode is estimated by using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestions with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem such that the subproblems are solvable. However, the whole problem is still computationally expensive to solve since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear runtime increase when the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreaded programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
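    The abstract describes solving the decomposed subproblems concurrently; the sketch below shows only that coordination pattern, with a placeholder subproblem and hypothetical names (solve_subproblem, dual_iteration, the slot discretisation). The paper's actual integer-programming subproblems and master price update are not reproduced, and for CPU-bound pure-Python solvers a process pool would be substituted for the thread pool.

```python
# Outline of one dual-decomposition iteration with subproblems solved concurrently.
# solve_subproblem is a stand-in; in the described method each subproblem is a small
# integer program over a flight route's delay decisions.
from concurrent.futures import ThreadPoolExecutor

def solve_subproblem(route, prices):
    """Placeholder: pick the ground-delay slot minimising delay plus congestion prices."""
    slots = range(10)  # candidate delay slots (hypothetical discretisation)
    return min(slots, key=lambda s: s + prices.get((route, s), 0.0))

def dual_iteration(routes, prices, workers=8):
    """Solve all route subproblems for the current dual prices in parallel.
    A master step (not shown) would then update the prices from capacity violations."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        decisions = list(pool.map(lambda r: solve_subproblem(r, prices), routes))
    return dict(zip(routes, decisions))

if __name__ == "__main__":
    routes = [f"route-{i}" for i in range(1000)]   # hypothetical origin-destination routes
    print(len(dual_iteration(routes, prices={})))
```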

  14. Optical Interconnections for VLSI Computational Systems Using Computer-Generated Holography.

    NASA Astrophysics Data System (ADS)

    Feldman, Michael Robert

    Optical interconnects for VLSI computational systems using computer generated holograms are evaluated in theory and experiment. It is shown that by replacing particular electronic connections with free-space optical communication paths, connection of devices on a single chip or wafer and between chips or modules can be improved. Optical and electrical interconnects are compared in terms of power dissipation, communication bandwidth, and connection density. Conditions are determined for which optical interconnects are advantageous. Based on this analysis, it is shown that by applying computer generated holographic optical interconnects to wafer scale fine grain parallel processing systems, dramatic increases in system performance can be expected. Some new interconnection networks, designed to take full advantage of optical interconnect technology, have been developed. Experimental Computer Generated Holograms (CGH's) have been designed, fabricated and subsequently tested in prototype optical interconnected computational systems. Several new CGH encoding methods have been developed to provide efficient high performance CGH's. One CGH was used to decrease the access time of a 1 kilobit CMOS RAM chip. Another was produced to implement the inter-processor communication paths in a shared memory SIMD parallel processor array.

  15. Efficient Redundancy Techniques in Cloud and Desktop Grid Systems using MAP/G/c-type Queues

    NASA Astrophysics Data System (ADS)

    Chakravarthy, Srinivas R.; Rumyantsev, Alexander

    2018-03-01

    Cloud computing is continuing to prove its flexibility and versatility in helping industries and businesses as well as academia as a way of providing needed computing capacity. As an important alternative to cloud computing, desktop grids allow the idle computer resources of an enterprise/community to be utilized by means of a distributed computing system, providing a more secure and controllable environment with lower operational expenses. Further, both cloud computing and desktop grids are meant to optimize limited resources and at the same time to decrease the expected latency for users. The crucial parameter for optimization both in cloud computing and in desktop grids is the level of redundancy (replication) for service requests/workunits. In this paper we study the optimal replication policies by considering three variations of Fork-Join systems in the context of a multi-server queueing system with a versatile point process for the arrivals. For service times we consider phase-type distributions as well as shifted exponential and Weibull distributions. We use both analytical and simulation approaches in our analysis and report some interesting qualitative results.
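    A small Monte Carlo sketch (not the paper's Fork-Join analysis) of the redundancy trade-off it studies: each request is replicated to r of c first-come-first-served servers and completes when the first copy finishes, so higher replication can reduce latency per request but also raises the load every server sees. The server count, arrival rate, and Weibull service parameters are illustrative assumptions, and which replication level wins depends on the load.

```python
# Toy Monte Carlo of request replication on a pool of FCFS servers: each request
# is copied to `replicas` servers and completes when the first copy finishes
# (copies run to completion, i.e. no cancellation).  All parameters are illustrative.
import random

def mean_latency(replicas, servers=8, arrival_rate=2.0, scale=1.0, n_req=20000, seed=1):
    rng = random.Random(seed)
    free_at = [0.0] * servers               # time at which each server next becomes idle
    now, total_wait = 0.0, 0.0
    for _ in range(n_req):
        now += rng.expovariate(arrival_rate)              # Poisson arrivals
        chosen = rng.sample(range(servers), replicas)     # send copies to r distinct servers
        finishes = []
        for s in chosen:
            start = max(now, free_at[s])
            service = rng.weibullvariate(scale, 1.5)      # Weibull service times (as in the abstract)
            free_at[s] = start + service
            finishes.append(start + service)
        total_wait += min(finishes) - now                 # latency = first copy to finish
    return total_wait / n_req

for r in (1, 2, 3):
    print(f"replication level {r}: mean latency {mean_latency(r):.2f}")
```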

  16. Socio-Demographic, Social-Cognitive, Health-Related and Physical Environmental Variables Associated with Context-Specific Sitting Time in Belgian Adolescents: A One-Year Follow-Up Study.

    PubMed

    Busschaert, Cedric; Ridgers, Nicola D; De Bourdeaudhuij, Ilse; Cardon, Greet; Van Cauwenberg, Jelle; De Cocker, Katrien

    2016-01-01

    More knowledge is warranted about multilevel ecological variables associated with context-specific sitting time among adolescents. The present study explored cross-sectional and longitudinal associations of ecological domains of sedentary behaviour, including socio-demographic, social-cognitive, health-related and physical-environmental variables with sitting during TV viewing, computer use, electronic gaming and motorized transport among adolescents. For this longitudinal study, a sample of Belgian adolescents completed questionnaires at school on context-specific sitting time and associated ecological variables. At baseline, complete data were gathered from 513 adolescents (15.0±1.7 years). At one-year follow-up, complete data of 340 participants were available (retention rate: 66.3%). Multilevel linear regression analyses were conducted to explore cross-sectional correlates (baseline variables) and longitudinal predictors (change scores variables) of context-specific sitting time. Social-cognitive correlates/predictors were most frequently associated with context-specific sitting time. Longitudinal analyses revealed that increases over time in considering it pleasant to watch TV (p < .001), in perceiving TV watching as a way to relax (p < .05), in TV time of parents/care givers (p < .01) and in TV time of siblings (p < .001) were associated with more sitting during TV viewing at follow-up. Increases over time in considering it pleasant to use a computer in leisure time (p < .01) and in the computer time of siblings (p < .001) were associated with more sitting during computer use at follow-up. None of the changes in potential predictors were significantly related to changes in sitting during motorized transport or during electronic gaming. Future intervention studies aiming to decrease TV viewing and computer use should acknowledge the importance of the behaviour of siblings and the pleasure adolescents experience during these screen-related behaviours. In addition, more time parents or care givers spent sitting may lead to more sitting during TV viewing of the adolescents, so that a family-based approach may be preferable for interventions. Experimental study designs are warranted to confirm the present findings.

  17. Socio-Demographic, Social-Cognitive, Health-Related and Physical Environmental Variables Associated with Context-Specific Sitting Time in Belgian Adolescents: A One-Year Follow-Up Study

    PubMed Central

    Busschaert, Cedric; Ridgers, Nicola D.; De Bourdeaudhuij, Ilse; Cardon, Greet; Van Cauwenberg, Jelle; De Cocker, Katrien

    2016-01-01

    Introduction More knowledge is warranted about multilevel ecological variables associated with context-specific sitting time among adolescents. The present study explored cross-sectional and longitudinal associations of ecological domains of sedentary behaviour, including socio-demographic, social-cognitive, health-related and physical-environmental variables with sitting during TV viewing, computer use, electronic gaming and motorized transport among adolescents. Methods For this longitudinal study, a sample of Belgian adolescents completed questionnaires at school on context-specific sitting time and associated ecological variables. At baseline, complete data were gathered from 513 adolescents (15.0±1.7 years). At one-year follow-up, complete data of 340 participants were available (retention rate: 66.3%). Multilevel linear regression analyses were conducted to explore cross-sectional correlates (baseline variables) and longitudinal predictors (change scores variables) of context-specific sitting time. Results Social-cognitive correlates/predictors were most frequently associated with context-specific sitting time. Longitudinal analyses revealed that increases over time in considering it pleasant to watch TV (p < .001), in perceiving TV watching as a way to relax (p < .05), in TV time of parents/care givers (p < .01) and in TV time of siblings (p < .001) were associated with more sitting during TV viewing at follow-up. Increases over time in considering it pleasant to use a computer in leisure time (p < .01) and in the computer time of siblings (p < .001) were associated with more sitting during computer use at follow-up. None of the changes in potential predictors were significantly related to changes in sitting during motorized transport or during electronic gaming. Conclusions Future intervention studies aiming to decrease TV viewing and computer use should acknowledge the importance of the behaviour of siblings and the pleasure adolescents experience during these screen-related behaviours. In addition, more time parents or care givers spent sitting may lead to more sitting during TV viewing of the adolescents, so that a family-based approach may be preferable for interventions. Experimental study designs are warranted to confirm the present findings. PMID:27936073

  18. Minimally invasive and computer-navigated total hip arthroplasty: a qualitative and systematic review of the literature

    PubMed Central

    2010-01-01

    Background Both minimally invasive surgery (MIS) and computer-assisted surgery (CAS) for total hip arthroplasty (THA) have gained popularity in recent years. We conducted a qualitative and systematic review to assess the effectiveness of MIS, CAS and computer-assisted MIS for THA. Methods An extensive computerised literature search of PubMed, Medline, Embase and OVIDSP was conducted. Both randomised clinical trials and controlled clinical trials on the effectiveness of MIS, CAS and computer-assisted MIS for THA were included. Methodological quality was independently assessed by two reviewers. Effect estimates were calculated and a best-evidence synthesis was performed. Results Four high-quality and 14 medium-quality studies with MIS THA as study contrast, and three high-quality and four medium-quality studies with CAS THA as study contrast were included. No studies with computer-assisted MIS for THA as study contrast were identified. Strong evidence was found for a decrease in operative time and intraoperative blood loss for MIS THA, with no difference in complication rates and risk for acetabular outliers. Strong evidence exists that there is no difference in physical functioning, measured either by questionnaires or by gait analysis. Moderate evidence was found for a shorter length of hospital stay after MIS THA. Conflicting evidence was found for a positive effect of MIS THA on pain in the early postoperative period, but that effect diminished after three months postoperatively. Strong evidence was found for an increase in operative time for CAS THA, and limited evidence was found for a decrease in intraoperative blood loss. Furthermore, strong evidence was found for no difference in complication rates, as well as for a significantly lower risk for acetabular outliers. Conclusions The results indicate that MIS THA is a safe surgical procedure, without increases in operative time, blood loss, operative complication rates and component malposition rates. However, the beneficial effect of MIS THA on functional recovery has to be proven. The results also indicate that CAS THA, though resulting in an increase in operative time, may have a positive effect on operative blood loss and operative complication rates. More importantly, the use of CAS results in better positioning of acetabular component of the prosthesis. PMID:20470443

  19. The effect of area size and predation on the time to extinction of prairie vole populations. simulation studies via SERDYCA: a Spatially-Explicit Individual-Based Model of Rodent Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostova, T; Carlsen, T

    2003-11-21

    We present a spatially-explicit individual-based computational model of rodent dynamics, customized for the prairie vole species, M. ochrogaster. The model is based on trophic relationships and represents important features such as territorial competition, mating behavior, density-dependent predation and dispersal out of the modeled spatial region. Vegetation growth and vole fecundity are dependent on climatic components. The results of simulations show that the model correctly predicts the overall temporal dynamics of the population density. Time-series analysis shows a very good match between the periods corresponding to the peak population density frequencies predicted by the model and the ones reported in the literature. The model is used to study the relation between persistence, landscape area and predation. We introduce the notions of average time to extinction (ATE) and persistence frequency to quantify persistence. While the ATE decreases with decreasing area, it is a bell-shaped function of the predation level: increasing for 'small' and decreasing for 'large' predation levels.

  20. Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment

    NASA Astrophysics Data System (ADS)

    Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.

    2013-12-01

    Dust storms have serious negative impacts on the environment, human health, and assets. Continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The developments and applications of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process. Normally, a simulation of a single dust storm event may take hours or even days to run, which seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with other geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is the key factor that may impact the feasibility of the parallelization. The allocation algorithm needs to carefully balance the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with an evenly distributed allocation method. Specifically, 1) in order to get optimized solutions, a quadratic programming based modeling method is proposed; this algorithm performs well with a small number of computing tasks, but its efficiency decreases significantly as the subdomain number and computing node number increase. 2) To compensate for the performance decrease on large-scale tasks, a K-Means clustering based algorithm is introduced; instead of seeking fully optimized solutions, this method obtains relatively good feasible solutions within acceptable time, but it may introduce imbalanced communication among nodes or node-isolated subdomains. This research shows that both algorithms have their own strengths and weaknesses for task allocation. A combination of the two algorithms is under study to obtain better performance. Keywords: Scheduling; Parallel Computing; Load Balance; Optimization; Cost Model
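    As a sketch of the clustering idea described above (not the authors' implementation), the snippet groups subdomain centroids into one spatial cluster per computing node with K-Means so that most halo exchanges stay node-local; the coordinates are synthetic, scikit-learn is assumed to be available, and the load-balancing refinement discussed in the abstract is omitted.

```python
# Group subdomain centroids into one spatial cluster per computing node.
# Synthetic coordinates; the balancing step discussed above is omitted.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
subdomain_centroids = rng.uniform(0, 100, size=(400, 2))   # lon/lat-like centroids of subdomains
n_nodes = 16

labels = KMeans(n_clusters=n_nodes, n_init=10, random_state=0).fit_predict(subdomain_centroids)

# Without a balancing step, cluster sizes (and hence node loads) may differ noticeably.
print("subdomains per node:", np.bincount(labels, minlength=n_nodes).tolist())
```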

  1. Predicting moisture and economic value of solid forest fuel piles for improving the profitability of bioenergy use

    NASA Astrophysics Data System (ADS)

    Lauren, Ari; Kinnunen, Jyrki-Pekko; Sikanen, Lauri

    2016-04-01

    Bioenergy contributes 26 % of the total energy use in Finland, and 60 % of this is provided by solid forest fuel consisting of small stems and logging residues such as tops, branches, roots and stumps. Typically the logging residues are stored as piles on site before being transported to regional combined heat and power plants for combustion. The profitability of forest fuel use depends on smart control of the feedstock. Fuel moisture, dry matter loss, and the rate of interest during storage are the key variables affecting the economic value of the fuel. The value increases with drying, but decreases with wetting, dry matter loss and a positive rate of interest. We compiled a simple simulation model computing the moisture change, dry matter loss, transportation costs and present value of feedstock piles. The model was used to predict the time of the maximum value of the stock, and to compose feedstock allocation strategies under the question: how should we choose the piles and the combustion time so that the total energy yield and the economic value of the energy production are maximized? The question was assessed with respect to the demand of the energy plant. The model parameterization was based on field-scale studies. The initial moisture and the rates of daily moisture change and dry matter loss in the feedstock piles depended on the day of the year, according to empirical field measurements. The time step of the computation was one day. The effects of pile use timing on the total energy yield and profitability were studied using combinatorial optimization. Results show that storing increases the pile's maximum value if natural drying begins soon after harvesting; otherwise dry matter loss and the capital cost of storing outweigh the benefits gained by drying. Optimized timing of pile use can slightly improve profitability, owing to the increased total energy yield and because transportation costs per unit of energy decrease as the water content of the biomass decreases.
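    A daily time-step sketch in the spirit of the model described above: pile moisture and dry matter are updated each day, the pile's energy content and discounted value are computed, and the day that maximizes present value is reported. The drying and loss rates, heating values, price, and interest rate below are made-up placeholders, not the calibrated field values used in the study.

```python
# Daily simulation of a single feedstock pile's discounted value.
# All rates, prices, and the linear drying model are placeholder assumptions.

def pile_value_over_time(days=365, m0=0.55, dry_mass0=100.0,
                         daily_drying=0.002, daily_dm_loss=0.0004,
                         price_per_mwh=20.0, annual_interest=0.05):
    heating_value = 5.3          # MWh per tonne of dry matter (rough placeholder)
    moisture_penalty = 0.7       # MWh lost per tonne of water evaporated in the boiler (placeholder)
    best_day, best_value, values = 0, float("-inf"), []
    for d in range(days):
        moisture = max(0.20, m0 - daily_drying * d)          # moisture fraction, floored at 20 %
        dry_mass = dry_mass0 * (1.0 - daily_dm_loss) ** d    # dry-matter loss while stored
        water = dry_mass * moisture / (1.0 - moisture)       # tonnes of water in the pile
        energy = heating_value * dry_mass - moisture_penalty * water
        value = price_per_mwh * energy / (1.0 + annual_interest) ** (d / 365.0)
        values.append(value)
        if value > best_value:
            best_day, best_value = d, value
    return best_day, best_value, values

day, value, _ = pile_value_over_time()
print(f"maximum discounted value {value:,.0f} on storage day {day}")
```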

  2. Accessible high-throughput virtual screening molecular docking software for students and educators.

    PubMed

    Jacob, Reed B; Andersen, Tim; McDougal, Owen M

    2012-05-01

    We survey low-cost, high-throughput virtual screening (HTVS) computer programs for instructors who wish to demonstrate molecular docking in their courses. Since HTVS programs are a useful adjunct to the time-consuming and expensive wet-bench experiments necessary to discover new drug therapies, the topic of molecular docking is core to the instruction of biochemistry and molecular biology. The availability of HTVS programs, coupled with decreasing costs and advances in computer hardware, has made computational approaches to drug discovery possible at institutional and non-profit budgets. This paper focuses on HTVS programs with graphical user interfaces (GUIs) that use either DOCK or AutoDock for docking predictions, namely DockoMatic, PyRx, DockingServer, and MOLA, since their utility has been proven by the research community, they are free or affordable, and the programs operate on a range of computer platforms.

  3. On the Fast Evaluation Method of Temperature and Gas Mixing Ratio Weighting Functions for Remote Sensing of Planetary Atmospheres in Thermal IR and Microwave

    NASA Technical Reports Server (NTRS)

    Ustinov, E. A.

    1999-01-01

    Evaluation of weighting functions in atmospheric remote sensing is usually the most computer-intensive part of inversion algorithms. We present an analytic approach to the computation of temperature and mixing ratio weighting functions that is based on our previous results; the resulting expressions reuse intermediate variables that are already generated in the computation of the observable radiances themselves. Upwelling radiances at the given level in the atmosphere and atmospheric transmittances from space to the given level are combined with local values of the total absorption coefficient and its components due to absorption of the atmospheric constituents under study. This makes it possible to evaluate the temperature and mixing ratio weighting functions in parallel with the evaluation of radiances. This substantially decreases the computer time required for evaluation of weighting functions. Implications for the nadir and limb viewing geometries are discussed.

  4. Precision digital control systems

    NASA Astrophysics Data System (ADS)

    Vyskub, V. G.; Rozov, B. S.; Savelev, V. I.

    This book is concerned with the characteristics of digital control systems of great accuracy. A classification of such systems is considered along with aspects of stabilization, programmable control applications, digital tracking systems and servomechanisms, and precision systems for the control of a scanning laser beam. Other topics explored are related to systems of proportional control, linear devices and methods for increasing precision, approaches for further decreasing the response time in the case of high-speed operation, possibilities for the implementation of a logical control law, and methods for the study of precision digital control systems. A description is presented of precision automatic control systems which make use of electronic computers, taking into account the existing possibilities for an employment of computers in automatic control systems, approaches and studies required for including a computer in such control systems, and an analysis of the structure of automatic control systems with computers. Attention is also given to functional blocks in the considered systems.

  5. The Computer Aided Aircraft-design Package (CAAP)

    NASA Technical Reports Server (NTRS)

    Yalif, Guy U.

    1994-01-01

    The preliminary design of an aircraft is a complex, labor-intensive, and creative process. Since the 1970s, many computer programs have been written to help automate preliminary airplane design. Time and resource analyses have identified 'a substantial decrease in project duration with the introduction of an automated design capability'. Proof-of-concept studies have been completed which establish 'a foundation for a computer-based airframe design capability'. Unfortunately, today's design codes exist in many different languages on many, often expensive, hardware platforms. Through the use of a module-based system architecture, the Computer aided Aircraft-design Package (CAAP) will eventually bring together many of the most useful features of existing programs. Through the use of an expert system, it will add an additional feature that could be described as indispensable to entry-level engineers and students: the incorporation of 'expert' knowledge into the automated design process.

  6. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1991-01-01

    The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  7. Evolution of product lifespan and implications for environmental assessment and management: a case study of personal computers in higher education.

    PubMed

    Babbitt, Callie W; Kahhat, Ramzy; Williams, Eric; Babbitt, Gregory A

    2009-07-01

    Product lifespan is a fundamental variable in understanding the environmental impacts associated with the life cycle of products. Existing life cycle and materials flow studies of products, almost without exception, consider lifespan to be constant over time. To determine the validity of this assumption, this study provides an empirical documentation of the long-term evolution of personal computer lifespan, using a major U.S. university as a case study. Results indicate that over the period 1985-2000, computer lifespan (purchase to "disposal") decreased steadily from a mean of 10.7 years in 1985 to 5.5 years in 2000. The distribution of lifespan also evolved, becoming narrower over time. Overall, however, lifespan distribution was broader than normally considered in life cycle assessments or materials flow forecasts of electronic waste management for policy. We argue that these results suggest that at least for computers, the assumption of constant lifespan is problematic and that it is important to work toward understanding the dynamics of use patterns. We modify an age-structured model of population dynamics from biology as a modeling approach to describe product life cycles. Lastly, the purchase share and generation of obsolete computers from the higher education sector is estimated using different scenarios for the dynamics of product lifespan.

  8. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...

    2017-01-28

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread level) shared memory and (process level) distributed memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over the single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.
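    A generic illustration of the process-level part of such a split (this is not Trace, and its replicated reconstruction objects are not reproduced): independent sinogram slices are distributed to worker processes, each of which runs a placeholder reconstruction. The unfiltered backprojection used as the per-slice kernel is only a stand-in for the iterative algorithms the middleware targets.

```python
# Generic process-level parallelism over sinogram slices; the per-slice kernel
# is a crude unfiltered backprojection used purely as a placeholder workload.
import numpy as np
from multiprocessing import Pool

def reconstruct_slice(sinogram_slice):
    """Placeholder reconstruction: unfiltered backprojection of one slice."""
    n_angles, n_det = sinogram_slice.shape
    image = np.zeros((n_det, n_det))
    centre = n_det // 2
    xs, ys = np.meshgrid(np.arange(n_det) - centre, np.arange(n_det) - centre)
    for i, theta in enumerate(np.linspace(0, np.pi, n_angles, endpoint=False)):
        # Detector bin hit by each pixel for this projection angle.
        t = (xs * np.cos(theta) + ys * np.sin(theta) + centre).astype(int).clip(0, n_det - 1)
        image += sinogram_slice[i][t]
    return image / n_angles

if __name__ == "__main__":
    sino = np.random.rand(16, 90, 128)          # (slices, angles, detector pixels) synthetic data
    with Pool() as pool:
        volume = np.stack(pool.map(reconstruct_slice, list(sino)))
    print(volume.shape)
```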

  9. Segmentation of cortical bone using fast level sets

    NASA Astrophysics Data System (ADS)

    Chowdhury, Manish; Jörgens, Daniel; Wang, Chunliang; Smedby, Årjan; Moreno, Rodrigo

    2017-02-01

    Cortical bone plays a major role in the mechanical competence of bone. The analysis of cortical bone requires accurate segmentation methods. Level set methods are among the state-of-the-art methods for segmenting medical images. However, traditional implementations of this method are computationally expensive. This drawback was recently tackled through the so-called coherent propagation extension of the classical algorithm, which has decreased computation times dramatically. In this study, we assess the potential of this technique for segmenting cortical bone in interactive time in 3D images acquired through High Resolution peripheral Quantitative Computed Tomography (HR-pQCT). The obtained segmentations are used to estimate the cortical thickness and cortical porosity of the investigated images. Cortical thickness and cortical porosity are computed using sphere fitting and mathematical morphological operations, respectively. Qualitative comparison between the segmentations of our proposed algorithm and a previously published approach on six image volumes reveals superior smoothness properties of the level set approach. While the proposed method yields similar results to previous approaches in regions where the boundary between trabecular and cortical bone is well defined, it yields more stable segmentations in challenging regions. This results in more stable estimation of cortical bone parameters. The proposed technique takes a few seconds to compute, which makes it suitable for clinical settings.
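    Once a binary cortical mask is available, one common way to realize a sphere-fitting style thickness estimate is via the Euclidean distance transform, where twice the distance from the medial region to the boundary approximates the diameter of the largest inscribed sphere; porosity can then be approximated by comparing the mask with its morphological closing. The sketch below shows that idea with SciPy on a synthetic mask; it is an illustration under those assumptions, not the authors' pipeline, and the helper name and voxel size are hypothetical.

```python
# Illustrative thickness/porosity estimates from a binary cortical mask; not the paper's method.
# Thickness ~ 2 x distance to the boundary sampled on a rough medial region;
# porosity ~ pore fraction recovered by morphological closing of the mask.
import numpy as np
from scipy import ndimage

def cortical_metrics(mask, voxel_mm=0.082):
    dist = ndimage.distance_transform_edt(mask)             # distance to nearest background voxel
    medial = dist >= ndimage.maximum_filter(dist, size=3) - 1e-6
    medial &= mask.astype(bool)                              # keep only ridge voxels inside the mask
    thickness_mm = 2.0 * dist[medial].mean() * voxel_mm      # mean local diameter
    closed = ndimage.binary_closing(mask, structure=np.ones((5, 5, 5)))
    pores = closed & ~mask.astype(bool)                      # voids filled by the closing
    porosity = pores.sum() / max(closed.sum(), 1)
    return thickness_mm, porosity

mask = np.zeros((40, 40, 40), dtype=bool)
mask[5:35, 5:35, 5:35] = True
mask[18:21, 18:21, 18:21] = False                            # carve a small synthetic "pore"
print(cortical_metrics(mask))
```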

  10. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread level) shared memory and (process level) distributed memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over the single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  11. Optimization of Angular-Momentum Biases of Reaction Wheels

    NASA Technical Reports Server (NTRS)

    Lee, Clifford; Lee, Allan

    2008-01-01

    RBOT [RWA Bias Optimization Tool (wherein RWA signifies Reaction Wheel Assembly)] is a computer program designed for computing angular momentum biases for reaction wheels used for providing spacecraft pointing in various directions as required for scientific observations. RBOT is currently deployed to support the Cassini mission to prevent operation of reaction wheels at unsafely high speeds while minimizing time in the undesirable low-speed range, where elasto-hydrodynamic lubrication films in bearings become ineffective, leading to premature bearing failure. The problem is formulated as a constrained optimization problem in which the maximum wheel speed limit is a hard constraint and the cost functional increases as wheel speed decreases below a low-speed threshold. The optimization problem is solved using a parametric search routine known as the Nelder-Mead simplex algorithm. To increase computational efficiency for extended operation involving a large quantity of data, the algorithm is designed to (1) use large time increments during intervals when spacecraft attitudes or rates of rotation are nearly stationary, (2) use sinusoidal-approximation sampling to model repeated long periods of Earth-point rolling maneuvers to reduce computational loads, and (3) utilize an efficient equation to obtain wheel-rate profiles as functions of initial wheel biases based on conservation of angular momentum (in an inertial frame) using pre-computed terms.
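    A schematic, single-wheel version of the optimization described above (not RBOT itself): the wheel rate follows from the chosen bias and a stored momentum demand by conservation of angular momentum, the hard speed limit enters as a large penalty, and Nelder-Mead searches for a bias that spends as little time as possible below the low-speed threshold. The momentum profile, wheel inertia, limits, and weights are placeholders, and RBOT's efficiency devices (large time steps, sinusoidal-approximation sampling) are omitted.

```python
# Schematic single-wheel bias optimisation; all numbers and the momentum profile are placeholders.
import numpy as np
from scipy.optimize import minimize

I_WHEEL = 0.16                         # wheel inertia, kg m^2 (placeholder)
W_MAX = 2000.0 * 2 * np.pi / 60.0      # hard upper speed limit, rad/s
W_LOW = 300.0 * 2 * np.pi / 60.0       # low-speed (lubrication) threshold, rad/s

t = np.linspace(0.0, 86400.0, 2000)
h_demand = 8.0 * np.sin(2 * np.pi * t / 86400.0)   # wheel-axis momentum demand, N m s (placeholder)

def wheel_rate(bias):
    # Conservation of angular momentum: wheel speed = bias minus demanded momentum / inertia.
    return bias - h_demand / I_WHEEL

def cost(bias_vec):
    w = np.abs(wheel_rate(bias_vec[0]))
    over_limit = 1e6 * np.maximum(w - W_MAX, 0.0).sum()   # hard speed limit as a large penalty
    low_speed = np.maximum(W_LOW - w, 0.0).sum()          # cost grows as speed drops below threshold
    return over_limit + low_speed

res = minimize(cost, x0=np.array([20.0]), method="Nelder-Mead")
print(f"optimised bias: {res.x[0]:.1f} rad/s, residual cost: {res.fun:.2f}")
```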

  12. SU-E-T-531: Performance Evaluation of Multithreaded Geant4 for Proton Therapy Dose Calculations in a High Performance Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, J; Coss, D; McMurry, J

    Purpose: To evaluate the efficiency of multithreaded Geant4 (Geant4-MT, version 10.0) for proton Monte Carlo dose calculations using a high performance computing facility. Methods: Geant4-MT was used to calculate 3D dose distributions in 1×1×1 mm3 voxels in a water phantom and patient's head with a 150 MeV proton beam covering approximately 5×5 cm2 in the water phantom. Three timestamps were measured on the fly to separately analyze the required time for initialization (which cannot be parallelized), processing time of individual threads, and completion time. Scalability of averaged processing time per thread was calculated as a function of thread number (1, 100, 150, and 200) for both 1M and 50 M histories. The total memory usage was recorded. Results: Simulations with 50 M histories were fastest with 100 threads, taking approximately 1.3 hours and 6 hours for the water phantom and the CT data, respectively with better than 1.0 % statistical uncertainty. The calculations show 1/N scalability in the event loops for both cases. The gains from parallel calculations started to decrease with 150 threads. The memory usage increases linearly with number of threads. No critical failures were observed during the simulations. Conclusion: Multithreading in Geant4-MT decreased simulation time in proton dose distribution calculations by a factor of 64 and 54 at a near optimal 100 threads for water phantom and patient's data respectively. Further simulations will be done to determine the efficiency at the optimal thread number. Considering the trend of computer architecture development, utilizing Geant4-MT for radiotherapy simulations is an excellent cost-effective alternative for a distributed batch queuing system. However, because the scalability depends highly on simulation details, i.e., the ratio of the processing time of one event versus waiting time to access for the shared event queue, a performance evaluation as described is recommended.

  13. COMPUTER-AIDED DRUG DISCOVERY AND DEVELOPMENT (CADDD): in silico-chemico-biological approach

    PubMed Central

    Kapetanovic, I.M.

    2008-01-01

    It is generally recognized that drug discovery and development are very time- and resource-consuming processes. There is an ever-growing effort to apply computational power to the combined chemical and biological space in order to streamline drug discovery, design, development and optimization. In the biomedical arena, computer-aided or in silico design is being utilized to expedite and facilitate hit identification and hit-to-lead selection, to optimize the absorption, distribution, metabolism, excretion and toxicity profile, and to avoid safety issues. Commonly used computational approaches include ligand-based drug design (pharmacophore, a 3-D spatial arrangement of chemical features essential for biological activity), structure-based drug design (drug-target docking), and quantitative structure-activity and quantitative structure-property relationships. Regulatory agencies as well as the pharmaceutical industry are actively involved in the development of computational tools that will improve the effectiveness and efficiency of the drug discovery and development process, decrease the use of animals, and increase predictability. It is expected that the power of CADDD will grow as the technology continues to evolve. PMID:17229415

  14. Computer animation for minimally invasive surgery: computer system requirements and preferred implementations

    NASA Astrophysics Data System (ADS)

    Pieper, Steven D.; McKenna, Michael; Chen, David; McDowall, Ian E.

    1994-04-01

    We are interested in the application of computer animation to surgery. Our current project, a navigation and visualization tool for knee arthroscopy, relies on real-time computer graphics and the human interface technologies associated with virtual reality. We believe that this new combination of techniques will lead to improved surgical outcomes and decreased health care costs. To meet these expectations in the medical field, the system must be safe, usable, and cost-effective. In this paper, we outline some of the most important hardware and software specifications in the areas of video input and output, spatial tracking, stereoscopic displays, computer graphics models and libraries, mass storage and network interfaces, and operating systems. Since this is a fairly new combination of technologies and a new application, our justification for our specifications are drawn from the current generation of surgical technology and by analogy to other fields where virtual reality technology has been more extensively applied and studied.

  15. Impact of implementation choices on quantitative predictions of cell-based computational models

    NASA Astrophysics Data System (ADS)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.

  16. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.

    Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.

  17. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    DOE PAGES

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; ...

    2016-05-03

    Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
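    A toy numerical sketch of the central subspace idea for a linear system: successive residuals are used directly as the (nonorthonormal) basis, so no Gram-Schmidt pass is needed, and a small projected system is solved at each step. This is only an illustration of the construction, not the paper's electronic-structure implementation; the function name and the dense test matrix are assumptions.

```python
# Toy nonorthonormal-Krylov solve of A x = b: residuals form the basis as-is,
# and a small projected (generalised) system is solved each iteration.
import numpy as np

def nks_solve(A, b, tol=1e-8, max_iter=50):
    V = [b / np.linalg.norm(b)]                 # basis of scaled residuals, never orthonormalised
    x = np.zeros_like(b)
    for _ in range(max_iter):
        Vm = np.column_stack(V)
        # Galerkin projection: (V^T A V) y = V^T b  (generalised, because V^T V != I).
        y = np.linalg.lstsq(Vm.T @ A @ Vm, Vm.T @ b, rcond=None)[0]
        x = Vm @ y
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        V.append(r / np.linalg.norm(r))         # append the new residual without orthogonalisation
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)                 # symmetric positive definite test matrix
b = rng.standard_normal(200)
x = nks_solve(A, b)
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```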

  18. Effects of diluting medium and holding time on sperm motility analysis by CASA in ram.

    PubMed

    Mostafapor, Somayeh; Farrokhi Ardebili, Farhad

    2014-01-01

    The aim of this study was to evaluate the effects of dilution rate and holding time on various motility parameters using computer-assisted sperm analysis (CASA). The semen samples were collected from three Ghezel rams. Samples were diluted in seminal plasma (SP), phosphate-buffered saline (PBS) containing 1% bovine serum albumin (BSA), and Bioexcell. The motility parameters computed and recorded by CASA include curvilinear velocity (VCL), straight line velocity (VSL), average path velocity (VAP), straightness (STR), linearity (LIN), amplitude of lateral head displacement (ALH), and beat cross frequency (BCF). In all diluents, the averages of all three sperm movement velocity parameters decreased as time passed, but this decrease was more pronounced in SP. The average ALH differed significantly between diluents, being higher in Bioexcell than in SP and PBS. The average LIN of sperm diluted in Bioexcell was lower than in the two other diluents at all three time points. The motility parameters of sperm diluted in Bioexcell and PBS differed considerably from those of sperm diluted in SP. According to these results, Bioexcell has a greater ability to preserve sperm motility than the other diluents; however, as SP is considered the physiological environment for sperm, the evaluation of motility parameters in Bioexcell and PBS may not be accurately comparable with that in SP.

  19. Girls' physical activity and sedentary behaviors: Does sexual maturation matter? A cross-sectional study with HBSC 2010 Portuguese survey.

    PubMed

    Marques, Adilson; Branquinho, Cátia; De Matos, Margarida Gaspar

    2016-07-01

    The purpose of this study was to examine the relationship between girls' sexual maturation (age of menarche) and physical activity and sedentary behaviors. Data were collected from a national representative sample of girls in 2010 (pre-menarcheal girls n = 583, post-menarcheal girls n = 741). Physical activity (times/week and hours/week) and screen-based sedentary time (minutes/day) including television/video/DVD watching, playing videogames, and computer use were self-reported. Pre-menarcheal girls engaged significantly more times in physical activity in the last 7 days than post-menarcheal girls (3.5 ± 1.9 times/week vs. 3.0 ± 1.7 times/week, P < 0.001). There was no significant difference between pre-menarcheal and post-menarcheal girls in time (hours/week) spent in physical activity. Post-menarcheal girls spent significantly more minutes per day than pre-menarcheal girls watching TV, playing videogames, and using computers on weekdays (TV: 165.2 ± 105.8 vs. 136.0 ± 106.3, P < 0.001; videogames: 72.0 ± 84.8 vs. 60.3 ± 78.9, P = 0.015; computer: 123.3 ± 103.9 vs. 82.8 ± 95.8, P < 0.001) and on weekends (TV: 249.0 ± 116.2 vs. 209.3 ± 124.8, P < 0.001; videogames: 123.0 ± 114.0 vs. 104.7 ± 103.5, P = 0.020; computer: 177.0 ± 122.2 vs. 119.7 ± 112.7, P < 0.001). After adjusting analyses for age, BMI, and socioeconomic status, differences were still significant for physical activity and for computer use. Specific interventions should be designed for girls to increase their physical activity participation and decrease time spent on the computer, for post-menarcheal girls in particular. Am. J. Hum. Biol. 28:471-475, 2016. © 2015 Wiley Periodicals, Inc.

  20. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    PubMed

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

    In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a novel successive relaxation (SG-SR) iterative method to the relaxation factor, additional improvement in the convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore the computation time of the experimentally measured raw Raman spectrum processing from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be conducted within dozens of milliseconds, which can provide a real-time procedure in practical situations.
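    The sketch below illustrates only the general family of methods the paper improves on: a Savitzky-Golay filter is applied repeatedly, taking the pointwise minimum of the signal and the smoothed curve each pass so the peaks are peeled away and the fluorescence baseline remains, with a simple relaxation factor blended into the update. It is a plain illustration of SG-based baseline estimation under these assumptions, not the RIA-SG-SR or Gauss-Seidel scheme itself; the function name, window, and test spectrum are made up.

```python
# Plain iterative Savitzky-Golay baseline estimation with a relaxation factor.
# Illustration of the general SG-iteration family only; not the RIA-SG-SR scheme.
import numpy as np
from scipy.signal import savgol_filter

def sg_baseline(spectrum, window=151, order=3, relax=1.2, n_iter=100):
    baseline = spectrum.astype(float).copy()
    for _ in range(n_iter):
        smoothed = savgol_filter(baseline, window, order)
        clipped = np.minimum(baseline, smoothed)             # peel peaks from above
        baseline = baseline + relax * (clipped - baseline)   # relaxed update toward the envelope
        baseline = np.minimum(baseline, spectrum)            # never exceed the measured data
    return baseline

# Synthetic test: two narrow Raman-like peaks on a broad fluorescence background.
x = np.linspace(0, 2000, 2048)
raman = 40 * np.exp(-(x - 800) ** 2 / 50) + 25 * np.exp(-(x - 1400) ** 2 / 80)
fluorescence = 200 * np.exp(-x / 1500)
noisy = raman + fluorescence + np.random.default_rng(0).normal(0, 2, x.size)
recovered = noisy - sg_baseline(noisy)
print("max recovered peak height ~", round(float(recovered.max()), 1))
```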

  1. Discrete Fourier Transform Analysis in a Complex Vector Space

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2009-01-01

    Alternative computational strategies for the Discrete Fourier Transform (DFT) have been developed using analysis of geometric manifolds. This approach provides a general framework for performing DFT calculations, and suggests a more efficient implementation of the DFT for applications using iterative transform methods, particularly phase retrieval. The DFT can thus be implemented using fewer operations when compared to the usual DFT counterpart. The software decreases the run time of the DFT in certain applications such as phase retrieval that iteratively call the DFT function. The algorithm exploits a special computational approach based on analysis of the DFT as a transformation in a complex vector space. As such, this approach has the potential to realize a DFT computation that approaches N operations versus Nlog(N) operations for the equivalent Fast Fourier Transform (FFT) calculation.

  2. Efficient Wideband Numerical Simulations for Nanostructures Employing a Drude-Critical Points (DCP) Dispersive Model.

    PubMed

    Ren, Qiang; Nagar, Jogender; Kang, Lei; Bian, Yusheng; Werner, Ping; Werner, Douglas H

    2017-05-18

    A highly efficient numerical approach for simulating the wideband optical response of nano-architectures comprised of Drude-Critical Points (DCP) media (e.g., gold and silver) is proposed and validated through comparing with commercial computational software. The kernel of this algorithm is the subdomain level discontinuous Galerkin time domain (DGTD) method, which can be viewed as a hybrid of the spectral-element time-domain method (SETD) and the finite-element time-domain (FETD) method. An hp-refinement technique is applied to decrease the Degrees-of-Freedom (DoFs) and computational requirements. The collocated E-J scheme facilitates solving the auxiliary equations by converting the inversions of matrices to simpler vector manipulations. A new hybrid time stepping approach, which couples the Runge-Kutta and Newmark methods, is proposed to solve the temporal auxiliary differential equations (ADEs) with a high degree of efficiency. The advantages of this new approach, in terms of computational resource overhead and accuracy, are validated through comparison with well-known commercial software for three diverse cases, which cover both near-field and far-field properties with plane wave and lumped port sources. The presented work provides the missing link between DCP dispersive models and FETD and/or SETD based algorithms. It is a competitive candidate for numerically studying the wideband plasmonic properties of DCP media.

  3. Real-time polarization imaging algorithm for camera-based polarization navigation sensors.

    PubMed

    Lu, Hao; Zhao, Kaichun; You, Zheng; Huang, Kaoli

    2017-04-10

    Biologically inspired polarization navigation is a promising approach due to its autonomous nature, high precision, and robustness. Many researchers have built point source-based and camera-based polarization navigation prototypes in recent years. Camera-based prototypes can benefit from their high spatial resolution but incur a heavy computation load. The pattern recognition algorithm in most polarization imaging algorithms involves several nonlinear calculations that impose a significant computation burden. In this paper, the polarization imaging and pattern recognition algorithms are optimized through reduction to several linear calculations by exploiting the orthogonality of the Stokes parameters without affecting precision according to the features of the solar meridian and the patterns of the polarized skylight. The algorithm contains a pattern recognition algorithm with a Hough transform as well as orientation measurement algorithms. The algorithm was loaded and run on a digital signal processing system to test its computational complexity. The test showed that the running time decreased to several tens of milliseconds from several thousand milliseconds. Through simulations and experiments, it was found that the algorithm can measure orientation without reducing precision. It can hence satisfy the practical demands of low computational load and high precision for use in embedded systems.
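    A minimal sketch of the kind of linear Stokes processing the abstract alludes to: with intensity images taken behind polarizers at 0°, 45°, 90° and 135°, the parameters S0, S1 and S2 are pixel-wise sums and differences, and the angle of polarization follows from a single arctangent, avoiding heavier nonlinear per-pixel fitting. Only these standard relations are shown; the paper's solar-meridian detection and Hough-transform steps are not reproduced, and the synthetic test images are assumptions.

```python
# Pixel-wise Stokes parameters and angle of polarization from four polarizer-angle
# images; standard linear relations, shown here on synthetic data for illustration.
import numpy as np

def angle_of_polarization(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)     # total intensity
    s1 = i0 - i90                          # linear combinations only
    s2 = i45 - i135
    aop = 0.5 * np.arctan2(s2, s1)         # one arctangent per pixel
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    return aop, dolp

rng = np.random.default_rng(0)
true_aop, dop, intensity = np.deg2rad(30.0), 0.4, 1.0
angles = np.deg2rad([0, 45, 90, 135])
imgs = [0.5 * intensity * (1 + dop * np.cos(2 * (a - true_aop))) + rng.normal(0, 0.005, (64, 64))
        for a in angles]
aop, dolp = angle_of_polarization(*imgs)
print("recovered AoP (deg):", round(float(np.rad2deg(aop.mean())), 2),
      "DoLP:", round(float(dolp.mean()), 3))
```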

  4. Analytical model of contamination during the drying of cylinders of jamonable muscle

    NASA Astrophysics Data System (ADS)

    Montoya Arroyave, Isabel

    2014-05-01

    For a cylinder of jamonable muscle of radius R and length much greater than R, and assuming that the internal resistance to water transfer is much greater than the external resistance and that the internal resistance is a given function of the distance to the axis, the pointwise moisture distribution in the jamonable cylinder is computed analytically in terms of Bessel functions. During drying and salting, the jamonable cylinder is susceptible to contamination by bacteria and protozoa from the environment. An analytical model of contamination is presented using the diffusion equation with sources and sinks, which is solved by means of the Laplace transform, the Bromwich integral, the residue theorem and special functions such as the Bessel and Heun functions. The critical time intervals of drying and salting are computed in order to obtain the minimum possible contamination. It is assumed that both external moisture and contaminants decrease exponentially with time. Contaminant profiles are plotted, and some possible techniques for contaminant detection are discussed. All computations are executed using Computer Algebra, specifically Maple. The results are relevant for the food industry, and some future research lines are suggested.

  5. Validating the Accuracy of Reaction Time Assessment on Computer-Based Tablet Devices.

    PubMed

    Schatz, Philip; Ybarra, Vincent; Leitner, Donald

    2015-08-01

    Computer-based assessment has evolved to tablet-based devices. Despite the availability of tablets and "apps," there is limited research validating their use. We documented timing delays between stimulus presentation and (simulated) touch response on iOS devices (3rd- and 4th-generation Apple iPads) and Android devices (Kindle Fire, Google Nexus, Samsung Galaxy) at response intervals of 100, 250, 500, and 1,000 milliseconds (ms). Results showed significantly greater timing error on Google Nexus and Samsung tablets (81-97 ms) than on Kindle Fire and Apple iPads (27-33 ms). Within Apple devices, iOS 7 obtained significantly lower timing error than iOS 6. Simple reaction time (RT) trials (250 ms) on tablet devices represent 12% to 40% error (30-100 ms), depending on the device, which decreases considerably for choice RT trials (3-5% error at 1,000 ms). Results raise implications for using the same device for serial clinical assessment of RT using tablets, as well as the need for calibration of software and hardware. © The Author(s) 2015.

  6. Comparison of missing value imputation methods in time series: the case of Turkish meteorological data

    NASA Astrophysics Data System (ADS)

    Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci

    2013-04-01

    This study aims to compare several imputation methods for completing the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, the simple arithmetic average, the normal ratio (NR), and the NR weighted with correlations comprise the simple ones, whereas the multilayer perceptron neural network and the multiple imputation strategy based on a Markov chain Monte Carlo implementation of expectation-maximization (EM-MCMC) are computationally intensive. In addition, we propose a modification of the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account, for evaluating imputation performance. Based on detailed graphical and quantitative analysis, it can be said that although the computational methods, particularly the EM-MCMC method, are computationally demanding, they appear favorable for imputing meteorological time series across different missingness periods, considering both measures and both series studied. To conclude, using the EM-MCMC algorithm for imputing missing values before conducting any statistical analyses of meteorological data will decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for the performance evaluation of missing data imputation, particularly with computational methods, since it gives more precise results in meteorological time series.
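
    As an illustration of the simplest class of methods mentioned above, the sketch below implements the normal-ratio idea of scaling each neighboring station's observation by the ratio of long-term means. It is a minimal, generic version; the station means and values are made-up numbers, not data from the study.

```python
import numpy as np

def normal_ratio_impute(target_mean, neighbor_means, neighbor_values):
    """Normal-ratio estimate of a missing monthly value at a target station,
    scaling each neighbor's observation by the ratio of long-term means."""
    neighbor_means = np.asarray(neighbor_means, dtype=float)
    neighbor_values = np.asarray(neighbor_values, dtype=float)
    return np.mean(target_mean / neighbor_means * neighbor_values)

# Target station long-term mean 55 mm; three neighbors report this month's totals
print(normal_ratio_impute(55.0, [60.0, 50.0, 70.0], [48.0, 42.0, 61.0]))
```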

  7. Phase Transition of a Dynamical System with a Bi-Directional, Instantaneous Coupling to a Virtual System

    NASA Astrophysics Data System (ADS)

    Gintautas, Vadas; Hubler, Alfred

    2006-03-01

    As worldwide computer resources increase in power and decrease in cost, real-time simulations of physical systems are becoming increasingly prevalent, from laboratory models to stock market projections and entire ``virtual worlds'' in computer games. Often, these systems are meticulously designed to match real-world systems as closely as possible. We study the limiting behavior of a virtual horizontally driven pendulum coupled to its real-world counterpart, where the interaction occurs on a time scale that is much shorter than the time scale of the dynamical system. We find that if the physical parameters of the virtual system match those of the real system within a certain tolerance, there is a qualitative change in the behavior of the two-pendulum system as the strength of the coupling is increased. Applications include a new method to measure the physical parameters of a real system and the use of resonance spectroscopy to refine a computer model. As virtual systems better approximate real ones, even very weak interactions may produce unexpected and dramatic behavior. The research is supported by the National Science Foundation Grant No. NSF PHY 01-40179, NSF DMS 03-25939 ITR, and NSF DGE 03-38215.

  8. Molecular simulation of small Knudsen number flows

    NASA Astrophysics Data System (ADS)

    Fei, Fei; Fan, Jing

    2012-11-01

    The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Like the DSMC method, however, that approach suffers from statistical noise. To solve the problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and to obtain the flow velocity and temperature through sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ~ 10⁻³-10⁻⁴ have been investigated. It is shown that the IP calculations are not only accurate, but also efficient because they make it possible to use a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.

  9. Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.

    PubMed

    Deylami, Ali Mohades; Asl, Babak Mohammadzadeh

    2018-06-04

    The minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay-and-sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits the use of this beamformer in real-time systems. A beamformer based on the MVB is proposed with lower computational complexity while preserving its advantages. It avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. In medical ultrasound imaging, the received signals from two imaging points close together do not vary much; therefore, using the previously optimized weight vector for one point as the initial weight vector for a neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied to several data sets, and it has been shown that it can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L³) to O(L²). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
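
    The abstract does not give the specific iteration used, but the warm-starting idea can be illustrated with a generic projected-gradient solver for the MVDR problem (minimize w^H R w subject to a^H w = 1), seeded with the previous point's weights. The covariance matrices, steering vector, and step size below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def mvdr_projected_gradient(R, a, w0=None, step=None, n_iter=50):
    """Iteratively approximate MVDR weights (minimize w^H R w subject to a^H w = 1)
    by gradient steps on the quadratic objective followed by projection onto the constraint."""
    L = len(a)
    w = np.full(L, 1.0 / L, dtype=complex) if w0 is None else np.asarray(w0, dtype=complex)
    if step is None:
        step = 0.5 / np.linalg.norm(R, 2)              # safe step from the largest eigenvalue
    for _ in range(n_iter):
        w = w - step * (R @ w)                          # descent direction for w^H R w
        w = w + a * (1.0 - np.conj(a) @ w) / (np.conj(a) @ a)   # project back onto a^H w = 1
    return w

# Toy example: the weights of one imaging point warm-start its neighbor
rng = np.random.default_rng(1)
X = rng.standard_normal((8, 200))
R1 = X @ X.T / 200 + 1e-3 * np.eye(8)                  # sample covariance at point 1
R2 = R1 + 0.01 * np.eye(8)                             # slightly different covariance at point 2
a = np.ones(8)                                         # steering vector (illustrative)
w1 = mvdr_projected_gradient(R1, a)
w2 = mvdr_projected_gradient(R2, a, w0=w1, n_iter=10)  # fewer iterations thanks to the warm start
print(np.real(np.conj(w2) @ R2 @ w2))                  # output power at point 2
```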

  10. Computational evaluation of load carriage effects on gait balance stability.

    PubMed

    Mummolo, Carlotta; Park, Sukyung; Mangialardi, Luigi; Kim, Joo H

    2016-01-01

    Evaluating the effects of load carriage on gait balance stability is important in various applications. However, their quantification has not been rigorously addressed in the current literature, partially due to the lack of relevant computational indices. The novel Dynamic Gait Measure (DGM) characterizes gait balance stability by quantifying the relative effects of inertia in terms of zero-moment point, ground projection of center of mass, and time-varying foot support region. In this study, the DGM is formulated in terms of the gait parameters that explicitly reflect the gait strategy of a given walking pattern and is used for computational evaluation of the distinct balance stability of loaded walking. The observed gait adaptations caused by load carriage (decreased single support duration, inertia effects, and step length) result in decreased DGM values (p < 0.0001), which indicate that loaded walking motions are more statically stable compared with the unloaded normal walking. Comparison of the DGM with other common gait stability indices (the maximum Floquet multiplier and the margin of stability) validates the unique characterization capability of the DGM, which is consistently informative of the presence of the added load.

  11. Simulations of normal and inverse laminar diffusion flames under oxygen enhancement and gravity variation

    NASA Astrophysics Data System (ADS)

    Bhatia, P.; Katta, V. R.; Krishnan, S. S.; Zheng, Y.; Sunderland, P. B.; Gore, J. P.

    2012-10-01

    Steady-state global chemistry calculations for 20 different flames were carried out using an axisymmetric Computational Fluid Dynamics (CFD) code. Computational results for 16 flames were compared with flame images obtained at the NASA Glenn Research Center. The experimental flame data for these 16 flames were taken from Sunderland et al. [4], which included normal and inverse diffusion flames of ethane with varying oxidiser compositions (21, 30, 50, 100% O2 mole fraction in N2) stabilised on a 5.5 mm diameter burner. The test conditions of this reference resulted in highly convective inverse diffusion flames (Froude numbers of the order of 10) and buoyant normal diffusion flames (Froude numbers ∼0.1). Additionally, six flames were simulated to study the effect of oxygen enhancement on normal diffusion flames. The enhancement in oxygen resulted in increased flame temperatures, and the presence of gravity led to increased gas velocities. The effect of gravity variation and oxygen enhancement on the flame shape and size of normal diffusion flames was far more pronounced than for inverse diffusion flames. For normal diffusion flames, flame lengths decreased (by a factor of 1 to 2) and flame widths increased (by a factor of 2 to 3) when going from earth gravity to microgravity, and flame height decreased by a factor of five when going from air to a pure oxygen environment.

  12. Passive attenuation of cortical pattern evoked potentials with increasing body weight in young male rhesus macaques.

    PubMed

    Komaromy, Andras M; Brooks, Dennis E; Kallberg, Maria E; Dawson, William W; Sapp, Harold L; Sherwood, Mark B; Lambrou, George N; Percicot, Christine L

    2003-05-01

    The purpose of our study was to determine changes in amplitudes and implicit times of retinal and cortical pattern evoked potentials with increasing body weight in young, growing rhesus macaques (Macaca mulatta). Retinal and cortical pattern evoked potentials were recorded from 29 male rhesus macaques between 3 and 7 years of age. Thirteen animals were reexamined after 11 months. Computed tomography (CT) was performed on two animals to measure the distance between the location of the skin electrode and the surface of the striate cortex. Spearman correlation coefficients were calculated to describe the relationship between body weights and either root mean square (rms) amplitudes or implicit times. For 13 animals rms amplitudes and implicit times were compared with the Wilcoxon matched pairs signed rank test for recordings taken 11 months apart. Highly significant correlations between increases in body weights and decreases in cortical rms amplitudes were noted in 29 monkeys (p < 0.0005). No significant changes were found in the cortical rms amplitudes in thirteen monkeys over 11 months. Computed tomography showed a large increase of soft tissue thickness over the skull and striate cortex with increased body weight. The decreased amplitude in cortical evoked potentials with weight gain associated with aging can be explained by the increased distance between skin electrode and striate cortex due to soft tissue thickening (passive attenuation).

  13. Explicit integration with GPU acceleration for large kinetic networks

    DOE PAGES

    Brock, Benjamin; Belt, Andrew; Billings, Jay Jay; ...

    2015-09-15

    In this study, we demonstrate the first implementation of recently-developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve on the order of 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. In addition, this orders-of-magnitude decrease in computation time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.

  14. Computational methods for yeast prion curing curves.

    PubMed

    Ridout, Martin S

    2008-10-01

    If the chemical guanidine hydrochloride is added to a dividing culture of yeast cells in which some of the protein Sup35p is in its prion form, the proportion of cells that carry replicating units of the prion, termed propagons, decreases gradually over time. Stochastic models to describe this process of 'curing' have been developed in earlier work. The present paper investigates the use of numerical methods of Laplace transform inversion to calculate curing curves and contrasts this with an alternative, more direct, approach that involves numerical integration. Transform inversion is found to provide a much more efficient computational approach that allows different models to be investigated with minimal programming effort. The method is used to investigate the robustness of the curing curve to changes in the assumed distribution of cell generation times. Matlab code is available for carrying out the calculations.
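
    A minimal sketch of numerical Laplace transform inversion, using mpmath's invertlaplace with the Talbot method on a transform whose inverse is known in closed form; this only illustrates the technique, not the prion-curing model or the author's Matlab code.

```python
import mpmath as mp

mp.mp.dps = 25                        # working precision (decimal digits)

# Transform of an exponential decay: F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
F = lambda s: 1 / (s + 1)

for t in (0.5, 1.0, 2.0):
    numeric = mp.invertlaplace(F, t, method='talbot')
    exact = mp.exp(-t)
    print(t, numeric, exact)
```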

  15. Socioeconomic inequalities in suicide in Europe: the widening gap.

    PubMed

    Lorant, Vincent; de Gelder, Rianne; Kapadia, Dharmi; Borrell, Carme; Kalediene, Ramune; Kovács, Katalin; Leinsalu, Mall; Martikainen, Pekka; Menvielle, Gwenn; Regidor, Enrique; Rodríguez-Sanz, Maica; Wojtyniak, Bogdan; Strand, Bjørn Heine; Bopp, Matthias; Mackenbach, Johan P

    2018-06-01

    Suicide has been decreasing over the past decade. However, we do not know whether socioeconomic inequality in suicide has been decreasing as well. Aims: We assessed recent trends in socioeconomic inequalities in suicide in 15 European populations. The DEMETRIQ study collected and harmonised register-based data on suicide mortality follow-up of population censuses, from 1991 and 2001, in European populations aged 35-79. Absolute and relative inequalities of suicide according to education were computed on more than 300 million person-years. In the 1990s, people in the lowest educational group had 1.82 times more suicides than those in the highest group. In the 2000s, this ratio increased to 2.12. Among men, absolute and relative inequalities were substantial in both periods and generally did not decrease over time, whereas among women inequalities were absent in the first period and emerged in the second. The World Health Organization (WHO) plan for 'Fair opportunity of mental wellbeing' is not likely to be met. Declaration of interest: None.

  16. Patient-specific Radiation Dose and Cancer Risk for Pediatric Chest CT

    PubMed Central

    Samei, Ehsan; Segars, W. Paul; Sturgeon, Gregory M.; Colsher, James G.; Frush, Donald P.

    2011-01-01

    Purpose: To estimate patient-specific radiation dose and cancer risk for pediatric chest computed tomography (CT) and to evaluate factors affecting dose and risk, including patient size, patient age, and scanning parameters. Materials and Methods: The institutional review board approved this study and waived informed consent. This study was HIPAA compliant. The study included 30 patients (0–16 years old), for whom full-body computer models were recently created from clinical CT data. A validated Monte Carlo program was used to estimate organ dose from eight chest protocols, representing clinically relevant combinations of bow tie filter, collimation, pitch, and tube potential. Organ dose was used to calculate effective dose and risk index (an index of total cancer incidence risk). The dose and risk estimates before and after normalization by volume-weighted CT dose index (CTDIvol) or dose–length product (DLP) were correlated with patient size and age. The effect of each scanning parameter was studied. Results: Organ dose normalized by tube current–time product or CTDIvol decreased exponentially with increasing average chest diameter. Effective dose normalized by tube current–time product or DLP decreased exponentially with increasing chest diameter. Chest diameter was a stronger predictor of dose than weight and total scan length. Risk index normalized by tube current–time product or DLP decreased exponentially with both chest diameter and age. When normalized by DLP, effective dose and risk index were independent of collimation, pitch, and tube potential (<10% variation). Conclusion: The correlations of dose and risk with patient size and age can be used to estimate patient-specific dose and risk. They can further guide the design and optimization of pediatric chest CT protocols. © RSNA, 2011 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11101900/-/DC1 PMID:21467251
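
    The reported exponential dependence of normalized dose on chest diameter can be illustrated by fitting dose = a·exp(-b·d) with a nonlinear least-squares routine. The data points below are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_model(d, a, b):
    """CTDIvol-normalized dose as an exponential function of chest diameter d (cm)."""
    return a * np.exp(-b * d)

# Synthetic example data (illustrative only, not the study's measurements)
diameter = np.array([10, 12, 15, 18, 20, 22, 25], dtype=float)   # cm
dose_norm = np.array([1.6, 1.45, 1.25, 1.05, 0.95, 0.85, 0.72])  # dose / CTDIvol (dimensionless)

(a, b), _ = curve_fit(dose_model, diameter, dose_norm, p0=(2.0, 0.05))
print(f"a = {a:.3f}, b = {b:.4f} per cm")
print("predicted at 16 cm:", dose_model(16.0, a, b))
```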

  17. Patient-specific radiation dose and cancer risk for pediatric chest CT.

    PubMed

    Li, Xiang; Samei, Ehsan; Segars, W Paul; Sturgeon, Gregory M; Colsher, James G; Frush, Donald P

    2011-06-01

    To estimate patient-specific radiation dose and cancer risk for pediatric chest computed tomography (CT) and to evaluate factors affecting dose and risk, including patient size, patient age, and scanning parameters. The institutional review board approved this study and waived informed consent. This study was HIPAA compliant. The study included 30 patients (0-16 years old), for whom full-body computer models were recently created from clinical CT data. A validated Monte Carlo program was used to estimate organ dose from eight chest protocols, representing clinically relevant combinations of bow tie filter, collimation, pitch, and tube potential. Organ dose was used to calculate effective dose and risk index (an index of total cancer incidence risk). The dose and risk estimates before and after normalization by volume-weighted CT dose index (CTDI(vol)) or dose-length product (DLP) were correlated with patient size and age. The effect of each scanning parameter was studied. Organ dose normalized by tube current-time product or CTDI(vol) decreased exponentially with increasing average chest diameter. Effective dose normalized by tube current-time product or DLP decreased exponentially with increasing chest diameter. Chest diameter was a stronger predictor of dose than weight and total scan length. Risk index normalized by tube current-time product or DLP decreased exponentially with both chest diameter and age. When normalized by DLP, effective dose and risk index were independent of collimation, pitch, and tube potential (<10% variation). The correlations of dose and risk with patient size and age can be used to estimate patient-specific dose and risk. They can further guide the design and optimization of pediatric chest CT protocols. http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11101900/-/DC1. RSNA, 2011

  18. With "big data" comes big responsibility: outreach to North Carolina Medicaid patients with 10 or more computed tomography scans in 12 months.

    PubMed

    Biola, Holly; Best, Randall M; Lahlou, Rita M; Burke, Lauren M; Dewar, Charles; Jackson, Carlos T; Broder, Joshua; Grey, Linda; Semelka, Richard C; Dobson, Allen

    2014-01-01

    Patients are being exposed to increasing levels of ionizing radiation, much of it from computed tomography (CT) scans. Adults without a cancer diagnosis who received 10 or more CT scans in 2010 were identified from North Carolina Medicaid claims data and were sent a letter in July 2011 informing them of their radiation exposure; those who had undergone 20 or more CT scans in 2010 were also telephoned. The CT scan exposure of these high-exposure patients during the 12 months following these interventions was compared with that of adult Medicaid patients without cancer who had at least 1 CT scan but were not in the intervention population. The average number of CT scans per month for the high-exposure population decreased over time, but most of that reduction occurred 6-9 months before our interventions took place. At about the same time, the number of CT scans per month also decreased in adult Medicaid patients without cancer who had at least 1 CT scan but were not in the intervention population. Our data do not include information about CT scans that may have been performed during times when patients were not covered by Medicaid. Some of our letters may not have been received or understood. Some high-exposure patients were unintentionally excluded from our study because organization of data on Medicaid claims varies by setting of care. Our patient education intervention was not temporally associated with significant decreases in subsequent CT exposure. Effecting behavior change to reduce exposure to ionizing radiation requires more than an educational letter or telephone call.

  19. Critical phenomena in communication/computation networks with various topologies and suboptimal to optimal resource allocation

    NASA Astrophysics Data System (ADS)

    Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi

    2015-01-01

    We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo procedure, we explore the global latency for optimal to suboptimal resource assignments at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from peaked to spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease of performance is found above Tc independently of the workload. The globally optimized computational resource allocation and network routing define a baseline for a future comparison of the transition behavior with respect to existing routing strategies [3,4] for different network topologies.
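
    A minimal sketch of the kind of Metropolis Monte Carlo move implied above: propose reassigning one task to another node and accept the move with the usual temperature-dependent probability. The toy latency function, node count, and temperature are illustrative assumptions, not the paper's model.

```python
import math
import random

def metropolis_assign(latency, assignment, nodes, T, n_steps=10000):
    """Metropolis Monte Carlo over task-to-node assignments: propose moving one
    task to a random node, accept if latency drops or with probability exp(-dE/T)."""
    current = latency(assignment)
    for _ in range(n_steps):
        task = random.randrange(len(assignment))
        old_node = assignment[task]
        assignment[task] = random.choice(nodes)
        new = latency(assignment)
        dE = new - current
        if dE <= 0 or random.random() < math.exp(-dE / T):
            current = new                      # accept the move
        else:
            assignment[task] = old_node        # reject: undo
    return assignment, current

# Toy latency: each node's load contributes quadratically (congestion penalty)
def toy_latency(assignment, n_nodes=4):
    loads = [assignment.count(n) for n in range(n_nodes)]
    return sum(l * l for l in loads)

tasks = [random.randrange(4) for _ in range(20)]
best, cost = metropolis_assign(toy_latency, tasks, nodes=list(range(4)), T=0.5)
print(cost)
```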

  20. Bringing MapReduce Closer To Data With Active Drives

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Prathapan, S.; Warmka, R.; Wyatt, B.; Halem, M.; Trantham, J. D.; Markey, C. A.

    2017-12-01

    Moving computation closer to the data location has been a much theorized improvement to computation for decades. The increase in processor performance, the decrease in processor size and power requirement combined with the increase in data intensive computing has created a push to move computation as close to data as possible. We will show the next logical step in this evolution in computing: moving computation directly to storage. Hypothetical systems, known as Active Drives, have been proposed as early as 1998. These Active Drives would have a general-purpose CPU on each disk allowing for computations to be performed on them without the need to transfer the data to the computer over the system bus or via a network. We will utilize Seagate's Active Drives to perform general purpose parallel computing using the MapReduce programming model directly on each drive. We will detail how the MapReduce programming model can be adapted to the Active Drive compute model to perform general purpose computing with comparable results to traditional MapReduce computations performed via Hadoop. We will show how an Active Drive based approach significantly reduces the amount of data leaving the drive when performing several common algorithms: subsetting and gridding. We will show that an Active Drive based design significantly improves data transfer speeds into and out of drives compared to Hadoop's HDFS while at the same time keeping comparable compute speeds as Hadoop.
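
    A minimal sketch of the MapReduce pattern for the gridding example: the map step emits (grid-cell, value) pairs and the reduce step averages values per cell. It is written in plain Python to show the programming model only; it does not use Seagate's Active Drive interface or Hadoop.

```python
from collections import defaultdict

def map_points(points, cell_size=1.0):
    """Map step: emit (grid-cell, value) pairs for each (lon, lat, value) point."""
    for lon, lat, value in points:
        cell = (int(lon // cell_size), int(lat // cell_size))
        yield cell, value

def reduce_cells(pairs):
    """Reduce step: average all values that fall into the same grid cell."""
    sums, counts = defaultdict(float), defaultdict(int)
    for cell, value in pairs:
        sums[cell] += value
        counts[cell] += 1
    return {cell: sums[cell] / counts[cell] for cell in sums}

points = [(12.3, 45.1, 280.0), (12.7, 45.4, 282.0), (13.2, 44.9, 279.5)]
print(reduce_cells(map_points(points)))
```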

  1. Nonportable computed radiography of the chest--radiologists' acceptance

    NASA Astrophysics Data System (ADS)

    Gennari, Rose C.; Gur, David; Miketic, Linda M.; Campbell, William L.; Oliver, James H., III; Plunkett, Michael B.

    1994-04-01

    Following a large ROC study to assess the diagnostic accuracy of PA chest computed radiography (CR) images displayed in a variety of formats, we asked nine experienced radiologists to subjectively assess their acceptance of and preferences for display modes in the primary diagnosis of erect PA chest images. Our results indicate that radiologists felt somewhat less comfortable interpreting CR images displayed on either laser-printed films or workstations as compared to conventional films. The use of four minified images was thought to somewhat decrease diagnostic confidence and to increase interpretation time. The reverse mode (black bone) images increased radiologists' confidence level in the detection of soft tissue abnormalities.

  2. Essential Autonomous Science Inference on Rovers (EASIR)

    NASA Technical Reports Server (NTRS)

    Roush, Ted L.; Shipman, Mark; Morris, Robert; Gazis, Paul; Pedersen, Liam

    2003-01-01

    Existing constraints on time, computational, and communication resources associated with Mars rover missions suggest that on-board science evaluation of sensor data can contribute to decreasing human-directed operational planning, optimizing returned science data volumes, and recognizing unique or novel data, all of which act to increase the scientific return from a mission. Many different levels of science autonomy exist, and each impacts the data collected and returned by, and the activities of, rovers. Several computational algorithms, designed to recognize objects of interest to geologists and biologists, are discussed. The algorithms implement various functions that produce scientific opinions, and several scenarios illustrate how these opinions can be used.

  3. Transient modelling of lacustrine regressions: two case studies from the Andean Altiplano

    NASA Astrophysics Data System (ADS)

    Condom, Thomas; Coudrain, Anne; Dezetter, Alain; Brunstein, Daniel; Delclaux, François; Jean-Emmanuel, Sicart

    2004-09-01

    A model was developed for estimating the delay between a change in climatic conditions and the corresponding fall of water level in large lakes. The input data include: rainfall, temperature, extraterrestrial radiation and astronomical mid-month daylight hours. The model uses two empirical coefficients for computing the potential evaporation and one parameter for the soil capacity. The case studies are two subcatchments of the Altiplano (196 000 km2), in which the central low points are Lake Titicaca and a salar corresponding to the desiccation of the Tauca palaeolake. During the Holocene, the two catchments experienced a 100 m fall in water level corresponding to a decrease in water surface area of 3586 km2 and 55 000 km2, respectively. Under modern climatic conditions with a marked rainy season, the model allows simulation of water levels in good agreement with the observations: 3810 m a.s.l. for Lake Titicaca and lack of permanent wide ponds in the southern subcatchment. Simulations were carried out under different climatic conditions that might explain the Holocene fall in water level. Computed results show quite different behaviour for the two subcatchments. For the northern subcatchment, the time required for the 100 m fall in lake-level ranges between 200 and 2000 years when, compared with the present conditions, (i) the rainfall is decreased by 15% (640 mm/year), or (ii) the temperature is increased by 5.5 °C, or (iii) rainfall is distributed equally over the year. For the southern subcatchment (Tauca palaeolake), the time required for a 100 m decrease in water level ranges between 50 and 100 years. This decrease requires precipitation values lower than 330 mm/year.

  4. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    Based on an analysis of a cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field defined by a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on the discrete Fourier transform measurement matrix is derived theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are carried out to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix; the PSNR of images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the FGI reconstruction decreases slowly, while the PSNR of the PGI and CGI reconstructions decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter, achieving a higher denoising capability than the CGI algorithm. The FGI algorithm can therefore improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
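
    A minimal sketch of pseudo-inverse reconstruction in computational ghost imaging, with a random matrix standing in for the preset illumination patterns; the sizes and the 1-D "object" are illustrative assumptions, not the paper's DFT-based setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_meas = 64, 64                 # sampling number comparable to pixel count

x = np.zeros(n_pixels)
x[10:20] = 1.0                            # simple 1-D "object"

A = rng.random((n_meas, n_pixels))        # preset illumination patterns (one per row)
y = A @ x                                 # bucket detector value for each pattern

x_hat = np.linalg.pinv(A) @ y             # pseudo-inverse reconstruction
print(np.max(np.abs(x_hat - x)))          # near zero when A has full rank
```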

  5. Implementation and validation of a wake model for low-speed forward flight

    NASA Technical Reports Server (NTRS)

    Komerath, Narayanan M.; Schreiber, Olivier A.

    1987-01-01

    The computer implementation and calculations of the induced velocities produced by a wake model consisting of a trailing vortex system defined from a prescribed time-averaged downwash distribution are detailed. Induced velocities are computed by approximating each spiral turn by a pair of large straight vortex segments positioned at critical points relative to where the induced velocity is required. A remainder term for the rest of the spiral is added. This approach results in decreased computation time compared to classical models where each spiral turn is broken down into small straight vortex segments. The model includes features such as harmonic variation of circulation, downwash outside of the blade and/or outside the tip path plane, blade bound vorticity induced velocity with harmonic variation of circulation, and time averaging. The influence of various options and parameters on the results is investigated, and results are compared to experimental field measurements, with which reasonable agreement is obtained. The capabilities of the model as well as its extension possibilities are studied. The performance of the model in predicting the recently acquired NASA Langley inflow database for a four-bladed rotor is compared to that of the Scully Free Wake code, a well-established program which requires much greater computational resources. It is found that the two codes predict the experimental data with essentially the same accuracy, and show the same trends.
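
    The building block of such wake models is the induced velocity of a finite straight vortex segment. A sketch of the standard Biot-Savart expression for one segment follows; the circulation, endpoints, and evaluation point are illustrative, and the pairing and remainder-term logic of the model above is not reproduced.

```python
import numpy as np

def segment_induced_velocity(p, a, b, gamma, eps=1e-9):
    """Velocity induced at point p by a straight vortex segment from a to b
    with circulation gamma (Biot-Savart law for a finite segment)."""
    r1, r2 = p - a, p - b
    r1xr2 = np.cross(r1, r2)
    denom = np.dot(r1xr2, r1xr2)
    if denom < eps:                      # point lies (nearly) on the segment axis
        return np.zeros(3)
    r0 = b - a
    k = gamma / (4.0 * np.pi * denom) * np.dot(r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))
    return k * r1xr2

p = np.array([0.0, 1.0, 0.0])
a, b = np.array([-0.5, 0.0, 0.0]), np.array([0.5, 0.0, 0.0])
print(segment_induced_velocity(p, a, b, gamma=1.0))
```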

  6. Trends in screen time on week and weekend days in a representative sample of Southern Brazil students.

    PubMed

    Lopes, Adair S; Silva, Kelly S; Barbosa Filho, Valter C; Bezerra, Jorge; de Oliveira, Elusa S A; Nahas, Markus V

    2014-12-01

    Economic and technological improvements can help increase screen time use among adolescents, but evidence in developing countries is scarce. The aim of this study was to examine changes in TV watching and computer/video game use patterns on week and weekend days after a decade (2001 and 2011), among students in Santa Catarina, southern Brazil. A comparative analysis of two cross-sectional surveys that included 5 028 and 6 529 students in 2001 and 2011, respectively, aged 15-19 years. The screen time use indicators were self-reported. 95% Confidence intervals were used to compare the prevalence rates. All analyses were separated by gender. After a decade, there was a significant increase in computer/video game use. Inversely, a significant reduction in TV watching was observed, with a similar magnitude to the change in computer/video game use. The worst trends were identified on weekend days. The decrease in TV watching after a decade appears to be compensated by the increase in computer/video game use, both in boys and girls. Interventions are needed to reduce the negative impact of technological improvements in the lifestyles of young people, especially on weekend days. © The Author 2014. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. Enabling Learning through the Assessment Process

    DTIC Science & Technology

    2010-04-08

    Johnson provides an example of this when discussing the computer simulation of slime mold growth. He asserts that since the designers understood the underlying interactions between the individual slime molds, they could increase or decrease the density of individual mold cells and the aggregating chemical that is required for the molds to group together. Furthermore, Johnson suggests that this

  8. Design of object-oriented distributed simulation classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D. (Principal Investigator)

    1995-01-01

    Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for 'Numerical Propulsion Simulation System'. NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is the need for communication among the parallel executing processors which in turn implies need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon MIT 'Actor' model of a concurrent object and uses 'connectors' to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. Its application to realistic configurations has not been carried out.

  9. Design of Object-Oriented Distributed Simulation Classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1995-01-01

    Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for "Numerical Propulsion Simulation System". NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is the need for communication among the parallel executing processors which in turn implies need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon MIT "Actor" model of a concurrent object and uses "connectors" to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. Its application to realistic configurations has not been carried out.

  10. The Effectiveness of a Web-Based Computer-Tailored Intervention on Workplace Sitting: A Randomized Controlled Trial.

    PubMed

    De Cocker, Katrien; De Bourdeaudhuij, Ilse; Cardon, Greet; Vandelanotte, Corneel

    2016-05-31

    Effective interventions to influence workplace sitting are needed, as office-based workers demonstrate high levels of continued sitting, and sitting too much is associated with adverse health effects. Therefore, we developed a theory-driven, Web-based, interactive, computer-tailored intervention aimed at reducing and interrupting sitting at work. The objective of our study was to investigate the effects of this intervention on objectively measured sitting time, standing time, and breaks from sitting, as well as self-reported context-specific sitting among Flemish employees in a field-based approach. Employees (n=213) participated in a 3-group randomized controlled trial that assessed outcomes at baseline, 1-month follow-up, and 3-month follow-up through self-reports. A subsample (n=122) were willing to wear an activity monitor (activPAL) from Monday to Friday. The tailored group received an automated Web-based, computer-tailored intervention including personalized feedback and tips on how to reduce or interrupt workplace sitting. The generic group received an automated Web-based generic advice with tips. The control group was a wait-list control condition, initially receiving no intervention. Intervention effects were tested with repeated-measures multivariate analysis of variance. The tailored intervention was successful in decreasing self-reported total workday sitting (time × group: P<.001), sitting at work (time × group: P<.001), and leisure time sitting (time × group: P=.03), and in increasing objectively measured breaks at work (time × group: P=.07); this was not the case in the other conditions. The changes in self-reported total nonworkday sitting, sitting during transport, television viewing, and personal computer use, objectively measured total sitting time, and sitting and standing time at work did not differ between conditions. Our results point out the significance of computer tailoring for sedentary behavior and its potential use in public health promotion, as the effects of the tailored condition were superior to the generic and control conditions. Clinicaltrials.gov NCT02672215; http://clinicaltrials.gov/ct2/show/NCT02672215 (Archived by WebCite at http://www.webcitation.org/6glPFBLWv).

  11. The Effectiveness of a Web-Based Computer-Tailored Intervention on Workplace Sitting: A Randomized Controlled Trial

    PubMed Central

    De Bourdeaudhuij, Ilse; Cardon, Greet; Vandelanotte, Corneel

    2016-01-01

    Background: Effective interventions to influence workplace sitting are needed, as office-based workers demonstrate high levels of continued sitting, and sitting too much is associated with adverse health effects. Therefore, we developed a theory-driven, Web-based, interactive, computer-tailored intervention aimed at reducing and interrupting sitting at work. Objective: The objective of our study was to investigate the effects of this intervention on objectively measured sitting time, standing time, and breaks from sitting, as well as self-reported context-specific sitting among Flemish employees in a field-based approach. Methods: Employees (n=213) participated in a 3-group randomized controlled trial that assessed outcomes at baseline, 1-month follow-up, and 3-month follow-up through self-reports. A subsample (n=122) were willing to wear an activity monitor (activPAL) from Monday to Friday. The tailored group received an automated Web-based, computer-tailored intervention including personalized feedback and tips on how to reduce or interrupt workplace sitting. The generic group received an automated Web-based generic advice with tips. The control group was a wait-list control condition, initially receiving no intervention. Intervention effects were tested with repeated-measures multivariate analysis of variance. Results: The tailored intervention was successful in decreasing self-reported total workday sitting (time × group: P<.001), sitting at work (time × group: P<.001), and leisure time sitting (time × group: P=.03), and in increasing objectively measured breaks at work (time × group: P=.07); this was not the case in the other conditions. The changes in self-reported total nonworkday sitting, sitting during transport, television viewing, and personal computer use, objectively measured total sitting time, and sitting and standing time at work did not differ between conditions. Conclusions: Our results point out the significance of computer tailoring for sedentary behavior and its potential use in public health promotion, as the effects of the tailored condition were superior to the generic and control conditions. Trial Registration: Clinicaltrials.gov NCT02672215; http://clinicaltrials.gov/ct2/show/NCT02672215 (Archived by WebCite at http://www.webcitation.org/6glPFBLWv) PMID:27245789

  12. An economic analysis for optimal distributed computing resources for mask synthesis and tape-out in production environment

    NASA Astrophysics Data System (ADS)

    Cork, Chris; Lugg, Robert; Chacko, Manoj; Levi, Shimon

    2005-06-01

    With the exponential increase in output database size due to the aggressive optical proximity correction (OPC) and resolution enhancement technique (RET) required for deep sub-wavelength process nodes, the CPU time required for mask tape-out continues to increase significantly. For integrated device manufacturers (IDMs), this can impact the time-to-market for their products, where even a few days' delay could have a huge commercial impact and loss of market window opportunity. For foundries, a shorter turnaround time provides a competitive advantage in their demanding market: too slow could mean customers looking elsewhere for these services, while a fast turnaround may even command a higher price. With FAB turnaround of a mature, plain-vanilla CMOS process at around 20-30 days, a delay of several days in mask tape-out would contribute a significant fraction to the total time to deliver prototypes. Unlike silicon processing, mask tape-out time can be decreased by simply purchasing extra computing resources and software licenses. Mask tape-out groups are taking advantage of the ever-decreasing hardware cost and increasing power of commodity processors. The significant distributability inherent in some commercial mask synthesis software can be leveraged to address this critical business issue. Different implementations have different fractions of code that cannot be parallelized, and this affects the efficiency with which they scale, as described by Amdahl's law. Very few are efficient enough to allow the effective use of thousands of processors, enabling run times to drop from days to only minutes. What follows is a cost-aware methodology to quantify the scalability of this class of software, and thus act as a guide to estimating the optimal investment in terms of hardware and software licenses.
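
    A small sketch of the Amdahl's-law estimate referred to above: given an assumed parallelizable fraction and a single-CPU runtime, it tabulates the runtime for several processor counts. The serial fraction and baseline runtime are made-up placeholders, not measurements of any particular mask synthesis tool.

```python
def amdahl_speedup(n_cpus, parallel_fraction):
    """Amdahl's law: speedup with n_cpus when only parallel_fraction of the runtime scales."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cpus)

# Illustrative numbers only: 98% parallelizable job, 72-hour single-CPU runtime
base_hours = 72.0
p = 0.98
for n in (16, 64, 256, 1024):
    hours = base_hours / amdahl_speedup(n, p)
    print(f"{n:5d} CPUs -> {hours:6.2f} h  (speedup {amdahl_speedup(n, p):6.1f}x)")
# The diminishing returns beyond a few hundred CPUs are what drive the cost-aware trade-off.
```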

  13. New indices from microneurography to investigate the arterial baroreflex.

    PubMed

    Laurin, Alexandre; Lloyd, Matthew G; Hachiya, Tesshin; Saito, Mitsuru; Claydon, Victoria E; Blaber, Andrew

    2017-06-01

    Baroreflex-mediated changes in heart rate and vascular resistance in response to variations in blood pressure are critical to maintain homeostasis. We aimed to develop time domain analysis methods to complement existing cross-spectral techniques in the investigation of the vascular resistance baroreflex response to orthostatic stress. A secondary goal was to apply these methods to distinguish between levels of orthostatic tolerance using baseline data. Eleven healthy, normotensive males participated in a graded lower body negative pressure (LBNP) protocol. Within individual neurogenic baroreflex cycles, the amount of muscle sympathetic nerve activity (MSNA), the diastolic pressure stimulus and response amplitudes, diastolic pressure to MSNA burst stimulus and response times, as well as the stimulus and response slopes between diastolic pressure and MSNA were computed. Coherence, gain, and frequency of highest coherence between systolic/diastolic arterial pressure (SAP/DAP) and RR-interval time series were also computed. The number of MSNA bursts per low-frequency cycle increased from 2.55 ± 0.68 at baseline to 5.44 ± 1.56 at -40 mmHg of LBNP. Stimulus time decreased (from 3.21 ± 1.48 to 1.46 ± 0.43 sec), as did response time (from 3.47 ± 0.86 to 2.37 ± 0.27 sec). At baseline, DAP-RR coherence, DAP-RR gain, and the time delay between decreases in DAP and MSNA bursts were higher in participants who experienced symptoms of presyncope. Results clarified the role of different branches of the baroreflex loop, and suggested functional adaptation of neuronal pathways to orthostatic stress. © 2017 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.

  14. Targeting an efficient target-to-target interval for P300 speller brain–computer interfaces

    PubMed Central

    Sellers, Eric W.; Wang, Xingyu

    2013-01-01

    Longer target-to-target intervals (TTI) produce greater P300 event-related potential amplitude, which can increase brain–computer interface (BCI) classification accuracy and decrease the number of flashes needed for accurate character classification. However, longer TTIs require more time for each trial, which decreases the information transfer rate of the BCI. In this paper, a P300 BCI using a 7 × 12 matrix explored new flash patterns (16-, 18- and 21-flash patterns) with different TTIs to assess the effects of TTI on P300 BCI performance. The new flash patterns were designed to minimize TTI, decrease repetition blindness, and examine the temporal relationship between each flash of a given stimulus by placing a minimum of one (16-flash pattern), two (18-flash pattern), or three (21-flash pattern) non-target flashes between consecutive target flashes. Online results showed that the 16-flash pattern yielded the lowest classification accuracy among the three patterns. The results also showed that the 18-flash pattern provides a significantly higher information transfer rate (ITR) than the 21-flash pattern; both patterns provide high ITR and high accuracy for all subjects. PMID:22350331

  15. Wrist Hypothermia Related to Continuous Work with a Computer Mouse: A Digital Infrared Imaging Pilot Study

    PubMed Central

    Reste, Jelena; Zvagule, Tija; Kurjane, Natalja; Martinsone, Zanna; Martinsone, Inese; Seile, Anita; Vanadzins, Ivars

    2015-01-01

    Computer work is characterized by sedentary static workload with low-intensity energy metabolism. The aim of our study was to evaluate the dynamics of skin surface temperature in the hand during prolonged computer mouse work under different ergonomic setups. Digital infrared imaging of the right forearm and wrist was performed during three hours of continuous computer work (measured at the start and every 15 minutes thereafter) in a laboratory with controlled ambient conditions. Four people participated in the study. Three different ergonomic computer mouse setups were tested on three different days (horizontal computer mouse without mouse pad; horizontal computer mouse with mouse pad and padded wrist support; vertical computer mouse without mouse pad). The study revealed a significantly strong negative correlation between the temperature of the dorsal surface of the wrist and time spent working with a computer mouse. Hand skin temperature decreased markedly after one hour of continuous computer mouse work. Vertical computer mouse work preserved more stable and higher temperatures of the wrist (>30 °C), while continuous use of a horizontal mouse for more than two hours caused an extremely low temperature (<28 °C) in distal parts of the hand. The preliminary observational findings indicate the significant effect of the duration and ergonomics of computer mouse work on the development of hand hypothermia. PMID:26262633

  16. Wrist Hypothermia Related to Continuous Work with a Computer Mouse: A Digital Infrared Imaging Pilot Study.

    PubMed

    Reste, Jelena; Zvagule, Tija; Kurjane, Natalja; Martinsone, Zanna; Martinsone, Inese; Seile, Anita; Vanadzins, Ivars

    2015-08-07

    Computer work is characterized by sedentary static workload with low-intensity energy metabolism. The aim of our study was to evaluate the dynamics of skin surface temperature in the hand during prolonged computer mouse work under different ergonomic setups. Digital infrared imaging of the right forearm and wrist was performed during three hours of continuous computer work (measured at the start and every 15 minutes thereafter) in a laboratory with controlled ambient conditions. Four people participated in the study. Three different ergonomic computer mouse setups were tested on three different days (horizontal computer mouse without mouse pad; horizontal computer mouse with mouse pad and padded wrist support; vertical computer mouse without mouse pad). The study revealed a significantly strong negative correlation between the temperature of the dorsal surface of the wrist and time spent working with a computer mouse. Hand skin temperature decreased markedly after one hour of continuous computer mouse work. Vertical computer mouse work preserved more stable and higher temperatures of the wrist (>30 °C), while continuous use of a horizontal mouse for more than two hours caused an extremely low temperature (<28 °C) in distal parts of the hand. The preliminary observational findings indicate the significant effect of the duration and ergonomics of computer mouse work on the development of hand hypothermia.

  17. Estimation of evapotranspiration in the Rainbow Springs and Silver Springs basins in North-Central Florida

    USGS Publications Warehouse

    Knowles, Leel

    1996-01-01

    Estimates of evapotranspiration (ET) for the Rainbow and Silver Springs ground-water basins in north-central Florida were determined using a regional water-budget approach and compared to estimates computed using a modified Priestley-Taylor (PT) model calibrated with eddy-correlation data. Eddy-correlation measurements of latent (λE) and sensible (H) heat flux were made monthly for a few days at a time, and the PT model was used to estimate λE between times of measurement during the 1994 water year. A water-budget analysis for the two-basin area indicated that over a 30-year period (1965-94) annual rainfall was 51.7 inches. Of the annual rainfall, ET accounted for about 37.9 inches; springflow accounted for 13.1 inches; and the remaining 0.7 inch was accounted for by stream-flow, by ground-water withdrawals from the Floridan aquifer system, and by net change in storage. For the same 30-year period, the annual estimate of ET for the Silver Springs basin was 37.6 inches and was 38.5 inches for the Rainbow Springs basin. Wet- and dry-season estimates of ET for each basin averaged between nearly 19 inches and 20 inches, indicating that like rainfall, ET rates during the 4-month wet season were about twice the ET rates during the 8-month dry season. Wet-season estimates of ET for the Rainbow Springs and Silver Springs basins decreased 2.7 inches, and 3.4 inches, respectively, over the 30-year period; whereas, dry-season estimates for the basins decreased about 0.4 inch and 1.0 inch, respectively, over the 30-year period. This decrease probably is related to the general decrease in annual rainfall and reduction in net radiation over the basins during the 30-year period. ET rates computed using the modified PT model were compared to rates computed from the water budget for the 1994 water year. Annual ET, computed using the PT model, was 32.0 inches, nearly equal to the ET water-budget estimate of 31.7 inches computed for the Rainbow Springs and Silver Springs basins. Modeled ET rates for 1994 ranged from 14.4 inches per year in January to 51.6 inches per year in May. Water-budget ET rates for 1994 ranged from 12.0 inches per year in March to 61.2 inches per year in July. Potential evapotranspiration rates for 1994 averaged 46.8 inches per year and ranged from 21.6 inches per year in January to 74.4 inches per year in May. Lake evaporation rates averaged 47.1 inches per year and ranged from 18.0 inches per year in January to 72.0 inches per year in May 1994.
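
    For reference, a generic (unmodified) Priestley-Taylor estimate of potential ET can be written as λE = α·Δ/(Δ+γ)·(Rn − G). The sketch below uses standard constants and FAO-style formulas; it does not reproduce the modified PT model or its eddy-correlation calibration described above.

```python
import math

def priestley_taylor_pet(t_air_c, rn, g=0.0, alpha=1.26):
    """Priestley-Taylor potential ET (mm/day).
    t_air_c: air temperature (deg C); rn, g: net radiation and soil heat flux (MJ m-2 day-1)."""
    es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))     # sat. vapor pressure, kPa
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2                    # slope of es curve, kPa/degC
    gamma = 0.066                                                   # psychrometric constant, kPa/degC
    lam = 2.45                                                      # latent heat of vaporization, MJ/kg
    return alpha * delta / (delta + gamma) * (rn - g) / lam         # mm/day (1 kg water ~ 1 mm)

print(priestley_taylor_pet(t_air_c=25.0, rn=15.0))   # warm, sunny day -> several mm/day
```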

  18. Conjugated dynamic modeling on vanadium redox flow battery with non-constant variance for renewable power plant applications

    NASA Astrophysics Data System (ADS)

    Siddiquee, Abu Nayem Md. Asraf

    A parametric modeling study has been carried out to assess the impact of changes in operating parameters on the performance of a Vanadium Redox Flow Battery (VRFB). The objective of this research is to develop a computer program that predicts the dynamic behavior of a VRFB by combining fluid mechanics, reaction kinetics, and an electric-circuit model. The computer program was developed using Maple 2015, and calculations were made for different operating parameters. Modeling results show that the discharging time increases from 2.2 hours to 6.7 hours when the concentration of V2+ in the electrolytes increases from 1M to 3M. The operation time during the charging cycle decreases from 6.9 hours to 3.3 hours as the applied current increases from 1.85A to 3.85A. The modeling results also show that the charging and discharging times increase from 4.5 hours to 8.2 hours as the tank-to-cell ratio increases from 5:1 to 10:1.
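
    The trends described (longer discharge with higher vanadium concentration, shorter charge with higher current) follow the first-order Faraday estimate t = z·F·c·V/I for an ideal cell. The tank volume and current in the sketch below are illustrative guesses, not the thesis's parameters, and the conjugated dynamic model is not reproduced.

```python
F = 96485.0            # Faraday constant, C/mol

def ideal_discharge_hours(conc_mol_per_l, tank_volume_l, current_a, z=1):
    """Ideal discharge time for a single cell: charge stored in the
    electrolyte (z*F*c*V) divided by the applied current."""
    charge_coulombs = z * F * conc_mol_per_l * tank_volume_l
    return charge_coulombs / current_a / 3600.0

for c in (1.0, 2.0, 3.0):
    # Illustrative tank volume (0.25 L) and current (2.85 A)
    print(c, "M ->", round(ideal_discharge_hours(c, tank_volume_l=0.25, current_a=2.85), 1), "h")
```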

  19. Investigation and appreciation of optimal output feedback. Volume 1: A convergent algorithm for the stochastic infinite-time discrete optimal output feedback problem

    NASA Technical Reports Server (NTRS)

    Halyo, N.; Broussard, J. R.

    1984-01-01

    The stochastic, infinite time, discrete output feedback problem for time invariant linear systems is examined. Two sets of sufficient conditions for the existence of a stable, globally optimal solution are presented. An expression for the total change in the cost function due to a change in the feedback gain is obtained. This expression is used to show that a sequence of gains can be obtained by an algorithm, so that the corresponding cost sequence is monotonically decreasing and the corresponding sequence of the cost gradient converges to zero. The algorithm is guaranteed to obtain a critical point of the cost function. The computational steps necessary to implement the algorithm on a computer are presented. The results are applied to a digital outer loop flight control problem. The numerical results for this 13th order problem indicate a rate of convergence considerably faster than two other algorithms used for comparison.

  20. Compressive Spectral Method for the Simulation of the Nonlinear Gravity Waves

    PubMed Central

    Bayındır, Cihan

    2016-01-01

    In this paper an approach for decreasing the computational effort required for spectral simulations of fully nonlinear ocean waves is introduced. The proposed approach utilizes the compressive sampling algorithm and depends on the idea of using a smaller number of spectral components compared to the classical spectral method. After performing the time integration with a smaller number of spectral components and using the compressive sampling technique, it is shown that the ocean wave field can be reconstructed with significantly better efficiency than with the classical spectral method. For the sparse ocean wave model in the frequency domain, fully nonlinear ocean waves with a JONSWAP spectrum are considered. By implementation of a high-order spectral method it is shown that the proposed methodology can simulate linear and fully nonlinear ocean waves with negligible difference in accuracy and with great efficiency, reducing the computation time significantly, especially for large time evolutions. PMID:26911357
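
    The core idea, evolving and storing far fewer spectral components than the full resolution requires, can be illustrated with a simple largest-K Fourier truncation of a synthetic wave record. This is only a sketch of the sparsity assumption; the paper's actual reconstruction uses a compressive sampling (l1) recovery step that is not shown here.

        import numpy as np

        # Represent a synthetic wave field with a reduced set of Fourier modes
        # (simple largest-K truncation, not the paper's compressive-sampling recovery).
        rng = np.random.default_rng(0)
        n = 1024
        x = np.linspace(0, 2 * np.pi, n, endpoint=False)
        eta = sum(a * np.cos(k * x + p) for k, a, p in
                  zip(rng.integers(1, 40, 20), rng.random(20), rng.random(20) * 2 * np.pi))

        spec = np.fft.rfft(eta)
        K = 30                                       # number of retained spectral components
        keep = np.argsort(np.abs(spec))[-K:]
        sparse = np.zeros_like(spec)
        sparse[keep] = spec[keep]
        eta_k = np.fft.irfft(sparse, n)
        print("relative L2 error:", np.linalg.norm(eta - eta_k) / np.linalg.norm(eta))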

  1. An experimental study of high-pressure droplet combustion

    NASA Technical Reports Server (NTRS)

    Norton, Chris M.; Litchford, Ron J.; Jeng, San-Mou

    1990-01-01

    The results are presented of an experimental study on suspended n-heptane droplet combustion in air for reduced pressures up to P(r) = 2.305. Transition to fully transient heat-up through the critical state is demonstrated above a threshold pressure corresponding to P(r) of roughly 1.4. A silhouette imaging technique resolves the droplet surface for reduced pressures up to about P(r) roughly 0.63, but soot formation conceals the surface at higher pressures. Images of the soot plumes do not show any sudden change in behavior indicative of critical transition. Mean burning rate constants are computed from the d-squared variation law using measured effective droplet diameters at ignition and measured burn times, and corrected burning times are computed for an effective initial droplet diameter. The results show that the burning rates increase as the fuel critical pressure is approached and decrease as the pressure exceeds the fuel critical pressure. Corrected burning times show inverse behavior.
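
    The d-squared law referenced above states that the droplet diameter obeys d(t)^2 = d0^2 - K*t, so a mean burning rate constant follows from the effective diameter at ignition and the measured burn time, and burn times can be rescaled to a common initial diameter. The values in the sketch below are hypothetical, chosen only to show the arithmetic.

        # d-squared law: d(t)^2 = d0^2 - K*t, so K follows from the ignition diameter
        # and the burn time (assuming complete burnout), and a measured burn time can
        # be rescaled to a reference initial diameter.
        def burning_rate_constant(d0_mm, burn_time_s):
            return d0_mm ** 2 / burn_time_s                  # mm^2/s

        def corrected_burn_time(measured_time_s, d0_mm, d_ref_mm):
            return measured_time_s * (d_ref_mm / d0_mm) ** 2

        print(burning_rate_constant(1.0, 1.2))               # ~0.83 mm^2/s
        print(corrected_burn_time(1.2, 1.0, 0.8))            # ~0.77 s at a 0.8 mm reference diameter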

  2. Highly sensitive troponin and coronary computed tomography angiography in the evaluation of suspected acute coronary syndrome in the emergency department.

    PubMed

    Ferencik, Maros; Hoffmann, Udo; Bamberg, Fabian; Januzzi, James L

    2016-08-07

    The evaluation of patients presenting to the emergency department with suspected acute coronary syndrome (ACS) remains a clinical challenge. The traditional assessment includes clinical risk assessment based on cardiovascular risk factors with serial electrocardiograms and cardiac troponin measurements, often followed by advanced cardiac testing as an inpatient or outpatient (i.e. stress testing, imaging). Despite this costly and lengthy work-up, there is a non-negligible rate of missed ACS with an increased risk of death. There is a clinical need for diagnostic strategies that will lead to rapid and reliable triage of patients with suspected ACS. We provide an overview of the evidence for the role of highly sensitive troponin (hsTn) in the rapid and efficient evaluation of suspected ACS. Results of recent research studies have led to the introduction of hsTn with rapid rule-in and rule-out protocols into the guidelines. Highly sensitive troponin increases the sensitivity for the detection of myocardial infarction and decreases time to diagnosis; however, it may decrease the specificity, especially when used as a dichotomous variable rather than as a continuous variable as recommended by guidelines; this may increase clinician uncertainty. We summarize the evidence for the use of coronary computed tomography angiography (CTA) as a rapid diagnostic tool in this population when used with conventional troponin assays. Coronary CTA significantly decreases time to diagnosis and discharge in patients with suspected ACS, while being safe. However, it may lead to an increase in invasive procedures and involves radiation exposure. Finally, we outline the opportunities for the combined use of hsTn and coronary CTA that may result in increased efficiency, decreased need for imaging, lower cost, and decreased radiation dose. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For permissions please email: journals.permissions@oup.com.

  3. Real-Time Joint Streaming Data Processing from Social and Physical Sensors

    NASA Astrophysics Data System (ADS)

    Kropivnitskaya, Y. Y.; Qin, J.; Tiampo, K. F.; Bauer, M.

    2014-12-01

    The technological breakthroughs in computing that have taken place over the last few decades make it possible to achieve emergency management objectives that focus on saving human lives and decreasing economic effects. In particular, the integration of a wide variety of information sources, including observations from spatially-referenced physical sensors and new social media sources, enables better real-time seismic hazard analysis through distributed computing networks. The main goal of this work is to utilize innovative computational algorithms for better real-time seismic risk analysis by integrating different data sources and processing tools into streaming and cloud computing applications. The Geological Survey of Canada operates the Canadian National Seismograph Network (CNSN) with over 100 high-gain instruments and 60 low-gain or strong motion seismographs. The processing of the continuous data streams from each station of the CNSN provides the opportunity to detect possible earthquakes in near real-time. The information from physical sources is combined to calculate a location and magnitude for an earthquake. The automatically calculated results are not always sufficiently precise and prompt to significantly reduce the response time to a felt or damaging earthquake. Social sensors, here represented as Twitter users, can provide information earlier to the general public and more rapidly to the emergency planning and disaster relief agencies. We introduce joint streaming data processing from social and physical sensors in real-time based on the idea that social media observations serve as proxies for physical sensors. By using the streams of data in the form of Twitter messages, each of which has an associated time and location, we can extract information related to a target event and perform enhanced analysis by combining it with physical sensor data. Results of this work suggest that the use of data from social media, in conjunction with innovative computing algorithms and physical sensor data, can provide a new paradigm for real-time earthquake detection that facilitates rapid and inexpensive natural risk reduction.

  4. GATE Monte Carlo simulation of dose distribution using MapReduce in a cloud computing environment.

    PubMed

    Liu, Yangchuan; Tang, Yuguo; Gao, Xin

    2017-12-01

    The GATE Monte Carlo simulation platform has good application prospects in treatment planning and quality assurance. However, accurate dose calculation using GATE is time consuming. The purpose of this study is to implement a novel cloud computing method for accurate GATE Monte Carlo simulation of dose distribution using MapReduce. An Amazon Machine Image installed with Hadoop and GATE is created to set up Hadoop clusters on Amazon Elastic Compute Cloud (EC2). Macros, the input files for GATE, are split into a number of self-contained sub-macros. Through Hadoop Streaming, the sub-macros are executed by GATE in Map tasks and the sub-results are aggregated into final outputs in Reduce tasks. As an evaluation, GATE simulations were performed in a cubical water phantom for X-ray photons of 6 and 18 MeV. The parallel simulation on the cloud computing platform is as accurate as the single-threaded simulation on a local server. The cloud-based simulation time is approximately inversely proportional to the number of worker nodes. For the simulation of 10 million photons on a cluster with 64 worker nodes, time decreases of 41× and 32× were achieved compared to the single worker node case and the single-threaded case, respectively. The test of Hadoop's fault tolerance showed that the simulation correctness was not affected by the failure of some worker nodes. The results verify that the proposed method provides a feasible cloud computing solution for GATE.
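
    The splitting step described above can be pictured as expanding one master macro into N self-contained sub-macros, each with its own random seed and share of the primaries, so that each sub-macro becomes a single Map task. The sketch below is only illustrative: the command names in the template are placeholders rather than verified GATE macro syntax, and the Hadoop Streaming wiring is omitted.

        from pathlib import Path

        # Expand a master macro template into self-contained sub-macros, one per Map task.
        # Placeholder commands; substitute the real GATE macro commands for seeds/primaries.
        TEMPLATE = "/random/setSeed {seed}\n/run/beamOn {n_primaries}\n"

        def split_macro(total_primaries, n_tasks, out_dir="sub_macros"):
            Path(out_dir).mkdir(exist_ok=True)
            share = total_primaries // n_tasks
            for i in range(n_tasks):
                # give the last task any remainder so the total is preserved
                n = share + (total_primaries % n_tasks if i == n_tasks - 1 else 0)
                Path(out_dir, f"sub_{i:03d}.mac").write_text(
                    TEMPLATE.format(seed=1000 + i, n_primaries=n))

        split_macro(10_000_000, 64)   # 64 sub-macros for a 10-million-photon run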

  5. Assessing self-care and social function using a computer adaptive testing version of the pediatric evaluation of disability inventory.

    PubMed

    Coster, Wendy J; Haley, Stephen M; Ni, Pengsheng; Dumas, Helene M; Fragala-Pinkham, Maria A

    2008-04-01

    To examine score agreement, validity, precision, and response burden of a prototype computer adaptive testing (CAT) version of the self-care and social function scales of the Pediatric Evaluation of Disability Inventory compared with the full-length version of these scales. Computer simulation analysis of cross-sectional and longitudinal retrospective data; cross-sectional prospective study. Pediatric rehabilitation hospital, including inpatient acute rehabilitation, day school program, outpatient clinics; community-based day care, preschool, and children's homes. Children with disabilities (n=469) and 412 children with no disabilities (analytic sample); 38 children with disabilities and 35 children without disabilities (cross-validation sample). Not applicable. Summary scores from prototype CAT applications of each scale using 15-, 10-, and 5-item stopping rules; scores from the full-length self-care and social function scales; time (in seconds) to complete assessments and respondent ratings of burden. Scores from both computer simulations and field administration of the prototype CATs were highly consistent with scores from full-length administration (r range, .94-.99). Using computer simulation of retrospective data, discriminant validity, and sensitivity to change of the CATs closely approximated that of the full-length scales, especially when the 15- and 10-item stopping rules were applied. In the cross-validation study the time to administer both CATs was 4 minutes, compared with over 16 minutes to complete the full-length scales. Self-care and social function score estimates from CAT administration are highly comparable with those obtained from full-length scale administration, with small losses in validity and precision and substantial decreases in administration time.

  6. Examining Hurricane Track Length and Stage Duration Since 1980

    NASA Astrophysics Data System (ADS)

    Fandrich, K. M.; Pennington, D.

    2017-12-01

    Each year, tropical systems impact thousands of people worldwide. Current research shows a correlation between the intensity and frequency of hurricanes and the changing climate. However, little is known about other prominent hurricane features. This includes information about hurricane track length (the total distance traveled from tropical depression through a hurricane's final category assignment) and how this distance may have changed with time. Also unknown is the typical duration of a hurricane stage, such as tropical storm to category one, and if the time spent in each stage has changed in recent decades. This research aims to examine changes in hurricane stage duration and track lengths for the 319 storms in NOAA's National Ocean Service Hurricane Reanalysis dataset that reached Category 2 - 5 from 1980 - 2015. Based on evident ocean warming, it is hypothesized that a general increase in track length with time will be detected; that is, modern hurricanes are traveling a longer distance than past hurricanes. It is also expected that stage durations are decreasing with time so that hurricanes mature faster than in past decades. For each storm, coordinates are acquired at 4-times daily intervals throughout its duration and track lengths are computed for each 6-hour period. Total track lengths are then computed and storms are analyzed graphically and statistically by category for temporal track length changes. The stage durations of each storm are calculated as the time difference between two consecutive stages. Results indicate that average track lengths for Cat 2 and 3 hurricanes are increasing through time. These findings show that these hurricanes are traveling a longer distance than earlier Cat 2 and 3 hurricanes. In contrast, average track lengths for Cat 4 and 5 hurricanes are decreasing through time, showing less distance traveled than in earlier decades. Stage durations for all Cat 2, 4 and 5 storms decrease through the decades but Cat 3 storms show a positive increase through time. This complements the results of the track length analysis, indicating that as storms intensify faster, they are doing so over a shorter distance. It is expected that this research could be used to improve hurricane track forecasting and provide information about the effects of climate change on tropical systems and the tropical environment.
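
    The distance computation described above amounts to summing great-circle distances between consecutive 6-hourly fixes. A minimal sketch of that step is shown below; the coordinates are hypothetical and the haversine formula is a standard choice, not necessarily the one used in the study.

        from math import radians, sin, cos, asin, sqrt

        # Great-circle distance between consecutive 6-hourly fixes, summed into a track length.
        def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
            p1, p2 = radians(lat1), radians(lat2)
            dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
            a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
            return 2 * r * asin(sqrt(a))

        def track_length_km(fixes):
            return sum(haversine_km(*a, *b) for a, b in zip(fixes, fixes[1:]))

        # Hypothetical (lat, lon) fixes at 6-hour spacing.
        fixes = [(14.0, -45.0), (14.5, -46.2), (15.1, -47.5), (15.8, -48.9)]
        print(round(track_length_km(fixes), 1), "km")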

  7. Computer-controlled multi-parameter mapping of 3D compressible flowfields using planar laser-induced iodine fluorescence

    NASA Technical Reports Server (NTRS)

    Donohue, James M.; Victor, Kenneth G.; Mcdaniel, James C., Jr.

    1993-01-01

    A computer-controlled technique, using planar laser-induced iodine fluorescence, for measuring complex compressible flowfields is presented. A new laser permits the use of a planar two-line temperature technique so that all parameters can be measured with the laser operated narrowband. Pressure and temperature measurements in a step flowfield show agreement within 10 percent of a CFD model except in regions close to walls. Deviation of near-wall temperature measurements from the model was decreased from 21 percent to 12 percent compared to broadband planar temperature measurements. Computer control of the experiment has been implemented, except for the frequency tuning of the laser. Image data storage and processing have been improved by integrating a workstation into the experimental setup, reducing the data reduction time by a factor of 50.

  8. Computer classes and games in virtual reality environment to reduce loneliness among students of an elderly reference center

    PubMed Central

    Antunes, Thaiany Pedrozo Campos; de Oliveira, Acary Souza Bulle; Crocetta, Tania Brusque; Antão, Jennifer Yohanna Ferreira de Lima; Barbosa, Renata Thais de Almeida; Guarnieri, Regiani; Massetti, Thais; Monteiro, Carlos Bandeira de Mello; de Abreu, Luiz Carlos

    2017-01-01

    Introduction: Physical and mental changes associated with aging commonly lead to a decrease in communication capacity, reducing social interactions and increasing loneliness. Computer classes for older adults make significant contributions to social and cognitive aspects of aging. Games in a virtual reality (VR) environment stimulate the practice of communicative and cognitive skills and might also bring benefits to older adults. Furthermore, it might help to initiate their contact with modern technology. The purpose of this study protocol is to evaluate the effects of practicing VR games during computer classes on the level of loneliness of students of an elderly reference center. Methods and Analysis: This study will be a prospective longitudinal study with a randomised cross-over design, with subjects aged 50 years and older, of both genders, spontaneously enrolled in computer classes for beginners. Data collection will be done in 3 moments: moment 0 (T0) – at baseline; moment 1 (T1) – after 8 typical computer classes; and moment 2 (T2) – after 8 computer classes which include 15 minutes for practicing games in a VR environment. A characterization questionnaire, the short version of the Short Social and Emotional Loneliness Scale for Adults (SELSA-S) and 3 games with VR (Random, MoviLetrando, and Reaction Time) will be used. For the intervention phase 4 other games will be used: Coincident Timing, Motor Skill Analyser, Labyrinth, and Fitts. The statistical analysis will compare the evolution in loneliness perception, performance, and reaction time during the practice of the games between the 3 moments of data collection. Performance and reaction time during the practice of the games will also be correlated to the loneliness perception. Ethics and Dissemination: The protocol is approved by the host institution's ethics committee under the number 52305215.3.0000.0082. Results will be disseminated via peer-reviewed journal articles and conferences. This clinical trial is registered at ClinicalTrials.gov identifier: NCT02798081. PMID:28272198

  9. Computer classes and games in virtual reality environment to reduce loneliness among students of an elderly reference center: Study protocol for a randomised cross-over design.

    PubMed

    Antunes, Thaiany Pedrozo Campos; Oliveira, Acary Souza Bulle de; Crocetta, Tania Brusque; Antão, Jennifer Yohanna Ferreira de Lima; Barbosa, Renata Thais de Almeida; Guarnieri, Regiani; Massetti, Thais; Monteiro, Carlos Bandeira de Mello; Abreu, Luiz Carlos de

    2017-03-01

    Physical and mental changes associated with aging commonly lead to a decrease in communication capacity, reducing social interactions and increasing loneliness. Computer classes for older adults make significant contributions to social and cognitive aspects of aging. Games in a virtual reality (VR) environment stimulate the practice of communicative and cognitive skills and might also bring benefits to older adults. Furthermore, it might help to initiate their contact with modern technology. The purpose of this study protocol is to evaluate the effects of practicing VR games during computer classes on the level of loneliness of students of an elderly reference center. This study will be a prospective longitudinal study with a randomised cross-over design, with subjects aged 50 years and older, of both genders, spontaneously enrolled in computer classes for beginners. Data collection will be done in 3 moments: moment 0 (T0) - at baseline; moment 1 (T1) - after 8 typical computer classes; and moment 2 (T2) - after 8 computer classes which include 15 minutes for practicing games in a VR environment. A characterization questionnaire, the short version of the Short Social and Emotional Loneliness Scale for Adults (SELSA-S) and 3 games with VR (Random, MoviLetrando, and Reaction Time) will be used. For the intervention phase 4 other games will be used: Coincident Timing, Motor Skill Analyser, Labyrinth, and Fitts. The statistical analysis will compare the evolution in loneliness perception, performance, and reaction time during the practice of the games between the 3 moments of data collection. Performance and reaction time during the practice of the games will also be correlated to the loneliness perception. The protocol is approved by the host institution's ethics committee under the number 52305215.3.0000.0082. Results will be disseminated via peer-reviewed journal articles and conferences. This clinical trial is registered at ClinicalTrials.gov identifier: NCT02798081.

  10. The impact of travel distance, travel time and waiting time on health-related quality of life of diabetes patients: An investigation in six European countries.

    PubMed

    Konerding, Uwe; Bowen, Tom; Elkhuizen, Sylvia G; Faubel, Raquel; Forte, Paul; Karampli, Eleftheria; Mahdavi, Mahdi; Malmström, Tomi; Pavi, Elpida; Torkki, Paulus

    2017-04-01

    The effects of travel distance and travel time to the primary diabetes care provider and waiting time in the practice on health-related quality of life (HRQoL) of patients with type 2 diabetes are investigated. Survey data of 1313 persons with type 2 diabetes from six regions in England (274), Finland (163), Germany (254), Greece (165), the Netherlands (354), and Spain (103) were analyzed. Various multiple linear regression analyses were computed with four different EQ-5D-3L indices (English, German, Dutch and Spanish) as target variables, with travel distance, travel time, and waiting time in the practice as focal predictors, and with control for study region, patient's gender, age, education, time since diagnosis, and thoroughness of provider-patient communication. Interactions of regions with the remaining five control variables and the three focal predictors were also tested. There were no interactions of regions with control variables or focal predictors. The indices decreased with increasing travel time to the provider and increasing waiting time in the provider's practice. HRQoL of patients with type 2 diabetes might be improved by decreasing travel time to the provider and waiting time in the provider's practice. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. CloudMC: a cloud computing application for Monte Carlo simulation.

    PubMed

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-04-21

    This work presents CloudMC, a cloud computing application, developed in Windows Azure® (the platform of the Microsoft® cloud), for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based; the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay-per-usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
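
    Amdahl's law, S(N) = 1 / ((1 - p) + p/N) for a parallelizable fraction p, connects the figures above: the reported 37× speedup on 64 instances implies p of roughly 0.99. The short check below back-computes that fraction; it is an illustration of the scaling law, not an analysis performed in the paper.

        # Amdahl's law: S(N) = 1 / ((1 - p) + p / N).
        def amdahl_speedup(p, n):
            return 1.0 / ((1.0 - p) + p / n)

        def implied_parallel_fraction(speedup, n):
            # Solve Amdahl's law for p given an observed speedup on n instances.
            return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n)

        p = implied_parallel_fraction(37.0, 64)
        print(f"implied parallel fraction: {p:.4f}")                     # ~0.99
        print(f"predicted speedup on 128 instances: {amdahl_speedup(p, 128):.1f}x")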

  12. Numerically exact full counting statistics of the nonequilibrium Anderson impurity model

    NASA Astrophysics Data System (ADS)

    Ridley, Michael; Singh, Viveka N.; Gull, Emanuel; Cohen, Guy

    2018-03-01

    The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n -electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.

  13. Numerically exact full counting statistics of the nonequilibrium Anderson impurity model

    DOE PAGES

    Ridley, Michael; Singh, Viveka N.; Gull, Emanuel; ...

    2018-03-06

    The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n-electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events

  14. Numerically exact full counting statistics of the nonequilibrium Anderson impurity model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ridley, Michael; Singh, Viveka N.; Gull, Emanuel

    The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n-electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events

  15. The association between sedentary behaviors during weekdays and weekend with change in body composition in young adults

    PubMed Central

    Drenowatz, Clemens; DeMello, Madison M.; Shook, Robin P.; Hand, Gregory A.; Burgess, Stephanie; Blair, Steven N.

    2016-01-01

    Background High sedentary time has been considered an important chronic disease risk factor but there is only limited information on the association of specific sedentary behaviors on weekdays and weekend-days with body composition. The present study examines the prospective association of total sedentary time and specific sedentary behaviors during weekdays and the weekend with body composition in young adults. Methods A total of 332 adults (50% male; 27.7 ± 3.7 years) were followed over a period of 1 year. Time spent sedentary, excluding sleep (SED), and in physical activity (PA) during weekdays and weekend-days was objectively assessed every 3 months with a multi-sensor device over a period of at least 8 days. In addition, participants reported sitting time, TV time and non-work related time spent at the computer separately for weekdays and the weekend. Fat mass and fat free mass were assessed via dual x-ray absorptiometry and used to calculate percent body fat (%BF). Energy intake was estimated based on TDEE and change in body composition. Results Cross-sectional analyses showed a significant correlation between SED and body composition (0.18 ≤ r ≤ 0.34). Associations between body weight and specific sedentary behaviors were less pronounced and significant during weekdays only (r ≤ 0.16). Nevertheless, decrease in SED during weekends, rather than during weekdays, was significantly associated with subsequent decrease in %BF (β = 0.06, p <0.01). After adjusting for PA and energy intake, results for SED were no longer significant. Only the association between change in sitting time during weekends and subsequent %BF was independent from change in PA or energy intake (β%BF = 0.04, p = 0.01), while there was no significant association between TV or computer time and subsequent body composition. Conclusions The stronger prospective association between sedentary behavior during weekends with subsequent body composition emphasizes the importance of leisure time behavior in weight management. PMID:29546170

  16. The association between sedentary behaviors during weekdays and weekend with change in body composition in young adults.

    PubMed

    Drenowatz, Clemens; DeMello, Madison M; Shook, Robin P; Hand, Gregory A; Burgess, Stephanie; Blair, Steven N

    2016-01-01

    High sedentary time has been considered an important chronic disease risk factor but there is only limited information on the association of specific sedentary behaviors on weekdays and weekend-days with body composition. The present study examines the prospective association of total sedentary time and specific sedentary behaviors during weekdays and the weekend with body composition in young adults. A total of 332 adults (50% male; 27.7 ± 3.7 years) were followed over a period of 1 year. Time spent sedentary, excluding sleep (SED), and in physical activity (PA) during weekdays and weekend-days was objectively assessed every 3 months with a multi-sensor device over a period of at least 8 days. In addition, participants reported sitting time, TV time and non-work related time spent at the computer separately for weekdays and the weekend. Fat mass and fat free mass were assessed via dual x-ray absorptiometry and used to calculate percent body fat (%BF). Energy intake was estimated based on TDEE and change in body composition. Cross-sectional analyses showed a significant correlation between SED and body composition (0.18 ≤ r ≤ 0.34). Associations between body weight and specific sedentary behaviors were less pronounced and significant during weekdays only ( r ≤ 0.16). Nevertheless, decrease in SED during weekends, rather than during weekdays, was significantly associated with subsequent decrease in %BF ( β = 0.06, p <0.01). After adjusting for PA and energy intake, results for SED were no longer significant. Only the association between change in sitting time during weekends and subsequent %BF was independent from change in PA or energy intake (β %BF = 0.04, p = 0.01), while there was no significant association between TV or computer time and subsequent body composition. The stronger prospective association between sedentary behavior during weekends with subsequent body composition emphasizes the importance of leisure time behavior in weight management.

  17. High performance GPU processing for inversion using uniform grid searches

    NASA Astrophysics Data System (ADS)

    Venetis, Ioannis E.; Saltogianni, Vasso; Stiros, Stathis; Gallopoulos, Efstratios

    2017-04-01

    Many geophysical problems are described by systems of redundant, highly non-linear ordinary equations with constant terms deriving from measurements and hence representing stochastic variables. Solution (inversion) of such problems is based on numerical optimization methods, based on Monte Carlo sampling or on exhaustive searches in cases of two or even three "free" unknown variables. Recently the TOPological INVersion (TOPINV) algorithm, a grid-search-based technique in the R^n space, has been proposed. TOPINV is not based on the minimization of a certain cost function and involves only forward computations, hence avoiding computational errors. The basic concept is to transform observation equations into inequalities on the basis of an optimization parameter k and of their standard errors, and, through repeated "scans" of n-dimensional search grids for decreasing values of k, to identify the optimal clusters of gridpoints which satisfy the observation inequalities and by definition contain the "true" solution. Stochastic optimal solutions and their variance-covariance matrices are then computed as first and second statistical moments. Such exhaustive uniform searches produce an excessive computational load and are extremely time consuming for common computers based on a CPU. An alternative is to use a computing platform based on a GPU, which nowadays is affordable to the research community and provides much higher computing performance. Using the CUDA programming language to implement TOPINV allows the investigation of the attained speedup in execution time on such a high performance platform. Based on synthetic data we compared the execution time required for two typical geophysical problems, modeling magma sources and seismic faults, described with up to 18 unknown variables, on both CPU/FORTRAN and GPU/CUDA platforms. The same problems for several different sizes of search grids (up to 10^12 gridpoints) and numbers of unknown variables were solved on both platforms, and execution time as a function of the grid dimension for each problem was recorded. Results indicate an average speedup in calculations by a factor of 100 on the GPU platform; for example, problems with 10^12 gridpoints require less than two hours instead of several days on conventional desktop computers. Such a speedup encourages the application of TOPINV on high performance platforms, such as a GPU, in cases where nearly real-time decisions are necessary, for example finite-fault modeling to identify possible tsunami sources.
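
    The grid-scan idea, turning each observation equation into an inequality of half-width k times its standard error and keeping only gridpoints that satisfy all of them as k is lowered, can be sketched in a few lines for a toy two-parameter problem. This is a minimal CPU illustration of the concept, not the TOPINV implementation or its CUDA port.

        import numpy as np

        # Toy inequality-based grid search for a two-parameter linear model y = a*x + b.
        rng = np.random.default_rng(0)
        x = np.linspace(0, 10, 40)
        sigma = 0.5
        y_obs = 2.0 * x + 1.0 + rng.normal(0, sigma, x.size)      # synthetic observations

        a_grid, b_grid = np.meshgrid(np.linspace(0, 4, 400), np.linspace(-2, 4, 400))
        residuals = np.abs(y_obs[:, None, None]
                           - (a_grid[None] * x[:, None, None] + b_grid[None]))

        for k in (4.0, 3.0, 2.0, 1.5):                            # repeated scans, decreasing k
            ok = np.all(residuals <= k * sigma, axis=0)           # gridpoints satisfying all inequalities
            if ok.sum() == 0:
                break                                             # no cluster left at this k
            a_hat, b_hat = a_grid[ok].mean(), b_grid[ok].mean()   # first statistical moment
            cov = np.cov(np.vstack([a_grid[ok], b_grid[ok]]))     # second moment (variance-covariance)
            print(f"k={k}: {ok.sum()} gridpoints, a~{a_hat:.2f}, b~{b_hat:.2f}")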

  18. Decreasing-Rate Pruning Optimizes the Construction of Efficient and Robust Distributed Networks.

    PubMed

    Navlakha, Saket; Barth, Alison L; Bar-Joseph, Ziv

    2015-07-01

    Robust, efficient, and low-cost networks are advantageous in both biological and engineered systems. During neural network development in the brain, synapses are massively over-produced and then pruned-back over time. This strategy is not commonly used when designing engineered networks, since adding connections that will soon be removed is considered wasteful. Here, we show that for large distributed routing networks, network function is markedly enhanced by hyper-connectivity followed by aggressive pruning and that the global rate of pruning, a developmental parameter not previously studied by experimentalists, plays a critical role in optimizing network structure. We first used high-throughput image analysis techniques to quantify the rate of pruning in the mammalian neocortex across a broad developmental time window and found that the rate is decreasing over time. Based on these results, we analyzed a model of computational routing networks and show using both theoretical analysis and simulations that decreasing rates lead to more robust and efficient networks compared to other rates. We also present an application of this strategy to improve the distributed design of airline networks. Thus, inspiration from neural network formation suggests effective ways to design distributed networks across several domains.

  19. Travel Mode Detection with Varying Smartphone Data Collection Frequencies

    PubMed Central

    Shafique, Muhammad Awais; Hato, Eiji

    2016-01-01

    Smartphones are becoming increasingly popular day by day. Modern smartphones are more than just calling devices. They incorporate a number of high-end sensors that provide many new dimensions to the smartphone experience. The use of smartphones, however, can be extended from the usual telecommunication field to applications in other specialized fields including transportation. Sensors embedded in smartphones, like GPS, accelerometer and gyroscope, can collect data passively, which in turn can be processed to infer the travel mode of the smartphone user. This will solve most of the shortcomings associated with conventional travel survey methods including biased response, no response, erroneous time recording, etc. The current study uses the sensors' data collected by smartphones to extract nine features for classification. Variables including data frequency, moving window size and the proportion of data used for training are varied to achieve better results. A random forest is used to classify the smartphone data among six modes. An overall accuracy of 99.96% is achieved, with no mode below 99.8%, for data collected at a 10 Hz frequency. The accuracy is observed to decrease as the data frequency decreases, but at the same time the computation time also decreases. PMID:27213380
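
    The classification step maps directly onto a standard random forest workflow. The sketch below uses scikit-learn with synthetic stand-ins for the nine features and six modes; it shows the shape of the pipeline, not the study's actual features, windowing, or tuning.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        # Synthetic placeholder data: one row per sensor window, 9 features, 6 travel modes.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1200, 9))
        y = rng.integers(0, 6, size=1200)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_train, y_train)
        print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))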

  20. Decreasing-Rate Pruning Optimizes the Construction of Efficient and Robust Distributed Networks

    PubMed Central

    Navlakha, Saket; Barth, Alison L.; Bar-Joseph, Ziv

    2015-01-01

    Robust, efficient, and low-cost networks are advantageous in both biological and engineered systems. During neural network development in the brain, synapses are massively over-produced and then pruned-back over time. This strategy is not commonly used when designing engineered networks, since adding connections that will soon be removed is considered wasteful. Here, we show that for large distributed routing networks, network function is markedly enhanced by hyper-connectivity followed by aggressive pruning and that the global rate of pruning, a developmental parameter not previously studied by experimentalists, plays a critical role in optimizing network structure. We first used high-throughput image analysis techniques to quantify the rate of pruning in the mammalian neocortex across a broad developmental time window and found that the rate is decreasing over time. Based on these results, we analyzed a model of computational routing networks and show using both theoretical analysis and simulations that decreasing rates lead to more robust and efficient networks compared to other rates. We also present an application of this strategy to improve the distributed design of airline networks. Thus, inspiration from neural network formation suggests effective ways to design distributed networks across several domains. PMID:26217933

  1. The role of family-related factors in the effects of the UP4FUN school-based family-focused intervention targeting screen time in 10- to 12-year-old children: the ENERGY project.

    PubMed

    Van Lippevelde, Wendy; Bere, Elling; Verloigne, Maïté; van Stralen, Maartje M; De Bourdeaudhuij, Ilse; Lien, Nanna; Vik, Frøydis Nordgård; Manios, Yannis; Grillenberger, Monika; Kovács, Eva; ChinAPaw, Mai J M; Brug, Johannes; Maes, Lea

    2014-08-18

    Screen-related behaviours are highly prevalent in schoolchildren. Considering the adverse health effects and the relation of obesity and screen time in childhood, efforts to affect screen use in children are warranted. Parents have been identified as an important influence on children's screen time and therefore should be involved in prevention programmes. The aim was to examine the mediating role of family-related factors on the effects of the school-based family-focused UP4FUN intervention aimed at screen time in 10- to 12-year-old European children (n child-parent dyads = 1940). A randomised controlled trial was conducted to test the six-week UP4FUN intervention in 10- to 12-year-old children and one of their parents in five European countries in 2011 (n child-parent dyads = 1940). Self-reported data of children were used to assess their TV and computer/game console time per day, and parents reported their physical activity, screen time and family-related factors associated with screen behaviours (availability, permissiveness, monitoring, negotiation, rules, avoiding negative role modeling, and frequency of physically active family excursions). Mediation analyses were performed using multi-level regression analyses (child-school-country). Almost all TV-specific and half of the computer-specific family-related factors were associated with children's screen time. However, the measured family-related factors did not mediate intervention effects on children's TV and computer/game console use, because the intervention was not successful in changing these family-related factors. Future screen-related interventions should aim to effectively target the home environment and parents' practices related to children's use of TV and computers to decrease children's screen time. The study is registered in the International Standard Randomised Controlled Trial Number Register (registration number: ISRCTN34562078).

  2. Translational, rotational and internal dynamics of amyloid β-peptides (Aβ40 and Aβ42) from molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Bora, Ram Prasad; Prabhakar, Rajeev

    2009-10-01

    In this study, diffusion constants [translational (D_T) and rotational (D_R)], correlation times [rotational (τ_rot) and internal (τ_int)], and the intramolecular order parameters (S^2) of the Alzheimer amyloid-β peptides Aβ40 and Aβ42 have been calculated from 150 ns molecular dynamics simulations in aqueous solution. The computed parameters have been compared with the experimentally measured values. The calculated D_T of 1.61×10^-6 cm^2/s and 1.43×10^-6 cm^2/s for Aβ40 and Aβ42, respectively, at 300 K was found to follow the correct trend defined by the Debye-Stokes-Einstein relation that its value should decrease with the increase in the molecular weight. The estimated D_R for Aβ40 and Aβ42 at 300 K are 0.085 and 0.071 ns^-1, respectively. The rotational (C_rot(t)) and internal (C_int(t)) correlation functions of Aβ40 and Aβ42 were observed to decay at nano- and picosecond time scales, respectively. The significantly different time decays of these functions validate the factorization of the total correlation function (C_tot(t)) of Aβ peptides into C_rot(t) and C_int(t). At both short and long time scales, the Clore-Szabo model that was used as C_int(t) provided the best behavior of C_tot(t) for both Aβ40 and Aβ42. In addition, an effective rotational correlation time of Aβ40 is also computed at 18 °C and the computed value (2.30 ns) is in close agreement with the experimental value of 2.45 ns. The computed S^2 parameters for the central hydrophobic core, the loop region, and C-terminal domains of Aβ40 and Aβ42 are in accord with the previous studies.
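
    A translational diffusion constant like the one quoted above is typically extracted from a simulation via the Einstein relation, MSD(t) = 6 D_T t, by fitting the slope of the centre-of-mass mean-square displacement. The sketch below applies that relation to a synthetic random-walk trajectory; the numbers are illustrative, and the unit conversion (1 nm^2/ps = 10^-2 cm^2/s) is the only link to the values reported in the paper.

        import numpy as np

        # D_T from the Einstein relation, MSD(t) = 6 * D_T * t, via a linear fit of the MSD.
        def diffusion_constant(times_ps, com_positions_nm):
            disp = com_positions_nm - com_positions_nm[0]
            msd = np.sum(disp ** 2, axis=1)                  # nm^2
            slope = np.polyfit(times_ps, msd, 1)[0]          # nm^2 / ps
            return slope / 6.0 * 1e-2                        # cm^2/s

        rng = np.random.default_rng(1)
        t = np.arange(0.0, 1000.0, 1.0)                      # ps
        traj = np.cumsum(rng.normal(scale=0.03, size=(t.size, 3)), axis=0)   # synthetic COM walk, nm
        print(f"D_T ~ {diffusion_constant(t, traj):.2e} cm^2/s")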

  3. Performance comparison analysis library communication cluster system using merge sort

    NASA Astrophysics Data System (ADS)

    Wulandari, D. A. R.; Ramadhan, M. E.

    2018-04-01

    Computing began with single processors; to increase computation speed, multi-processor systems were introduced. This second paradigm is known as parallel computing, of which a cluster is one example. A cluster requires a communication protocol for processing; one such protocol is the Message Passing Interface (MPI). MPI has many library implementations, two of them being OpenMPI and MPICH2. The performance of a cluster machine depends on the match between the performance characteristics of the communication library and the characteristics of the problem, so this study aims to analyze the comparative performance of libraries in handling a parallel computing process. The cases studied in this research are MPICH2 and OpenMPI. A sorting problem is executed to characterize the performance of the cluster system. The sorting problem uses the merge sort method. The research method is to implement OpenMPI and MPICH2 on a Linux-based cluster of five virtual computers and then analyze the performance of the system under different test scenarios, using three parameters to assess the performance of MPICH2 and OpenMPI. These parameters are execution time, speedup and efficiency. The results of this study show that, as the data size grows, the average speedup and efficiency of both OpenMPI and MPICH2 tend to increase, but they decrease at large data sizes. An increased data size does not necessarily increase speedup and efficiency, only execution time, as seen at a data size of 100,000. The two libraries also differ in execution time; for example, at a data size of 1,000 the average execution time is 0.009721 with MPICH2 and 0.003895 with OpenMPI, as OpenMPI can adapt its communication to the needs of the problem.
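
    The three performance parameters named above are related by simple definitions: for p processes, speedup is S = T_1 / T_p and efficiency is E = S / p. A minimal sketch with made-up timings (not the study's measurements) is shown below.

        # Speedup S = T1 / Tp and efficiency E = S / p for p processes.
        def speedup(t_serial, t_parallel):
            return t_serial / t_parallel

        def efficiency(t_serial, t_parallel, n_procs):
            return speedup(t_serial, t_parallel) / n_procs

        t1, tp, p = 1.20, 0.35, 5     # placeholder timings: 1 process vs. 5 virtual computers
        print(f"speedup = {speedup(t1, tp):.2f}, efficiency = {efficiency(t1, tp, p):.2f}")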

  4. A marker-based watershed method for X-ray image segmentation.

    PubMed

    Zhang, Xiaodong; Jia, Fucang; Luo, Suhuai; Liu, Guiying; Hu, Qingmao

    2014-03-01

    Digital X-ray images are the most frequent modality for both screening and diagnosis in hospitals. To facilitate subsequent analysis such as quantification and computer aided diagnosis (CAD), it is desirable to exclude image background. A marker-based watershed segmentation method was proposed to segment the background of X-ray images. The method consisted of six modules: image preprocessing, gradient computation, marker extraction, watershed segmentation from markers, region merging and background extraction. One hundred clinical direct radiograph X-ray images were used to validate the method. Manual thresholding and a multiscale gradient-based watershed method were implemented for comparison. The proposed method yielded a Dice coefficient of 0.964±0.069, which was better than that of manual thresholding (0.937±0.119) and that of the multiscale gradient-based watershed method (0.942±0.098). Special means were adopted to decrease the computational cost, including discarding the few pixels with the highest grayscale values via a percentile threshold, calculating the gradient magnitude through simple operations, decreasing the number of markers by appropriate thresholding, and merging regions based on simple grayscale statistics. As a result, the processing time was at most 6 s even for a 3072×3072 image on a Pentium 4 PC with a 2.4 GHz CPU (4 cores) and 2 GB RAM, more than twice as fast as the multiscale gradient-based watershed method. The proposed method could be a potential tool for diagnosis and quantification of X-ray images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
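
    The pipeline described above (gradient computation, marker extraction, watershed from markers) maps onto standard image-processing libraries. The sketch below shows a generic marker-based watershed with scikit-image on a sample image; the percentile thresholds are placeholders and it is not the authors' implementation or preprocessing.

        import numpy as np
        from skimage import data, filters, segmentation

        # Generic marker-based watershed: gradient -> markers -> watershed -> background mask.
        image = data.coins()                               # stand-in for an X-ray image
        gradient = filters.sobel(image.astype(float))      # gradient magnitude

        markers = np.zeros_like(image, dtype=int)
        markers[image < np.percentile(image, 20)] = 1      # background seeds (placeholder threshold)
        markers[image > np.percentile(image, 80)] = 2      # foreground seeds (placeholder threshold)

        labels = segmentation.watershed(gradient, markers)
        background_mask = labels == 1
        print("background fraction:", background_mask.mean())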

  5. Optimizing agent-based transmission models for infectious diseases.

    PubMed

    Willem, Lander; Stijven, Sean; Tijskens, Engelbert; Beutels, Philippe; Hens, Niel; Broeckhove, Jan

    2015-06-02

    Infectious disease modeling and computational power have evolved such that large-scale agent-based models (ABMs) have become feasible. However, the increasing hardware complexity requires adapted software designs to achieve the full potential of current high-performance workstations. We have found large performance differences with a discrete-time ABM for close-contact disease transmission due to data locality. Sorting the population according to the social contact clusters reduced simulation time by a factor of two. Data locality and model performance can also be improved by storing person attributes separately instead of using person objects. Next, decreasing the number of operations by sorting people by health status before processing disease transmission also has a large impact on model performance. Depending on the clinical attack rate, target population and computer hardware, the introduction of the sort phase decreased the run time by 26% up to more than 70%. We have investigated the application of parallel programming techniques and found that the speedup is significant but it drops quickly with the number of cores. We observed that the effect of scheduling and workload chunk size is model specific and can make a large difference. Investment in performance optimization of ABM simulator code can lead to significant run time reductions. The key steps are straightforward: choosing an appropriate data structure for the population and sorting people by health status before processing disease transmission. We believe these conclusions to be valid for a wide range of infectious disease ABMs. We recommend that future studies evaluate the impact of data management, algorithmic procedures and parallelization on model performance.
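
    The two data-layout ideas named above, storing person attributes in separate arrays rather than person objects and sorting by health status so the transmission step touches a contiguous block, can be sketched as follows. The epidemic logic is deliberately trivial (homogeneous mixing with a made-up per-contact probability); only the layout and sort are the point.

        import numpy as np

        n = 100_000
        rng = np.random.default_rng(0)
        # Struct-of-arrays layout: one array per attribute instead of person objects.
        age = rng.integers(0, 90, n)
        health = rng.choice([0, 1], size=n, p=[0.98, 0.02])   # 0 = susceptible, 1 = infectious

        order = np.argsort(health)                            # susceptibles first, infectious last
        age, health = age[order], health[order]

        first_infectious = np.searchsorted(health, 1)         # infectious agents form one contiguous block
        susceptible_idx = np.arange(first_infectious)
        n_infectious = n - first_infectious
        p_infection = 1.0 - (1.0 - 1e-5) ** n_infectious      # toy homogeneous-mixing step
        new_cases = rng.random(susceptible_idx.size) < p_infection
        health[susceptible_idx[new_cases]] = 1
        print("new infections this step:", int(new_cases.sum()))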

  6. Evaluation of digital dental models obtained from dental cone-beam computed tomography scan of alginate impressions

    PubMed Central

    Jiang, Tingting; Lee, Sang-Mi; Hou, Yanan; Chang, Xin

    2016-01-01

    Objective To investigate the dimensional accuracy of digital dental models obtained from the dental cone-beam computed tomography (CBCT) scan of alginate impressions according to the time elapsed while the impressions are stored under ambient conditions. Methods Alginate impressions were obtained from 20 adults using 3 different alginate materials, 2 traditional alginate materials (Alginoplast and Cavex Impressional) and 1 extended-pour alginate material (Cavex ColorChange). The impressions were stored under ambient conditions, and scanned by CBCT immediately after the impressions were taken, and then at 1 hour intervals for 6 hours. After reconstructing three-dimensional digital dental models, the models were measured and the data were analyzed to determine dimensional changes according to the elapsed time. The changes within the measurement error were regarded as clinically acceptable in this study. Results All measurements showed a decreasing tendency with an increase in the elapsed time after the impressions were taken. Although the extended-pour alginate exhibited less of a decreasing tendency than the other 2 materials, there were no statistically significant differences between the materials. Changes above the measurement error occurred between the time points of 3 and 4 hours after the impressions were taken. Conclusions The results of this study indicate that digital dental models can be obtained simply from a CBCT scan of alginate impressions without sending them to a remote laboratory. However, when the impressions are not stored under special conditions, they should be scanned immediately, or at least within 2 to 3 hours after the impressions are taken. PMID:27226958

  7. Intra- and Inter-Fractional Variation Prediction of Lung Tumors Using Fuzzy Deep Learning

    PubMed Central

    Park, Seonyeong; Lee, Suk Jin; Weiss, Elisabeth

    2016-01-01

    Tumor movements should be accurately predicted to improve delivery accuracy and reduce unnecessary radiation exposure to healthy tissue during radiotherapy. The tumor movements pertaining to respiration are divided into intra-fractional variation occurring in a single treatment session and inter-fractional variation arising between different sessions. Most studies of patients' respiration movements deal with intra-fractional variation. Previous studies on inter-fractional variation have rarely been formulated mathematically and cannot predict movements well because the variation is inconstant. Moreover, the computation time of the prediction should be reduced. To overcome these limitations, we propose a new predictor for intra- and inter-fractional data variation, called intra- and inter-fraction fuzzy deep learning (IIFDL), where FDL, equipped with breathing clustering, predicts the movement accurately and decreases the computation time. Through the experimental results, we validated that the IIFDL improved root-mean-square error (RMSE) by 29.98% and prediction overshoot by 70.93%, compared with existing methods. The results also showed that the IIFDL enhanced the average RMSE and overshoot by 59.73% and 83.27%, respectively. In addition, the average computation time of IIFDL was 1.54 ms for both intra- and inter-fractional variation, which was much smaller than that of the existing methods. Therefore, the proposed IIFDL might achieve real-time estimation as well as better tracking techniques in radiotherapy. PMID:27170914

  8. EX VIVO MODEL FOR THE CHARACTERIZATION AND IDENTIFICATION OF DRYWALL INTRAOCULAR FOREIGN BODIES ON COMPUTED TOMOGRAPHY.

    PubMed

    Syed, Reema; Kim, Sung-Hye; Palacio, Agustina; Nunery, William R; Schaal, Shlomit

    2017-06-06

    The study was inspired by the authors' encounter with a patient who had a penetrating globe injury due to drywall and a retained intraocular drywall foreign body. Computed tomography (CT) was read as normal in this patient. Open globe injury with drywall has never been reported previously in the literature and there are no previous studies describing its radiographic features. The case report is described in detail elsewhere. This was an experimental study. An ex vivo model comprising 15 porcine eyes with 1 mm to 5 mm fragments of implanted drywall, 2 vitreous-only samples with drywall, and 3 control eyes was used. Eyes and vitreous samples were CT scanned on Days 0, 1, and 3 postimplantation. The CT images were analyzed by masked observers. Size and radiodensity of intraocular drywall were measured using Hounsfield units (HUs) over time. Intraocular drywall was hyperdense on CT. All sizes studied were detectable on Day 0 of scanning. Mean intraocular drywall foreign body density was 171 ± 52 Hounsfield units (70-237) depending on fragment size. The intraocular drywall foreign bodies decreased in size whereas Hounsfield unit intensity increased over time. Drywall dissolves in the eye and becomes denser over time as air in the drywall is replaced by fluid. This study identified the Hounsfield unit values characteristic of intraocular drywall foreign bodies over time.

  9. Effects of standing on typing task performance and upper limb discomfort, vascular and muscular indicators.

    PubMed

    Fedorowich, Larissa M; Côté, Julie N

    2018-10-01

    Standing is a popular alternative to traditionally seated computer work. However, no studies have described how standing impacts both upper body muscular and vascular outcomes during a computer typing task. Twenty healthy adults completed two 90-min simulated work sessions, seated or standing. Upper limb discomfort, electromyography (EMG) from eight upper body muscles, typing performance and neck/shoulder and forearm blood flow were collected. Results showed significantly less upper body discomfort and higher typing speed during standing. Lower Trapezius EMG amplitude was higher during standing, but this postural difference decreased with time (interaction effect), and its variability was 68% higher during standing compared to sitting. There were no effects on blood flow. Results suggest that standing computer work may engage shoulder girdle stabilizers while reducing discomfort and improving performance. Studies are needed to identify how standing affects more complex computer tasks over longer work bouts in symptomatic workers. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. On-line confidence monitoring during decision making.

    PubMed

    Dotan, Dror; Meyniel, Florent; Dehaene, Stanislas

    2018-02-01

    Humans can readily assess their degree of confidence in their decisions. Two models of confidence computation have been proposed: post hoc computation using post-decision variables and heuristics, versus online computation using continuous assessment of evidence throughout the decision-making process. Here, we arbitrate between these theories by continuously monitoring finger movements during a manual sequential decision-making task. Analysis of finger kinematics indicated that subjects kept separate online records of evidence and confidence: finger deviation continuously reflected the ongoing accumulation of evidence, whereas finger speed continuously reflected the momentary degree of confidence. Furthermore, end-of-trial finger speed predicted the post-decisional subjective confidence rating. These data indicate that confidence is computed on-line, throughout the decision process. Speed-confidence correlations were previously interpreted as a post-decision heuristics, whereby slow decisions decrease subjective confidence, but our results suggest an adaptive mechanism that involves the opposite causality: by slowing down when unconfident, participants gain time to improve their decisions. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Intelligent fuzzy approach for fast fractal image compression

    NASA Astrophysics Data System (ADS)

    Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila

    2014-12-01

    Fractal image compression (FIC) is recognized as an NP-hard problem, and it suffers from a high number of mean square error (MSE) computations. In this paper, a two-phase algorithm was proposed to reduce the MSE computation of FIC. In the first phase, range and domain blocks are arranged based on an edge property. In the second, the imperialist competitive algorithm (ICA) is applied according to the classified blocks. To maintain the quality of the retrieved image and accelerate the algorithm, the solutions are divided into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The promising results achieved exhibit better performance than genetic algorithm (GA)-based and full-search algorithms in terms of decreasing the number of MSE computations. The proposed algorithm reduced the number of MSE computations, running 463 times faster than the full-search algorithm, while the retrieved image quality did not change considerably.
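
    The quantity being counted above is the mean square error between a range block and a contracted, intensity-adjusted domain block, evaluated once per candidate pairing in a full search. A bare version of that elementary comparison is sketched below (random blocks, plain least-squares contrast/brightness fit); it is reference material, not the proposed two-phase algorithm.

        import numpy as np

        # MSE between an 8x8 range block and a 16x16 domain block after 2x2 averaging
        # and a least-squares contrast (s) / brightness (o) fit.
        def block_mse(range_block, domain_block):
            h, w = range_block.shape
            d = domain_block.reshape(h, 2, w, 2).mean(axis=(1, 3))   # contract domain to range size
            s, o = np.polyfit(d.ravel(), range_block.ravel(), 1)     # intensity mapping
            return np.mean((range_block - (s * d + o)) ** 2)

        rng = np.random.default_rng(0)
        r = rng.random((8, 8))
        d = rng.random((16, 16))
        print(block_mse(r, d))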

  12. Bio-steps beyond Turing.

    PubMed

    Calude, Cristian S; Păun, Gheorghe

    2004-11-01

    Are there 'biologically computing agents' capable of computing Turing-uncomputable functions? It is perhaps tempting to dismiss this question with a negative answer. Quite the opposite, for the first time in the literature on molecular computing we contend that the answer is not theoretically negative. Our results will be formulated in the language of membrane computing (P systems). Some mathematical results presented here are interesting in themselves. In contrast with most speed-up methods which are based on non-determinism, our results rest upon some universality results proved for deterministic P systems. These results will be used for building "accelerated P systems". In contrast with the case of Turing machines, acceleration is a part of the hardware (not a quality of the environment) and it is realised either by decreasing the size of "reactors" or by speeding up the communication channels. Consequently, two acceleration postulates of biological inspiration are introduced; each of them poses specific questions to biology. Finally, in a more speculative part of the paper, we will deal with possible Turing-uncomputable activity of the brain and possible forms of (extraterrestrial) intelligence.

  13. Visual Fatigue Induced by Viewing a Tablet Computer with a High-resolution Display.

    PubMed

    Kim, Dong Ju; Lim, Chi Yeon; Gu, Namyi; Park, Choul Yong

    2017-10-01

    In the present study, the visual discomfort induced by smart mobile devices was assessed in normal and healthy adults. Fifty-nine volunteers (age, 38.16 ± 10.23 years; male : female = 19 : 40) were exposed to tablet computer screen stimuli (iPad Air, Apple Inc.) for 1 hour. Participants watched a movie or played a computer game on the tablet computer. Visual fatigue and discomfort were assessed using an asthenopia questionnaire, tear film break-up time, and total ocular wavefront aberration before and after viewing smart mobile devices. Based on the questionnaire, viewing smart mobile devices for 1 hour significantly increased mean total asthenopia score from 19.59 ± 8.58 to 22.68 ± 9.39 (p < 0.001). Specifically, the scores for five items (tired eyes, sore/aching eyes, irritated eyes, watery eyes, and hot/burning eye) were significantly increased by viewing smart mobile devices. Tear film break-up time significantly decreased from 5.09 ± 1.52 seconds to 4.63 ± 1.34 seconds (p = 0.003). However, total ocular wavefront aberration was unchanged. Visual fatigue and discomfort were significantly induced by viewing smart mobile devices, even though the devices were equipped with state-of-the-art display technology. © 2017 The Korean Ophthalmological Society

  14. Visual Fatigue Induced by Viewing a Tablet Computer with a High-resolution Display

    PubMed Central

    Kim, Dong Ju; Lim, Chi-Yeon; Gu, Namyi

    2017-01-01

    Purpose: In the present study, the visual discomfort induced by smart mobile devices was assessed in normal and healthy adults. Methods: Fifty-nine volunteers (age, 38.16 ± 10.23 years; male : female = 19 : 40) were exposed to tablet computer screen stimuli (iPad Air, Apple Inc.) for 1 hour. Participants watched a movie or played a computer game on the tablet computer. Visual fatigue and discomfort were assessed using an asthenopia questionnaire, tear film break-up time, and total ocular wavefront aberration before and after viewing smart mobile devices. Results: Based on the questionnaire, viewing smart mobile devices for 1 hour significantly increased mean total asthenopia score from 19.59 ± 8.58 to 22.68 ± 9.39 (p < 0.001). Specifically, the scores for five items (tired eyes, sore/aching eyes, irritated eyes, watery eyes, and hot/burning eye) were significantly increased by viewing smart mobile devices. Tear film break-up time significantly decreased from 5.09 ± 1.52 seconds to 4.63 ± 1.34 seconds (p = 0.003). However, total ocular wavefront aberration was unchanged. Conclusions: Visual fatigue and discomfort were significantly induced by viewing smart mobile devices, even though the devices were equipped with state-of-the-art display technology. PMID:28914003

  15. List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor

    NASA Astrophysics Data System (ADS)

    Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.

    2014-03-01

    List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run four multiple-instruction, multiple-data threads per core, with each thread having a 512-bit single-instruction, multiple-data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction for motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi co-processor with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading, and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to the co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase price of a Linux workstation with a Xeon Phi co-processor card and a top-of-the-range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging applications.

  16. Comparison and Computational Performance of Tsunami-HySEA and MOST Models for the LANTEX 2013 scenario

    NASA Astrophysics Data System (ADS)

    González-Vida, Jose M.; Macías, Jorge; Mercado, Aurelio; Ortega, Sergio; Castro, Manuel J.

    2017-04-01

    The Tsunami-HySEA model is used to simulate the Caribbean LANTEX 2013 scenario (LANTEX is the acronym for Large AtlaNtic Tsunami EXercise, which is carried out annually). The numerical simulation of the propagation and inundation phases is performed with both models, using different mesh resolutions and nested meshes, and comparisons are made with the MOST tsunami model available at the University of Puerto Rico (UPR). Both models compare well for propagating tsunami waves in the open sea, producing very similar results. In near-shore shallow waters, Tsunami-HySEA should be compared with the inundation version of MOST, since the propagation version of MOST is limited to deeper waters. For the inundation phase, a 1 arc-sec (approximately 30 m) resolution mesh covering all of Puerto Rico is used, and a three-level nested-mesh technique is implemented. In the inundation phase, larger differences between model results are observed. Nevertheless, the most striking difference resides in computational time: Tsunami-HySEA is coded to exploit GPU architecture and can produce a 4 h simulation on a 60 arc-sec resolution grid for the whole Caribbean Sea in less than 4 min with a single general-purpose GPU, and in as little as 11 s with 32 general-purpose GPUs. In the inundation stage with nested meshes, approximately 8 hours of wall clock time are needed for a 2 h simulation on a single GPU (versus more than 2 days for the MOST inundation, running three different parts of the island (West, Center, East) at the same time due to memory limitations in MOST). When domain decomposition techniques are implemented by breaking up the computational domain into sub-domains and assigning a GPU to each sub-domain (the multi-GPU Tsunami-HySEA version), the wall clock time decreases significantly, allowing high-resolution inundation modelling in very short computational times; with eight GPUs, for example, the wall clock time drops to around 1 hour. Moreover, these computational times are obtained using general-purpose GPU hardware.

  17. Range wise busy checking 2-way imbalanced algorithm for cloudlet allocation in cloud environment

    NASA Astrophysics Data System (ADS)

    Alanzy, Mohammed; Latip, Rohaya; Muhammed, Abdullah

    2018-05-01

    Cloud computing has become a new business paradigm and a popular platform over the last few years. Many organizations, agencies, and departments run time-critical tasks that need to be accomplished as soon as possible, and they face IT challenges due to the massive growth of data, applications, and solution scopes. A central issue in making the cloud environment more capable is a competent cloudlet allocation strategy, and a large number of studies have sought to assign cloudlets to VMs or resources using a variety of strategies. In this paper we propose a range-wise busy-checking 2-way imbalanced algorithm for cloudlet allocation in cloud computing. Compared with other methods, it decreases the completion time of task execution, which is fundamental to enhancing system performance measures such as the makespan. The algorithm gives faster VMs more opportunity to accommodate additional cloudlets in their local queues without enforcing a threshold balance condition, and it was simulated using CloudSim. The simulation results show that the average makespan is lower than that of the previous cloudlet allocation strategy.
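
    To make the queueing idea concrete, the following hedged sketch shows a speed-aware greedy allocation in which faster VMs naturally absorb more cloudlets. It illustrates the general principle only; it is not the authors' range-wise busy-checking 2-way imbalanced algorithm, and the VM speeds and cloudlet lengths are invented.

      # Minimal sketch of speed-weighted cloudlet allocation (an illustration of the
      # general principle, not the paper's range-wise busy-checking algorithm; VM
      # speeds and cloudlet lengths are invented).
      from dataclasses import dataclass, field

      @dataclass
      class VM:
          mips: float                           # processing speed
          queue: list = field(default_factory=list)
          busy_until: float = 0.0               # projected finish time of the local queue

      def allocate(cloudlets, vms):
          """Send each cloudlet to the VM that would finish it earliest, so faster
          VMs naturally absorb more work; returns the resulting makespan."""
          for length in cloudlets:              # cloudlet length in million instructions
              vm = min(vms, key=lambda v: v.busy_until + length / v.mips)
              vm.queue.append(length)
              vm.busy_until += length / vm.mips
          return max(v.busy_until for v in vms)

      vms = [VM(mips=1000), VM(mips=500), VM(mips=250)]
      print("makespan:", allocate([4000, 8000, 2000, 6000, 1000], vms))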

  18. A diffusive information preservation method for small Knudsen number flows

    NASA Astrophysics Data System (ADS)

    Fei, Fei; Fan, Jing

    2013-06-01

    The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Like the DSMC method, however, that approach suffers from statistical noise. To solve the problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and to obtain the flow velocity and temperature through sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ≈ 10⁻³-10⁻⁴ have been investigated. It is shown that the IP calculations are not only accurate, but also efficient because they allow a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.

  19. Modified Chebyshev Picard Iteration for Efficient Numerical Integration of Ordinary Differential Equations

    NASA Astrophysics Data System (ADS)

    Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.

    2013-09-01

    Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or non-linear Ordinary Differential Equations (ODEs) to obtain time histories of system state trajectories. Unlike step-by-step differential equation solvers, such as the Runge-Kutta family of numerical integrators, MCPI approximates long arcs of the state trajectory with an iterative path approximation approach, and is ideally suited to parallel computation. Orthogonal Chebyshev polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, the least-squares approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of the discrete sampling and weighting adopted for the inner product definition, Runge phenomenon errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be simultaneously computed in parallel for further decreased computational cost. Over an order of magnitude speedup from traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration, and the output is a time history of the system states over the interval of interest. Examples from the field of astrodynamics are presented to compare the output from the MCPI library to current state-of-practice numerical integration methods. It is shown that MCPI is capable of outperforming the state-of-practice in terms of computational cost and accuracy.
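
    A bare-bones sketch of the Chebyshev-Picard idea is shown below for a scalar ODE; the node count, tolerance and NumPy Chebyshev fitting are illustrative assumptions and do not reflect the MCPI library's actual interface or its vector-matrix formulation.

      # Minimal sketch of Chebyshev-Picard iteration for dx/dt = f(t, x), x(t0) = x0.
      # Node count, tolerance and the NumPy Chebyshev fit are illustrative
      # assumptions, not the MCPI library's interface.
      import numpy as np
      from numpy.polynomial import chebyshev as C

      def mcpi(f, t0, t1, x0, n_nodes=64, n_iter=50, tol=1e-12):
          # Chebyshev-Gauss-Lobatto nodes mapped onto [t0, t1]
          tau = np.cos(np.pi * np.arange(n_nodes + 1) / n_nodes)   # in [-1, 1]
          t = 0.5 * (t1 - t0) * (tau + 1.0) + t0
          x = np.full_like(t, x0, dtype=float)                     # initial guess
          for _ in range(n_iter):
              # Fit the integrand with a Chebyshev series, integrate it analytically,
              # then apply the Picard update x_new(t) = x0 + integral from t0 to t of f(s, x(s)) ds.
              series = C.Chebyshev.fit(t, f(t, x), deg=n_nodes)
              x_new = x0 + series.integ(lbnd=t0)(t)
              if np.max(np.abs(x_new - x)) < tol:
                  return t, x_new
              x = x_new
          return t, x

      # Example: dx/dt = -2x, x(0) = 1, whose exact solution is exp(-2t).
      t, x = mcpi(lambda t, x: -2.0 * x, 0.0, 2.0, 1.0)
      print("max error:", np.max(np.abs(x - np.exp(-2.0 * t))))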

  20. Octree-based, GPU implementation of a continuous cellular automaton for the simulation of complex, evolving surfaces

    NASA Astrophysics Data System (ADS)

    Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Gadea, R.; Sato, K.

    2011-03-01

    Presently, dynamic surface-based models are required to contain increasingly larger numbers of points and to propagate them over longer time periods. For large numbers of surface points, the octree data structure can be used as a balance between low memory occupation and relatively rapid access to the stored data. For evolution rules that depend on neighborhood states, extended simulation periods can be obtained by using simplified atomistic propagation models, such as the Cellular Automata (CA). This method, however, has an intrinsic parallel updating nature and the corresponding simulations are highly inefficient when performed on classical Central Processing Units (CPUs), which are designed for the sequential execution of tasks. In this paper, a series of guidelines is presented for the efficient adaptation of octree-based, CA simulations of complex, evolving surfaces into massively parallel computing hardware. A Graphics Processing Unit (GPU) is used as a cost-efficient example of the parallel architectures. For the actual simulations, we consider the surface propagation during anisotropic wet chemical etching of silicon as a computationally challenging process with a wide-spread use in microengineering applications. A continuous CA model that is intrinsically parallel in nature is used for the time evolution. Our study strongly indicates that parallel computations of dynamically evolving surfaces simulated using CA methods are significantly benefited by the incorporation of octrees as support data structures, substantially decreasing the overall computational time and memory usage.
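
    For reference, a generic point-octree insertion sketch follows; the node capacity and cubic cell layout are generic assumptions for illustration, not the paper's GPU-resident implementation.

      # Generic point-octree insertion sketch (node capacity and cubic cells are
      # assumptions for illustration, not the paper's GPU-resident structure).
      class Octree:
          def __init__(self, center, half_size, capacity=8):
              self.center, self.half, self.capacity = center, half_size, capacity
              self.points = []
              self.children = None                      # created lazily on subdivision

          def insert(self, p):
              if self.children is None:
                  if len(self.points) < self.capacity:
                      self.points.append(p)
                      return
                  self._subdivide()
              self.children[self._octant(p)].insert(p)

          def _octant(self, p):
              cx, cy, cz = self.center
              return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

          def _subdivide(self):
              h = self.half / 2.0
              cx, cy, cz = self.center
              self.children = [Octree((cx + (h if i & 1 else -h),
                                       cy + (h if i & 2 else -h),
                                       cz + (h if i & 4 else -h)), h, self.capacity)
                               for i in range(8)]
              for q in self.points:                     # push stored points down one level
                  self.children[self._octant(q)].insert(q)
              self.points = []

      root = Octree(center=(0.0, 0.0, 0.0), half_size=1.0)
      for p in [(0.1, 0.2, 0.3), (-0.5, 0.4, -0.2), (0.7, -0.6, 0.1)]:
          root.insert(p)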

  1. Television, computer, and video viewing; physical activity; and upper limb fracture risk in children: a population-based case control study.

    PubMed

    Ma, Deqiong; Jones, Graeme

    2003-11-01

    The effect of physical activity on upper limb fractures was examined in this population-based case control study with 321 age- and gender-matched pairs. Sports participation increased fracture risk in boys and decreased risk in girls. Television viewing had a deleterious dose-response association with wrist and forearm fractures while light physical activity was protective. The aim of this population-based case control study was to examine the association between television, computer, and video viewing; types and levels of physical activity; and upper limb fractures in children 9-16 years of age. A total of 321 fracture cases and 321 randomly selected individually matched controls were studied. Television, computer, and video viewing and types and levels of physical activity were determined by interview-administered questionnaire. Bone strength was assessed by DXA and metacarpal morphometry. In general, sports participation increased total upper limb fracture risk in boys and decreased risk in girls. Gender-specific risk estimates were significantly different for total, contact, noncontact, and high-risk sports participation as well as four individual sports (soccer, cricket, surfing, and swimming). In multivariate analysis, time spent in television, computer, and video viewing in both sexes was positively associated with wrist and forearm fracture risk (OR 1.6/category, 95% CI: 1.1-2.2), whereas days involved in light physical activity participation decreased fracture risk (OR 0.8/category, 95% CI: 0.7-1.0). Sports participation increased hand (OR 1.5/sport, 95% CI: 1.1-2.0) and upper arm (OR 29.8/sport, 95% CI: 1.7-535) fracture risk in boys only and decreased wrist and forearm fracture risk in girls only (OR 0.5/sport, 95% CI: 0.3-0.9). Adjustment for bone density and metacarpal morphometry did not alter these associations. There is gender discordance with regard to sports participation and fracture risk in children, which may reflect different approaches to sport. Importantly, television, computer, and video viewing has a dose-dependent association with wrist and forearm fractures, whereas light physical activity is protective. The mechanism is unclear but may involve bone-independent factors, or less likely, changes in bone quality not detected by DXA or metacarpal morphometry.

  2. Efficacy of a telerehabilitation intervention programme using biofeedback among computer operators.

    PubMed

    Golebowicz, Merav; Levanon, Yafa; Palti, Ram; Ratzon, Navah Z

    2015-01-01

    Computer operators spend long periods of time sitting in a static posture at computer workstations and therefore have an increased exposure to work-related musculoskeletal disorders (WRMSD). The present study is aimed at investigating the feasibility and effectiveness of a tele-biofeedback ergonomic intervention programme among computer operators suffering from WRMSD. Twelve subjects with WRMSD were assigned an ergonomic intervention accompanied by remote tele-biofeedback training, which was practised at their workstations. Evaluations of pain symptoms and locations, body posture and psychosocial characteristics were carried out before and after the intervention in the workplace. The hypothesis was partially verified as it showed improved body position at the workstation and decreased pain in some body parts. Tele-biofeedback, as part of an intervention, appears to be feasible and efficient for computer operators who suffer from WRMSD. This study encourages further research on tele-health within the scope of occupational therapy practice. Practitioner summary: Research concerning tele-health using biofeedback is scarce. The present study analyses the feasibility and partial effectiveness of a tele-biofeedback ergonomic intervention programme for computer operators suffering from WRMSD. The uniqueness and singularity of this study is the usage of remote communication between participants and practitioners through the Internet.

  3. Shortened OR time and decreased patient risk through use of a modular surgical instrument with artificial intelligence.

    PubMed

    Miller, David J; Nelson, Carl A; Oleynikov, Dmitry

    2009-05-01

    With a limited number of access ports, minimally invasive surgery (MIS) often requires the complete removal of one tool and reinsertion of another. Modular or multifunctional tools can be used to avoid this step. In this study, soft computing techniques are used to optimally arrange a modular tool's functional tips, allowing surgeons to deliver treatment of improved quality in less time, decreasing overall cost. The investigators watched University Medical Center surgeons perform MIS procedures (e.g., cholecystectomy and Nissen fundoplication) and recorded the procedures to digital video. The video was then used to analyze the types of instruments used, the duration of each use, and the function of each instrument. These data were aggregated with fuzzy logic techniques using four membership functions to quantify the overall usefulness of each tool. This allowed subsequent optimization of the arrangement of functional tips within the modular tool to decrease overall time spent changing instruments during simulated surgical procedures based on the video recordings. Based on a prototype and a virtual model of a multifunction laparoscopic tool designed by the investigators that can interchange six different instrument tips through the tool's shaft, the range of tool change times is approximately 11-13 s. Using this figure, estimated time savings for the procedures analyzed ranged from 2.5 to over 32 min, and on average, total surgery time can be reduced by almost 17% by using the multifunction tool.

  4. Porosity dependence of terahertz emission of porous silicon investigated using reflection geometry terahertz time-domain spectroscopy

    NASA Astrophysics Data System (ADS)

    Mabilangan, Arvin I.; Lopez, Lorenzo P.; Faustino, Maria Angela B.; Muldera, Joselito E.; Cabello, Neil Irvin F.; Estacio, Elmer S.; Salvador, Arnel A.; Somintac, Armando S.

    2016-12-01

    Porosity-dependent terahertz emission of porous silicon (PSi) was studied. The PSi samples were fabricated via electrochemical etching of boron-doped (100) silicon in a solution containing 48% hydrofluoric acid, deionized water and absolute ethanol in a 1:3:4 volumetric ratio. The porosity was controlled by varying the supplied anodic current for each sample. The samples were then optically characterized via normal-incidence reflectance spectroscopy to obtain values for their respective refractive indices and porosities. Absorbance of each sample was also computed using the data from its respective reflectance spectrum. Terahertz emission of each sample was acquired through terahertz time-domain spectroscopy. A decreasing trend in the THz signal power was observed as the porosity of each PSi was increased. This was caused by the decrease in the absorption strength as the silicon crystallite size in the PSi was minimized.

  5. Cattaneo-Christov Heat Flux Model for MHD Three-Dimensional Flow of Maxwell Fluid over a Stretching Sheet.

    PubMed

    Rubab, Khansa; Mustafa, M

    2016-01-01

    This letter investigates the MHD three-dimensional flow of an upper-convected Maxwell (UCM) fluid over a bi-directional stretching surface by considering the Cattaneo-Christov heat flux model. This model has the ability to capture the characteristics of thermal relaxation time. The governing partial differential equations, even after employing the boundary layer approximations, are nonlinear. Accurate analytic solutions for the velocity and temperature distributions are computed through the well-known homotopy analysis method (HAM). It is observed that velocity decreases and temperature rises when a stronger magnetic field is applied. The penetration depth of temperature is a decreasing function of thermal relaxation time. The analysis for the classical Fourier heat conduction law can be obtained as a special case of the present work. To our knowledge, the Cattaneo-Christov heat flux model for a three-dimensional viscoelastic flow problem is introduced here for the first time.

  6. Are computer and cell phone use associated with body mass index and overweight? A population study among twin adolescents.

    PubMed

    Lajunen, Hanna-Reetta; Keski-Rahkonen, Anna; Pulkkinen, Lea; Rose, Richard J; Rissanen, Aila; Kaprio, Jaakko

    2007-02-26

    Overweight in children and adolescents has reached dimensions of a global epidemic during recent years. Simultaneously, information and communication technology use has rapidly increased. A population-based sample of Finnish twins born in 1983-1987 (N = 4098) was assessed by self-report questionnaires at 17 y during 2000-2005. The association of overweight (defined by Cole's BMI-for-age cut-offs) with computer and cell phone use and ownership was analyzed by logistic regression and their association with BMI by linear regression models. The effect of twinship was taken into account by correcting for clustered sampling of families. All models were adjusted for gender, physical exercise, and parents' education and occupational class. The proportion of adolescents who did not have a computer at home decreased from 18% to 8% from 2000 to 2005. Compared to them, having a home computer (without an Internet connection) was associated with a higher risk of overweight (odds ratio 2.3, 95% CI 1.4 to 3.8) and BMI (beta coefficient 0.57, 95% CI 0.15 to 0.98). However, having a computer with an Internet connection was not associated with weight status. Belonging to the highest quintile (OR 1.8 95% CI 1.2 to 2.8) and second-highest quintile (OR 1.6 95% CI 1.1 to 2.4) of weekly computer use was positively associated with overweight. The proportion of adolescents without a personal cell phone decreased from 12% to 1% across 2000 to 2005. There was a positive linear trend of increasing monthly phone bill with BMI (beta 0.18, 95% CI 0.06 to 0.30), but the association of a cell phone bill with overweight was very weak. Time spent using a home computer was associated with an increased risk of overweight. Cell phone use correlated weakly with BMI. Increasing use of information and communication technology may be related to the obesity epidemic among adolescents.

  7. A simple theory of back surface field /BSF/ solar cells

    NASA Technical Reports Server (NTRS)

    Von Roos, O.

    1978-01-01

    A theory of an n-p-p/+/ junction is developed, entirely based on Shockley's depletion layer approximation. Under the further assumption of uniform doping, the electrical characteristics of solar cells as a function of all relevant parameters (cell thickness, diffusion lengths, etc.) can quickly be ascertained with a minimum of computer time. Two effects contribute to the superior performance of a BSF cell (n-p-p/+/ junction) as compared to an ordinary solar cell (n-p junction). The sharing of the applied voltage between the two junctions (the n-p and the p-p/+/ junction) decreases the dark current, and the reflection of minority carriers by the built-in electric field of the p-p/+/ junction increases the short-circuit current. The theory predicts an increase in the open-circuit voltage (Voc) with a decrease in cell thickness. Although the short-circuit current decreases at the same time, the efficiency of the cell is virtually unaltered in going from a thickness of 200 microns to a thickness of 50 microns. The importance of this fact for space missions where large power-to-weight ratios are required is obvious.

  8. A new computational method for reacting hypersonic flows

    NASA Astrophysics Data System (ADS)

    Niculescu, M. L.; Cojocaru, M. G.; Pricop, M. V.; Fadgyas, M. C.; Pepelea, D.; Stoican, M. G.

    2017-07-01

    Hypersonic gas dynamics computations are challenging due to the difficulty of obtaining reliable and robust chemistry models, which are usually added to the Navier-Stokes equations. From the numerical point of view, it is very difficult to integrate the Navier-Stokes equations together with the chemistry model equations because these partial differential equations have very different characteristic time scales. For these reasons, almost all known finite volume methods quickly fail to solve this second-order partial differential system. Unfortunately, the heating of Earth reentry vehicles such as space shuttles and capsules is very closely linked to endothermic chemical reactions. A better prediction of the wall heat flux leads to a smaller safety coefficient for the thermal shield of a space reentry vehicle; therefore, the size of the thermal shield decreases and the payload increases. For these reasons, the present paper proposes a new computational method based on chemical equilibrium, which gives accurate predictions of hypersonic heating in order to support Earth reentry capsule design.

  9. A spatially localized architecture for fast and modular DNA computing

    NASA Astrophysics Data System (ADS)

    Chatterjee, Gourab; Dalchau, Neil; Muscat, Richard A.; Phillips, Andrew; Seelig, Georg

    2017-09-01

    Cells use spatial constraints to control and accelerate the flow of information in enzyme cascades and signalling networks. Synthetic silicon-based circuitry similarly relies on spatial constraints to process information. Here, we show that spatial organization can be a similarly powerful design principle for overcoming limitations of speed and modularity in engineered molecular circuits. We create logic gates and signal transmission lines by spatially arranging reactive DNA hairpins on a DNA origami. Signal propagation is demonstrated across transmission lines of different lengths and orientations and logic gates are modularly combined into circuits that establish the universality of our approach. Because reactions preferentially occur between neighbours, identical DNA hairpins can be reused across circuits. Co-localization of circuit elements decreases computation time from hours to minutes compared to circuits with diffusible components. Detailed computational models enable predictive circuit design. We anticipate our approach will motivate using spatial constraints for future molecular control circuit designs.

  10. Permittivity and conductivity parameter estimations using full waveform inversion

    NASA Astrophysics Data System (ADS)

    Serrano, Jheyston O.; Ramirez, Ana B.; Abreo, Sergio A.; Sadler, Brian M.

    2018-04-01

    Full waveform inversion of Ground Penetrating Radar (GPR) data is a promising strategy to estimate quantitative characteristics of the subsurface such as permittivity and conductivity. In this paper, we propose a methodology that uses time-domain Full Waveform Inversion (FWI) of 2D GPR data to obtain highly resolved images of the permittivity and conductivity parameters of the subsurface. FWI is an iterative method that requires a cost function to measure the misfit between observed and modeled data, a wave propagator to compute the modeled data, and an initial velocity model that is updated at each iteration until an acceptable decrease of the cost function is reached. The use of FWI with GPR is computationally expensive because it is based on the computation of full electromagnetic wave propagation. Also, the commercially available acquisition systems use only one transmitter and one receiver antenna at zero offset, requiring a large number of shots to scan a single line.
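
    The iterative structure described above can be summarized in a few lines. In the sketch below, the forward propagator and gradient routine are placeholders supplied by the caller (a toy linear operator stands in for the 2D electromagnetic solver), so it illustrates the loop only, not the authors' implementation.

      # Schematic of the FWI loop described above (illustrative only): the forward
      # propagator and gradient routine are placeholders supplied by the caller,
      # not the authors' 2D electromagnetic solver and adjoint computation.
      import numpy as np

      def misfit(d_obs, d_mod):
          """Least-squares misfit between observed and modeled traces."""
          return 0.5 * np.sum((d_mod - d_obs) ** 2)

      def fwi(d_obs, model0, forward_model, gradient, n_iter=200, step=1e-2, tol=1e-6):
          """Generic descent loop: update the model until the cost decrease is acceptable."""
          m, cost_prev = model0.copy(), np.inf
          for _ in range(n_iter):
              d_mod = forward_model(m)
              cost = misfit(d_obs, d_mod)
              if cost_prev - cost < tol:
                  break
              m -= step * gradient(m, d_mod - d_obs)    # e.g., adjoint-state gradient
              cost_prev = cost
          return m

      # Toy linear "propagator" just to exercise the loop (purely illustrative).
      rng = np.random.default_rng(0)
      G = rng.normal(size=(50, 10))
      m_true = rng.normal(size=10)
      m_est = fwi(G @ m_true, np.zeros(10), lambda m: G @ m, lambda m, r: G.T @ r)
      print("relative model error:", np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))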

  11. Modeling the fusion of cylindrical bioink particles in post bioprinting structure formation

    NASA Astrophysics Data System (ADS)

    McCune, Matt; Shafiee, Ashkan; Forgacs, Gabor; Kosztin, Ioan

    2015-03-01

    Cellular Particle Dynamics (CPD) is an effective computational method to describe the shape evolution and biomechanical relaxation processes in multicellular systems. Thus, CPD is a useful tool to predict the outcome of post-printing structure formation in bioprinting. The predictive power of CPD has been demonstrated for multicellular systems composed of spherical bioink units. Experiments and computer simulations were related through an independently developed theoretical formalism based on continuum mechanics. Here we generalize the CPD formalism to (i) include cylindrical bioink particles often used in specific bioprinting applications, (ii) describe the more realistic experimental situation in which both the length and the volume of the cylindrical bioink units decrease during post-printing structure formation, and (iii) directly connect CPD simulations to the corresponding experiments without the need of the intermediate continuum theory inherently based on simplifying assumptions. Work supported by NSF [PHY-0957914]. Computer time provided by the University of Missouri Bioinformatics Consortium.

  12. The effect of diagnosis-specific computerized discharge instructions on 72-hour return visits to the pediatric emergency department.

    PubMed

    Lawrence, Laurie M; Jenkins, Cathy A; Zhou, Chuan; Givens, Timothy G

    2009-11-01

    The number of patients returning to the pediatric emergency department (PED) within 72 hours of discharge is frequently cited as a benchmark for quality patient care. The purpose of this study was to determine whether the introduction of diagnosis-specific computer-generated discharge instructions would decrease the number of medically unnecessary return visits to the PED. A retrospective chart review of patients who returned to the PED within 72 hours of discharge was performed. Charts were reviewed from 2 comparable periods: September 2004 to February 2005, when handwritten discharge instructions were issued to each patient, and September 2005 to February 2006, when each patient received computer-generated diagnosis-specific discharge instructions. The patient's age, primary care provider, insurance status, chief complaint, vital signs, history, physical examination, plan of care, and diagnosis at each visit were recorded. Cases were excluded if the patient left against medical advice or without being seen, was admitted to the hospital on the first visit, or had incomplete or missing records. The medical necessity of the return visit was rated as "yes," "no," or "indeterminate" based on review of the visit noting reason for return, history and physical examination, diagnosis, and interventions or changes in the initial care plan. Of all return visits to the PED within 72 hours of discharge, 13% were deemed unnecessary for patients receiving handwritten instructions compared with 15% for patients receiving computer-generated instructions (P = 0.5, not significant). For each additional year of age, the return visit was 1.07 times as likely to be medically appropriate (95% confidence interval, 1.03-1.12; P = 0.002). Patients who returned to the PED more than once were 2.69 times more likely to have a medically appropriate visit as were those with only 1 return visit (95% confidence interval, 0.95-7.58; P = 0.062). Computer-generated diagnosis-specific discharge instructions do not decrease the number of medically unnecessary repeat visits to the PED.

  13. A novel method to measure conspicuous facial pores using computer analysis of digital-camera-captured images: the effect of glycolic acid chemical peeling.

    PubMed

    Kakudo, Natsuko; Kushida, Satoshi; Tanaka, Nobuko; Minakata, Tatsuya; Suzuki, Kenji; Kusumoto, Kenji

    2011-11-01

    Chemical peeling is becoming increasingly popular for skin rejuvenation in dermatological esthetic surgery. Conspicuous facial pores are one of the most frequently encountered skin problems in women of all ages. This study was performed to analyze the effectiveness of reducing conspicuous facial pores using glycolic acid chemical peeling (GACP) based on a novel computer analysis of digital-camera-captured images. GACP was performed a total of five times at 2-week intervals in 22 healthy women. Computerized image analysis of conspicuous, open, and darkened facial pores was performed using the Robo Skin Analyzer CS 50. The number of conspicuous facial pores decreased significantly in 19 (86%) of the 22 subjects, with a mean improvement rate of 34.6%. The number of open pores decreased significantly in 16 (72%) of the subjects, with a mean improvement rate of 11.0%. The number of darkened pores decreased significantly in 18 (81%) of the subjects, with a mean improvement rate of 34.3%. GACP significantly reduces the number of conspicuous facial pores. The Robo Skin Analyzer CS 50 is useful for the quantification and analysis of 'pore enlargement', a subtle finding in dermatological esthetic surgery. © 2011 John Wiley & Sons A/S.

  14. Impact of number of repeated scans on model observer performance for a low-contrast detection task in computed tomography.

    PubMed

    Ma, Chi; Yu, Lifeng; Chen, Baiyu; Favazza, Christopher; Leng, Shuai; McCollough, Cynthia

    2016-04-01

    Channelized Hotelling observer (CHO) models have been shown to correlate well with human observers for several phantom-based detection/classification tasks in clinical computed tomography (CT). A large number of repeated scans were used to achieve an accurate estimate of the model's template. The purpose of this study is to investigate how the experimental and CHO model parameters affect the minimum required number of repeated scans. A phantom containing 21 low-contrast objects was scanned on a 128-slice CT scanner at three dose levels. Each scan was repeated 100 times. For each experimental configuration, the low-contrast detectability, quantified as the area under receiver operating characteristic curve, Az, was calculated using a previously validated CHO with randomly selected subsets of scans, ranging from 10 to 100. Using Az from the 100 scans as the reference, the accuracy from a smaller number of scans was determined. Our results demonstrated that the minimum number of repeated scans increased when the radiation dose level decreased, object size and contrast level decreased, and the number of channels increased. As a general trend, it increased as the low-contrast detectability decreased. This study provides a basis for the experimental design of task-based image quality assessment in clinical CT using CHO.

  15. Impact of number of repeated scans on model observer performance for a low-contrast detection task in computed tomography

    PubMed Central

    Ma, Chi; Yu, Lifeng; Chen, Baiyu; Favazza, Christopher; Leng, Shuai; McCollough, Cynthia

    2016-01-01

    Channelized Hotelling observer (CHO) models have been shown to correlate well with human observers for several phantom-based detection/classification tasks in clinical computed tomography (CT). A large number of repeated scans were used to achieve an accurate estimate of the model’s template. The purpose of this study is to investigate how the experimental and CHO model parameters affect the minimum required number of repeated scans. A phantom containing 21 low-contrast objects was scanned on a 128-slice CT scanner at three dose levels. Each scan was repeated 100 times. For each experimental configuration, the low-contrast detectability, quantified as the area under receiver operating characteristic curve, Az, was calculated using a previously validated CHO with randomly selected subsets of scans, ranging from 10 to 100. Using Az from the 100 scans as the reference, the accuracy from a smaller number of scans was determined. Our results demonstrated that the minimum number of repeated scans increased when the radiation dose level decreased, object size and contrast level decreased, and the number of channels increased. As a general trend, it increased as the low-contrast detectability decreased. This study provides a basis for the experimental design of task-based image quality assessment in clinical CT using CHO. PMID:27284547

  16. An effective support system of emergency medical services with tablet computers.

    PubMed

    Yamada, Kosuke C; Inoue, Satoshi; Sakamoto, Yuichiro

    2015-02-27

    There were over 5,000,000 ambulance dispatches in Japan during 2010, and transportation times have been increasing, exceeding 37 minutes from dispatch to hospital arrival. One way to reduce transportation time by ambulance is to shorten the time spent searching for an appropriate facility/hospital during the prehospital phase. Although an information system linking medical institutions and emergency medical services (EMS) was established in 2003 in Saga Prefecture, Japan, it had not been utilized efficiently. In April 2011, the Saga Prefectural Government renewed the previous system, making it the first real-time support system in Japan that can efficiently manage emergency demand and acceptance. The objective of this study was to evaluate whether the new system promotes efficient emergency transportation for critically ill patients and provides valuable epidemiological data. The new system enables both the emergency personnel in the ambulance, or at the scene, and the medical staff in each hospital to share up-to-date information about available hospitals by means of cloud computing. All 55 ambulances in Saga are equipped with tablet computers connected through third-generation/long-term evolution (3G/LTE) networks. When the emergency personnel arrive on the scene and discern the type of patient's illness, they can search for an appropriate facility/hospital with their tablet computer based on the patient's symptoms and available medical specialists. Data were collected prospectively over a three-year period from April 1, 2011 to March 31, 2013. The transportation time by ambulance in Saga was shortened for the first time since the statistics were first kept in 1999; the mean time was 34.3 minutes in 2010 (based on administrative statistics) and 33.9 minutes (95% CI 33.6-34.1) in 2011. The ratio of transportation to the tertiary care facilities in Saga decreased by 3.12% from the year before, from 32.7% in 2010 (regional average) to 29.58% (9085/30,709) in 2011. The system entry completion rate by the emergency personnel was 100.00% (93,110/93,110) and by the medical staff was 46.11% (14,159/30,709) to 47.57% (14,639/30,772) over the three-year period. Finally, the new system reduced operational costs by 40,000,000 yen (about $400,000 US) a year. The transportation time by ambulance was shorter following the implementation of the tablet computers in the current EMS support system in Saga Prefecture, Japan, and cloud computing reduced the cost of the EMS system.

  17. Conditioning of MVM '73 radio-tracking data

    NASA Technical Reports Server (NTRS)

    Koch, R. E.; Chao, C. C.; Winn, F. B.; Yip, K. W.

    1974-01-01

    An extensive effort was undertaken to edit Mariner 10 radiometric tracking data. Interactive computer graphics were used for the first time by an interplanetary mission to detect blunder points and spurious signatures in the data. Interactive graphics improved the former process time by a factor of 10 to 20, while increasing reliability. S/X dual Doppler data was used for the first time to calibrate charged particles in the tracking medium. Application of the charged particle calibrations decreased the orbit determination error for a short data arc following the 16 March 1974 maneuver by about 80%. A new model was developed to calibrate the earth's troposphere with seasonal adjustments. The new model has a 2% accuracy and is 5 times better than the old model.

  18. Computational study of noise in a large signal transduction network.

    PubMed

    Intosalmi, Jukka; Manninen, Tiina; Ruohonen, Keijo; Linne, Marja-Leena

    2011-06-21

    Biochemical systems are inherently noisy due to the discrete reaction events that occur in a random manner. Although noise is often perceived as a disturbing factor, the system might actually benefit from it. In order to understand the role of noise better, its quality must be studied in a quantitative manner. Computational analysis and modeling play an essential role in this demanding endeavor. We implemented a large nonlinear signal transduction network combining protein kinase C, mitogen-activated protein kinase, phospholipase A2, and β isoform of phospholipase C networks. We simulated the network in 300 different cellular volumes using the exact Gillespie stochastic simulation algorithm and analyzed the results in both the time and frequency domain. In order to perform simulations in a reasonable time, we used modern parallel computing techniques. The analysis revealed that time and frequency domain characteristics depend on the system volume. The simulation results also indicated that there are several kinds of noise processes in the network, all of them representing different kinds of low-frequency fluctuations. In the simulations, the power of noise decreased on all frequencies when the system volume was increased. We concluded that basic frequency domain techniques can be applied to the analysis of simulation results produced by the Gillespie stochastic simulation algorithm. This approach is suited not only to the study of fluctuations but also to the study of pure noise processes. Noise seems to have an important role in biochemical systems and its properties can be numerically studied by simulating the reacting system in different cellular volumes. Parallel computing techniques make it possible to run massive simulations in hundreds of volumes and, as a result, accurate statistics can be obtained from computational studies. © 2011 Intosalmi et al; licensee BioMed Central Ltd.
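
    For reference, the exact Gillespie direct method used here has a compact form. The sketch below applies it to a toy birth-death system rather than the full PKC/MAPK/PLA2/PLC network; the species, rates and volume are illustrative assumptions.

      # Minimal Gillespie direct-method sketch (illustrative; the species, rates and
      # volume below are a toy birth-death example, not the study's PKC/MAPK/PLA2/PLC
      # network).
      import numpy as np

      def gillespie(x0, stoich, propensities, t_end, seed=0):
          """Exact stochastic simulation: x0 initial counts, stoich[j] the state change
          of reaction j, propensities(x) the reaction rates a_j(x)."""
          rng = np.random.default_rng(seed)
          t, x = 0.0, np.array(x0, dtype=int)
          times, states = [t], [x.copy()]
          while t < t_end:
              a = propensities(x)
              a0 = a.sum()
              if a0 <= 0:
                  break
              t += rng.exponential(1.0 / a0)            # waiting time to the next reaction
              j = rng.choice(len(a), p=a / a0)          # which reaction fires
              x += stoich[j]
              times.append(t)
              states.append(x.copy())
          return np.array(times), np.array(states)

      # Birth-death example: 0 -> X at rate k1*V, X -> 0 at rate k2*X; mean ~ k1*V/k2.
      V, k1, k2 = 10.0, 5.0, 0.1
      stoich = np.array([[+1], [-1]])
      t, s = gillespie([0], stoich, lambda x: np.array([k1 * V, k2 * x[0]]), 100.0)
      print("late-time mean copy number:", s[len(s) // 2:, 0].mean())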

  19. Assessing self-care and social function using a computer adaptive testing version of the Pediatric Evaluation of Disability Inventory

    PubMed Central

    Coster, Wendy J.; Haley, Stephen M.; Ni, Pengsheng; Dumas, Helene M.; Fragala-Pinkham, Maria A.

    2009-01-01

    Objective: To examine score agreement, validity, precision, and response burden of a prototype computer adaptive testing (CAT) version of the Self-Care and Social Function scales of the Pediatric Evaluation of Disability Inventory (PEDI) compared to the full-length version of these scales. Design: Computer simulation analysis of cross-sectional and longitudinal retrospective data; cross-sectional prospective study. Settings: Pediatric rehabilitation hospital, including inpatient acute rehabilitation, day school program, outpatient clinics; community-based day care, preschool, and children’s homes. Participants: Four hundred sixty-nine children with disabilities and 412 children with no disabilities (analytic sample); 38 children with disabilities and 35 children without disabilities (cross-validation sample). Interventions: Not applicable. Main Outcome Measures: Summary scores from prototype CAT applications of each scale using 15-, 10-, and 5-item stopping rules; scores from the full-length Self-Care and Social Function scales; time (in seconds) to complete assessments and respondent ratings of burden. Results: Scores from both computer simulations and field administration of the prototype CATs were highly consistent with scores from full-length administration (all r’s between .94 and .99). Using computer simulation of retrospective data, discriminant validity and sensitivity to change of the CATs closely approximated that of the full-length scales, especially when the 15- and 10-item stopping rules were applied. In the cross-validation study the time to administer both CATs was 4 minutes, compared to over 16 minutes to complete the full-length scales. Conclusions: Self-care and Social Function score estimates from CAT administration are highly comparable to those obtained from full-length scale administration, with small losses in validity and precision and substantial decreases in administration time. PMID:18373991

  20. Modelling Pre-eruptive Progressive Damage in Basaltic Volcanoes: Consequences for the Pre-eruptive Process

    NASA Astrophysics Data System (ADS)

    Got, J. L.; Amitrano, D.; Carrier, A.; Marsan, D.; Jouanne, F.; Vogfjord, K. S.

    2017-12-01

    At Grimsvötn volcano, high-quality earthquake and continuous GPS data were recorded by the Icelandic Meteorological Office during its 2004-2011 inter-eruptive period and exhibited remarkable patterns: acceleration of the cumulative earthquake number, and a 2-year exponential decrease in displacement rate followed by a 4-year constant inflation rate. We proposed a model with one magma reservoir in a non-linear elastic damaging edifice, with incompressible magma and a constant pressure at the base of the magma conduit. We first modelled seismicity rate and damage as a function of time, and showed that Kachanov's elastic brittle damage law may be used to express the decrease of the effective shear modulus with time. We then derived simple analytical expressions for the magma reservoir overpressure and the surface displacement as a function of time. We obtained a very good fit to the seismicity and surface displacement data by adjusting only three phenomenological parameters, and computed the magma reservoir overpressure, magma flow and strain power as a function of time. The overpressure decrease is controlled by damage and shear modulus decrease. Displacement increases, although overpressure is decreasing, because the shear modulus decreases more than the overpressure. Normalized strain power reaches a maximum value of 0.25. This maximum is a physical limit, after which the elasticity laws are no longer valid, earthquakes cluster, and the cumulative number of earthquakes departs from the model. State variable extrema provide four reference times that may be used to assess the mechanical state and dynamics of the volcanic edifice. We also performed spatial modelling of the progressive damage and strain localization around a pressurized magma reservoir. We used Kachanov's damage law and finite element modelling of an initially elastic volcanic edifice pressurized by a spherical magma reservoir, with a constant pressure in the reservoir and various external boundary conditions. At each node of the model, Young's modulus is decreased if the deviatoric stress locally reaches the Mohr-Coulomb plastic threshold. For a compressive horizontal stress, the result shows a complex strain localization pattern, with reverse and normal faulting very similar to what is obtained from analog modelling and observed at volcanic resurgent domes.
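
    A hedged sketch of a Kachanov-type damage law of the kind invoked above is given below: damage growth under constant load progressively reduces the effective shear modulus. The exponent, time constant and loading are illustrative assumptions, not the three phenomenological parameters fitted in this study.

      # Hedged sketch of a Kachanov-type brittle damage law: damage D grows under a
      # constant load and the effective shear modulus mu_eff = mu0 * (1 - D) decays.
      # The exponent, time constant and loading are illustrative assumptions, not
      # the phenomenological parameters fitted in this study.
      import numpy as np

      def kachanov_damage(t, sigma_ratio=0.5, n=3.0, tau=1.0):
          """Explicit Euler integration of dD/dt = (1/tau) * (sigma_ratio / (1 - D))**n, D(0) = 0."""
          D = np.zeros_like(t)
          for i in range(1, len(t)):
              rate = (sigma_ratio / (1.0 - D[i - 1])) ** n / tau
              D[i] = min(D[i - 1] + rate * (t[i] - t[i - 1]), 0.999)   # cap just below failure
          return D

      t = np.linspace(0.0, 5.0, 501)
      mu0 = 30e9                                        # Pa, illustrative value
      mu_eff = mu0 * (1.0 - kachanov_damage(t))         # accelerating decrease of rigidity
      print("final modulus fraction:", mu_eff[-1] / mu0)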

  1. Consumers' behavior in quantitative microbial risk assessment for pathogens in raw milk: Incorporation of the likelihood of consumption as a function of storage time and temperature.

    PubMed

    Crotta, Matteo; Paterlini, Franco; Rizzi, Rita; Guitian, Javier

    2016-02-01

    Foodborne disease as a result of raw milk consumption is an increasing concern in Western countries. Quantitative microbial risk assessment models have been used to estimate the risk of illness due to different pathogens in raw milk. In these models, the duration and temperature of storage before consumption have a critical influence on the final outcome of the simulations and are usually described and modeled as independent distributions in the consumer phase module. We hypothesize that this assumption can result in the computation, during simulations, of extreme scenarios that ultimately lead to an overestimation of the risk. In this study, a sensorial analysis was conducted to replicate consumers' behavior. The results of the analysis were used to establish, by means of a logistic model, the relationship between time-temperature combinations and the probability that a serving of raw milk is actually consumed. To assess our hypothesis, 2 recently published quantitative microbial risk assessment models quantifying the risks of listeriosis and salmonellosis related to the consumption of raw milk were implemented. First, the default settings described in the publications were kept; second, the likelihood of consumption as a function of the length and temperature of storage was included. When results were compared, the density of computed extreme scenarios decreased significantly in the modified model; consequently, the probability of illness and the expected number of cases per year also decreased. Reductions of 11.6 and 12.7% in the proportion of computed scenarios in which a contaminated milk serving was consumed were observed for the first and the second study, respectively. Our results confirm that overlooking the time-temperature dependency may lead to a substantial overestimation of the risk. Furthermore, we provide estimates of this dependency that could easily be implemented in future quantitative microbial risk assessment models of raw milk pathogens. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
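
    To illustrate how the likelihood of consumption can be folded into the consumer-phase module, a minimal sketch follows; the logistic coefficients are invented for illustration, whereas the study estimates them from the sensorial analysis.

      # Hedged sketch of the consumer-phase idea: a logistic model for the probability
      # that a raw-milk serving stored for time_h hours at temp_c degrees Celsius is
      # actually consumed. The coefficients are invented for illustration; the study
      # estimates them from the sensorial analysis.
      import numpy as np

      def p_consumed(time_h, temp_c, b0=6.0, b_time=-0.03, b_temp=-0.25):
          """Logistic regression on storage time and temperature."""
          z = b0 + b_time * time_h + b_temp * temp_c
          return 1.0 / (1.0 + np.exp(-z))

      # Inside a QMRA Monte Carlo loop, a sampled (time, temperature) scenario would
      # only reach the dose-response step with this probability, thinning out the
      # extreme storage scenarios that inflate the risk estimate.
      rng = np.random.default_rng(1)
      time_h = rng.uniform(0, 72, 10_000)       # sampled storage times (hours)
      temp_c = rng.uniform(2, 25, 10_000)       # sampled storage temperatures (deg C)
      keep = rng.random(10_000) < p_consumed(time_h, temp_c)
      print("fraction of servings retained:", keep.mean())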

  2. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modelling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial decrease of the required number of function evaluations for detecting the optimal management policy, using an innovative, surrogate-assisted global optimization approach.

  3. An analytical study of nitrogen oxides and carbon monoxide emissions in hydrocarbon combustion with added nitrogen - Preliminary results

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1980-01-01

    The influence of ground-based gas turbine combustor operating conditions and fuel-bound nitrogen (FBN) found in coal-derived liquid fuels on the formation of nitrogen oxides and carbon monoxide is investigated. Analytical predictions of NOx and CO concentrations are obtained for a two-stage, adiabatic, perfectly-stirred reactor operating on a propane-air mixture, with primary equivalence ratios from 0.5 to 1.7, secondary equivalence ratios of 0.5 or 0.7, primary stage residence times from 12 to 20 msec, secondary stage residence times of 1, 2 and 3 msec and fuel nitrogen contents of 0.5, 1.0 and 2.0 wt %. Minimum nitrogen oxide but maximum carbon monoxide formation is obtained at primary zone equivalence ratios between 1.4 and 1.5, with percentage conversion of FBN to NOx decreasing with increased fuel nitrogen content. Additional secondary dilution is observed to reduce final pollutant concentrations, with NOx concentration independent of secondary residence time and CO decreasing with secondary residence time; primary zone residence time is not observed to affect final NOx and CO concentrations significantly. Finally, comparison of computed results with experimental values shows a good semiquantitative agreement.

  4. Television, computer use, physical activity, diet and fatness in Australian adolescents.

    PubMed

    Burke, Valerie; Beilin, Lawrie J; Durkin, Kevin; Stritzke, Werner G K; Houghton, Stephen; Cameron, Charmaine A

    2006-01-01

    To examine sedentary behaviours (including television viewing, playing computer games and computer use), diet, exercise and fitness in relation to overweight/obesity in Australian adolescents. Questionnaires elicited food frequency data, time spent in TV-viewing, using computers, other sedentary occupations and physical activity recall. Weight, height and fitness (laps completed in the Leger test) were measured. Among 281 boys and 321 girls, mean age 12 years (SD 0.9), 56 boys (20.0%) and 70 girls (23.3%) were overweight/obese. Greater fitness was associated with decreased risk of overweight/obesity in boys (Odds ratio [OR] 0.74; 95% CI 0.55, 0.99) and girls (OR 0.93; 95% CI 0.91, 0.99). TV-viewing predicted increased risk in boys (OR 1.04; 95% CI 1.01, 1.06) and decreased risk in girls (OR 0.99; 95% CI 0.96, 0.99). Computer use, video games, and other sedentary behaviours were not significantly related to risk of overweight/obesity. Vegetable intake was associated with lower risk in boys (OR 0.98; 95% CI 0.97, 0.99); greater risk was associated with lower fat intake in boys and girls, lower consumption of energy-dense snacks in boys (OR 0.74; 95% CI 0.62, 0.88) and greater intake of vegetables in girls (OR 1.02; 95% CI 1.00, 1.03), suggesting dieting or knowledge of favourable dietary choices in overweight/obese children. Among these adolescents, fitness was negatively related to risk for overweight/obesity in boys and girls. TV-viewing was a positive predictor in boys and a negative predictor in girls but the effect size was small; other sedentary behaviours did not predict risk.

  5. Impact of an in-house emergency radiologist on report turnaround time.

    PubMed

    Lamb, Leslie; Kashani, Paria; Ryan, John; Hebert, Guy; Sheikh, Adnan; Thornhill, Rebecca; Fasih, Najla

    2015-01-01

    One of the many challenges facing emergency departments (EDs) across North America is timely access to emergency radiology services. Academic institutions, which are typically also regional referral centres, frequently require cross-sectional studies to be performed 24 hours a day with expedited final reports to accelerate patient care and ED flow. The purpose of this study was to determine if the presence of an in-house radiologist, in addition to a radiology resident dedicated to the ED, had a significant impact on report turnaround time. Preliminary and final report turnaround times, provided by the radiology resident and staff, respectively, for patients undergoing computed tomography or ultrasonography of their abdomen/pelvis in 2008 (before the implementation of emergency radiology in-house staff service) were compared to those performed during the same time frame in 2009 and 2010 (after staffing protocols were changed). A total of 1,624 reports were reviewed. Overall, there was no statistically significant decrease in the preliminary report turnaround times between 2008 and 2009 (p = 0.1102), 2009 and 2010 (p = 0.6232), or 2008 and 2010 (p = 0.0890), although times consistently decreased from a median of 2.40 hours to 2.08 hours to 2.05 hours (2008 to 2009 to 2010). There was a statistically significant decrease in final report turnaround times between 2008 and 2009 (p < 0.0001), 2009 and 2010 (p < 0.0011), and 2008 and 2010 (p < 0.0001). Median final report times decreased from 5.00 hours to 3.08 hours to 2.75 hours in 2008, 2009, and 2010, respectively. There was also a significant decrease in the time interval between preliminary and final reports between 2008 and 2009 (p < 0.0001) and 2008 and 2010 (p < 0.0001) but no significant change between 2009 and 2010 (p = 0.4144). Our results indicate that the presence of a dedicated ED radiologist significantly reduces final report turnaround time and thus may positively impact the time to ED patient disposition. Patient care is improved when attending radiologists are immediately available to read complex films, both in terms of health care outcomes and regarding the need for repeat testing. Providing emergency physicians with accurate imaging findings as rapidly as possible facilitates effective and timely management and thus optimizes patient care.

  6. Viscoelastic Finite Difference Modeling Using Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.

    2014-12-01

    Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration, such as full-waveform inversion. This paper explores the use of Graphics Processing Units (GPUs) to compute a time-domain finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether adopting GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely available 2D software of Bohlen (2002), provided under the GNU General Public License (GPL). The implementation uses a second-order centred-difference scheme to approximate time derivatives and staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel, written using the Message Passing Interface (MPI), and thus supports simulations of vast seismic models on a cluster of CPUs. To port the code of Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies with the order of the finite-difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by kernel overheads and delays introduced by memory transfers to and from the GPU through the PCI-E bus. These tests indicate that GPU memory size and slow memory transfers are the limiting factors of our GPU implementation. The results show the benefits of using GPUs instead of CPUs for time-domain finite-difference seismic simulations. The reductions in computation time and in hardware costs are significant and open the door for new approaches in seismic inversion.
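
    To illustrate the kind of stencil computation being ported to the GPU, the sketch below advances the 2D scalar wave equation with the same second-order centred-difference time stepping. It is a plain NumPy stand-in, not the viscoelastic, staggered-grid OpenCL/MPI code of the paper, and all grid parameters are illustrative.

    ```python
    # Minimal NumPy sketch of second-order centred-difference time stepping for
    # the 2-D scalar wave equation; it only illustrates the stencil update
    # pattern, not the paper's viscoelastic staggered-grid implementation.
    import numpy as np

    nx = nz = 200
    dx = 5.0                      # grid spacing (m), illustrative
    dt = 5e-4                     # time step (s), chosen below the CFL limit
    c = np.full((nz, nx), 2000.0) # wave speed (m/s), homogeneous for simplicity

    p_prev = np.zeros((nz, nx))
    p = np.zeros((nz, nx))
    p[nz // 2, nx // 2] = 1.0     # impulsive source at the centre

    for _ in range(500):
        lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
               np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p) / dx**2
        p_next = 2.0 * p - p_prev + (c * dt)**2 * lap
        p_prev, p = p, p_next

    print("max |p| after 500 steps:", np.abs(p).max())
    ```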

  7. Does Preinterventional Flat-Panel Computer Tomography Pooled Blood Volume Mapping Predict Final Infarct Volume After Mechanical Thrombectomy in Acute Cerebral Artery Occlusion?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, Marlies, E-mail: marlies.wagner@kgu.de; Kyriakou, Yiannis, E-mail: yiannis.kyriakou@siemens.com; Mesnil de Rochemont, Richard du, E-mail: mesnil@em.uni-frankfurt.de

    2013-08-01

    Purpose: Decreased cerebral blood volume is known to be a predictor of final infarct volume in acute cerebral artery occlusion. To evaluate the predictability of final infarct volume in patients with acute occlusion of the middle cerebral artery (MCA) or the distal internal carotid artery (ICA) and successful endovascular recanalization, pooled blood volume (PBV) was measured using flat-panel detector computed tomography (FPD CT). Materials and Methods: Twenty patients with acute unilateral occlusion of the MCA or distal ICA without demarcated infarction, as proven by CT at admission, and successful endovascular thrombectomy (Thrombolysis in Cerebral Infarction score, TICI, 2b or 3) were included. Cerebral PBV maps were acquired from each patient immediately before endovascular thrombectomy. Twenty-four hours after recanalization, each patient underwent multislice CT to visualize final infarct volume. The extent of the areas of decreased PBV was compared with the final infarct volume proven by follow-up CT the next day. Results: In 15 of 20 patients, areas of distinct PBV decrease corresponded to final infarct volume. In 5 patients, areas of decreased PBV overestimated the final extension of ischemia, probably due to inappropriate timing of data acquisition and misery perfusion. Conclusion: PBV mapping using FPD CT is a promising tool to predict areas of irrecoverable brain parenchyma in acute thromboembolic stroke. Further validation is necessary before routine use for decision making for interventional thrombectomy.

  8. Aligner optimization increases accuracy and decreases compute times in multi-species sequence data.

    PubMed

    Robinson, Kelly M; Hawkins, Aziah S; Santana-Cruz, Ivette; Adkins, Ricky S; Shetty, Amol C; Nagaraj, Sushma; Sadzewicz, Lisa; Tallon, Luke J; Rasko, David A; Fraser, Claire M; Mahurkar, Anup; Silva, Joana C; Dunning Hotopp, Julie C

    2017-09-01

    As sequencing technologies have evolved, the tools to analyze these sequences have made similar advances. However, for multi-species samples, we observed important and adverse differences in alignment specificity and computation time for bwa-mem (Burrows-Wheeler aligner-maximum exact matches) relative to bwa-aln. Therefore, we sought to optimize bwa-mem for alignment of data from multi-species samples in order to reduce alignment time and increase the specificity of alignments. In the multi-species cases examined, there was one majority member (i.e. Plasmodium falciparum or Brugia malayi) and one minority member (i.e. human or the Wolbachia endosymbiont wBm) of the sequence data. Increasing the bwa-mem seed length from the default value reduced the number of read pairs from the majority sequence member that incorrectly aligned to the reference genome of the minority sequence member. Combining both source genomes into a single reference genome increased the specificity of mapping, while also reducing the central processing unit (CPU) time. In Plasmodium, at a seed length of 18 nt, 24.1% of reads mapped to the human genome using 1.7±0.1 CPU hours, while 83.6% of reads mapped to the Plasmodium genome using 0.2±0.0 CPU hours (total: 107.7% of reads mapping, in 1.9±0.1 CPU hours). In contrast, 97.1% of the reads mapped to a combined Plasmodium-human reference in only 0.7±0.0 CPU hours. Overall, the results suggest that combining all references into a single reference database and using a 23 nt seed length reduces the computational time while maximizing specificity. Similar results were found for simulated sequence reads from a mock metagenomic data set. We found similar improvements to computation time in a publicly available human-only data set.
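
    A minimal sketch of the recommended workflow, combining the two reference genomes and raising the bwa-mem minimum seed length (the -k option) to 23 nt, is given below. The file names are hypothetical and bwa is assumed to be installed and on the PATH.

    ```python
    # Sketch of the strategy described above: build one combined reference and
    # raise the bwa-mem minimum seed length (-k). File names are hypothetical.
    import subprocess

    # 1) Concatenate the two genomes into a single reference (done once).
    with open("combined_ref.fa", "wb") as out:
        for fasta in ("plasmodium.fa", "human.fa"):
            with open(fasta, "rb") as src:
                out.write(src.read())

    subprocess.run(["bwa", "index", "combined_ref.fa"], check=True)

    # 2) Align read pairs against the combined reference with a 23 nt seed length.
    with open("sample.sam", "w") as sam:
        subprocess.run(
            ["bwa", "mem", "-k", "23", "-t", "8",
             "combined_ref.fa", "reads_1.fq.gz", "reads_2.fq.gz"],
            stdout=sam, check=True)
    ```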

  9. Molecular simulations and experimental studies of solubility and diffusivity for pure and mixed gases of H2, CO2, and Ar absorbed in the ionic liquid 1-n-hexyl-3-methylimidazolium bis(trifluoromethylsulfonyl)amide ([hmim][Tf2N]).

    PubMed

    Shi, Wei; Sorescu, Dan C; Luebke, David R; Keller, Murphy J; Wickramanayake, Shan

    2010-05-20

    Classical molecular dynamics and Monte Carlo simulations are used to calculate the self-diffusivity and solubility of pure and mixed CO(2), H(2), and Ar gases absorbed in the ionic liquid 1-n-hexyl-3-methylimidazolium bis(trifluoromethylsulfonyl)amide ([hmim][Tf(2)N]). Overall, the computed absorption isotherms, Henry's law constants, and partial molar enthalpies for pure H(2) agree well with the experimental data obtained by Maurer et al. [J. Chem. Eng. Data 2006, 51, 1364] and the experimental values determined in this work. However, the agreement is poor between the simulations and the experimental data by Noble et al. [Ind. Eng. Chem. Res. 2008, 47, 3453] and Costa Gomes [J. Chem. Eng. Data 2007, 52, 472] at high temperatures. The computed H(2) permeability values are in good agreement with the experimental data at 313 K obtained by Luebke et al. [J. Membr. Sci. 2007, 298, 41; ibid, 2008, 322, 28], but about three times larger than the experimental value at 573 K from the same group. Our computed H(2) solubilities using different H(2) potential models have similar values and solute polarizations were found to have a negligible effect on the predicted gas solubilities for both the H(2) and Ar. The interaction between H(2) and the ionic liquid is weak, about three times smaller than between the ionic liquid and Ar and six times smaller than that of CO(2) with the ionic liquid, results that are consistent with a decreasing solubility from CO(2) to Ar and to H(2). The molar volume of the ionic liquid was found to be the determining factor for the H(2) solubility. For mixed H(2) and Ar gases, the solubilities for both solutes decrease compared to the respective pure gas solubilities. For mixed gases of CO(2) and H(2), the solubility selectivity of CO(2) over H(2) decreases from about 30 at 313 K to about 3 at 573 K. For the permeability, the simulated values for CO(2) in [hmim][Tf(2)N] are about 20-60% different than the experimental data by Luebke et al. [J. Membr. Sci. 2008, 322, 28].

  10. Efficient biprediction decision scheme for fast high efficiency video coding encoding

    NASA Astrophysics Data System (ADS)

    Park, Sang-hyo; Lee, Seung-ho; Jang, Euee S.; Jun, Dongsan; Kang, Jung-Won

    2016-11-01

    An efficient biprediction decision scheme for high efficiency video coding (HEVC) is proposed for fast-encoding applications. For low-delay video applications, bidirectional prediction can be used to increase compression performance efficiently with previous reference frames. However, at the same time, the computational complexity of the HEVC encoder is significantly increased due to the additional biprediction search. Although some research has attempted to reduce this complexity, whether biprediction is strongly related to both motion complexity and prediction modes in a coding unit has not yet been investigated. A method that avoids most compression-inefficient search points is proposed so that the computational complexity of the motion estimation process can be dramatically decreased. To determine whether biprediction is critical, the proposed method exploits the stochastic correlation of the context of prediction units (PUs): the direction of a PU and the accuracy of its motion vector. Experimental results show that the time complexity of biprediction can be reduced to 30% on average, outperforming existing methods in terms of encoding time, number of function calls, and memory access.

  11. An exactly solvable, spatial model of mutation accumulation in cancer

    NASA Astrophysics Data System (ADS)

    Paterson, Chay; Nowak, Martin A.; Waclaw, Bartlomiej

    2016-12-01

    One of the hallmarks of cancer is the accumulation of driver mutations, which increase the net reproductive rate of cancer cells and allow them to spread. This process has been studied in mathematical models of well-mixed populations and in computer simulations of three-dimensional spatial models. But the computational complexity of these more realistic, spatial models makes it difficult to simulate realistically large and clinically detectable solid tumours. Here we describe an exactly solvable mathematical model of a tumour featuring replication, mutation and local migration of cancer cells. The model predicts a quasi-exponential growth of large tumours, even if different fragments of the tumour grow sub-exponentially due to nutrient and space limitations. The model reproduces clinically observed tumour growth times using biologically plausible rates of cell birth, death, and migration. We also show that the expected number of accumulated driver mutations increases exponentially in time if the average fitness gain per driver is constant, and that it reaches a plateau if the gains decrease over time. We discuss the realism of the underlying assumptions and possible extensions of the model.

  12. Chemotherapy appointment scheduling under uncertainty using mean-risk stochastic integer programming.

    PubMed

    Alvarado, Michelle; Ntaimo, Lewis

    2018-03-01

    Oncology clinics are often burdened with scheduling large volumes of cancer patients for chemotherapy treatments under limited resources such as the number of nurses and chairs. These cancer patients require a series of appointments over several weeks or months, and the timing of these appointments is critical to the treatment's effectiveness. Additionally, the appointment duration, the acuity levels of each appointment, and the availability of clinic nurses are uncertain. The timing constraints, stochastic parameters, rising treatment costs, and increased demand for outpatient oncology clinic services motivate the need for efficient appointment schedules and clinic operations. In this paper, we develop three mean-risk stochastic integer programming (SIP) models, referred to as SIP-CHEMO, for the problem of scheduling individual chemotherapy patient appointments and resources. These mean-risk models are presented and an algorithm is devised to improve computational speed. Computational experiments were conducted using a simulation model, and the results indicate that the risk-averse SIP-CHEMO model with the expected excess mean-risk measure can decrease patient waiting times and nurse overtime by 42% and 27%, respectively, when compared to deterministic scheduling algorithms.
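
    The expected-excess mean-risk measure referred to above has the generic form below (generic notation, not the paper's exact SIP-CHEMO model), where $f(x,\omega)$ is the scenario cost of schedule $x$ (e.g., weighted waiting time and overtime), $\eta$ is a target cost level, and $\lambda \ge 0$ sets the degree of risk aversion:

    $$
    \min_{x \in X} \; \mathbb{E}\big[f(x,\omega)\big] \;+\; \lambda\, \mathbb{E}\big[\max\{f(x,\omega) - \eta,\ 0\}\big].
    $$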

  13. Incremental retinal-defocus theory of myopia development--schematic analysis and computer simulation.

    PubMed

    Hung, George K; Ciuffreda, Kenneth J

    2007-07-01

    Previous theories of myopia development involved subtle and complex processes such as the sensing and analyzing of chromatic aberration, spherical aberration, spatial gradient of blur, or spatial frequency content of the retinal image, but they have not been able to explain satisfactorily the diverse experimental results reported in the literature. On the other hand, our newly proposed incremental retinal-defocus theory (IRDT) has been able to explain all of these results. This theory is based on a relatively simple and direct mechanism for the regulation of ocular growth. It states that a time-averaged decrease in retinal-image defocus area decreases the rate of release of retinal neuromodulators, which decreases the rate of retinal proteoglycan synthesis with an associated decrease in scleral structural integrity. This increases the rate of scleral growth, and in turn the eye's axial length, which leads to myopia. Our schematic analysis has provided a clear explanation for the eye's ability to grow in the appropriate direction under a wide range of experimental conditions. In addition, the theory has been able to explain how repeated cycles of nearwork-induced transient myopia lead to repeated periods of decreased retinal-image defocus, whose cumulative effect over an extended period of time results in an increase in axial growth that leads to permanent myopia. Thus, this unifying theory forms the basis for understanding the underlying retinal and scleral mechanisms of myopia development.

  14. A study on quantitative analysis of exposure dose caused by patient depending on time and distance in nuclear medicine examination

    NASA Astrophysics Data System (ADS)

    Kim, H. S.; Cho, J. H.; Shin, S. G.; Dong, K. R.; Chung, W. K.; Chung, J. E.

    2013-01-01

    This study evaluated possible actions that can help protect against and reduce radiation exposure by measuring the exposure dose for each type of isotope that is used frequently in nuclear medicine, and then performing numerical analysis of the effective half-life based on the measurement results. From July to August 2010, the study targeted 10, 6 and 5 people who underwent an 18F-FDG (fludeoxyglucose) positron emission tomography (PET) scan, a 99mTc-HDP bone scan, and a 201Tl myocardial single-photon emission computed tomography (SPECT) scan, respectively, in the nuclear medicine department. After injecting the required medicine into the subjects, a survey meter was used to measure the dose as a function of the distance from the heart and the time elapsed. For the 18F-FDG PET scan, the dose decreased by approximately 66% at 90 min compared to that immediately after the injection and by 78% at a distance of 1 m compared to that at 0.3 m. For the 99mTc-HDP bone scan, the dose decreased by approximately 71% at 200 min compared to that immediately after the injection and by approximately 78% at a distance of 1 m compared to that at 0.3 m. For the 201Tl myocardial SPECT scan, the dose decreased by approximately 30% at 250 min compared to that immediately after the injection and by approximately 55% at a distance of 1 m compared to that at 0.3 m. In conclusion, this study measured the exposure doses by isotope, distance from the heart and elapsed time, and found that the doses decreased substantially with both distance and time.
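
    The two dependencies quantified above, decay with elapsed time and fall-off with distance, can be sketched with an effective half-life and a point-source inverse-square term. The numbers below are illustrative (only the 18F physical half-life of about 110 min is a textbook value) and are not the study's measurements.

    ```python
    # Hedged sketch of the two effects quantified above: exponential decay with an
    # effective half-life and an approximately inverse-square fall-off with distance.
    import math

    def effective_half_life(t_phys_min, t_biol_min):
        """1/T_eff = 1/T_phys + 1/T_biol (both in minutes)."""
        return 1.0 / (1.0 / t_phys_min + 1.0 / t_biol_min)

    def relative_dose_rate(t_min, dist_m, t_eff_min, ref_dist_m=0.3):
        """Dose rate relative to t=0 at the reference distance (point-source model)."""
        decay = math.exp(-math.log(2.0) * t_min / t_eff_min)
        geometry = (ref_dist_m / dist_m) ** 2
        return decay * geometry

    # 18F physical half-life ~110 min; the biological half-life here is made up.
    t_eff = effective_half_life(t_phys_min=110.0, t_biol_min=600.0)
    print(f"T_eff = {t_eff:.0f} min")
    print("relative dose rate at 90 min and 1 m:",
          round(relative_dose_rate(90.0, 1.0, t_eff), 3))
    ```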

  15. Webcam mouse using face and eye tracking in various illumination environments.

    PubMed

    Lin, Yuan-Pin; Chao, Yi-Ping; Lin, Chung-Chih; Chen, Jyh-Horng

    2005-01-01

    Nowadays, due to enhancements in computer performance and the popularity of webcam devices, it has become possible to acquire users' gestures for the human-computer interface with a PC via webcam. However, illumination variation can dramatically decrease the stability and accuracy of skin-based face tracking systems, especially on a notebook or portable platform. In this study we present an effective illumination recognition technique, combining a K-Nearest Neighbor (KNN) classifier and an adaptive skin model, to realize a real-time tracking system. We have demonstrated that the accuracy of face detection based on the KNN classifier is higher than 92% in various illumination environments. In real-time implementation, the system successfully tracks the user's face and eye features at 15 fps on standard notebook platforms. Although the KNN classifier is initialized with only five environments at the preliminary stage, the system permits users to define and add their favorite environments to the KNN for computer access. Eventually, based on this efficient tracking algorithm, we have developed a "Webcam Mouse" system to control the PC cursor using face and eye tracking. Preliminary studies in "point and click" style PC web games also show promising applications in consumer electronics markets in the future.
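
    A minimal sketch of the illumination-recognition step is shown below: a K-Nearest-Neighbor classifier labels the current lighting environment from a simple frame feature, after which an adaptive skin model tuned to that environment would be applied. The features, labels, and training values are invented for illustration and are not those of the study.

    ```python
    # Illustrative KNN illumination classifier; features and labels are made up.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Training features: mean (R, G, B) of frames captured in known environments.
    X_train = np.array([[180, 170, 160],   # office fluorescent
                        [120, 110, 100],   # dim indoor
                        [220, 210, 190],   # daylight window
                        [90,  80,  70],    # evening lamp
                        [200, 180, 150]])  # warm incandescent
    y_train = ["office", "dim", "daylight", "evening", "warm"]

    clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

    def classify_frame(frame_bgr: np.ndarray) -> str:
        # Mean colour of the frame (BGR flipped to RGB) as the feature vector.
        mean_rgb = frame_bgr[..., ::-1].reshape(-1, 3).mean(axis=0)
        return clf.predict([mean_rgb])[0]

    frame = np.full((240, 320, 3), 115, dtype=np.uint8)   # stand-in webcam frame
    print("environment:", classify_frame(frame))          # selects the matching skin model
    ```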

  16. Three-dimensional computer graphic animations for studying social approach behaviour in medaka fish: Effects of systematic manipulation of morphological and motion cues.

    PubMed

    Nakayasu, Tomohiro; Yasugi, Masaki; Shiraishi, Soma; Uchida, Seiichi; Watanabe, Eiji

    2017-01-01

    We studied social approach behaviour in medaka fish using three-dimensional computer graphic (3DCG) animations based on the morphological features and motion characteristics obtained from real fish. This is the first study that used 3DCG animations to examine the relative effects of morphological and motion cues on social approach behaviour in medaka. Various visual stimuli, e.g., lack of motion, lack of colour, alteration in shape, lack of locomotion, lack of body motion, and normal virtual fish in which all four features (colour, shape, locomotion, and body motion) were reconstructed, were created and presented to fish using a computer display. Medaka presented with normal virtual fish spent a long time in proximity to the display, whereas the time spent near the display decreased in the other groups when compared with the normal virtual medaka group. The results suggested that the naturalness of visual cues contributes to the induction of social approach behaviour. Differential effects between body motion and locomotion were also detected. 3DCG animations can be a useful tool to study the mechanisms of visual processing and social behaviour in medaka.

  17. Three-dimensional computer graphic animations for studying social approach behaviour in medaka fish: Effects of systematic manipulation of morphological and motion cues

    PubMed Central

    Nakayasu, Tomohiro; Yasugi, Masaki; Shiraishi, Soma; Uchida, Seiichi; Watanabe, Eiji

    2017-01-01

    We studied social approach behaviour in medaka fish using three-dimensional computer graphic (3DCG) animations based on the morphological features and motion characteristics obtained from real fish. This is the first study that used 3DCG animations to examine the relative effects of morphological and motion cues on social approach behaviour in medaka. Various visual stimuli, e.g., lack of motion, lack of colour, alteration in shape, lack of locomotion, lack of body motion, and normal virtual fish in which all four features (colour, shape, locomotion, and body motion) were reconstructed, were created and presented to fish using a computer display. Medaka presented with normal virtual fish spent a long time in proximity to the display, whereas the time spent near the display decreased in the other groups when compared with the normal virtual medaka group. The results suggested that the naturalness of visual cues contributes to the induction of social approach behaviour. Differential effects between body motion and locomotion were also detected. 3DCG animations can be a useful tool to study the mechanisms of visual processing and social behaviour in medaka. PMID:28399163

  18. Analysis of mode-locked and intracavity frequency-doubled Nd:YAG laser

    NASA Technical Reports Server (NTRS)

    Siegman, A. E.; Heritier, J.-M.

    1980-01-01

    The paper presents analytical and computer studies of the CW mode-locked and intracavity frequency-doubled Nd:YAG laser which provide new insight into the operation, including the detuning behavior, of this type of laser. Computer solutions show that the steady-state pulse shape for this laser is much closer to a truncated cosine than to a Gaussian; there is little spectral broadening for on-resonance operation; and the chirp is negligible. This leads to a simplified analytical model carried out entirely in the time domain, with atomic linewidth effects ignored. Simple analytical results for on-resonance pulse shape, pulse width, signal intensity, and harmonic conversion efficiency in terms of basic laser parameters are derived from this model. A simplified physical description of the detuning behavior is also developed. Agreement is found with experimental studies showing that the pulsewidth decreases as the modulation frequency is detuned off resonance; the harmonic power output initially increases and then decreases; and the pulse shape develops a sharp-edged asymmetry of opposite sense for opposite signs of detuning.

  19. Changes in Flat Plate Wake Characteristics Obtained With Decreasing Plate Thickness

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2016-01-01

    The near and very near wake of a flat plate with a circular trailing edge is investigated with data from direct numerical simulations. Computations were performed for four different Reynolds numbers based on plate thickness (D) and at constant plate length. The value of θ/D varies by a factor of approximately 20 in the computations (θ being the boundary layer momentum thickness at the trailing edge). The separating boundary layers are turbulent in all the cases. One objective of the study is to understand the changes in wake characteristics as the plate thickness is reduced (increasing θ/D). Vortex shedding is vigorous in the low θ/D cases, with a substantial decrease in shedding intensity in the largest θ/D case (for all practical purposes shedding becomes almost intermittent). Other characteristics that are significantly altered with increasing θ/D are the roll-up of the detached shear layers and the magnitude of fluctuations in shedding period. These effects are explored in depth. The effects of changing θ/D on the distributions of the time-averaged, near-wake velocity statistics are discussed.

  20. Dynamics of retinal photocoagulation and rupture

    NASA Astrophysics Data System (ADS)

    Sramek, Christopher; Paulus, Yannis; Nomoto, Hiroyuki; Huie, Phil; Brown, Jefferson; Palanker, Daniel

    2009-05-01

    In laser retinal photocoagulation, short (<20 ms) pulses have been found to reduce thermal damage to the inner retina, decrease treatment time, and minimize pain. However, the safe therapeutic window (defined as the ratio of power for producing a rupture to that of mild coagulation) decreases with shorter exposures. To quantify the extent of retinal heating and maximize the therapeutic window, a computational model of millisecond retinal photocoagulation and rupture was developed. Optical attenuation of 532-nm laser light in ocular tissues was measured, including retinal pigment epithelial (RPE) pigmentation and cell-size variability. Threshold powers for vaporization and RPE damage were measured with pulse durations ranging from 1 to 200 ms. A finite element model of retinal heating inferred that vaporization (rupture) takes place at 180-190°C. RPE damage was accurately described by the Arrhenius model with activation energy of 340 kJ/mol. Computed photocoagulation lesion width increased logarithmically with pulse duration, in agreement with histological findings. The model will allow for the optimization of beam parameters to increase the width of the therapeutic window for short exposures.
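
    The Arrhenius model mentioned above is the standard thermal damage integral, where $T(t)$ is the computed tissue temperature history, $R$ the gas constant, $E_a \approx 340$ kJ/mol the fitted activation energy, and $A$ a frequency factor (not reported in the abstract); damage is conventionally taken as complete when $\Omega$ reaches 1:

    $$
    \Omega(\tau) \;=\; \int_0^{\tau} A \exp\!\left(-\frac{E_a}{R\,T(t)}\right) dt .
    $$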

  1. An Evolutionary Algorithm for Fast Intensity Based Image Matching Between Optical and SAR Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Fischer, Peter; Schuegraf, Philipp; Merkle, Nina; Storch, Tobias

    2018-04-01

    This paper presents a hybrid evolutionary algorithm for fast intensity-based matching between satellite imagery from SAR and very high-resolution (VHR) optical sensor systems. The precise and accurate co-registration of image time series and of images from different sensors is a key task in multi-sensor image processing scenarios. The necessary preprocessing step of image matching and tie-point detection is divided into a search problem and a similarity measurement. Within this paper we evaluate the use of an evolutionary search strategy for establishing the spatial correspondence between satellite imagery of optical and radar sensors. The aim of the proposed algorithm is to decrease the computational costs during the search process by formulating the search as an optimization problem. Based upon the canonical evolutionary algorithm, the proposed algorithm is adapted for SAR/optical imagery intensity-based matching. Extensions are introduced using techniques such as hybridization (e.g., local search) to lower the number of objective function calls and refine the result. The algorithm significantly decreases the computational costs whilst finding the optimal solution in a reliable way.
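
    The sketch below illustrates the general idea of replacing an exhaustive tie-point search with an evolutionary one: a population of candidate (dx, dy) offsets is evolved to maximize an intensity similarity measure between an optical and a SAR patch. It is a toy example with an SSD-based fitness, not the authors' hybrid algorithm or similarity measure.

    ```python
    # Toy evolutionary search over integer image offsets; illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    def similarity(opt, sar, dx, dy):
        """Negative mean squared intensity difference over the overlapping region."""
        h, w = opt.shape
        a = opt[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
        b = sar[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
        return -np.mean((a - b) ** 2)

    def evolve(opt, sar, pop=20, gens=30, span=16):
        population = rng.integers(-span, span + 1, size=(pop, 2))
        for _ in range(gens):
            scores = np.array([similarity(opt, sar, dx, dy) for dx, dy in population])
            parents = population[np.argsort(scores)[-pop // 2:]]   # keep the fittest half
            children = np.clip(parents + rng.integers(-3, 4, size=parents.shape),
                               -span, span)                        # mutated offspring
            population = np.vstack([parents, children])
        return max(population.tolist(), key=lambda s: similarity(opt, sar, *s))

    optical = rng.random((64, 64))
    sar = np.roll(optical, shift=(3, -5), axis=(0, 1)) + 0.05 * rng.random((64, 64))
    print("estimated offset (dx, dy):", evolve(optical, sar))
    ```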

  2. Aerospace Applications of Weibull and Monte Carlo Simulation with Importance Sampling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1998-01-01

    Recent developments in reliability modeling and computer technology have made it practical to use the Weibull time to failure distribution to model the system reliability of complex fault-tolerant computer-based systems. These system models are becoming increasingly popular in space systems applications as a result of mounting data that support the decreasing Weibull failure distribution and the expectation of increased system reliability. This presentation introduces the new reliability modeling developments and demonstrates their application to a novel space system application. The application is a proposed guidance, navigation, and control (GN&C) system for use in a long duration manned spacecraft for a possible Mars mission. Comparisons to the constant failure rate model are presented and the ramifications of doing so are discussed.
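
    For reference, the Weibull time-to-failure model mentioned above gives the reliability and hazard functions sketched below; a shape parameter below one yields a decreasing failure rate, which is the behaviour the abstract refers to. The parameter values are made up for illustration and are not from the study.

    ```python
    # Illustrative Weibull reliability R(t) and hazard h(t); parameters hypothetical.
    import numpy as np

    beta, eta = 0.8, 50_000.0          # shape (<1: decreasing hazard), scale in hours

    def reliability(t):
        return np.exp(-(t / eta) ** beta)

    def hazard(t):
        return (beta / eta) * (t / eta) ** (beta - 1.0)

    for t in (1_000.0, 10_000.0, 50_000.0):
        print(f"t={t:>8.0f} h  R(t)={reliability(t):.3f}  h(t)={hazard(t):.2e}/h")
    ```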

  3. Short- and medium-term efficacy of a Web-based computer-tailored nutrition education intervention for adults including cognitive and environmental feedback: randomized controlled trial.

    PubMed

    Springvloet, Linda; Lechner, Lilian; de Vries, Hein; Candel, Math J J M; Oenema, Anke

    2015-01-19

    Web-based, computer-tailored nutrition education interventions can be effective in modifying self-reported dietary behaviors. Traditional computer-tailored programs primarily targeted individual cognitions (knowledge, awareness, attitude, self-efficacy). Tailoring on additional variables such as self-regulation processes and environmental-level factors (the home food environment arrangement and perception of availability and prices of healthy food products in supermarkets) may improve efficacy and effect sizes (ES) of Web-based computer-tailored nutrition education interventions. This study evaluated the short- and medium-term efficacy and educational differences in efficacy of a cognitive and environmental feedback version of a Web-based computer-tailored nutrition education intervention on self-reported fruit, vegetable, high-energy snack, and saturated fat intake compared to generic nutrition information in the total sample and among participants who did not comply with dietary guidelines (the risk groups). A randomized controlled trial was conducted with a basic (tailored intervention targeting individual cognition and self-regulation processes; n=456), plus (basic intervention additionally targeting environmental-level factors; n=459), and control (generic nutrition information; n=434) group. Participants were recruited from the general population and randomly assigned to a study group. Self-reported fruit, vegetable, high-energy snack, and saturated fat intake were assessed at baseline and at 1- (T1) and 4-months (T2) postintervention using online questionnaires. Linear mixed model analyses examined group differences in change over time. Educational differences were examined with group×time×education interaction terms. In the total sample, the basic (T1: ES=-0.30; T2: ES=-0.18) and plus intervention groups (T1: ES=-0.29; T2: ES=-0.27) had larger decreases in high-energy snack intake than the control group. The basic version resulted in a larger decrease in saturated fat intake than the control intervention (T1: ES=-0.19; T2: ES=-0.17). In the risk groups, the basic version caused larger decreases in fat (T1: ES=-0.28; T2: ES=-0.28) and high-energy snack intake (T1: ES=-0.34; T2: ES=-0.20) than the control intervention. The plus version resulted in a larger increase in fruit (T1: ES=0.25; T2: ES=0.37) and a larger decrease in high-energy snack intake (T1: ES=-0.38; T2: ES=-0.32) than the control intervention. For high-energy snack intake, educational differences were found. Stratified analyses showed that the plus version was most effective for high-educated participants. Both intervention versions were more effective in improving some of the self-reported dietary behaviors than generic nutrition information, especially in the risk groups, among both higher- and lower-educated participants. For fruit intake, only the plus version was more effective than providing generic nutrition information. Although feasible, incorporating environmental-level information is time-consuming. Therefore, the basic version may be more feasible for further implementation, although inclusion of feedback on the arrangement of the home food environment and on availability and prices may be considered for fruit and, for high-educated people, for high-energy snack intake. Netherlands Trial Registry NTR3396; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=3396 (Archived by WebCite at http://www.webcitation.org/6VNZbdL6w).

  4. Short- and Medium-Term Efficacy of a Web-Based Computer-Tailored Nutrition Education Intervention for Adults Including Cognitive and Environmental Feedback: Randomized Controlled Trial

    PubMed Central

    Lechner, Lilian; de Vries, Hein; Candel, Math JJM; Oenema, Anke

    2015-01-01

    Background Web-based, computer-tailored nutrition education interventions can be effective in modifying self-reported dietary behaviors. Traditional computer-tailored programs primarily targeted individual cognitions (knowledge, awareness, attitude, self-efficacy). Tailoring on additional variables such as self-regulation processes and environmental-level factors (the home food environment arrangement and perception of availability and prices of healthy food products in supermarkets) may improve efficacy and effect sizes (ES) of Web-based computer-tailored nutrition education interventions. Objective This study evaluated the short- and medium-term efficacy and educational differences in efficacy of a cognitive and environmental feedback version of a Web-based computer-tailored nutrition education intervention on self-reported fruit, vegetable, high-energy snack, and saturated fat intake compared to generic nutrition information in the total sample and among participants who did not comply with dietary guidelines (the risk groups). Methods A randomized controlled trial was conducted with a basic (tailored intervention targeting individual cognition and self-regulation processes; n=456), plus (basic intervention additionally targeting environmental-level factors; n=459), and control (generic nutrition information; n=434) group. Participants were recruited from the general population and randomly assigned to a study group. Self-reported fruit, vegetable, high-energy snack, and saturated fat intake were assessed at baseline and at 1- (T1) and 4-months (T2) postintervention using online questionnaires. Linear mixed model analyses examined group differences in change over time. Educational differences were examined with group×time×education interaction terms. Results In the total sample, the basic (T1: ES=–0.30; T2: ES=–0.18) and plus intervention groups (T1: ES=–0.29; T2: ES=–0.27) had larger decreases in high-energy snack intake than the control group. The basic version resulted in a larger decrease in saturated fat intake than the control intervention (T1: ES=–0.19; T2: ES=–0.17). In the risk groups, the basic version caused larger decreases in fat (T1: ES=–0.28; T2: ES=–0.28) and high-energy snack intake (T1: ES=–0.34; T2: ES=–0.20) than the control intervention. The plus version resulted in a larger increase in fruit (T1: ES=0.25; T2: ES=0.37) and a larger decrease in high-energy snack intake (T1: ES=–0.38; T2: ES=–0.32) than the control intervention. For high-energy snack intake, educational differences were found. Stratified analyses showed that the plus version was most effective for high-educated participants. Conclusions Both intervention versions were more effective in improving some of the self-reported dietary behaviors than generic nutrition information, especially in the risk groups, among both higher- and lower-educated participants. For fruit intake, only the plus version was more effective than providing generic nutrition information. Although feasible, incorporating environmental-level information is time-consuming. Therefore, the basic version may be more feasible for further implementation, although inclusion of feedback on the arrangement of the home food environment and on availability and prices may be considered for fruit and, for high-educated people, for high-energy snack intake. Trial Registration Netherlands Trial Registry NTR3396; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=3396 (Archived by WebCite at http://www.webcitation.org/6VNZbdL6w). 
PMID:25599828

  5. Corrosion Prediction with Parallel Finite Element Modeling for Coupled Hygro-Chemo Transport into Concrete under Chloride-Rich Environment

    PubMed Central

    Na, Okpin; Cai, Xiao-Chuan; Xi, Yunping

    2017-01-01

    The prediction of chloride-induced corrosion is very important for the durability of concrete structures. To simulate the durability performance of concrete structures more realistically, complex scientific methods and more accurate material models are needed. In order to obtain robust predictions of corrosion initiation time and to resolve the thin layer from the concrete surface to the reinforcement, a large number of fine meshes are also used. The purpose of this study is to propose a more realistic physical model of coupled hygro-chemo transport and to implement the model with a parallel finite element algorithm. Furthermore, a microclimate model with environmental humidity and seasonal temperature is adopted. As a result, a prediction model of chloride diffusion under unsaturated conditions was developed with parallel algorithms and was applied to an existing bridge to validate the model with multiple boundary conditions. As the number of processors increased, the computational time decreased until the optimal number of processors was reached; beyond that point, the computational time increased because the communication time between the processors increased. The framework of the present model can be extended to simulate the ingress of multi-species de-icing salts into non-saturated concrete structures in future work. PMID:28772714
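
    The observation that run time first falls and then rises with processor count is consistent with a simple cost model in which computation scales as 1/p while communication overhead grows with p. The toy sketch below is not the paper's model, and all constants are hypothetical; it only shows how an optimal processor count emerges.

    ```python
    # Toy parallel-cost model: computation / p plus communication overhead * p.
    import numpy as np

    t_serial_comp = 3600.0      # seconds of computation on one processor (hypothetical)
    comm_per_proc = 15.0        # added communication cost per extra processor (hypothetical)

    def wall_time(p):
        return t_serial_comp / p + comm_per_proc * (p - 1)

    procs = np.arange(1, 65)
    times = wall_time(procs)
    best = procs[np.argmin(times)]
    print(f"optimal processor count = {best}, wall time = {times.min():.0f} s")
    ```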

  6. Efficient Grammar Induction Algorithm with Parse Forests from Real Corpora

    NASA Astrophysics Data System (ADS)

    Kurihara, Kenichi; Kameya, Yoshitaka; Sato, Taisuke

    The task of inducing grammar structures has received a great deal of attention. Researchers have studied it for different reasons: to use grammar induction as the first stage in building large treebanks, or to build better language models. However, grammar induction has high inherent computational complexity. To overcome it, some grammar induction algorithms add new production rules incrementally, refining the grammar while keeping the computational complexity low. In this paper, we propose a new efficient grammar induction algorithm. Although our algorithm is similar to algorithms that learn a grammar incrementally, it uses the graphical EM algorithm instead of the Inside-Outside algorithm. We report results of learning experiments in terms of learning speed. The results show that our algorithm learns a grammar in constant time regardless of the size of the grammar. Since our algorithm decreases syntactic ambiguities in each step, it reduces the time required for learning. This constant-time learning considerably affects learning time for larger grammars. We also report results of evaluating criteria to choose nonterminals. Our algorithm refines the grammar based on a nonterminal in each step. Since there can be several criteria for deciding which nonterminal is best, we evaluate them by learning experiments.

  7. Comparison of turbulence mitigation algorithms

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric

    2017-07-01

    When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.

  8. Heat and Moisture Transport and Storage Parameters of Bricks Affected by the Environment

    NASA Astrophysics Data System (ADS)

    Kočí, Václav; Čáchová, Monika; Koňáková, Dana; Vejmelková, Eva; Jerman, Miloš; Keppert, Martin; Maděra, Jiří; Černý, Robert

    2018-05-01

    The effect of external environment on heat and moisture transport and storage properties of the traditional fired clay brick, sand-lime brick and highly perforated ceramic block commonly used in the Czech Republic and on their hygrothermal performance in building envelopes is analyzed by a combination of experimental and computational techniques. The experimental measurements of thermal, hygric and basic physical parameters are carried out in the reference state and after a 3-year exposure of the bricks to real climatic conditions of the city of Prague. The obtained results showed that after 3 years of weathering the porosity of the analyzed bricks increased up to five percentage points which led to an increase in liquid and gaseous moisture transport parameters and a decrease in thermal conductivity. Computational modeling of hygrothermal performance of building envelopes made of the studied bricks was done using both reference and weather-affected data. The simulated results indicated an improvement in the annual energy balances and a decrease in the time-of-wetness functions as a result of the use of data obtained after the 3-year exposure to the environment. The effects of weathering on both heat and moisture transport and storage parameters of the analyzed bricks and on their hygrothermal performance were found significant despite the occurrence of warm winters in the time period of 2012-2015 when the brick specimens were exposed to the environment.

  9. A modular finite-element model (MODFE) for areal and axisymmetric ground-water-flow problems, Part 3: Design philosophy and programming details

    USGS Publications Warehouse

    Torak, L.J.

    1993-01-01

    A MODular Finite-Element, digital-computer program (MODFE) was developed to simulate steady or unsteady-state, two-dimensional or axisymmetric ground-water-flow. The modular structure of MODFE places the computationally independent tasks that are performed routinely by digital-computer programs simulating ground-water flow into separate subroutines, which are executed from the main program by control statements. Each subroutine consists of complete sets of computations, or modules, which are identified by comment statements, and can be modified by the user without affecting unrelated computations elsewhere in the program. Simulation capabilities can be added or modified by either adding or modifying subroutines that perform specific computational tasks, and the modular-program structure allows the user to create versions of MODFE that contain only the simulation capabilities that pertain to the ground-water problem of interest. MODFE is written in a Fortran programming language that makes it virtually device independent and compatible with desk-top personal computers and large mainframes. MODFE uses computer storage and execution time efficiently by taking advantage of symmetry and sparseness within the coefficient matrices of the finite-element equations. Parts of the matrix coefficients are computed and stored as single-subscripted variables, which are assembled into a complete coefficient just prior to solution. Computer storage is reused during simulation to decrease storage requirements. Descriptions of subroutines that execute the computational steps of the modular-program structure are given in tables that cross reference the subroutines with particular versions of MODFE. Programming details of linear and nonlinear hydrologic terms are provided. Structure diagrams for the main programs show the order in which subroutines are executed for each version and illustrate some of the linear and nonlinear versions of MODFE that are possible. Computational aspects of changing stresses and boundary conditions with time and of mass-balance and error terms are given for each hydrologic feature. Program variables are listed and defined according to their occurrence in the main programs and in subroutines. Listings of the main programs and subroutines are given.

  10. LORETA EEG phase reset of the default mode network.

    PubMed

    Thatcher, Robert W; North, Duane M; Biver, Carl J

    2014-01-01

    The purpose of this study was to explore phase reset of 3-dimensional current sources in Brodmann areas located in the human default mode network (DMN) using Low Resolution Electromagnetic Tomography (LORETA) of the human electroencephalogram (EEG). The EEG was recorded from 19 scalp locations in 70 healthy normal subjects ranging in age from 13 to 20 years. LORETA current sources were computed time point by time point for 14 Brodmann areas comprising the DMN in the delta frequency band. The Hilbert transform of the LORETA time series was used to compute the instantaneous phase differences between all pairs of Brodmann areas. Phase shift and lock durations were calculated based on the 1st and 2nd derivatives of the time series of phase differences. Phase shift duration exhibited three discrete modes at approximately: (1) 25 ms, (2) 50 ms, and (3) 65 ms. Phase lock duration was present primarily at: (1) 300-350 ms and (2) 350-450 ms. Phase shift and lock durations were inversely related and exhibited an exponential change with distance between Brodmann areas. The results are explained by local neural packing density of network hubs and an exponential decrease in connections with distance from a hub. The results are consistent with a discrete temporal model of brain function where anatomical hubs behave like a "shutter" that opens and closes at specific durations as nodes of a network, giving rise to temporarily phase-locked clusters of neurons for specific durations.
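
    The phase computation described above can be sketched in a few lines: the Hilbert transform yields an instantaneous phase for each source time series, and the derivative of the unwrapped phase difference separates phase-shift from phase-lock periods. The signals and the lock threshold below are illustrative, not the study's LORETA sources or criteria.

    ```python
    # Minimal sketch of Hilbert-based instantaneous phase differences between two
    # time series; signals and the "lock" threshold are illustrative only.
    import numpy as np
    from scipy.signal import hilbert

    fs = 128.0                                   # sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    x = np.sin(2 * np.pi * 2.0 * t)                          # delta-band source 1
    y = np.sin(2 * np.pi * 2.0 * t + 0.5 * np.sin(0.3 * t))  # source 2, drifting phase

    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    dphi = np.unwrap(phase_x - phase_y)          # instantaneous phase difference

    d_dphi = np.gradient(dphi, 1 / fs)           # 1st derivative (rad/s)
    locked = np.abs(d_dphi) < 0.1                # nearly constant difference -> phase lock
    print("fraction of samples phase-locked:", locked.mean().round(2))
    ```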

  11. Computational model for behavior shaping as an adaptive health intervention strategy.

    PubMed

    Berardi, Vincent; Carretero-González, Ricardo; Klepeis, Neil E; Ghanipoor Machiani, Sahar; Jahangiri, Arash; Bellettiere, John; Hovell, Melbourne

    2018-03-01

    Adaptive behavioral interventions that automatically adjust in real-time to participants' changing behavior, environmental contexts, and individual history are becoming more feasible as the use of real-time sensing technology expands. This development is expected to improve shortcomings associated with traditional behavioral interventions, such as the reliance on imprecise intervention procedures and limited/short-lived effects. However, the adaptation strategies of such just-in-time adaptive interventions (JITAIs) often lack a theoretical foundation, and increasing the theoretical fidelity of a trial has been shown to increase effectiveness. This research explores the use of shaping, a well-known process from behavioral theory for engendering or maintaining a target behavior, as a JITAI adaptation strategy. A computational model of behavior dynamics and operant conditioning was modified to incorporate the construct of behavior shaping by adding the ability to vary, over time, the range of behaviors that were reinforced when emitted. Digital experiments were performed with this updated model for a range of parameters in order to identify the behavior shaping features that optimally generated target behavior. Narrowing the range of reinforced behaviors continuously in time led to better outcomes compared with a discrete narrowing of the reinforcement window. Rapid narrowing followed by more moderate decreases in window size was more effective in generating target behavior than the inverse scenario. The computational shaping model represents an effective tool for investigating JITAI adaptation strategies. Model parameters must now be translated from the digital domain to real-world experiments so that model findings can be validated.
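
    A hedged sketch of the shaping construct follows: a response is reinforced only if it falls within a window around the target behavior, and the window half-width narrows over the course of the intervention, either continuously or in discrete steps. The window shapes and parameters are illustrative and are not taken from the authors' model.

    ```python
    # Illustrative sketch (not the authors' model) of continuous vs. discrete
    # narrowing of the reinforcement window around a target behavior.
    import numpy as np

    target = 10.0                 # target behavior value (arbitrary units)

    def window_continuous(t, t_max, w0=8.0, w_min=0.5):
        """Window half-width shrinks smoothly from w0 to w_min over the study."""
        return w_min + (w0 - w_min) * (1.0 - t / t_max)

    def window_discrete(t, t_max, w0=8.0, w_min=0.5, steps=4):
        """Same overall narrowing, but applied in a few abrupt steps."""
        k = min(int(steps * t / t_max), steps - 1)
        return w0 - (w0 - w_min) * k / (steps - 1)

    def reinforced(behavior, t, t_max, window_fn):
        return abs(behavior - target) <= window_fn(t, t_max)

    rng = np.random.default_rng(1)
    t_max = 100
    for name, fn in [("continuous", window_continuous), ("discrete", window_discrete)]:
        hits = sum(reinforced(rng.normal(7.0, 3.0), t, t_max, fn) for t in range(t_max))
        print(f"{name:10s} narrowing: {hits} reinforced responses out of {t_max}")
    ```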

  12. Characteristics of screen media use associated with higher BMI in young adolescents.

    PubMed

    Bickham, David S; Blood, Emily A; Walls, Courtney E; Shrier, Lydia A; Rich, Michael

    2013-05-01

    This study investigates how characteristics of young adolescents' screen media use are associated with their BMI. By examining relationships between BMI and both time spent using each of 3 screen media and level of attention allocated to use, we sought to contribute to the understanding of mechanisms linking media use and obesity. We measured heights and weights of 91 13- to 15-year-olds and calculated their BMIs. Over 1 week, participants completed a weekday and a Saturday 24-hour time-use diary in which they reported the amount of time they spent using TV, computers, and video games. Participants carried handheld computers and responded to 4 to 7 random signals per day by completing onscreen questionnaires reporting activities to which they were paying primary, secondary, and tertiary attention. Higher proportions of primary attention to TV were positively associated with higher BMI. The difference between 25th and 75th percentiles of attention to TV corresponded to an estimated +2.4 BMI points. Time spent watching television was unrelated to BMI. Neither duration of use nor extent of attention paid to video games or computers was associated with BMI. These findings support the notion that attention to TV is a key element of the increased obesity risk associated with TV viewing. Mechanisms may include the influence of TV commercials on preferences for energy-dense, nutritionally questionable foods and/or eating while distracted by TV. Interventions that interrupt these processes may be effective in decreasing obesity among screen media users.

  13. Processing Shotgun Proteomics Data on the Amazon Cloud with the Trans-Proteomic Pipeline*

    PubMed Central

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W.; Moritz, Robert L.

    2015-01-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of cloud enabled Trans-Proteomic Pipeline by performing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. PMID:25418363

  14. Processing shotgun proteomics data on the Amazon cloud with the trans-proteomic pipeline.

    PubMed

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W; Moritz, Robert L

    2015-02-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of cloud enabled Trans-Proteomic Pipeline by performing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  15. Massively Parallel Dantzig-Wolfe Decomposition Applied to Traffic Flow Scheduling

    NASA Technical Reports Server (NTRS)

    Rios, Joseph Lucio; Ross, Kevin

    2009-01-01

    Optimal scheduling of air traffic over the entire National Airspace System is a computationally difficult task. To speed computation, Dantzig-Wolfe decomposition is applied to a known linear integer programming approach for assigning delays to flights. The optimization model is proven to have the block-angular structure necessary for Dantzig-Wolfe decomposition. The subproblems for this decomposition are solved in parallel via independent computation threads. Experimental evidence suggests that as the number of subproblems/threads increases (and their respective sizes decrease), the solution quality, convergence, and runtime improve. A demonstration of this is provided by using one flight per subproblem, which is the finest possible decomposition. This results in thousands of subproblems and associated computation threads. This massively parallel approach is compared to one with few threads and to standard (non-decomposed) approaches in terms of solution quality and runtime. Since this method generally provides a non-integral (relaxed) solution to the original optimization problem, two heuristics are developed to generate an integral solution. Dantzig-Wolfe followed by these heuristics can provide a near-optimal (sometimes optimal) solution to the original problem hundreds of times faster than standard (non-decomposed) approaches. In addition, when massive decomposition is employed, the solution is shown to be more likely integral, which obviates the need for an integerization step. These results indicate that nationwide, real-time, high fidelity, optimal traffic flow scheduling is achievable for (at least) 3 hour planning horizons.
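
    The block-angular structure that Dantzig-Wolfe decomposition requires has the generic form below (generic notation, not the paper's exact traffic-flow model): each block $k$, here as fine as a single flight, has its own constraints $B_k x_k \le d_k$, and only the coupling constraints $\sum_k A_k x_k \le b$ (e.g., shared airspace capacities) tie the blocks together, so each subproblem can be priced out in its own computation thread:

    $$
    \min_{x_1,\dots,x_K}\ \sum_{k=1}^{K} c_k^{\top} x_k
    \quad\text{s.t.}\quad
    \sum_{k=1}^{K} A_k x_k \le b,
    \qquad B_k x_k \le d_k,\ \ x_k \ge 0,\ \ k=1,\dots,K .
    $$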

  16. A novel approach for discovering condition-specific correlations of gene expressions within biological pathways by using cloud computing technology.

    PubMed

    Chang, Tzu-Hao; Wu, Shih-Lin; Wang, Wei-Jen; Horng, Jorng-Tzong; Chang, Cheng-Wei

    2014-01-01

    Microarrays are widely used to assess gene expressions. Most microarray studies focus primarily on identifying differential gene expressions between conditions (e.g., cancer versus normal cells), for discovering the major factors that cause diseases. Because previous studies have not identified the correlations of differential gene expression between conditions, crucial but abnormal regulations that cause diseases might have been disregarded. This paper proposes an approach for discovering the condition-specific correlations of gene expressions within biological pathways. Because analyzing gene expression correlations is time consuming, an Apache Hadoop cloud computing platform was implemented. Three microarray data sets of breast cancer were collected from the Gene Expression Omnibus, and pathway information from the Kyoto Encyclopedia of Genes and Genomes was applied for discovering meaningful biological correlations. The results showed that adopting the Hadoop platform considerably decreased the computation time. Several correlations of differential gene expressions were discovered between the relapse and nonrelapse breast cancer samples, and most of them were involved in cancer regulation and cancer-related pathways. The results showed that breast cancer recurrence might be highly associated with the abnormal regulations of these gene pairs, rather than with their individual expression levels. The proposed method was computationally efficient and reliable, and stable results were obtained when different data sets were used. The proposed method is effective in identifying meaningful biological regulation patterns between conditions.
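
    A minimal, non-distributed sketch of the core computation is shown below: for each gene pair in a pathway, compute the expression correlation separately in two sample groups and flag pairs whose correlations differ strongly. The gene names, expression values, and threshold are invented, and the Hadoop-based distribution of this work is not reproduced.

```python
# Condition-specific correlation of gene pairs within one (hypothetical) pathway.
import itertools
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
genes = ["G1", "G2", "G3", "G4"]                  # genes of one pathway (made up)
expr_a = {g: rng.normal(size=30) for g in genes}  # samples from condition A (e.g. relapse)
expr_b = {g: rng.normal(size=25) for g in genes}  # samples from condition B (e.g. non-relapse)

for g1, g2 in itertools.combinations(genes, 2):
    r_a, _ = pearsonr(expr_a[g1], expr_a[g2])     # correlation within condition A
    r_b, _ = pearsonr(expr_b[g1], expr_b[g2])     # correlation within condition B
    flag = "condition-specific" if abs(r_a - r_b) > 0.5 else ""  # hypothetical threshold
    print(f"{g1}-{g2}: r_A={r_a:+.2f}  r_B={r_b:+.2f}  {flag}")
```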

  17. Children's accuracy of portion size estimation using digital food images: effects of interface design and size of image on computer screen.

    PubMed

    Baranowski, Tom; Baranowski, Janice C; Watson, Kathleen B; Martin, Shelby; Beltran, Alicia; Islam, Noemi; Dadabhoy, Hafza; Adame, Su-heyla; Cullen, Karen; Thompson, Debbe; Buday, Richard; Subar, Amy

    2011-03-01

    To test the effect of image size and presence of size cues on the accuracy of portion size estimation by children. Children were randomly assigned to seeing images with or without food size cues (utensils and checked tablecloth) and were presented with sixteen food models (foods commonly eaten by children) in varying portion sizes, one at a time. They estimated each food model's portion size by selecting a digital food image. The same food images were presented in two ways: (i) as small, graduated portion size images all on one screen or (ii) by scrolling across large, graduated portion size images, one per sequential screen. Laboratory-based with computer and food models. Volunteer multi-ethnic sample of 120 children, equally distributed by gender and ages (8 to 13 years) in 2008-2009. Average percentage of correctly classified foods was 60·3 %. There were no differences in accuracy by any design factor or demographic characteristic. Multiple small pictures on the screen at once took half the time to estimate portion size compared with scrolling through large pictures. Larger pictures had more overestimation of size. Multiple images of successively larger portion sizes of a food on one computer screen facilitated quicker portion size responses with no decrease in accuracy. This is the method of choice for portion size estimation on a computer.

  18. Preventive and Abortive Strategies for Stimulation Based Control of Epilepsy: A Computational Model Study.

    PubMed

    Koppert, Marc; Kalitzin, Stiliyan; Velis, Demetrios; Lopes Da Silva, Fernando; Viergever, Max A

    2016-12-01

    Epilepsy is a condition in which periods of ongoing normal EEG activity alternate with periods of oscillatory behavior characteristic of epileptic seizures. The dynamics of the transitions between the two states are still unclear. Computational models provide a powerful tool to explore the underlying mechanisms of such transitions, with the purpose of eventually finding therapeutic interventions for this debilitating condition. In this study, the possibility of postponing seizures elicited by a decrease of inhibition is investigated by using external stimulation in a realistic bistable neuronal model consisting of two interconnected neuronal populations representing pyramidal cells and interneurons. In the simulations, seizures are induced by slowly decreasing the conductivity of GABAergic synaptic channels over time. Since the model is bistable, the system will change state from the initial steady state (SS) to the limit cycle (LC) state because of internal noise when the inhibition falls below a certain threshold. Several state-independent stimulation paradigms are simulated. Their effectiveness is analyzed for various stimulation frequencies and intensities in combination with periodic and random stimulation sequences. The distributions of the time to first seizure in the presence of stimulation are compared with the situation without stimulation. In addition, stimulation protocols targeted to specific subsystems are applied with the objective of counteracting the baseline shift due to decreased inhibition in the system. Furthermore, an analytical model is used to investigate the effects of random noise. The relation between the strength of random noise stimulation, the control parameter of the system and the transitions between steady state and limit cycle is investigated. The study shows that it is possible to postpone epileptic activity by targeted stimulation in a realistic neuronal model featuring bistability and that it is possible to stop seizures by random noise in an analytical model.
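
    The flavor of such simulations can be conveyed with a generic double-well caricature, sketched below, in which a control parameter slowly rises (mimicking decaying inhibition) and added noise stands in for random stimulation; this is only an illustration under those assumptions, not the two-population neuronal model used in the study.

```python
# Toy noise-driven transition in a bistable system whose control parameter
# slowly rises as "inhibition" falls. Generic caricature only.
import numpy as np

def time_to_transition(noise_amp, seed=0, dt=1e-3, t_max=100.0):
    rng = np.random.default_rng(seed)
    x, t = -1.0, 0.0                          # start in the "normal" well
    while t < t_max:
        c = -0.5 + 1.0 * (t / t_max)          # control parameter rises as inhibition decays
        drift = x - x**3 + c                  # double-well drift
        x += drift * dt + noise_amp * np.sqrt(dt) * rng.normal()
        if x > 0.8:                           # crossed into the "seizure" well
            return t
        t += dt
    return float("inf")

print("weak noise:   transition at t =", round(time_to_transition(0.10), 1))
print("strong noise: transition at t =", round(time_to_transition(0.30), 1))
```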

  19. Two-Phase Helical Computed Tomography Study of Salivary Gland Warthin Tumors: A Radiologic Findings and Surgical Applications

    PubMed Central

    Joo, Yeon Hee; Kim, Jin Pyeong; Park, Jung Je

    2014-01-01

    Objectives The goal of this study was to define the radiologic characteristics of two-phase computed tomography (CT) of salivary gland Warthin tumors and to compare them to pleomorphic adenomas. We also aimed to provide a foundation for selecting a surgical method on the basis of radiologic findings. Methods We prospectively enrolled 116 patients with parotid gland tumors, who underwent two-phase CT preoperatively. Early and delayed phase scans were obtained, with scanning delays of 30 and 120 seconds, respectively. The attenuation changes and enhancement patterns were analyzed. In cases when the attenuation changes were decreased, we presumed Warthin tumor preoperatively and performed extracapsular dissection. When the attenuation changes were increased, superficial parotidectomy was performed on the parotid gland tumors. We analyzed the operation times, incision sizes, complications, and recurrence rates. Results Attenuation of Warthin tumors was decreased from early to delayed scans. The ratio of CT numbers in Warthin tumors was also significantly different from other tumors. Warthin tumors were diagnosed with a sensitivity of 96.1% and specificity of 97% using two-phase CT. The mean operation time was 38 minutes and the mean incision size was 36.2 mm for Warthin tumors. However, for the other parotid tumors, the average operation time was 122 minutes and the average incision size was 91.8 mm (P<0.05). Conclusion Salivary Warthin tumor has a distinct pattern of contrast enhancement on two-phase CT, which can guide treatment decisions. The preoperative diagnosis of Warthin tumor made extracapsular dissection possible instead of superficial parotidectomy. PMID:25177439

  20. Two-phase helical computed tomography study of salivary gland warthin tumors: a radiologic findings and surgical applications.

    PubMed

    Joo, Yeon Hee; Kim, Jin Pyeong; Park, Jung Je; Woo, Seung Hoon

    2014-09-01

    The goal of this study was to define the radiologic characteristics of two-phase computed tomography (CT) of salivary gland Warthin tumors and to compare them to pleomorphic adenomas. We also aimed to provide a foundation for selecting a surgical method on the basis of radiologic findings. We prospectively enrolled 116 patients with parotid gland tumors, who underwent two-phase CT preoperatively. Early and delayed phase scans were obtained, with scanning delays of 30 and 120 seconds, respectively. The attenuation changes and enhancement patterns were analyzed. In cases when the attenuation changes were decreased, we presumed Warthin tumor preoperatively and performed extracapsular dissection. When the attenuation changes were increased, superficial parotidectomy was performed on the parotid gland tumors. We analyzed the operation times, incision sizes, complications, and recurrence rates. Attenuation of Warthin tumors was decreased from early to delayed scans. The ratio of CT numbers in Warthin tumors was also significantly different from other tumors. Warthin tumors were diagnosed with a sensitivity of 96.1% and specificity of 97% using two-phase CT. The mean operation time was 38 minutes and the mean incision size was 36.2 mm for Warthin tumors. However, for the other parotid tumors, the average operation time was 122 minutes and the average incision size was 91.8 mm (P<0.05). Salivary Warthin tumor has a distinct pattern of contrast enhancement on two-phase CT, which can guide treatment decisions. The preoperative diagnosis of Warthin tumor made extracapsular dissection possible instead of superficial parotidectomy.

  1. A hybrid meta-heuristic algorithm for the vehicle routing problem with stochastic travel times considering the driver's satisfaction

    NASA Astrophysics Data System (ADS)

    Tavakkoli-Moghaddam, Reza; Alinaghian, Mehdi; Salamat-Bakhsh, Alireza; Norouzi, Narges

    2012-05-01

    The vehicle routing problem is a significant problem that has attracted great attention from researchers in recent years. Its main objectives are to minimize the traveled distance, the total traveling time, the number of vehicles and the transportation cost function. Reducing these variables decreases the total cost and increases the driver's satisfaction level. This satisfaction, which decreases as service time increases, is an important logistic concern for a company. Stochastic travel times governed by a probability distribution lead to variation in service time, an effect that classical routing problems ignore. This paper addresses the problem of increasing service time by using stochastic times for each tour, such that the total traveling time of the vehicles stays within a specified limit with a defined probability. Since the vehicle routing problem is NP-hard and exact solutions are not practical at large scale, a hybrid algorithm based on simulated annealing with genetic operators is proposed to obtain an efficient solution at reasonable computational cost and time. Finally, for some small cases, the results of the proposed algorithm were compared with those obtained by the Lingo 8 software. The results indicate the efficiency of the proposed hybrid simulated annealing algorithm.
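
    For orientation, the sketch below shows a bare simulated-annealing loop with a simple segment-reversal move on a single tour; the paper's hybrid algorithm with genetic operators, multiple vehicles, stochastic travel times, and a probabilistic duration limit is not reproduced, and the instance data are random.

```python
# Bare simulated annealing on a single routing tour with a segment-reversal move.
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def anneal(dist, t0=10.0, cooling=0.995, iters=20000, seed=1):
    rng = random.Random(seed)
    tour = list(range(len(dist)))
    rng.shuffle(tour)
    best, best_len, temp = tour[:], tour_length(tour, dist), t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(len(tour)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # reverse one segment
        delta = tour_length(cand, dist) - tour_length(tour, dist)
        if delta < 0 or rng.random() < math.exp(-delta / temp):  # Metropolis acceptance
            tour = cand
            if tour_length(tour, dist) < best_len:
                best, best_len = tour[:], tour_length(tour, dist)
        temp *= cooling
    return best, best_len

if __name__ == "__main__":
    rng = random.Random(0)
    pts = [(rng.random(), rng.random()) for _ in range(15)]       # random customer locations
    dist = [[math.dist(a, b) for b in pts] for a in pts]
    print("best tour length:", round(anneal(dist)[1], 3))
```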

  2. Economic analysis of linking operating room scheduling and hospital material management information systems for just-in-time inventory control.

    PubMed

    Epstein, R H; Dexter, F

    2000-08-01

    Operating room (OR) scheduling information systems can decrease perioperative labor costs. Material management information systems can decrease perioperative inventory costs. We used computer simulation to investigate whether using the OR schedule to trigger purchasing of perioperative supplies is likely to further decrease perioperative inventory costs, as compared with using sophisticated, stand-alone material management inventory control. Although we designed the simulations to favor financially linking the information systems, we found that this strategy would be expected to decrease inventory costs substantially only for items of high price ($1000 each) and volume (>1000 used each year). Because expensive items typically have different models and sizes, each of which is used by a hospital less often than this, for almost all items there will be no benefit to making daily adjustments to the order volume based on booked cases. We conclude that, in a hospital with a sophisticated material management information system, OR managers will probably achieve greater cost reductions from focusing on negotiating less expensive purchase prices for items than on trying to link the OR information system with the hospital's material management information system to achieve just-in-time inventory control.

  3. Detailed mechanism of toluene oxidation and comparison with benzene

    NASA Technical Reports Server (NTRS)

    Bittker, David A.

    1988-01-01

    A detailed mechanism for the oxidation of toluene in both argon and nitrogen diluents is presented. The mechanism was used to compute experimentally measured ignition delay times for shock-heated toluene-oxygen-argon mixtures with reasonably good success over a wide range of initial temperatures and pressures. Attempts to compute experimentally measured concentration profiles for toluene oxidation in a turbulent reactor were partially successful. An extensive sensitivity analysis was performed to determine the reactions which control the ignition process and the rates of formation and destruction of various species. The most important step was found to be the reaction of toluene with molecular oxygen, followed by the reactions of hydroperoxyl and atomic oxygen with benzyl radicals. These findings contrast with benzene oxidation, where the benzene-molecular oxygen reaction is quite unimportant and the reaction of phenyl with molecular oxygen dominates. In the toluene mechanism the corresponding reaction of benzyl radicals with oxygen is unimportant. Two reactions which are important in the oxidation of benzene also influence the oxidation of toluene for several conditions. These are the oxidations of phenyl and cyclopentadienyl radicals by molecular oxygen. The mechanism presented successfully computes the decrease of toluene concentration with time in the nitrogen-diluted turbulent reactor. This fact, in addition to the good prediction of ignition delay times, shows that this mechanism can be used for modeling the ignition and combustion process in practical, well-mixed combustion systems.

  4. Effect of Mesoscale and Multiscale Modeling on the Performance of Kevlar Woven Fabric Subjected to Ballistic Impact: A Numerical Study

    NASA Astrophysics Data System (ADS)

    Jia, Xin; Huang, Zhengxiang; Zu, Xudong; Gu, Xiaohui; Xiao, Qiangqiang

    2013-12-01

    In this study, an optimal finite element model of Kevlar woven fabric that is more computationally efficient than existing models was developed to simulate ballistic impact onto fabric. The Kevlar woven fabric was modeled at the yarn-level architecture using hybrid elements analysis (HEA), which uses solid elements for the yarns at the impact region and shell elements for the yarns away from the impact region. Three HEA configurations were constructed, in which the solid element region was set to about one, two, and three times the projectile's diameter, with impact velocities of 30 m/s (non-perforation case) and 200 m/s (perforation case), to determine the optimal ratio between the solid element region and the shell element region. To further reduce computational time while maintaining the necessary accuracy, three multiscale models were also presented. These multiscale models combine a local region with yarn-level architecture, using the HEA approach, and a global region with homogenized architecture. The effect of varying the ratio of the local and global areas on the ballistic performance of the fabric is discussed. The deformation and damage mechanisms of the fabric were analyzed and compared among the numerical models. Simulation results indicate that the multiscale model based on HEA accurately reproduces the baseline results and markedly decreases computational time.

  5. Using quantum chemistry muscle to flex massive systems: How to respond to something perturbing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertoni, Colleen

    Computational chemistry uses the theoretical advances of quantum mechanics and the algorithmic and hardware advances of computer science to give insight into chemical problems. It is currently possible to do highly accurate quantum chemistry calculations, but the most accurate methods are very computationally expensive. Thus it is only feasible to do highly accurate calculations on small molecules, since typically more computationally efficient methods are also less accurate. The overall goal of my dissertation work has been to try to decrease the computational expense of calculations without decreasing the accuracy. In particular, my dissertation work focuses on fragmentation methods, intermolecular interaction methods, analytic gradients, and taking advantage of new hardware.

  6. QuickNGS elevates Next-Generation Sequencing data analysis to a new level of automation.

    PubMed

    Wagle, Prerana; Nikolić, Miloš; Frommolt, Peter

    2015-07-01

    Next-Generation Sequencing (NGS) has emerged as a widely used tool in molecular biology. While time and cost for the sequencing itself are decreasing, the analysis of the massive amounts of data remains challenging. Since multiple algorithmic approaches for the basic data analysis have been developed, there is now an increasing need to efficiently use these tools to obtain results in reasonable time. We have developed QuickNGS, a new workflow system for laboratories with the need to analyze data from multiple NGS projects at a time. QuickNGS takes advantage of parallel computing resources, a comprehensive back-end database, and a careful selection of previously published algorithmic approaches to build fully automated data analysis workflows. We demonstrate the efficiency of our new software by a comprehensive analysis of 10 RNA-Seq samples which we can finish in only a few minutes of hands-on time. The approach we have taken is suitable to process even much larger numbers of samples and multiple projects at a time. Our approach considerably reduces the barriers that still limit the usability of the powerful NGS technology and finally decreases the time to be spent before proceeding to further downstream analysis and interpretation of the data.

  7. Computers in the exam room: differences in physician-patient interaction may be due to physician experience.

    PubMed

    Rouf, Emran; Whittle, Jeff; Lu, Na; Schwartz, Mark D

    2007-01-01

    The use of electronic medical records can improve the technical quality of care, but requires a computer in the exam room. This could adversely affect interpersonal aspects of care, particularly when physicians are inexperienced users of exam room computers. To determine whether physician experience modifies the impact of exam room computers on the physician-patient interaction. Cross-sectional surveys of patients and physicians. One hundred fifty five adults seen for scheduled visits by 11 faculty internists and 12 internal medicine residents in a VA primary care clinic. Physician and patient assessment of the effect of the computer on the clinical encounter. Patients seeing residents, compared to those seeing faculty, were more likely to agree that the computer adversely affected the amount of time the physician spent talking to (34% vs 15%, P = 0.01), looking at (45% vs 24%, P = 0.02), and examining them (32% vs 13%, P = 0.009). Moreover, they were more likely to agree that the computer made the visit feel less personal (20% vs 5%, P = 0.017). Few patients thought the computer interfered with their relationship with their physicians (8% vs 8%). Residents were more likely than faculty to report these same adverse effects, but these differences were smaller and not statistically significant. Patients seen by residents more often agreed that exam room computers decreased the amount of interpersonal contact. More research is needed to elucidate key tasks and behaviors that facilitate doctor-patient communication in such a setting.

  8. Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea

    2015-09-01

    The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs are required. An important aspect is that even though computational power is steadily growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution that is being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and employing surrogate models instead of the actual simulation codes. This report focuses on the use of reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
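
    The surrogate-model idea can be sketched as follows: run the expensive code only a handful of times, fit a cheap approximation to its response, and evaluate the surrogate for the many samples a probabilistic analysis needs. A scikit-learn Gaussian process stands in for the surrogate and a closed-form function stands in for the simulation code; neither is taken from the report.

```python
# Surrogate-model sketch: few expensive runs, many cheap surrogate evaluations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_simulation(x):                        # placeholder for a long-running code
    return np.sin(3 * x) + 0.3 * x**2

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 3, size=(20, 1))           # a handful of "simulation" runs
y_train = expensive_simulation(x_train).ravel()

surrogate = GaussianProcessRegressor().fit(x_train, y_train)

x_mc = rng.uniform(0, 3, size=(100_000, 1))         # cheap Monte Carlo samples
y_mc = surrogate.predict(x_mc)
print("estimated P(response > 1.5):", float(np.mean(y_mc > 1.5)))
```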

  9. Spatial and temporal distributions of surface mass balance between Concordia and Vostok stations, Antarctica, from combined radar and ice core data: first results and detailed error analysis

    NASA Astrophysics Data System (ADS)

    Le Meur, Emmanuel; Magand, Olivier; Arnaud, Laurent; Fily, Michel; Frezzotti, Massimo; Cavitte, Marie; Mulvaney, Robert; Urbini, Stefano

    2018-05-01

    Results from ground-penetrating radar (GPR) measurements and shallow ice cores carried out during a scientific traverse between Dome Concordia (DC) and Vostok stations are presented in order to infer both spatial and temporal characteristics of snow accumulation over the East Antarctic Plateau. Spatially continuous accumulation rates along the traverse are computed from the identification of three equally spaced radar reflections spanning about the last 600 years. Accurate dating of these internal reflection horizons (IRHs) is obtained from a depth-age relationship derived from volcanic horizons and bomb testing fallouts on a DC ice core and shows a very good consistency when tested against extra ice cores drilled along the radar profile. Accumulation rates are then inferred by accounting for density profiles down to each IRH. For the latter purpose, a careful error analysis showed that using a single and more accurate density profile along a DC core provided more reliable results than trying to include the potential spatial variability in density from extra (but less accurate) ice cores distributed along the profile. The most striking feature is an accumulation pattern that remains constant through time with persistent gradients such as a marked decrease from 26 mm w.e. yr-1 at DC to 20 mm w.e. yr-1 at the south-west end of the profile over the last 234 years on average (with a similar decrease from 25 to 19 mm w.e. yr-1 over the last 592 years). As for the time dependency, despite an overall consistency with similar measurements carried out along the main East Antarctic divides, interpreting possible trends remains difficult. Indeed, error bars in our measurements are still too large to unambiguously infer an apparent time increase in accumulation rate. For the proposed absolute values, maximum margins of error are in the range 4 mm w.e. yr-1 (last 234 years) to 2 mm w.e. yr-1 (last 592 years), a decrease with depth mainly resulting from the time-averaging when computing accumulation rates.
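
    For a sense of the bookkeeping involved, the sketch below converts the firn column above a single dated internal reflection horizon to water equivalent using a density profile and divides by the horizon's age; the depth, density profile, and age are invented placeholders rather than values from the traverse.

```python
# Accumulation rate from one dated internal reflection horizon (IRH).
import numpy as np

depth = np.linspace(0.0, 25.0, 251)                         # m, firn column above the IRH
density = 350.0 + 550.0 * (1.0 - np.exp(-depth / 30.0))     # kg/m^3, toy firn density profile
irh_age_years = 592.0                                       # age of the IRH (hypothetical)

# Integrate density over depth: kg/m^2 of firn, numerically equal to mm w.e.
mm_we = float(np.sum(0.5 * (density[1:] + density[:-1]) * np.diff(depth)))
print(f"mean accumulation ≈ {mm_we / irh_age_years:.1f} mm w.e. per year")
```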

  10. Stents: Biomechanics, Biomaterials, and Insights from Computational Modeling.

    PubMed

    Karanasiou, Georgia S; Papafaklis, Michail I; Conway, Claire; Michalis, Lampros K; Tzafriri, Rami; Edelman, Elazer R; Fotiadis, Dimitrios I

    2017-04-01

    Coronary stents have revolutionized the treatment of coronary artery disease. Improvement in clinical outcomes requires detailed evaluation of the performance of stent biomechanics and the effectiveness as well as safety of biomaterials aiming at optimization of endovascular devices. Stents need to harmonize the hemodynamic environment and promote beneficial vessel healing processes with decreased thrombogenicity. Stent design variables and expansion properties are critical for vessel scaffolding. Drug elution from stents can help inhibit in-stent restenosis but adds further complexity, as drug release kinetics and coating formulations can dominate tissue responses. Biodegradable and bioabsorbable stents go one step further, providing complete absorption over time governed by corrosion and erosion mechanisms. The advances in computing power and computational methods have enabled the application of numerical simulations and the in silico evaluation of the performance of stent devices made up of complex alloys and bioerodible materials in a range of dimensions and designs and with the capacity to retain and elute bioactive agents. This review presents the current knowledge on stent biomechanics, stent fatigue as well as drug release and mechanisms governing biodegradability, focusing on the insights from computational modeling approaches.

  11. Using Cellular Automata for Parking Recommendations in Smart Environments

    PubMed Central

    Horng, Gwo-Jiun

    2014-01-01

    In this work, we propose an innovative adaptive recommendation mechanism for smart parking. The cognitive RF module will transmit the vehicle location information and the parking space requirements to the parking congestion computing center (PCCC) when the driver must find a parking space. Moreover, for the parking spaces, we use a cellular automata (CA) model that can adapt to both full and not-full parking lot situations. Here, the PCCC can compute the nearest parking lot, the parking lot status and the current or opposite driving direction from the vehicle location information. By considering the driving direction, we can determine when the vehicles must turn around and thus reduce road congestion and speed up finding a parking space. The recommendation will be sent to the drivers through a wireless communication cognitive radio (CR) module after the computation and analysis by the PCCC. The current study evaluates the performance of this approach by conducting computer simulations. The simulation results show the strengths of the proposed smart parking mechanism in terms of avoiding increased congestion and decreasing the time to find a parking space. PMID:25153671

  12. Computational materials chemistry for carbon capture using porous materials

    NASA Astrophysics Data System (ADS)

    Sharma, Abhishek; Huang, Runhong; Malani, Ateeque; Babarao, Ravichandar

    2017-11-01

    Control over carbon dioxide (CO2) release is extremely important to decrease its hazardous effects on the environment such as global warming, ocean acidification, etc. For CO2 capture and storage at industrial point sources, nanoporous materials offer an energetically viable and economically feasible approach compared to chemisorption in amines. There is a growing need to design and synthesize new nanoporous materials with enhanced capability for carbon capture. Computational materials chemistry offers tools to screen and design cost-effective materials for CO2 separation and storage, and it is less time consuming compared to trial and error experimental synthesis. It also provides a guide to synthesize new materials with better properties for real world applications. In this review, we briefly highlight the various carbon capture technologies and the need of computational materials design for carbon capture. This review discusses the commonly used computational chemistry-based simulation methods for structural characterization and prediction of thermodynamic properties of adsorbed gases in porous materials. Finally, simulation studies reported on various potential porous materials, such as zeolites, porous carbon, metal organic frameworks (MOFs) and covalent organic frameworks (COFs), for CO2 capture are discussed.

  13. Atmospheric-radiation boundary conditions for high-frequency waves in time-distance helioseismology

    NASA Astrophysics Data System (ADS)

    Fournier, D.; Leguèbe, M.; Hanson, C. S.; Gizon, L.; Barucq, H.; Chabassier, J.; Duruflé, M.

    2017-12-01

    The temporal covariance between seismic waves measured at two locations on the solar surface is the fundamental observable in time-distance helioseismology. Above the acoustic cut-off frequency (about 5.3 mHz), waves are not trapped in the solar interior and the covariance function can be used to probe the upper atmosphere. We wish to implement appropriate radiative boundary conditions for computing the propagation of high-frequency waves in the solar atmosphere. We consider recently developed and published radiative boundary conditions for atmospheres in which the sound speed is constant and the density decreases exponentially with radius. We compute the cross-covariance function using a finite element method in spherical geometry and in the frequency domain. The ratio between first- and second-skip amplitudes in the time-distance diagram is used as a diagnostic to compare boundary conditions and to compare with observations. We find that a boundary condition applied 500 km above the photosphere and derived under the approximation of small angles of incidence accurately reproduces the "infinite atmosphere" solution for high-frequency waves. When the radiative boundary condition is applied 2 Mm above the photosphere, we find that the choice of atmospheric model affects the time-distance diagram. In particular, the time-distance diagram exhibits a double-ridge structure when using a Vernazza Avrett Loeser atmospheric model.

  14. A micro-epidemic model for primary dengue infection

    NASA Astrophysics Data System (ADS)

    Mishra, Arti; Gakkhar, Sunita

    2017-06-01

    In this paper, a micro-epidemic non-linear dynamical model has been proposed and analyzed for primary dengue infection. The model incorporates the effects of the T cell immune response as well as the humoral response during the pathogenesis of dengue infection. A time delay has been accounted for in the production of antibodies from B cells. The basic reproduction number (R0) has been computed. Three equilibrium states are obtained. The existence and stability conditions for the infection-free and ineffective cellular immune response states have been discussed. The conditions for existence of the endemic state have been obtained. Further, the parametric region is obtained where the system exhibits complex behavior. The threshold value of the time delay has been computed which is critical for a change in stability of the endemic state. A threshold level for the antibody production rate has been obtained over which the infection will die out even though R0 > 1. The model is in line with the clinical observation that viral load decreases within 7-14 days from the onset of primary infection.
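
    A minimal sketch of the delayed-feedback structure described above is given below, with antibody production at time t driven by the viral load tau days earlier; the equations and parameter values are purely illustrative and are not the model analyzed in the paper.

```python
# Delayed antibody feedback on a logistic viral-load model, integrated with
# forward Euler and a history buffer for the delayed term. Illustrative only.
import numpy as np

r, K, k, p, d = 1.5, 10.0, 0.8, 0.5, 0.3    # growth, capacity, clearance, production, decay
tau, dt, t_end = 2.0, 0.01, 25.0            # antibody production delay (days), step, horizon
steps, lag = int(t_end / dt), int(tau / dt)

V = np.zeros(steps + 1)                     # viral load
A = np.zeros(steps + 1)                     # antibody level
V[0] = 1e-3                                 # small inoculum; history V(t < 0) = V[0]

for n in range(steps):
    V_delayed = V[n - lag] if n >= lag else V[0]
    V[n + 1] = V[n] + dt * (r * V[n] * (1 - V[n] / K) - k * A[n] * V[n])
    A[n + 1] = A[n] + dt * (p * V_delayed - d * A[n])

print(f"viral load peaks near day {np.argmax(V) * dt:.1f} and then declines")
```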

  15. An implicit spatial and high-order temporal finite difference scheme for 2D acoustic modelling

    NASA Astrophysics Data System (ADS)

    Wang, Enjiang; Liu, Yang

    2018-01-01

    The finite difference (FD) method exhibits great superiority over other numerical methods due to its easy implementation and small computational requirements. We propose an effective FD method, characterised by implicit spatial and high-order temporal schemes, to reduce both the temporal and spatial dispersions simultaneously. For the temporal derivative, apart from the conventional second-order FD approximation, a special rhombus FD scheme is included to reach high-order accuracy in time. Compared with the Lax-Wendroff FD scheme, this scheme can achieve nearly the same temporal accuracy but requires fewer floating-point operations and thus less computational cost when the same operator length is adopted. For the spatial derivatives, we adopt the implicit FD scheme to improve the spatial accuracy. Apart from the existing Taylor series expansion-based FD coefficients, we derive implicit spatial FD coefficients based on least-squares optimisation. Dispersion analysis and modelling examples demonstrate that our proposed method can effectively decrease both the temporal and spatial dispersions and thus provide more accurate wavefields.
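
    For orientation, the sketch below steps a standard explicit finite-difference scheme (second order in both time and space) for the 2D acoustic wave equation; the implicit spatial operators and the high-order rhombus temporal scheme proposed in the paper are not reproduced, and the grid, velocity, and source are arbitrary.

```python
# Standard explicit second-order FD update for the 2D acoustic wave equation.
# Boundaries are periodic via np.roll just to keep the sketch short.
import numpy as np

nx = nz = 201
dx = 10.0                          # grid spacing (m)
c = 2000.0                         # constant velocity (m/s)
dt = 0.4 * dx / c                  # satisfies the 2D CFL limit c*dt/dx <= 1/sqrt(2)
nt = 300

u_prev = np.zeros((nz, nx))
u = np.zeros((nz, nx))
u[nz // 2, nx // 2] = 1.0          # crude initial impulse in place of a source wavelet

for _ in range(nt):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
    u_next = 2.0 * u - u_prev + (c * dt) ** 2 * lap
    u_prev, u = u, u_next

print("wavefield RMS after", nt, "steps:", float(np.sqrt(np.mean(u**2))))
```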

  16. Combining point context and dynamic time warping for online gesture recognition

    NASA Astrophysics Data System (ADS)

    Mao, Xia; Li, Chen

    2017-05-01

    Previous gesture recognition methods usually focused on recognizing gestures after the entire gesture sequences were obtained. However, in many practical applications, a system has to identify gestures before they end in order to give instant feedback. We present an online gesture recognition approach that can realize early recognition of unfinished gestures with low latency. First, a curvature buffer-based point context (CBPC) descriptor is proposed to extract the shape feature of a gesture trajectory. The CBPC descriptor is a complete descriptor with a simple computation and is therefore well suited to online scenarios. Then, we introduce an online windowed dynamic time warping algorithm to realize online matching between the ongoing gesture and the template gestures. In the algorithm, computational complexity is effectively decreased by adding a sliding window to the accumulative distance matrix. Lastly, experiments are conducted on the Australian sign language data set and the Kinect hand gesture (KHG) data set. Results show that the proposed method outperforms other state-of-the-art methods, especially when gesture information is incomplete.
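
    The idea of restricting the accumulated-distance matrix can be sketched with a banded (Sakoe-Chiba-style) dynamic time warping routine, shown below on two synthetic 1-D sequences; the paper's online, incrementally updated variant and the CBPC shape descriptor are not reproduced.

```python
# Dynamic time warping restricted to a band around the diagonal of the
# accumulated-distance matrix, which bounds the work per row.
import numpy as np

def banded_dtw(a, b, band=10):
    n, m = len(a), len(b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        lo, hi = max(1, i - band), min(m, i + band)
        for j in range(lo, hi + 1):
            cost = abs(a[i - 1] - b[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m]

t = np.linspace(0, 2 * np.pi, 100)
print("banded DTW distance:", round(float(banded_dtw(np.sin(t), np.sin(t + 0.3))), 3))
```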

  17. Impact of reduced-radiation dual-energy protocols using 320-detector row computed tomography for analyzing urinary calculus components: initial in vitro evaluation.

    PubMed

    Cai, Xiangran; Zhou, Qingchun; Yu, Juan; Xian, Zhaohui; Feng, Youzhen; Yang, Wencai; Mo, Xukai

    2014-10-01

    To evaluate the impact of reduced-radiation dual-energy (DE) protocols using 320-detector row computed tomography on the differentiation of urinary calculus components. A total of 58 urinary calculi were placed into the same phantom and underwent DE scanning with 320-detector row computed tomography. Each calculus was scanned 4 times with the DE protocols using 135 kV and 80 kV tube voltage and different tube current combinations, including 100 mA and 570 mA (group A), 50 mA and 290 mA (group B), 30 mA and 170 mA (group C), and 10 mA and 60 mA (group D). The acquisition data of all 4 groups were then analyzed by stone DE analysis software, and the results were compared with x-ray diffraction analysis. Noise, contrast-to-noise ratio, and radiation dose were compared. Calculi were correctly identified in 56 of 58 stones (96.6%) using group A and B protocols. However, only 35 stones (60.3%) and 16 stones (27.6%) were correctly diagnosed using group C and D protocols, respectively. Mean noise increased significantly and mean contrast-to-noise ratio decreased significantly from groups A to D (P <.05). In addition, the effective dose decreased markedly from groups A to D at 3.78, 1.81, 1.07, and 0.37 mSv, respectively. Decreasing the DE tube currents from 100 mA and 570 mA to 50 mA and 290 mA resulted in 96.6% accuracy for urinary calculus component analysis while reducing patient radiation exposure to 1.81 mSv. Further reduction of tube currents may compromise diagnostic accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Small scale sequence automation pays big dividends

    NASA Technical Reports Server (NTRS)

    Nelson, Bill

    1994-01-01

    Galileo sequence design and integration are supported by a suite of formal software tools. Sequence review, however, is largely a manual process with reviewers scanning hundreds of pages of cryptic computer printouts to verify sequence correctness. Beginning in 1990, a series of small, PC based sequence review tools evolved. Each tool performs a specific task but all have a common 'look and feel'. The narrow focus of each tool means simpler operation, and easier creation, testing, and maintenance. Benefits from these tools are (1) decreased review time by factors of 5 to 20 or more with a concomitant reduction in staffing, (2) increased review accuracy, and (3) excellent returns on time invested.

  19. Secular trends: a ten-year comparison of the amount and type of physical activity and inactivity of random samples of adolescents in the Czech Republic

    PubMed Central

    2011-01-01

    Background An optimal level of physical activity (PA) in adolescence influences the level of PA in adulthood. Although PA declines with age have been demonstrated repeatedly, few studies have been carried out on secular trends. The present study assessed levels, types and secular trends of PA and sedentary behaviour of a sample of adolescents in the Czech Republic. Methods The study comprised two cross-sectional cohorts of adolescents ten years apart. The analysis compared data collected through a week-long monitoring of adolescents' PA in 1998-2000 and 2008-2010. Adolescents wore either Yamax SW-701 or Omron HJ-105 pedometer continuously for 7 days (at least 10 hours per day) excluding sleeping, hygiene and bathing. They also recorded their number of steps per day, the type and duration of PA and sedentary behaviour (in minutes) on record sheets. In total, 902 adolescents (410 boys; 492 girls) aged 14-18 were eligible for analysis. Results Overweight and obesity in Czech adolescents participating in this study increased from 5.5% (older cohort, 1998-2000) to 10.4% (younger cohort, 2008-2010). There were no inter-cohort significant changes in the total amount of sedentary behaviour in boys. However in girls, on weekdays, there was a significant increase in the total duration of sedentary behaviour of the younger cohort (2008-2010) compared with the older one (1998-2000). Studying and screen time (television and computer) were among the main sedentary behaviours in Czech adolescents. The types of sedentary behaviour also changed: watching TV (1998-2000) was replaced by time spent on computers (2008-2010). The Czech health-related criterion (achieving 11,000 steps per day) decreased only in boys from 68% (1998-2000) to 55% (2008-2010). Across both genders, 55%-75% of Czech adolescents met the health-related criterion of recommended steps per day, however less participants in the younger cohort (2008-2010) met this criterion than in the older cohort (1998-2000) ten years ago. Adolescents' PA levels for the monitored periods of 1998-2000 and 2008-2010 suggest a secular decrease in the weekly number of steps achieved by adolescent boys and girls. Conclusion In the younger cohort (2008-2010), every tenth adolescent was either overweight or obese; roughly twice the rate when compared to the older cohort (1998-2000). Sedentary behaviour seems relatively stable across the two cohorts as the increased time that the younger cohort (2008-2010) spent on computers is compensated with an equally decreased time spent watching TV or studying. Across both cohorts about half to three quarters of the adolescents met the health-related criterion for achieved number of steps. The findings show a secular decrease in PA amongst adolescents. The significant interaction effects (cohort × age; and cohort × gender) that this study found suggested that secular trends in PA differ by age and gender. PMID:21943194

  20. Secular trends: a ten-year comparison of the amount and type of physical activity and inactivity of random samples of adolescents in the Czech Republic.

    PubMed

    Sigmundová, Dagmar; El Ansari, Walid; Sigmund, Erik; Frömel, Karel

    2011-09-26

    An optimal level of physical activity (PA) in adolescence influences the level of PA in adulthood. Although PA declines with age have been demonstrated repeatedly, few studies have been carried out on secular trends. The present study assessed levels, types and secular trends of PA and sedentary behaviour of a sample of adolescents in the Czech Republic. The study comprised two cross-sectional cohorts of adolescents ten years apart. The analysis compared data collected through a week-long monitoring of adolescents' PA in 1998-2000 and 2008-2010. Adolescents wore either Yamax SW-701 or Omron HJ-105 pedometer continuously for 7 days (at least 10 hours per day) excluding sleeping, hygiene and bathing. They also recorded their number of steps per day, the type and duration of PA and sedentary behaviour (in minutes) on record sheets. In total, 902 adolescents (410 boys; 492 girls) aged 14-18 were eligible for analysis. Overweight and obesity in Czech adolescents participating in this study increased from 5.5% (older cohort, 1998-2000) to 10.4% (younger cohort, 2008-2010). There were no inter-cohort significant changes in the total amount of sedentary behaviour in boys. However in girls, on weekdays, there was a significant increase in the total duration of sedentary behaviour of the younger cohort (2008-2010) compared with the older one (1998-2000). Studying and screen time (television and computer) were among the main sedentary behaviours in Czech adolescents. The types of sedentary behaviour also changed: watching TV (1998-2000) was replaced by time spent on computers (2008-2010). The Czech health-related criterion (achieving 11,000 steps per day) decreased only in boys from 68% (1998-2000) to 55% (2008-2010). Across both genders, 55%-75% of Czech adolescents met the health-related criterion of recommended steps per day; however, fewer participants in the younger cohort (2008-2010) met this criterion than in the older cohort (1998-2000) ten years ago. Adolescents' PA levels for the monitored periods of 1998-2000 and 2008-2010 suggest a secular decrease in the weekly number of steps achieved by adolescent boys and girls. In the younger cohort (2008-2010), every tenth adolescent was either overweight or obese; roughly twice the rate when compared to the older cohort (1998-2000). Sedentary behaviour seems relatively stable across the two cohorts as the increased time that the younger cohort (2008-2010) spent on computers is compensated by an equally decreased time spent watching TV or studying. Across both cohorts about half to three quarters of the adolescents met the health-related criterion for achieved number of steps. The findings show a secular decrease in PA amongst adolescents. The significant interaction effects (cohort × age; and cohort × gender) that this study found suggested that secular trends in PA differ by age and gender.

  1. Land use change and precipitation feedbacks across the Tropics

    NASA Astrophysics Data System (ADS)

    McCurley, K.; Jawitz, J. W.

    2017-12-01

    We investigated the relationship between agricultural land expansion, resulting in deforestation in the Tropics (South America, Africa, and Southeast Asia), and the local/regional hydroclimatic cycle. We hypothesized that changes in physical catchment properties in recent decades have resulted in measurable impacts on elements of the water budget, specifically evapotranspiration and precipitation. Using high-resolution gridded global precipitation and potential evapotranspiration data, as well as discharge time series (1960-2007) from the Global Runoff Data Center, we computed the components of the water budget at the catchment scale for 81 tropical basins that have experienced land use change. We estimated the landscape-driven component of evapotranspiration for two time periods, 1960-1983 and 1984-2007, and compared it to the relative change in forest cover over time. The findings show a negative relationship between the landscape-driven component of evapotranspiration and deforestation, suggesting that a decrease in forest cover causes a decrease in evapotranspiration. We further illustrate how this dynamic affects basin-scale water availability under land use change driven by agricultural production, including a potential negative feedback of agricultural area expansion on precipitation recycling.

  2. A fully automated and scalable timing probe-based method for time alignment of the LabPET II scanners

    NASA Astrophysics Data System (ADS)

    Samson, Arnaud; Thibaudeau, Christian; Bouchard, Jonathan; Gaudin, Émilie; Paulin, Caroline; Lecomte, Roger; Fontaine, Réjean

    2018-05-01

    A fully automated time alignment method based on a positron timing probe was developed to correct the channel-to-channel coincidence time dispersion of the LabPET II avalanche photodiode-based positron emission tomography (PET) scanners. The timing probe was designed to directly detect positrons and generate an absolute time reference. The probe-to-channel coincidences are recorded and processed using firmware embedded in the scanner hardware to compute the time differences between detector channels. The time corrections are then applied in real-time to each event in every channel during PET data acquisition to align all coincidence time spectra, thus enhancing the scanner time resolution. When applied to the mouse version of the LabPET II scanner, the calibration of 6 144 channels was performed in less than 15 min and showed a 47% improvement on the overall time resolution of the scanner, decreasing from 7 ns to 3.7 ns full width at half maximum (FWHM).
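
    The alignment principle can be sketched as follows: with the probe acting as an absolute reference, each channel's correction is essentially the negative mean of its probe-to-channel time differences, applied to subsequent event timestamps. The channel count, jitter, and skew values below are synthetic, and the scanner firmware implementation is not shown.

```python
# Per-channel time alignment against an absolute reference, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_channels = 64
true_skew = rng.uniform(-3.0, 3.0, n_channels)            # ns, unknown in practice

# Simulated probe-to-channel coincidences: measured dt = channel skew + timing jitter.
measured = {ch: true_skew[ch] + rng.normal(0, 1.5, 500) for ch in range(n_channels)}

corrections = {ch: -np.mean(dts) for ch, dts in measured.items()}

def align(channel, event_time_ns):
    """Apply the per-channel correction to an event timestamp."""
    return event_time_ns + corrections[channel]

residual = [true_skew[ch] + corrections[ch] for ch in range(n_channels)]
print("residual skew after alignment (ns, RMS):", float(np.std(residual)))
```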

  3. GPUs benchmarking in subpixel image registration algorithm

    NASA Astrophysics Data System (ADS)

    Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier

    2015-05-01

    Image registration techniques are used across different scientific fields, such as medical imaging and optical metrology. The most direct way to calculate the shift between two images is cross correlation, taking the location of the highest value in the correlation image. The shift is then resolved in whole pixels, which may not be sufficient for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique, but the memory needed by the system is significantly higher. To avoid this memory consumption we implement a subpixel shifting method based on the FFT. Starting from the original images, subpixel shifts can be achieved by multiplying their discrete Fourier transforms by linear phases with different slopes. This method is highly time consuming because checking each candidate shift requires new calculations. The algorithm, being highly parallelizable, is very suitable for high-performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs provide hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images, obtaining a first approximation by FFT-based correlation and then refining it to subpixel resolution using the technique described above; we consider this a `brute force' method. We present a benchmark of the algorithm consisting of a first approximation (pixel resolution) followed by subpixel refinement, decreasing the shift step in every loop to achieve high resolution in a few steps. This program is executed on three different computers. Finally, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of the use of GPUs.
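
    A CPU-only sketch of the brute-force procedure described above is given below: a coarse pixel-level estimate from the FFT cross-correlation peak, followed by candidate subpixel shifts applied as linear phase ramps in the Fourier domain, keeping the shift that maximizes the correlation and halving the step each loop. The image size, shift, and step schedule are arbitrary, and no GPU code is included.

```python
# FFT-based registration: coarse cross-correlation peak, then brute-force
# subpixel refinement via Fourier-domain phase ramps.
import numpy as np

def shift_image(img, dy, dx):
    """Shift an image by (dy, dx) pixels using a linear phase in Fourier space."""
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * phase))

def register(ref, mov, refine_steps=3):
    # Coarse estimate: peak of the FFT-based cross-correlation (with wraparound).
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = [p if p < s // 2 else p - s for p, s in zip(peak, corr.shape)]
    step = 0.5
    for _ in range(refine_steps):              # brute-force subpixel refinement
        candidates = [(dy + a * step, dx + b * step)
                      for a in (-1, 0, 1) for b in (-1, 0, 1)]
        dy, dx = max(candidates,
                     key=lambda s_: np.sum(ref * shift_image(mov, -s_[0], -s_[1])))
        step /= 2
    return dy, dx

rng = np.random.default_rng(0)
ref = rng.normal(size=(64, 64))
mov = shift_image(ref, 3.25, -1.5)             # known synthetic shift
print(register(ref, mov))                      # approximately (3.25, -1.5)
```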

  4. Pure JavaScript Storyline Layout Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    This is a JavaScript library for a storyline layout algorithm. Storylines are adept at communicating complex change by encoding time on the x-axis and using the proximity of lines in the y direction to represent interaction between entities. The library in this disclosure takes as input a list of objects containing an id, time, and state. The output is a data structure that can be used to conveniently render a storyline visualization. Most importantly, the library computes the y-coordinates of the entities over time so as to decrease layout artifacts including crossings, wiggles, and whitespace. This is accomplished through a multi-objective, multi-stage optimization problem, where the output of one stage produces input and constraints for the next stage.

  5. Multiplexed memory-insensitive quantum repeaters.

    PubMed

    Collins, O A; Jenkins, S D; Kuzmich, A; Kennedy, T A B

    2007-02-09

    Long-distance quantum communication via distant pairs of entangled quantum bits (qubits) is the first step towards secure message transmission and distributed quantum computing. To date, the most promising proposals require quantum repeaters to mitigate the exponential decrease in communication rate due to optical fiber losses. However, these are exquisitely sensitive to the lifetimes of their memory elements. We propose a multiplexing of quantum nodes that should enable the construction of quantum networks that are largely insensitive to the coherence times of the quantum memory elements.

  6. The Medical Duty Officer: An Attempt to Mitigate the Ambulance At-Hospital Interval

    PubMed Central

    Halliday, Megan H.; Bouland, Andrew J.; Lawner, Benjamin J.; Comer, Angela C.; Ramos, Daniel C.; Fletcher, Mark

    2016-01-01

    Introduction A lack of coordination between emergency medical services (EMS), emergency departments (ED) and systemwide management has contributed to extended ambulance at-hospital times at local EDs. In an effort to improve communication within the local EMS system, the Baltimore City Fire Department (BCFD) placed a medical duty officer (MDO) in the fire communications bureau. It was hypothesized that any real-time intervention suggested by the MDO would be manifested in a decrease in the EMS at-hospital time. Methods The MDO was implemented on November 11, 2013. A senior EMS paramedic was assigned to the position and was placed in the fire communication bureau from 9 a.m. to 9 p.m., seven days a week. We defined the pre-intervention period as August 2013 – October 2013 and the post-intervention period as December 2013 – February 2014. We also compared the post-intervention period to the “seasonal match control” one year earlier to adjust for seasonal variation in EMS volume. The MDO was tasked with the prospective management of city EMS resources through intensive monitoring of unit availability and hospital ED traffic. The MDO could suggest alternative transport destinations in the event of ED crowding. We collected and analyzed data from BCFD computer-aided dispatch (CAD) system for the following: ambulance response times, ambulance at-hospital interval, hospital diversion and alert status, and “suppression wait time” (defined as the total time suppression units remained on scene until ambulance arrival). The data analysis used a pre/post intervention design to examine the MDO impact on the BCFD EMS system. Results There were a total of 15,567 EMS calls during the pre-intervention period, 13,921 in the post-intervention period and 14,699 in the seasonal match control period one year earlier. The average at-hospital time decreased by 1.35 minutes from pre- to post-intervention periods and 4.53 minutes from the pre- to seasonal match control, representing a statistically significant decrease in this interval. There was also a statistically significant decrease in hospital alert time (approximately 1,700 hour decrease pre- to post-intervention periods) and suppression wait time (less than one minute decrease from pre- to post- and pre- to seasonal match control periods). The decrease in ambulance response time was not statistically significant. Conclusion Proactive deployment of a designated MDO was associated with a small, contemporaneous reduction in at-hospital time within an urban EMS jurisdiction. This project emphasized the importance of better communication between EMS systems and area hospitals as well as uniform reporting of variables for future iterations of this and similar projects. PMID:27625737

  7. Differential transcriptional regulation by alternatively designed mechanisms: A mathematical modeling approach.

    PubMed

    Yildirim, Necmettin; Aktas, Mehmet Emin; Ozcan, Seyma Nur; Akbas, Esra; Ay, Ahmet

    2017-01-01

    Cells maintain cellular homeostasis by employing different regulatory mechanisms to respond to external stimuli. We study two groups of signal-dependent transcriptional regulatory mechanisms. In the first group, we assume that repressor and activator proteins compete for binding to the same regulatory site on DNA (competitive mechanisms). In the second group, they can bind to different regulatory regions in a noncompetitive fashion (noncompetitive mechanisms). For both competitive and noncompetitive mechanisms, we studied the gene expression dynamics by increasing the repressor or decreasing the activator abundance (inhibition mechanisms), or by decreasing the repressor or increasing the activator abundance (activation mechanisms). We employed delay differential equation models. Our simulation results show that the competitive and noncompetitive inhibition mechanisms exhibit comparable repression effectiveness. However, the response time is fastest in the noncompetitive inhibition mechanism with increased repressor abundance, and slowest in the competitive inhibition mechanism with increased repressor level. The competitive and noncompetitive inhibition mechanisms through decreased activator abundance show comparable and moderate response times, while the competitive and noncompetitive activation mechanisms with increased activator protein level display a more effective and faster response. Our study exemplifies the importance of mathematical modeling and computer simulation in the analysis of gene expression dynamics.

  8. The use of a surveillance system to measure changes in mental health in Australian adults during the global financial crisis.

    PubMed

    Shi, Zumin; Taylor, Anne W; Goldney, Robert; Winefield, Helen; Gill, Tiffany K; Tuckerman, Jane; Wittert, Gary

    2011-08-01

    This study aimed to describe trends in a range of mental health indicators in South Australia where a surveillance system has been in operation since July 2002 and assess the impact of the global financial crisis (GFC). Data were collected using a risk factor surveillance system. Participants, aged 16 years and above, were asked about doctor-diagnosed anxiety, stress or depression, suicidal ideation, psychological distress (PD), demographic and socioeconomic factors using Computer-Assisted Telephone Interviewing (CATI). Overall, there was a decreasing trend in the prevalence of PD between 2002 and 2009. Stress has decreased since 2004 although anxiety has increased. Comparing 2008 or 2009 (the economic crisis period) with 2005 or 2007, there was significant increase in anxiety for part-time workers but a decrease for full-time workers. There were significant differences for stress by various demographic variables. The overall prevalence of mental health conditions has not increased during the GFC. Some subgroups in the population have been disproportionately impacted by changes in mental health status. The use of a surveillance system enables rapid and specifically targeted public health and policy responses to socioeconomic and environmental stressors, and the evaluation of outcomes.

  9. Collaborative Interventions Reduce Time-to-Thrombolysis for Acute Ischemic Stroke in a Public Safety Net Hospital.

    PubMed

    Threlkeld, Zachary D; Kozak, Benjamin; McCoy, David; Cole, Sara; Martin, Christine; Singh, Vineeta

    2017-07-01

    Shorter time-to-thrombolysis in acute ischemic stroke (AIS) is associated with improved functional outcome and reduced morbidity. We evaluate the effect of several interventions to reduce time-to-thrombolysis at an urban, public safety net hospital. All patients treated with tissue plasminogen activator for AIS at our institution between 2008 and 2015 were included in a retrospective analysis of door-to-needle (DTN) time and associated factors. Between 2011 and 2014, we implemented 11 distinct interventions to reduce DTN time. Here, we assess the relative impact of each intervention on DTN time. The median DTN time decreased from 87 (interquartile range: 68-109) minutes preintervention to 49 (interquartile range: 39-63) minutes postintervention. The reduction consisted primarily of a decrease in median time from computed tomography scan order to interpretation. The goal DTN time of 60 minutes or less was achieved in 9% (95% confidence interval: 5%-22%) of cases preintervention, compared with 70% (58%-81%) postintervention. Interventions with the greatest impact on DTN time included the implementation of a stroke group paging system, dedicated emergency department stroke pharmacists, and the development of a stroke code supply box. Multidisciplinary, collaborative interventions are associated with a significant and substantial reduction in time-to-thrombolysis. Such targeted interventions are efficient and achievable in resource-limited settings, where they are most needed. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  10. Is physical activity differentially associated with different types of sedentary pursuits?

    PubMed

    Feldman, Debbie Ehrmann; Barnett, Tracie; Shrier, Ian; Rossignol, Michel; Abenhaim, Lucien

    2003-08-01

    To determine whether there is a relationship between the time adolescents spend in physical activity and the time they spend in different sedentary pursuits: watching television, playing video games, working on computers, doing homework, and reading, taking into account the effect of part-time work on students' residual time. Cross-sectional cohort design. Seven hundred forty-three high school students from 2 inner-city public schools and 1 private school. Students completed a self-administered questionnaire that addressed time spent in physical activity, time spent in sedentary pursuits, musculoskeletal pain, and psychosocial issues and were also measured for height and weight. The main outcome measure was level of physical activity (low, moderate, high). There were more girls than boys in the low and moderate physical activity groups and more boys than girls in the high activity group. Ordinal logistic regression showed that increased time spent in "productive sedentary behavior" (reading or doing homework and working on computers) was associated with increased physical activity (odds ratio, 1.7; 95% confidence interval, 1.2-2.4), as was time spent working (odds ratio, 1.3; 95% confidence interval, 1.2-1.4). Time spent watching television and playing video games was not associated with decreased physical activity. Physical activity was not inversely associated with watching television or playing video games, but was positively associated with productive sedentary behavior and part-time work. Some students appear capable of managing their time better than others. Future studies should explore the ability of students to manage their time and also determine what characteristics are conducive to better time management.

  11. Quasi-3D Modeling and Efficient Simulation of Laminar Flows in Microfluidic Devices.

    PubMed

    Islam, Md Zahurul; Tsui, Ying Yin

    2016-10-03

    A quasi-3D model has been developed to simulate the flow in planar microfluidic systems with low Reynolds numbers. The model was developed by decomposing the flow profile along the height of a microfluidic system into a Fourier series. It was validated against the analytical solution for flow in a straight rectangular channel and the full 3D numerical COMSOL Navier-Stokes solver for flow in a T-channel. Comparable accuracy to the full 3D numerical solution was achieved by using only three Fourier terms with a significant decrease in computation time. The quasi-3D model was used to model flows in a micro-flow cytometer chip on a desktop computer and good agreement between the simulation and the experimental results was found.
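
    As a rough illustration of why a few Fourier terms can suffice, the sketch below (an illustrative example, not the authors' code) approximates the parabolic velocity profile across the channel height with the first three odd sine modes and reports the resulting error:

```python
import numpy as np

# Poiseuille-like profile across the (normalized) channel height z in [0, 1].
z = np.linspace(0.0, 1.0, 1001)
u_exact = 6.0 * z * (1.0 - z)                # parabolic profile with unit mean velocity

# Project onto the first three odd sine modes sin(k*pi*z), k = 1, 3, 5.
u_approx = np.zeros_like(z)
for k in (1, 3, 5):
    phi = np.sin(k * np.pi * z)
    b_k = 2.0 * np.trapz(u_exact * phi, z)   # Fourier sine coefficient
    u_approx += b_k * phi

rel_err = np.max(np.abs(u_approx - u_exact)) / np.max(u_exact)
print(f"max relative error with 3 modes: {rel_err:.4f}")   # typically well under 1%
```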

  12. Quasi-3D Modeling and Efficient Simulation of Laminar Flows in Microfluidic Devices

    PubMed Central

    Islam, Md. Zahurul; Tsui, Ying Yin

    2016-01-01

    A quasi-3D model has been developed to simulate the flow in planar microfluidic systems with low Reynolds numbers. The model was developed by decomposing the flow profile along the height of a microfluidic system into a Fourier series. It was validated against the analytical solution for flow in a straight rectangular channel and the full 3D numerical COMSOL Navier-Stokes solver for flow in a T-channel. Comparable accuracy to the full 3D numerical solution was achieved by using only three Fourier terms with a significant decrease in computation time. The quasi-3D model was used to model flows in a micro-flow cytometer chip on a desktop computer and good agreement between the simulation and the experimental results was found. PMID:27706104

  13. Fast Ss-Ilm a Computationally Efficient Algorithm to Discover Socially Important Locations

    NASA Astrophysics Data System (ADS)

    Dokuz, A. S.; Celik, M.

    2017-11-01

    Socially important locations are places that are frequently visited by social media users during their social media lifetime. Discovering socially important locations provides valuable information about user behaviours on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensions, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm, a modification of the SS-ILM algorithm, to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important location discovery process by up to 20 %.

  14. Use of application containers and workflows for genomic data analysis.

    PubMed

    Schulz, Wade L; Durant, Thomas J S; Siddon, Alexa J; Torres, Richard

    2016-01-01

    The rapid acquisition of biological data and development of computationally intensive analyses has led to a need for novel approaches to software deployment. In particular, the complexity of common analytic tools for genomics makes them difficult to deploy and decreases the reproducibility of computational experiments. Recent technologies that allow for application virtualization, such as Docker, allow developers and bioinformaticians to isolate these applications and deploy secure, scalable platforms that have the potential to dramatically increase the efficiency of big data processing. While limitations exist, this study demonstrates a successful implementation of a pipeline with several discrete software applications for the analysis of next-generation sequencing (NGS) data. With this approach, we significantly reduced the amount of time needed to perform clonal analysis from NGS data in acute myeloid leukemia.

  15. Frequency-selective near-field radiative heat transfer between photonic crystal slabs: a computational approach for arbitrary geometries and materials.

    PubMed

    Rodriguez, Alejandro W; Ilic, Ognjen; Bermel, Peter; Celanovic, Ivan; Joannopoulos, John D; Soljačić, Marin; Johnson, Steven G

    2011-09-09

    We demonstrate the possibility of achieving enhanced frequency-selective near-field radiative heat transfer between patterned (photonic-crystal) slabs at designable frequencies and separations, exploiting a general numerical approach for computing heat transfer in arbitrary geometries and materials based on the finite-difference time-domain method. Our simulations reveal a tradeoff between selectivity and near-field enhancement as the slab-slab separation decreases, with the patterned heat transfer eventually reducing to the unpatterned result multiplied by a fill factor (described by a standard proximity approximation). We also find that heat transfer can be further enhanced at selective frequencies when the slabs are brought into a glide-symmetric configuration, a consequence of the degeneracies associated with the nonsymmorphic symmetry group.

  16. Nanotip analysis for dielectrophoretic concentration of nanosized viral particles.

    PubMed

    Yeo, Woon-Hong; Lee, Hyun-Boo; Kim, Jong-Hoon; Lee, Kyong-Hoon; Chung, Jae-Hyun

    2013-05-10

    Rapid and sensitive detection of low-abundance viral particles is strongly demanded in health care, environmental control, military defense, and homeland security. Current detection methods, however, lack either assay speed or sensitivity, mainly due to the nanosized viral particles. In this paper, we compare a dendritic, multi-terminal nanotip ('dendritic nanotip') with a single terminal nanotip ('single nanotip') for dielectrophoretic (DEP) concentration of viral particles. The numerical computation studies the concentration efficiency of viral particles ranging from 25 to 100 nm in radius for both nanotips. With DEP and Brownian motion considered, when the particle radius decreases by two times, the concentration time for both nanotips increases by 4-5 times. In the computational study, a dendritic nanotip shows about 1.5 times faster concentration than a single nanotip for the viral particles because the dendritic structure increases the DEP-effective area to overcome the Brownian motion. For the qualitative support of the numerical results, the comparison experiment of a dendritic nanotip and a single nanotip is conducted. Under 1 min of concentration time, a dendritic nanotip shows a higher sensitivity than a single nanotip. When the concentration time is 5 min, the sensitivity of a dendritic nanotip for T7 phage is 10^4 particles ml^-1. The dendritic nanotip-based concentrator has the potential for rapid identification of viral particles.

  17. Nanotip analysis for dielectrophoretic concentration of nanosized viral particles

    NASA Astrophysics Data System (ADS)

    Yeo, Woon-Hong; Lee, Hyun-Boo; Kim, Jong-Hoon; Lee, Kyong-Hoon; Chung, Jae-Hyun

    2013-05-01

    Rapid and sensitive detection of low-abundance viral particles is strongly demanded in health care, environmental control, military defense, and homeland security. Current detection methods, however, lack either assay speed or sensitivity, mainly due to the nanosized viral particles. In this paper, we compare a dendritic, multi-terminal nanotip (‘dendritic nanotip’) with a single terminal nanotip (‘single nanotip’) for dielectrophoretic (DEP) concentration of viral particles. The numerical computation studies the concentration efficiency of viral particles ranging from 25 to 100 nm in radius for both nanotips. With DEP and Brownian motion considered, when the particle radius decreases by two times, the concentration time for both nanotips increases by 4-5 times. In the computational study, a dendritic nanotip shows about 1.5 times faster concentration than a single nanotip for the viral particles because the dendritic structure increases the DEP-effective area to overcome the Brownian motion. For the qualitative support of the numerical results, the comparison experiment of a dendritic nanotip and a single nanotip is conducted. Under 1 min of concentration time, a dendritic nanotip shows a higher sensitivity than a single nanotip. When the concentration time is 5 min, the sensitivity of a dendritic nanotip for T7 phage is 10^4 particles ml^-1. The dendritic nanotip-based concentrator has the potential for rapid identification of viral particles.

  18. VirSSPA- a virtual reality tool for surgical planning workflow.

    PubMed

    Suárez, C; Acha, B; Serrano, C; Parra, C; Gómez, T

    2009-03-01

    A virtual reality tool, called VirSSPA, was developed to optimize the planning of surgical processes. Segmentation algorithms were developed for computed tomography (CT) images: a region-growing procedure was used for soft tissues, and a thresholding algorithm was implemented to segment bones. The algorithms operate semiautomatically, since they only need seed selection with the mouse on each tissue to be segmented by the user. The novelty of the paper is the adaptation of an enhancement method based on histogram thresholding applied to CT images for surgical planning, which simplifies subsequent segmentation. A substantial improvement of the virtual reality tool VirSSPA was obtained with these algorithms. VirSSPA was used to optimize surgical planning, to decrease the time spent on surgical planning and to improve operative results. The success rate increases because surgeons are able to see the exact extent of the patient's ailment. This tool can decrease operating room time, thus resulting in reduced costs. Virtual simulation was effective for optimizing surgical planning, which could, consequently, result in improved outcomes with reduced costs.
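
    A minimal sketch of the two segmentation steps described: simple global thresholding for bone and a seed-based region-growing pass for soft tissue. The intensity cutoff, tolerance, 2D-slice form, and synthetic test image are illustrative assumptions, not VirSSPA's actual parameters:

```python
import numpy as np
from collections import deque

def threshold_bone(ct_slice, hu_min=300):
    """Binary bone mask from a CT slice in Hounsfield units (illustrative cutoff)."""
    return ct_slice >= hu_min

def region_grow(ct_slice, seed, tol=60):
    """Grow a region from a user-selected seed pixel, accepting 4-connected
    neighbours whose intensity stays within `tol` of the seed intensity."""
    h, w = ct_slice.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(ct_slice[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(ct_slice[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# Example on a synthetic slice: a bright "bone" square inside softer tissue.
ct = np.full((64, 64), 40.0)
ct[20:40, 20:40] = 400.0
print(threshold_bone(ct).sum(), region_grow(ct, (5, 5)).sum())
```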

  19. Computation of aerodynamic interference effects on oscillating airfoils with controls in ventilated subsonic wind tunnels

    NASA Technical Reports Server (NTRS)

    Fromme, J. A.; Golberg, M. A.

    1979-01-01

    Lift interference effects are discussed based on Bland's (1968) integral equation. A mathematical existence theory is utilized for which convergence of the numerical method has been proved for general (square-integrable) downwashes. Airloads are computed using orthogonal airfoil polynomial pairs in conjunction with a collocation method which is numerically equivalent to Galerkin's method and complex least squares. Convergence exhibits exponentially decreasing error with the number n of collocation points for smooth downwashes, whereas errors are proportional to 1/n for discontinuous downwashes. The latter can be reduced to 1/n^(m+1) with mth-order Richardson extrapolation (by using m = 2, hundredfold error reductions were obtained with only a 13% increase of computer time). Numerical results are presented showing acoustic resonance, as well as the effect of Mach number, ventilation, height-to-chord ratio, and mode shape on wind-tunnel interference. Excellent agreement with experiment is obtained in steady flow, and good agreement is obtained for unsteady flow.
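
    A generic sketch of the Richardson-extrapolation idea invoked here (not the authors' code): if an approximation A(n) converges like A + C/n^p, combining results at two resolutions cancels the leading error term. The toy integral and resolutions below are illustrative:

```python
import numpy as np

def richardson(a_n, a_2n, p):
    """Combine estimates at n and 2n points, assuming error ~ C / n**p,
    to cancel the leading-order error term."""
    return (2**p * a_2n - a_n) / (2**p - 1)

# Toy example: left-endpoint Riemann sums of exp(x) on [0, 1], whose error ~ 1/n.
def riemann(n):
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    return np.sum(np.exp(x)) / n

exact = np.e - 1.0
a_n, a_2n = riemann(100), riemann(200)
print(abs(a_n - exact), abs(richardson(a_n, a_2n, p=1) - exact))
# the extrapolated value is markedly closer to the exact integral
```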

  20. Are computer and cell phone use associated with body mass index and overweight? A population study among twin adolescents

    PubMed Central

    Lajunen, Hanna-Reetta; Keski-Rahkonen, Anna; Pulkkinen, Lea; Rose, Richard J; Rissanen, Aila; Kaprio, Jaakko

    2007-01-01

    Background Overweight in children and adolescents has reached dimensions of a global epidemic during recent years. Simultaneously, information and communication technology use has rapidly increased. Methods A population-based sample of Finnish twins born in 1983–1987 (N = 4098) was assessed by self-report questionnaires at 17 y during 2000–2005. The association of overweight (defined by Cole's BMI-for-age cut-offs) with computer and cell phone use and ownership was analyzed by logistic regression and their association with BMI by linear regression models. The effect of twinship was taken into account by correcting for clustered sampling of families. All models were adjusted for gender, physical exercise, and parents' education and occupational class. Results The proportion of adolescents who did not have a computer at home decreased from 18% to 8% from 2000 to 2005. Compared to them, having a home computer (without an Internet connection) was associated with a higher risk of overweight (odds ratio 2.3, 95% CI 1.4 to 3.8) and BMI (beta coefficient 0.57, 95% CI 0.15 to 0.98). However, having a computer with an Internet connection was not associated with weight status. Belonging to the highest quintile (OR 1.8 95% CI 1.2 to 2.8) and second-highest quintile (OR 1.6 95% CI 1.1 to 2.4) of weekly computer use was positively associated with overweight. The proportion of adolescents without a personal cell phone decreased from 12% to 1% across 2000 to 2005. There was a positive linear trend of increasing monthly phone bill with BMI (beta 0.18, 95% CI 0.06 to 0.30), but the association of a cell phone bill with overweight was very weak. Conclusion Time spent using a home computer was associated with an increased risk of overweight. Cell phone use correlated weakly with BMI. Increasing use of information and communication technology may be related to the obesity epidemic among adolescents. PMID:17324280

  1. The composition of intern work while on call.

    PubMed

    Fletcher, Kathlyn E; Visotcky, Alexis M; Slagle, Jason M; Tarima, Sergey; Weinger, Matthew B; Schapira, Marilyn M

    2012-11-01

    The work of house staff is being increasingly scrutinized as duty hours continue to be restricted. To describe the distribution of work performed by internal medicine interns while on call. Prospective time motion study on general internal medicine wards at a VA hospital affiliated with a tertiary care medical center and internal medicine residency program. Internal medicine interns. Trained observers followed interns during a "call" day. The observers continuously recorded the tasks performed by interns, using customized task analysis software. We measured the amount of time spent on each task. We calculated means and standard deviations for the amount of time spent on six categories of tasks: clinical computer work (e.g., writing orders and notes), non-patient communication, direct patient care (work done at the bedside), downtime, transit and teaching/learning. We also calculated means and standard deviations for time spent on specific tasks within each category. We compared the amount of time spent on the top three categories using analysis of variance. The largest proportion of intern time was spent in clinical computer work (40 %). Thirty percent of time was spent on non-patient communication. Only 12 % of intern time was spent at the bedside. Downtime activities, transit and teaching/learning accounted for 11 %, 5 % and 2 % of intern time, respectively. Our results suggest that during on call periods, relatively small amounts of time are spent on direct patient care and teaching/learning activities. As intern duty hours continue to decrease, attention should be directed towards preserving time with patients and increasing time in education.

  2. Language and infant mortality in a large Canadian province.

    PubMed

    Auger, N; Bilodeau-Bertrand, M; Costopoulos, A

    2016-10-01

    Infant mortality in minority populations of Canada is poorly understood, despite evidence of ethnic inequality in other countries. We studied infant mortality in different linguistic groups of Quebec, and assessed how language and deprivation impacted rates over time. Population-level study of vital statistics data for 1,985,287 live births and 10,283 infant deaths reported in Quebec from 1989 through 2012. We computed infant mortality rates for French, English, and foreign languages according to level of material deprivation. Using Kitagawa's method, we evaluated the impact of changes in mortality rates, and population distribution of language groups, on infant mortality in the province. Infant mortality declined from 6.05 to 4.61 per 1000 between 1989-1994 and 2007-2012. Most of the decline was driven by Francophones who contributed 1.39 fewer deaths per 1000 births over time, and Anglophones of wealthy and middle socio-economic status who contributed 0.13 fewer deaths per 1000 births. The foreign language population and poor Anglophones contributed more births over time, including 0.08 and 0.02 more deaths per 1000 births, respectively. Mortality decreased for Francophones and Anglophones in each level of deprivation. Rates were lower for foreign languages, but increased over time, especially for the poor. Infant mortality rates decreased for Francophones and Anglophones in Quebec, but increased for foreign languages. Poor Anglophones and individuals of foreign languages contributed more births over time, and slowed the decrease in infant mortality. Language may be useful for identifying inequality in infant mortality in multicultural nations. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
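
    For reference, a small sketch of the kind of Kitagawa decomposition used here, splitting a change in the overall infant mortality rate into a part due to group-specific rate changes and a part due to shifts in the birth distribution across language groups. The numbers are made up for illustration, not the study's data:

```python
import numpy as np

# Group-specific infant mortality rates (per 1000) and birth shares for two periods.
rates_1  = np.array([6.2, 5.1, 4.0])    # French, English, foreign languages (illustrative)
shares_1 = np.array([0.78, 0.12, 0.10])
rates_2  = np.array([4.7, 4.5, 4.3])
shares_2 = np.array([0.72, 0.11, 0.17])

crude_1 = np.sum(rates_1 * shares_1)
crude_2 = np.sum(rates_2 * shares_2)

# Kitagawa (1955): difference in crude rates = rate effect + composition effect.
rate_effect = np.sum((shares_1 + shares_2) / 2 * (rates_2 - rates_1))
comp_effect = np.sum((rates_1 + rates_2) / 2 * (shares_2 - shares_1))

print(f"change in crude rate: {crude_2 - crude_1:+.3f}")
print(f"  due to rate changes:        {rate_effect:+.3f}")
print(f"  due to composition changes: {comp_effect:+.3f}")
```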

  3. Management of Liver Cancer Argon-helium Knife Therapy with Functional Computer Tomography Perfusion Imaging.

    PubMed

    Wang, Hongbo; Shu, Shengjie; Li, Jinping; Jiang, Huijie

    2016-02-01

    The objective of this study was to observe the change in blood perfusion of liver cancer following argon-helium knife treatment with functional computed tomography perfusion imaging. Twenty-seven patients with primary liver cancer treated with argon-helium knife were included in this study. Plain computed tomography (CT) and computed tomography perfusion (CTP) imaging were conducted in all patients before and after treatment. Perfusion parameters including blood flow, blood volume, hepatic artery perfusion fraction, hepatic artery perfusion, and hepatic portal venous perfusion were used for evaluating the therapeutic effect. All parameters in liver cancer were significantly decreased after argon-helium knife treatment (p < 0.05 for all). A significant decrease in hepatic artery perfusion was also observed in pericancerous liver tissue, but the other parameters remained constant. CT perfusion imaging is able to detect the decrease in blood perfusion of liver cancer after argon-helium knife therapy. Therefore, CTP imaging could play an important role in liver cancer management following argon-helium knife therapy. © The Author(s) 2014.

  4. LORETA EEG phase reset of the default mode network

    PubMed Central

    Thatcher, Robert W.; North, Duane M.; Biver, Carl J.

    2014-01-01

    Objectives: The purpose of this study was to explore phase reset of 3-dimensional current sources in Brodmann areas located in the human default mode network (DMN) using Low Resolution Electromagnetic Tomography (LORETA) of the human electroencephalogram (EEG). Methods: The EEG was recorded from 19 scalp locations in 70 healthy normal subjects ranging in age from 13 to 20 years. LORETA current sources were computed time point by time point for 14 Brodmann areas comprising the DMN in the delta frequency band. The Hilbert transform of the LORETA time series was used to compute the instantaneous phase differences between all pairs of Brodmann areas. Phase shift and lock durations were calculated based on the 1st and 2nd derivatives of the time series of phase differences. Results: Phase shift duration exhibited three discrete modes at approximately (1) 25 ms, (2) 50 ms, and (3) 65 ms. Phase lock duration occurred primarily at (1) 300–350 ms and (2) 350–450 ms. Phase shift and lock durations were inversely related and exhibited an exponential change with distance between Brodmann areas. Conclusions: The results are explained by local neural packing density of network hubs and an exponential decrease in connections with distance from a hub. The results are consistent with a discrete temporal model of brain function where anatomical hubs behave like a “shutter” that opens and closes at specific durations as nodes of a network, giving rise to temporarily phase-locked clusters of neurons for specific durations. PMID:25100976
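
    A generic sketch of the Hilbert-transform step described: the instantaneous phase difference between two source time series and a simple derivative-based flag for phase-shift episodes. The synthetic signals, sampling rate, and threshold are illustrative, not the study's LORETA data:

```python
import numpy as np
from scipy.signal import hilbert

fs = 256.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)

# Two delta-band-like oscillations whose relative phase jumps halfway through.
x = np.sin(2 * np.pi * 3 * t)
y = np.sin(2 * np.pi * 3 * t + np.where(t < 5, 0.3, 1.4))

phi_x = np.angle(hilbert(x))                # instantaneous phase of each series
phi_y = np.angle(hilbert(y))
dphi = np.unwrap(phi_x - phi_y)             # phase difference over time

# 1st derivative of the phase difference: near zero while phase-locked,
# transiently large during a phase shift.
dphi_dt = np.gradient(dphi, 1 / fs)
shift_samples = np.abs(dphi_dt) > 1.0       # illustrative threshold (rad/s)
print(f"fraction of samples flagged as phase shifting: {shift_samples.mean():.3f}")
```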

  5. Characteristics of Screen Media Use Associated With Higher BMI in Young Adolescents

    PubMed Central

    Blood, Emily A.; Walls, Courtney E.; Shrier, Lydia A.; Rich, Michael

    2013-01-01

    OBJECTIVES: This study investigates how characteristics of young adolescents’ screen media use are associated with their BMI. By examining relationships between BMI and both time spent using each of 3 screen media and level of attention allocated to use, we sought to contribute to the understanding of mechanisms linking media use and obesity. METHODS: We measured heights and weights of 91 13- to 15-year-olds and calculated their BMIs. Over 1 week, participants completed a weekday and a Saturday 24-hour time-use diary in which they reported the amount of time they spent using TV, computers, and video games. Participants carried handheld computers and responded to 4 to 7 random signals per day by completing onscreen questionnaires reporting activities to which they were paying primary, secondary, and tertiary attention. RESULTS: Higher proportions of primary attention to TV were positively associated with higher BMI. The difference between 25th and 75th percentiles of attention to TV corresponded to an estimated +2.4 BMI points. Time spent watching television was unrelated to BMI. Neither duration of use nor extent of attention paid to video games or computers was associated with BMI. CONCLUSIONS: These findings support the notion that attention to TV is a key element of the increased obesity risk associated with TV viewing. Mechanisms may include the influence of TV commercials on preferences for energy-dense, nutritionally questionable foods and/or eating while distracted by TV. Interventions that interrupt these processes may be effective in decreasing obesity among screen media users. PMID:23569098

  6. Performance analysis of a dual-tree algorithm for computing spatial distance histograms

    PubMed Central

    Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni

    2011-01-01

    Many scientific and engineering fields produce large volume of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges to database systems design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytics, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, thus require less time when compared to the brute-force approach where all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances that are left to be processed decreases exponentially with more levels of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
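
    For context, a brute-force version of the spatial distance histogram that these tree-based algorithms improve upon, computing all pairwise distances and binning them. This is an illustrative O(n^2) sketch, not the dual-tree algorithm analyzed in the paper; point counts and bin width are assumptions:

```python
import numpy as np

def sdh_bruteforce(points, bin_width, r_max):
    """Spatial distance histogram: count point pairs falling in each distance bucket."""
    n = len(points)
    edges = np.arange(0.0, r_max + bin_width, bin_width)
    hist = np.zeros(len(edges) - 1, dtype=np.int64)
    for i in range(n):
        # distances from point i to all later points, so each pair is counted once
        d = np.linalg.norm(points[i + 1:] - points[i], axis=1)
        counts, _ = np.histogram(d, bins=edges)
        hist += counts
    return edges, hist

rng = np.random.default_rng(0)
pts = rng.random((2000, 3)) * 10.0           # 2000 random points in a 10x10x10 box
edges, hist = sdh_bruteforce(pts, bin_width=1.0, r_max=np.sqrt(3) * 10.0)
print(hist.sum(), 2000 * 1999 // 2)          # every pair lands in exactly one bucket
```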

  7. An ergonomic evaluation comparing desktop, notebook, and subnotebook computers.

    PubMed

    Szeto, Grace P; Lee, Raymond

    2002-04-01

    To evaluate and compare the postures and movements of the cervical and upper thoracic spine, typing performance, and workstation ergonomic factors when using desktop, notebook, and subnotebook computers. Repeated-measures design. A motion analysis laboratory with an electromagnetic tracking device. A convenience sample of 21 university students between ages 20 and 24 years with no history of neck or shoulder discomfort. Each subject performed a standardized typing task using each of the 3 computers. Measurements during the typing task were taken at set intervals. The cervical and thoracic spines adopted a more flexed posture when using the smaller-sized computers. There were significantly greater neck movements when using desktop computers compared with the notebook and subnotebook computers. The viewing distances adopted by the subjects decreased as the computer size decreased. Typing performance and subjective rating of difficulty in using the keyboards were also significantly different among the 3 types of computers. Computer users need to consider the posture of the spine and the potential risk of developing musculoskeletal discomfort in choosing computers. Copyright 2002 by the American Congress of Rehabilitation Medicine and the American Academy of Physical Medicine and Rehabilitation

  8. Fertility time trends in dairy herds in northern Portugal.

    PubMed

    Rocha, A; Martins, A; Carvalheira, J

    2010-10-01

    The economics of dairy production are in great part dictated by the reproductive efficiency of the herds. Many studies have reported a widespread decrease in fertility of dairy cows. In a previous work (Rocha et al. 2001), we found a very poor oestrus detection rate (38%), and consequently delayed calving to 1st AI and calving to conception intervals. However, a good conception rate at 1st AI was noted (51%), resulting in a low number of inseminations per pregnancy (IAP) (1.4). Here, results from a subsequent fertility time trend assessment study carried out in the same region for cows born from 1992 to 2002 are reported. Statistical linear models were used to analyse the data. Estimated linear contrasts of least-squares means were computed from each model. The number of observations per studied index varied from 12,130 (culling rate) to 57,589 (non-return rate). Mean age at first calving was 28.9 ± 0.14 months, without (p > 0.05) variation over time. There was a small, but significant (p < 0.05), deterioration of all other parameters. Non-return rates at 90 days and calving rate at 1st AI decreased 0.3% per trimester, with a consequent increase of 0.04 IA/parturition. Oestrus detection rate decreased 0.13% per year, and calving to 1st AI and calving-conception intervals increased 0.17 and 0.07 days/year respectively, while the intercalving interval increased 1.7 days per year. Of 12,130 cows calving, only 1,816 had a 4th lactation (85% culling/losses). The data were not meant to draw conclusions on the causes of the decreased fertility over time, but an increase of milk production from 6537 kg to 8590 kg (305 days) from 1996 to 2002 is probably one factor to take into consideration. Specific measures to revert or slow down this trend of decreasing fertility are warranted. Available strategies are discussed. © 2009 Blackwell Verlag GmbH.

  9. Direct numerical simulation of particulate flows with an overset grid method

    NASA Astrophysics Data System (ADS)

    Koblitz, A. R.; Lovett, S.; Nikiforakis, N.; Henshaw, W. D.

    2017-08-01

    We evaluate an efficient overset grid method for two-dimensional and three-dimensional particulate flows for small numbers of particles at finite Reynolds number. The rigid particles are discretised using moving overset grids overlaid on a Cartesian background grid. This allows for strongly-enforced boundary conditions and local grid refinement at particle surfaces, thereby accurately capturing the viscous boundary layer at modest computational cost. The incompressible Navier-Stokes equations are solved with a fractional-step scheme which is second-order-accurate in space and time, while the fluid-solid coupling is achieved with a partitioned approach including multiple sub-iterations to increase stability for light, rigid bodies. Through a series of benchmark studies we demonstrate the accuracy and efficiency of this approach compared to other boundary conformal and static grid methods in the literature. In particular, we find that fully resolving boundary layers at particle surfaces is crucial to obtain accurate solutions to many common test cases. With our approach we are able to compute accurate solutions using as little as one third the number of grid points as uniform grid computations in the literature. A detailed convergence study shows a 13-fold decrease in CPU time over a uniform grid test case whilst maintaining comparable solution accuracy.

  10. Artificial acoustic stiffness reduction in fully compressible, direct numerical simulation of combustion

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Trouvé, Arnaud

    2004-09-01

    A pseudo-compressibility method is proposed to modify the acoustic time step restriction found in fully compressible, explicit flow solvers. The method manipulates terms in the governing equations of order Ma^2, where Ma is a characteristic flow Mach number. A decrease in the speed of acoustic waves is obtained by adding an extra term in the balance equation for total energy. This term is proportional to flow dilatation and uses a decomposition of the dilatational field into an acoustic component and a component due to heat transfer. The present method is a variation of the pressure gradient scaling (PGS) method proposed in Ramshaw et al (1985 Pressure gradient scaling method for fluid flow with nearly uniform pressure J. Comput. Phys. 58 361-76). It achieves gains in computational efficiencies similar to PGS: at the cost of a slightly more involved right-hand-side computation, the numerical time step increases by a full order of magnitude. It also features the added benefit of preserving the hydrodynamic pressure field. The original and modified PGS methods are implemented into a parallel direct numerical simulation solver developed for applications to turbulent reacting flows with detailed chemical kinetics. The performance of the pseudo-compressibility methods is illustrated in a series of test problems ranging from isothermal sound propagation to laminar premixed flame problems.

  11. Smartphones as image processing systems for prosthetic vision.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore desirable to use powerful hardware while avoiding bulky, cumbersome solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.
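
    A small sketch of the kind of timing measurement involved, using OpenCV's bundled Haar-cascade face detector as a stand-in for whatever detector the study actually benchmarked on the phones; the image path is hypothetical:

```python
import time
import cv2

# OpenCV's bundled frontal-face Haar cascade (a stand-in detector, not the study's).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("scene.jpg")             # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

t0 = time.perf_counter()
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
elapsed_ms = (time.perf_counter() - t0) * 1000.0
print(f"{len(faces)} face(s) detected in {elapsed_ms:.1f} ms")
```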

  12. Virtual reality in the assessment of selected cognitive function after brain injury.

    PubMed

    Zhang, L; Abreu, B C; Masel, B; Scheibel, R S; Christiansen, C H; Huddleston, N; Ottenbacher, K J

    2001-08-01

    To assess selected cognitive functions of persons with traumatic brain injury using a computer-simulated virtual reality environment. A computer-simulated virtual kitchen was used to assess the ability of 30 patients with brain injury and 30 volunteers without brain injury to process and sequence information. The overall assessment score was based on the number of correct responses and the time needed to complete daily living tasks. Identical daily living tasks were tested and scored in participants with and without brain injury. Each subject was evaluated twice within 7 to 10 days. A total of 30 tasks were categorized as follows: information processing, problem solving, logical sequencing, and speed of responding. Persons with brain injuries consistently demonstrated a significant decrease in the ability to process information (P = 0.04-0.01), identify logical sequencing (P = 0.04-0.01), and complete the overall assessment (P < 0.01), compared with volunteers without brain injury. The time needed to process tasks, representing speed of cognitive responding, was also significantly different between the two groups (P < 0.01). A computer-generated virtual reality environment represents a reproducible tool to assess selected cognitive functions and can be used as a supplement to traditional rehabilitation assessment in persons with acquired brain injury.

  13. Temperature dependence of the NMR spin-lattice relaxation rate for spin-1/2 chains

    NASA Astrophysics Data System (ADS)

    Coira, E.; Barmettler, P.; Giamarchi, T.; Kollath, C.

    2016-10-01

    We use recent developments in the framework of a time-dependent matrix product state method to compute the nuclear magnetic resonance relaxation rate 1/T1 for spin-1/2 chains under magnetic field and for different Hamiltonians (XXX, XXZ, isotropically dimerized). We compute numerically the temperature dependence of 1/T1. We consider both gapped and gapless phases, and also the proximity of quantum critical points. At temperatures much lower than the typical exchange energy scale, our results are in excellent agreement with analytical results, such as the ones derived from the Tomonaga-Luttinger liquid (TLL) theory and bosonization, which are valid in this regime. We also cover the regime for which the temperature T is comparable to the exchange coupling. In this case analytical theories are not appropriate, but this regime is relevant for various new compounds with exchange couplings in the range of tens of Kelvin. For the gapped phases, either the fully polarized phase for spin chains or the low-magnetic-field phase for the dimerized systems, we find an exponential decrease in Δ/(k_B T) of the relaxation time and can compute the gap Δ. Close to the quantum critical point our results are in good agreement with the scaling behavior based on the existence of free excitations.

  14. COMPUTER PREDICTION OF BIOLOGICAL ACTIVITY OF DIMETHYL-N-(BENZOYL)AMIDOPHOSPHATE AND DIMETHYL-N-(PHENYLSULFONYL)AMIDOPHOSPHATE, EVALUATION OF THEIR CYTOTOXIC ACTIVITY AGAINST LEUKEMIC CELLS IN VITRO.

    PubMed

    Grynyuk, I I; Prylutska, S V; Kariaka, N S; Sliva, T Yu; Moroz, O V; Franskevych, D V; Amirkhanov, V M; Matyshevska, O P; Slobodyanik, M S

    2015-01-01

    Structural analogues of β-diketones, dimethyl-N-(benzoyl)amidophosphate (HCP) and dimethyl-N-(phenylsulfonyl)amidophosphate (HSP), were synthesized and identified by IR, 1H and 31P NMR spectroscopy. Screening of biological activity and calculation of physicochemical parameters of the HCP and HSP compounds were done with the PASS and ACD/Labs computer programs. A wide range of biological activity of the synthesized compounds, antitumor activity in particular, has been found. Calculations of the bioavailability criteria indicate that the investigated compounds have no deviations from Lipinski's rules. The HCP compound is characterized by a high lipophilicity at physiological pH compared to HSP. It was found that the cytotoxic effect of the studied compounds on leukemic L1210 cells was time- and dose-dependent. HCP is characterized by more pronounced and earlier cytotoxic effects than HSP. It was shown that 2.5 mM HCP increased ROS production 3-fold in the early period of incubation, and decreased cell viability by 40% after 48 h and by 66% after 72 h. Based on the computer calculations and the experimental results, HCP was selected for targeted chemical modifications to enhance its antitumor effect.

  15. Similarity based false-positive reduction for breast cancer using radiographic and pathologic imaging features

    NASA Astrophysics Data System (ADS)

    Pai, Akshay; Samala, Ravi K.; Zhang, Jianying; Qian, Wei

    2010-03-01

    Mammography reading by radiologists and breast tissue image interpretation by pathologists often lead to high false-positive (FP) rates. Similarly, current computer-aided diagnosis (CADx) methods tend to concentrate more on sensitivity, thus increasing the FP rates. A novel method is introduced here which employs a similarity-based approach to decrease the FP rate in the diagnosis of microcalcifications. This method employs principal component analysis (PCA) and similarity metrics to achieve the proposed goal. The training and testing sets are divided into generalized (Normal and Abnormal) and more specific (Abnormal, Normal, Benign) classes. The performance of this method as a standalone classification system is evaluated for both cases (general and specific). In another approach, the probability of each case belonging to a particular class is calculated. If the probabilities are too close to classify, the augmented CADx system can be instructed to perform a detailed analysis of such cases. For normal cases classified with high probability, no further processing is necessary, thus reducing the computation time. Hence, this novel method can be employed in cascade with CADx to reduce the FP rate and also avoid unnecessary computational time. Using this methodology, false positive rates of 8% and 11% are achieved for mammography and cellular images, respectively.
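
    A minimal sketch of a PCA-plus-similarity classifier of the kind described: project feature vectors onto principal components, then assign the class whose training examples are most similar on average, with a margin that could flag borderline cases for detailed analysis. The feature dimensions, class labels, and cosine-similarity choice are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(1)
# Hypothetical image-derived feature vectors for two classes (normal / abnormal).
X_train = np.vstack([rng.normal(0.0, 1.0, (40, 64)),
                     rng.normal(0.8, 1.0, (40, 64))])
y_train = np.array([0] * 40 + [1] * 40)

pca = PCA(n_components=10).fit(X_train)
Z_train = pca.transform(X_train)

def classify(x):
    """Mean cosine similarity to each class in PCA space; a small margin could be
    routed to the full CADx pipeline instead of a hard decision."""
    z = pca.transform(x.reshape(1, -1))
    sims = cosine_similarity(z, Z_train)[0]
    scores = np.array([sims[y_train == c].mean() for c in (0, 1)])
    return int(scores.argmax()), abs(scores[0] - scores[1])

x_new = rng.normal(0.8, 1.0, 64)
print(classify(x_new))          # (predicted class, decision margin)
```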

  16. Brownian dynamics simulations on a hypersphere in 4-space

    NASA Astrophysics Data System (ADS)

    Nissfolk, Jarl; Ekholm, Tobias; Elvingson, Christer

    2003-10-01

    We describe an algorithm for performing Brownian dynamics simulations of particles diffusing on S3, a hypersphere in four dimensions. The system is chosen due to recent interest in doing computer simulations in a closed space where periodic boundary conditions can be avoided. We specifically address the question how to generate a random walk on the 3-sphere, starting from the solution of the corresponding diffusion equation, and we also discuss an efficient implementation based on controlled approximations. Since S3 is a closed manifold (space), the average square displacement during a random walk is no longer proportional to the elapsed time, as in R3. Instead, its time rate of change is continuously decreasing, and approaches zero as time becomes large. We show, however, that the effective diffusion coefficient can still be obtained from the time dependence of the square displacement.
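
    An approximate sketch of one way to generate such a random walk on the unit 3-sphere: step in the tangent space at the current point, then project back onto the sphere. This small-step projection scheme is a simplification for illustration, not the controlled approximation the authors derive from the diffusion equation:

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_step_s3(x, sigma):
    """One diffusion step on the unit 3-sphere embedded in R^4 (small-angle approximation)."""
    g = rng.normal(0.0, sigma, 4)
    g -= np.dot(g, x) * x                   # project the Gaussian kick onto the tangent space at x
    x_new = x + g
    return x_new / np.linalg.norm(x_new)    # renormalize back onto S^3

x0 = np.array([1.0, 0.0, 0.0, 0.0])
x = x0.copy()
steps, sigma = 20000, 0.01
geodesic_sq = np.empty(steps)
for i in range(steps):
    x = brownian_step_s3(x, sigma)
    # squared geodesic (great-circle) displacement from the start; on a closed
    # manifold its growth slows and eventually saturates rather than staying linear in time
    geodesic_sq[i] = np.arccos(np.clip(np.dot(x, x0), -1.0, 1.0)) ** 2

print(geodesic_sq[100], geodesic_sq[-1])
```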

  17. Flow dynamics and energy efficiency of flow in the left ventricle during myocardial infarction.

    PubMed

    Vasudevan, Vivek; Low, Adriel Jia Jun; Annamalai, Sarayu Parimal; Sampath, Smita; Poh, Kian Keong; Totman, Teresa; Mazlan, Muhammad; Croft, Grace; Richards, A Mark; de Kleijn, Dominique P V; Chin, Chih-Liang; Yap, Choon Hwai

    2017-10-01

    Cardiovascular disease is a leading cause of death worldwide, and myocardial infarction (MI) is a major category. After infarction, the heart has difficulty providing sufficient energy for circulation, and thus understanding the heart's energy efficiency is important. We induced MI in a porcine animal model via circumflex ligation and acquired multiple-slice cine magnetic resonance (MR) images in a longitudinal manner: before infarction, and 1 week (acute) and 4 weeks (chronic) after infarction. Computational fluid dynamics simulations were performed based on MR images to obtain detailed fluid dynamics and energy dynamics of the left ventricles. Results showed that the energy efficiency of flow through the heart decreased at the acute time point. Since the heart was observed to experience changes in heart rate, stroke volume and chamber size over the two post-infarction time points, simulations were performed to test the effect of each of the three parameters. Increasing heart rate and stroke volume were found to significantly decrease flow energy efficiency, but the effect of chamber size was inconsistent. A strong, complex interplay was observed between the three parameters, necessitating the use of non-dimensional parameterization to characterize flow energy efficiency. The ratio of Reynolds to Strouhal number, which is a form of Womersley number, was found to be the most effective non-dimensional parameter to represent energy efficiency of flow in the heart. We believe that this non-dimensional number can be computed for clinical cases via ultrasound and hypothesize that it can serve as a biomarker for clinical evaluations.

  18. Exact and approximate stochastic simulation of intracellular calcium dynamics.

    PubMed

    Wieder, Nicolas; Fink, Rainer H A; Wegner, Frederic von

    2011-01-01

    In simulations of chemical systems, the main task is to find an exact or approximate solution of the chemical master equation (CME) that satisfies certain constraints with respect to computation time and accuracy. While Brownian motion simulations of single molecules are often too time consuming to represent the mesoscopic level, the classical Gillespie algorithm is a stochastically exact algorithm that provides satisfying results in the representation of calcium microdomains. Gillespie's algorithm can be approximated via the tau-leap method and the chemical Langevin equation (CLE). Both methods lead to a substantial acceleration in computation time and a relatively small decrease in accuracy. Elimination of the noise terms leads to the classical, deterministic reaction rate equations (RRE). For complex multiscale systems, hybrid simulations are increasingly proposed to combine the advantages of stochastic and deterministic algorithms. An often used exemplary cell type in this context are striated muscle cells (e.g., cardiac and skeletal muscle cells). The properties of these cells are well described and they express many common calcium-dependent signaling pathways. The purpose of the present paper is to provide an overview of the aforementioned simulation approaches and their mutual relationships in the spectrum ranging from stochastic to deterministic algorithms.
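
    A compact sketch of the classical Gillespie (stochastic simulation) algorithm the overview refers to, here for a toy birth-death process X -> X+1 at rate k_on and X -> X-1 at rate k_off*X; the rates and the single-species form are illustrative, not a full calcium-microdomain model:

```python
import numpy as np

rng = np.random.default_rng(42)

def gillespie_birth_death(x0, k_on, k_off, t_end):
    """Exact stochastic simulation of a birth-death process (e.g. channel openings)."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a1, a2 = k_on, k_off * x          # propensities of birth and death
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)    # exponentially distributed time to next reaction
        if rng.random() < a1 / a0:        # choose which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = gillespie_birth_death(x0=0, k_on=5.0, k_off=0.5, t_end=50.0)
print(f"{len(times)} reaction events simulated; final copy number = {states[-1]}")
```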

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ

    Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.

  20. A Novel Disintegration Tester for Solid Dosage Forms Enabling Adjustable Hydrodynamics.

    PubMed

    Kindgen, Sarah; Rach, Regine; Nawroth, Thomas; Abrahamsson, Bertil; Langguth, Peter

    2016-08-01

    A modified in vitro disintegration test device was designed that enables the investigation of the influence of hydrodynamic conditions on the disintegration of solid oral dosage forms. The device represents an improved derivative of the compendial PhEur/USP disintegration test device. Through a computerized numerical control, a variety of physiologically relevant moving velocities and profiles can be applied. With the help of computational fluid dynamics, the hydrodynamic and mechanical forces present in the probe chamber were characterized for a variety of device moving speeds. Furthermore, a proof-of-concept study investigated the influence of hydrodynamic conditions on the disintegration times of immediate-release tablets. The experiments demonstrated the relevance of hydrodynamics for tablet disintegration, especially in media simulating the fasted state. Disintegration times increased with decreasing moving velocity. A correlation was established between experimentally determined disintegration times and the shear stress on the tablet surface predicted by computational fluid dynamics. In conclusion, the modified disintegration test device is a valuable tool for biorelevant in vitro disintegration testing of solid oral dosage forms. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  1. Measuring sperm movement within the female reproductive tract using Fourier analysis.

    PubMed

    Nicovich, Philip R; Macartney, Erin L; Whan, Renee M; Crean, Angela J

    2015-02-01

    The adaptive significance of variation in sperm phenotype is still largely unknown, in part due to the difficulties of observing and measuring sperm movement in its natural, selective environment (i.e., within the female reproductive tract). Computer-assisted sperm analysis systems allow objective and accurate measurement of sperm velocity, but rely on being able to track individual sperm, and are therefore unable to measure sperm movement in species where sperm move in trains or bundles. Here we describe a newly developed computational method for measuring sperm movement using Fourier analysis to estimate sperm tail beat frequency. High-speed time-lapse videos of sperm movement within the female tract of the neriid fly Telostylinus angusticollis were recorded, and a map of beat frequencies generated by converting the periodic signal of an intensity versus time trace at each pixel to the frequency domain using the Fourier transform. We were able to detect small decreases in sperm tail beat frequency over time, indicating the method is sensitive enough to identify consistent differences in sperm movement. Fourier analysis can be applied to a wide range of species and contexts, and should therefore facilitate novel exploration of the causes and consequences of variation in sperm movement.
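
    A generic sketch of the per-pixel Fourier step described: convert each pixel's intensity-versus-time trace to the frequency domain and keep the dominant frequency. The frame rate, video shape, and synthetic embedded signal are assumptions for illustration:

```python
import numpy as np

fps = 500.0                                   # assumed high-speed frame rate (Hz)
frames = 1024
video = np.random.default_rng(0).normal(0, 0.1, (frames, 32, 32))
# Embed a synthetic 27 Hz "beat" in a patch of pixels to mimic a flagellum.
t = np.arange(frames) / fps
video[:, 10:20, 10:20] += np.sin(2 * np.pi * 27.0 * t)[:, None, None]

# FFT along the time axis for every pixel at once.
spectrum = np.abs(np.fft.rfft(video - video.mean(axis=0), axis=0))
freqs = np.fft.rfftfreq(frames, d=1.0 / fps)
beat_map = freqs[np.argmax(spectrum[1:], axis=0) + 1]   # dominant frequency per pixel, skipping DC

print(f"median frequency in the signal patch: {np.median(beat_map[10:20, 10:20]):.1f} Hz")
```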

  2. Simulation of Transcritical CO2 Refrigeration System with Booster Hot Gas Bypass in Tropical Climate

    NASA Astrophysics Data System (ADS)

    Santosa, I. D. M. C.; Sudirman; Waisnawa, IGNS; Sunu, PW; Temaja, IW

    2018-01-01

    Computer simulation has become important for performance analysis, since building an experimental rig, especially for a CO2 refrigeration system, requires considerable cost and time; modifying the rig also requires additional cost and time. One computer program that is well suited to refrigeration systems is the Engineering Equation Solver (EES). In terms of CO2 refrigeration, environmental issues have become a priority in refrigeration system development, since carbon dioxide (CO2) is a natural and clean refrigerant. This study aims to analyse the effectiveness of an EES simulation of a transcritical CO2 refrigeration system with booster hot gas bypass at high outdoor temperatures. The research was carried out by theoretical study and numerical analysis of the refrigeration system using the EES program. Data input and simulation validation were obtained from experimental and secondary data. The results showed that the coefficient of performance (COP) decreased gradually as the outdoor temperature increased. The program can calculate the performance of the refrigeration system quickly and accurately, so it provides an important preliminary reference for improving CO2 refrigeration system design for hot climates.
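
    A minimal sketch of the COP bookkeeping behind such a cycle analysis, using hypothetical enthalpy values at the cycle state points; an EES model would obtain these from CO2 property routines rather than the placeholder numbers below:

```python
# Simple transcritical-cycle COP estimate from specific enthalpies (kJ/kg).
# The state-point values below are placeholders, not results from the study.
h_evap_out = 435.0    # h1: evaporator outlet / compressor inlet
h_comp_out = 480.0    # h2: compressor outlet
h_cooler_out = 300.0  # h3: gas-cooler outlet (= h4 after isenthalpic expansion)

q_evap = h_evap_out - h_cooler_out     # refrigeration effect per kg of CO2
w_comp = h_comp_out - h_evap_out       # specific compressor work
cop = q_evap / w_comp
print(f"COP = {cop:.2f}")              # a hotter ambient raises h3 and lowers the COP
```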

  3. The combination of circle topology and leaky integrator neurons remarkably improves the performance of echo state network on time series prediction.

    PubMed

    Xue, Fangzheng; Li, Qian; Li, Xiumin

    2017-01-01

    Recently, the echo state network (ESN) has attracted a great deal of attention due to its high accuracy and efficient learning performance. Compared with the traditional random reservoir structure and classical sigmoid units, a simple circle topology and leaky integrator neurons offer advantages for reservoir computing in ESNs. In this paper, we propose a new ESN model with both a circle reservoir structure and leaky integrator units. By comparing the prediction capability on the Mackey-Glass chaotic time series of four ESN models (classical ESN, circle ESN, traditional leaky integrator ESN, and circle leaky integrator ESN), we find that our circle leaky integrator ESN shows significantly better performance than the other ESNs, with roughly a 2-order-of-magnitude reduction of the predictive error. Moreover, this model has a stronger ability to approximate nonlinear dynamics and resist noise than the conventional ESN and ESNs with only a simple circle structure or leaky integrator neurons. Our results show that the combination of circle topology and leaky integrator neurons can remarkably increase dynamical diversity and meanwhile decrease the correlation of reservoir states, which contributes to the significant improvement in computational performance of the echo state network on time series prediction.
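
    A minimal sketch of an ESN reservoir with the two ingredients highlighted here: a ring (circle) reservoir topology and leaky-integrator state updates. The reservoir size, leak rate, toy task, and ridge-regression readout are illustrative choices, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, leak, rho = 200, 0.3, 0.9

# Circle topology: each reservoir unit feeds only its neighbour on a ring.
W = np.zeros((n_res, n_res))
W[np.arange(n_res), (np.arange(n_res) + 1) % n_res] = rho
W_in = rng.uniform(-0.5, 0.5, n_res)

def run_reservoir(u):
    """Leaky-integrator ESN update: x <- (1-a)*x + a*tanh(W x + W_in u)."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * u_t)
        states[t] = x
    return states

# Toy one-step-ahead prediction of a noisy sine via a ridge-regression readout.
u = np.sin(np.arange(2000) * 0.2) + 0.01 * rng.normal(size=2000)
X, y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print(f"train MSE: {np.mean((X @ W_out - y) ** 2):.2e}")
```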

  4. Computational Fluid Dynamics Modeling of Supersonic Coherent Jets for Electric Arc Furnace Steelmaking Process

    NASA Astrophysics Data System (ADS)

    Alam, Morshed; Naser, Jamal; Brooks, Geoffrey; Fontana, Andrea

    2010-12-01

    Supersonic coherent gas jets are now used widely in electric arc furnace steelmaking and many other industrial applications to increase the gas-liquid mixing, reaction rates, and energy efficiency of the process. However, there has been limited research on the basic physics of supersonic coherent jets. In the present study, computational fluid dynamics (CFD) simulation of the supersonic jet with and without a shrouding flame at room ambient temperature was carried out and validated against experimental data. The numerical results show that the potential core length of the supersonic oxygen and nitrogen jet with shrouding flame is more than four times and three times longer, respectively, than that without flame shrouding, which is in good agreement with the experimental data. The spreading rate of the supersonic jet decreased dramatically with the use of the shrouding flame compared with a conventional supersonic jet. The present CFD model was used to investigate the characteristics of the supersonic coherent oxygen jet at steelmaking conditions of around 1700 K (1427 °C). The potential core length of the supersonic coherent oxygen jet at steelmaking conditions was 1.4 times longer than that at room ambient temperature.

  5. Application of Wavelet-Based Methods for Accelerating Multi-Time-Scale Simulation of Bistable Heterogeneous Catalysis

    DOE PAGES

    Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ; ...

    2017-02-16

    Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
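
    For intuition, a small sketch of the basic wavelet-detail idea behind such transition detection: detail coefficients of a discrete wavelet transform spike at sharp changes in an otherwise noisy signal. This is not the authors' Lipschitz-exponent algorithm; the synthetic signal, wavelet, and threshold are assumptions:

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)

# Synthetic "surface coverage" signal with a sudden bistable shift buried in noise.
t = np.arange(4096)
signal = 0.2 + 0.02 * rng.normal(size=t.size)
signal[2500:] += 0.6                      # abrupt state shift

# One-level discrete wavelet transform; the detail coefficients spike at sharp
# transitions, giving a cheap flag for when a finer integrator might be needed.
approx, detail = pywt.dwt(signal, "db4")
threshold = 6.0 * np.median(np.abs(detail))
flagged = np.where(np.abs(detail) > threshold)[0]
print("shift flagged near sample", 2 * flagged[0] if flagged.size else None)
```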

  6. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation

    PubMed Central

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, including fast-recognition mode (FM) and accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high accuracy of recognition to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate was improved to 92% for the AM, and the average ITR decreased to 31.27 bits/min due to strict recognition constraints. PMID:27579033
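
    The core feature extraction step can be illustrated with a short sketch: correlate each EEG epoch against a P300 template over a range of time shifts and use the resulting correlation series as classifier inputs, which tolerates latency jitter of the P300 peak. The sampling rate, template shape, and shift range below are assumed for illustration and are not taken from the paper.

      import numpy as np

      def time_shift_correlation(epoch, template, max_shift=25):
          """Correlation between an EEG epoch and a P300 template for a range of
          time shifts (in samples). The resulting series tolerates peak-latency
          jitter and can be fed to a classifier (e.g. a small feed-forward ANN)."""
          feats = []
          for s in range(-max_shift, max_shift + 1):
              shifted = np.roll(template, s)
              feats.append(np.corrcoef(epoch, shifted)[0, 1])
          return np.array(feats)

      # Synthetic example: a jittered "P300-like" bump yields a correlation peak
      # at the shift matching its latency offset.
      fs = 250                                   # assumed sampling rate (Hz)
      t = np.arange(0, 0.8, 1 / fs)
      template = np.exp(-((t - 0.30) ** 2) / (2 * 0.05 ** 2))
      epoch = np.roll(template, 10) + 0.2 * np.random.randn(t.size)
      features = time_shift_correlation(epoch, template)
      print("best shift (samples):", int(np.argmax(features)) - 25)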

  7. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation.

    PubMed

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, including fast-recognition mode (FM) and accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high accuracy of recognition to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate was improved to 92% for the AM, and the average ITR decreased to 31.27 bits/min due to strict recognition constraints.

  8. Dynamical behavior of surface tension on rotating fluids in low and microgravity environments

    NASA Technical Reports Server (NTRS)

    Hung, R. J.; Tsao, Y. D.; Hong, B. B.; Leslie, F. W.

    1989-01-01

    Consideration is given to the time-dependent evolutions of the free surface profile (bubble shapes) of a cylindrical container, partially filled with a Newtonian fluid of constant density, rotating about its axis of symmetry in low and microgravity environments. The dynamics of the bubble shapes are calculated for four cases: linear time-dependent functions of spin-up and spin-down in low and microgravity, linear time-dependent functions of increasing and decreasing gravity at high and low rotating cylinder speeds, time-dependent step functions of spin-up and spin-down in low gravity, and sinusoidal function oscillation of the gravity environment in high and low rotating cylinder speeds. It is shown that the computer algorithms developed by Hung et al. (1988) may be used to simulate the profile of time-dependent bubble shapes under variations of centrifugal, capillary, and gravity forces.

  9. Accounting for the Decreasing Reaction Potential of Heterogeneous Aquifers in a Stochastic Framework of Aquifer-Scale Reactive Transport

    NASA Astrophysics Data System (ADS)

    Loschko, Matthias; Wöhling, Thomas; Rudolph, David L.; Cirpka, Olaf A.

    2018-01-01

    Many groundwater contaminants react with components of the aquifer matrix, causing a depletion of the aquifer's reactivity with time. We discuss conceptual simplifications of reactive transport that allow the implementation of a decreasing reaction potential in reactive-transport simulations in chemically and hydraulically heterogeneous aquifers without relying on a fully explicit description. We replace spatial coordinates by travel times and use the concept of relative reactivity, which represents the reaction-partner supply from the matrix relative to a reference. Microorganisms facilitating the reactions are not explicitly modeled. Solute mixing is neglected. Streamlines, obtained by particle tracking, are discretized in travel-time increments with variable content of reaction partners in the matrix. As an exemplary reactive system, we consider aerobic respiration and denitrification with simplified reaction equations: dissolved oxygen undergoes conditional zero-order decay; nitrate follows first-order decay, which is inhibited in the presence of dissolved oxygen. Both reactions deplete the bioavailable organic carbon of the matrix, which in turn determines the relative reactivity. These simplifications reduce the computational effort, facilitating stochastic simulations of reactive transport on the aquifer scale. In a one-dimensional test case with a more detailed description of the reactions, we derive a potential relationship between the bioavailable organic-carbon content and the relative reactivity. In a three-dimensional steady-state test case, we use the simplified model to calculate the decreasing denitrification potential of an artificial aquifer over 200 years in an ensemble of 200 members. We demonstrate that the uncertainty in predicting the nitrate breakthrough in a heterogeneous aquifer decreases with increasing scale of observation.
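
    A minimal sketch of the simplified kinetics described above, marching a single streamline in travel-time increments: dissolved oxygen decays at (conditional) zero order, nitrate at first order with inhibition while oxygen is present, and both reactions deplete the bioavailable organic carbon that sets the relative reactivity. All rate constants, yields, and inlet concentrations are invented placeholders, not values from the study.

      import numpy as np

      def streamline_reaction(o2_in, no3_in, oc0, n_steps=200, dt=1.0,
                              k_o2=0.05, k_no3=0.02, o2_inhib=0.1,
                              y_o2=1.0, y_no3=1.25):
          """March one streamline in travel-time increments. O2 decays at a
          (conditional) zero order, nitrate at first order but inhibited while O2
          is present; both reactions deplete the bioavailable organic carbon (OC)
          of the matrix, which sets the relative reactivity f = OC / OC0.
          Constants are illustrative placeholders, not values from the study."""
          o2, no3, oc = o2_in, no3_in, np.full(n_steps, oc0)
          o2_prof, no3_prof = [], []
          for i in range(n_steps):
              f = oc[i] / oc0                              # relative reactivity
              r_o2 = min(k_o2 * f, o2 / dt)                # zero-order, capped
              inhib = o2_inhib / (o2_inhib + o2)           # O2 inhibition term
              r_no3 = k_no3 * f * inhib * no3              # first-order decay
              o2 = max(o2 - r_o2 * dt, 0.0)
              no3 = max(no3 - r_no3 * dt, 0.0)
              oc[i] = max(oc[i] - (y_o2 * r_o2 + y_no3 * r_no3) * dt, 0.0)
              o2_prof.append(o2)
              no3_prof.append(no3)
          return np.array(o2_prof), np.array(no3_prof), oc

      o2_out, no3_out, oc_left = streamline_reaction(o2_in=0.3, no3_in=1.0, oc0=5.0)
      print("nitrate at streamline outlet:", round(float(no3_out[-1]), 3))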

  10. Compression-based distance (CBD): a simple, rapid, and accurate method for microbiota composition comparison

    PubMed Central

    2013-01-01

    Background Perturbations in intestinal microbiota composition have been associated with a variety of gastrointestinal tract-related diseases. The alleviation of symptoms has been achieved using treatments that alter the gastrointestinal tract microbiota toward that of healthy individuals. Identifying differences in microbiota composition through the use of 16S rRNA gene hypervariable tag sequencing has profound health implications. Current computational methods for comparing microbial communities are usually based on multiple alignments and phylogenetic inference, making them time consuming and requiring exceptional expertise and computational resources. As sequencing data rapidly grows in size, simpler analysis methods are needed to meet the growing computational burdens of microbiota comparisons. Thus, we have developed a simple, rapid, and accurate method, independent of multiple alignments and phylogenetic inference, to support microbiota comparisons. Results We create a metric, called compression-based distance (CBD) for quantifying the degree of similarity between microbial communities. CBD uses the repetitive nature of hypervariable tag datasets and well-established compression algorithms to approximate the total information shared between two datasets. Three published microbiota datasets were used as test cases for CBD as an applicable tool. Our study revealed that CBD recaptured 100% of the statistically significant conclusions reported in the previous studies, while achieving a decrease in computational time required when compared to similar tools without expert user intervention. Conclusion CBD provides a simple, rapid, and accurate method for assessing distances between gastrointestinal tract microbiota 16S hypervariable tag datasets. PMID:23617892
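
    The idea of using off-the-shelf compression to approximate the information shared between two sequence collections can be sketched with a normalized-compression-distance-style metric; the snippet below is a generic stand-in written with zlib and may differ in detail from the CBD definition in the paper.

      import zlib

      def compressed_size(data: bytes) -> int:
          return len(zlib.compress(data, 9))

      def compression_based_distance(a: str, b: str) -> float:
          """Normalized-compression-distance-style metric: small when the two
          collections share much repetitive content, near 1 when they share
          little. A generic stand-in for the paper's CBD."""
          ca, cb = compressed_size(a.encode()), compressed_size(b.encode())
          cab = compressed_size((a + b).encode())
          return (cab - min(ca, cb)) / max(ca, cb)

      # Toy example with pseudo "hypervariable tag" collections.
      sample1 = "ACGTACGTTTGA" * 200 + "GGGTTTCCC" * 50
      sample2 = "ACGTACGTTTGA" * 180 + "TTTAAACCC" * 70
      sample3 = "CCCCGGGGAAAA" * 250
      print(compression_based_distance(sample1, sample2))   # relatively small
      print(compression_based_distance(sample1, sample3))   # larger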

  11. Functional asymmetry in Caenorhabditis elegans taste neurons and its computational role in chemotaxis.

    PubMed

    Suzuki, Hiroshi; Thiele, Tod R; Faumont, Serge; Ezcurra, Marina; Lockery, Shawn R; Schafer, William R

    2008-07-03

    Chemotaxis in Caenorhabditis elegans, like chemotaxis in bacteria, involves a random walk biased by the time derivative of attractant concentration, but how the derivative is computed is unknown. Laser ablations have shown that the strongest deficits in chemotaxis to salts are obtained when the ASE chemosensory neurons (ASEL and ASER) are ablated, indicating that this pair has a dominant role. Although these neurons are left-right homologues anatomically, they exhibit marked asymmetries in gene expression and ion preference. Here, using optical recordings of calcium concentration in ASE neurons in intact animals, we demonstrate an additional asymmetry: ASEL is an ON-cell, stimulated by increases in NaCl concentration, whereas ASER is an OFF-cell, stimulated by decreases in NaCl concentration. Both responses are reliable yet transient, indicating that ASE neurons report changes in concentration rather than absolute levels. Recordings from synaptic and sensory transduction mutants show that the ON-OFF asymmetry is the result of intrinsic differences between ASE neurons. Unilateral activation experiments indicate that the asymmetry extends to the level of behavioural output: ASEL lengthens bouts of forward locomotion (runs) whereas ASER promotes direction changes (turns). Notably, the input and output asymmetries of ASE neurons are precisely those of a simple yet novel neuronal motif for computing the time derivative of chemosensory information, which is the fundamental computation of C. elegans chemotaxis. Evidence for ON and OFF cells in other chemosensory networks suggests that this motif may be common in animals that navigate by taste and smell.
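
    The ON/OFF motif described above can be caricatured in a few lines of code: one channel responds to concentration increases, the other to decreases, and their difference recovers the signed time derivative that biases runs versus turns. This is a purely schematic illustration, not a model of ASE neuron physiology.

      import numpy as np

      def on_off_derivative(concentration, dt=1.0):
          """Schematic ON/OFF motif: an ASEL-like ON channel responds to
          concentration increases, an ASER-like OFF channel to decreases;
          their difference recovers the signed time derivative."""
          dC = np.diff(concentration) / dt
          on = np.maximum(dC, 0.0)        # ON cell: up-steps only
          off = np.maximum(-dC, 0.0)      # OFF cell: down-steps only
          return on, off, on - off        # derivative = ON minus OFF

      c = np.concatenate([np.linspace(0, 1, 50), np.linspace(1, 0.4, 50)])
      on, off, d = on_off_derivative(c)
      print("mean ON (up-gradient):", on.mean(), " mean OFF (down-gradient):", off.mean())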

  12. Compression-based distance (CBD): a simple, rapid, and accurate method for microbiota composition comparison.

    PubMed

    Yang, Fang; Chia, Nicholas; White, Bryan A; Schook, Lawrence B

    2013-04-23

    Perturbations in intestinal microbiota composition have been associated with a variety of gastrointestinal tract-related diseases. The alleviation of symptoms has been achieved using treatments that alter the gastrointestinal tract microbiota toward that of healthy individuals. Identifying differences in microbiota composition through the use of 16S rRNA gene hypervariable tag sequencing has profound health implications. Current computational methods for comparing microbial communities are usually based on multiple alignments and phylogenetic inference, making them time consuming and requiring exceptional expertise and computational resources. As sequencing data rapidly grows in size, simpler analysis methods are needed to meet the growing computational burdens of microbiota comparisons. Thus, we have developed a simple, rapid, and accurate method, independent of multiple alignments and phylogenetic inference, to support microbiota comparisons. We create a metric, called compression-based distance (CBD) for quantifying the degree of similarity between microbial communities. CBD uses the repetitive nature of hypervariable tag datasets and well-established compression algorithms to approximate the total information shared between two datasets. Three published microbiota datasets were used as test cases for CBD as an applicable tool. Our study revealed that CBD recaptured 100% of the statistically significant conclusions reported in the previous studies, while achieving a decrease in computational time required when compared to similar tools without expert user intervention. CBD provides a simple, rapid, and accurate method for assessing distances between gastrointestinal tract microbiota 16S hypervariable tag datasets.

  13. Atmospheric changes caused by galactic cosmic rays over the period 1960–2010

    DOE PAGES

    Jackman, Charles H.; Marsh, Daniel R.; Kinnison, Douglas E.; ...

    2016-05-13

    The Specified Dynamics version of the Whole Atmosphere Community Climate Model (SD-WACCM) and the Goddard Space Flight Center two-dimensional (GSFC 2-D) models are used to investigate the effect of galactic cosmic rays (GCRs) on the atmosphere over the 1960–2010 time period. The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) computation of the GCR-caused ionization rates is used in these simulations. GCR-caused maximum NOx increases of 4–15 % are computed in the Southern polar troposphere with associated ozone increases of 1–2 %. NOx increases of ~1–6 % are calculated for the lower stratosphere with associated ozone decreases of 0.2–1 %. The primary impact of GCRs on ozone was due to their production of NOx. The impact of GCRs varies with the atmospheric chlorine loading, sulfate aerosol loading, and solar cycle variation. Because of the interference between the NOx and ClOx ozone loss cycles (e.g., the ClO + NO2 + M → ClONO2 + M reaction) and the change in the importance of ClOx in the ozone budget, GCRs cause larger atmospheric impacts with less chlorine loading. GCRs also cause larger atmospheric impacts with less sulfate aerosol loading and for years closer to solar minimum. GCR-caused decreases of annual average global total ozone (AAGTO) were computed to be 0.2 % or less, with GCR-caused column ozone increases between 1000 and 100 hPa of 0.08 % or less and GCR-caused column ozone decreases between 100 and 1 hPa of 0.23 % or less. Although these computed ozone impacts are small, GCRs provide a natural influence on ozone and need to be quantified over long time periods. This result serves as a lower limit because the ionization model NAIRAS/HZETRN underestimates the ion production by neglecting the electromagnetic and muon branches of the cosmic-ray-induced cascade; this will be corrected in future work.

  14. Atmospheric changes caused by galactic cosmic rays over the period 1960–2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jackman, Charles H.; Marsh, Daniel R.; Kinnison, Douglas E.

    The Specified Dynamics version of the Whole Atmosphere Community Climate Model (SD-WACCM) and the Goddard Space Flight Center two-dimensional (GSFC 2-D) models are used to investigate the effect of galactic cosmic rays (GCRs) on the atmosphere over the 1960–2010 time period. The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) computation of the GCR-caused ionization rates is used in these simulations. GCR-caused maximum NOx increases of 4–15 % are computed in the Southern polar troposphere with associated ozone increases of 1–2 %. NOx increases of ~1–6 % are calculated for the lower stratosphere with associated ozone decreases of 0.2–1 %. The primary impact of GCRs on ozone was due to their production of NOx. The impact of GCRs varies with the atmospheric chlorine loading, sulfate aerosol loading, and solar cycle variation. Because of the interference between the NOx and ClOx ozone loss cycles (e.g., the ClO + NO2 + M → ClONO2 + M reaction) and the change in the importance of ClOx in the ozone budget, GCRs cause larger atmospheric impacts with less chlorine loading. GCRs also cause larger atmospheric impacts with less sulfate aerosol loading and for years closer to solar minimum. GCR-caused decreases of annual average global total ozone (AAGTO) were computed to be 0.2 % or less, with GCR-caused column ozone increases between 1000 and 100 hPa of 0.08 % or less and GCR-caused column ozone decreases between 100 and 1 hPa of 0.23 % or less. Although these computed ozone impacts are small, GCRs provide a natural influence on ozone and need to be quantified over long time periods. This result serves as a lower limit because the ionization model NAIRAS/HZETRN underestimates the ion production by neglecting the electromagnetic and muon branches of the cosmic-ray-induced cascade; this will be corrected in future work.

  15. Variation in the use of advanced imaging at the time of breast cancer diagnosis in a statewide registry.

    PubMed

    Henry, N Lynn; Braun, Thomas M; Breslin, Tara M; Gorski, David H; Silver, Samuel M; Griggs, Jennifer J

    2017-08-01

    Although national guidelines do not recommend extent of disease imaging for patients with newly diagnosed early stage breast cancer given that the harm outweighs the benefits, high rates of testing have been documented. The 2012 Choosing Wisely guidelines specifically addressed this issue. We examined the change over time in imaging use across a statewide collaborative, as well as the reasons for performing imaging and the impact on cost of care. Clinicopathologic data and use of advanced imaging tests (positron emission tomography, computed tomography, and bone scan) were abstracted from the medical records of patients treated at 25 participating sites in the Michigan Breast Oncology Quality Initiative (MiBOQI). For patients diagnosed in 2014 and 2015, reasons for testing were abstracted from the medical record. Of the 34,078 patients diagnosed with stage 0-II breast cancer between 2008 and 2015 in MiBOQI, 6853 (20.1%) underwent testing with at least 1 imaging modality in the 90 days after diagnosis. There was considerable variability in rates of testing across the 25 sites for all stages of disease. Between 2008 and 2015, testing decreased over time for patients with stage 0-IIA disease (all P < .001) and remained stable for stage IIB disease (P = .10). This decrease in testing over time resulted in a cost savings, especially for patients with stage I disease. Use of advanced imaging at the time of diagnosis decreased over time in a large statewide collaborative. Additional interventions are warranted to further reduce rates of unnecessary imaging to improve quality of care for patients with breast cancer. Cancer 2017;123:2975-83. © 2017 American Cancer Society.

  16. Covariance analyses of satellite-derived mesoscale wind fields

    NASA Technical Reports Server (NTRS)

    Maddox, R. A.; Vonder Haar, T. H.

    1979-01-01

    Statistical structure functions have been computed independently for nine satellite-derived mesoscale wind fields that were obtained on two different days. Small cumulus clouds were tracked at 5 min intervals, but since these clouds occurred primarily in the warm sectors of midlatitude cyclones the results cannot be considered representative of the circulations within cyclones in general. The field structure varied considerably with time and was especially affected if mesoscale features were observed. The wind fields on the 2 days studied were highly anisotropic with large gradients in structure occurring approximately normal to the mean flow. Structure function calculations for the combined set of satellite winds were used to estimate random error present in the fields. It is concluded for these data that the random error in vector winds derived from cumulus cloud tracking using high-frequency satellite data is less than 1.75 m/s. Spatial correlation functions were also computed for the nine data sets. Normalized correlation functions were considerably different for u and v components and decreased rapidly as data point separation increased for both components. The correlation functions for transverse and longitudinal components decreased less rapidly as data point separation increased.
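
    A hedged sketch of the second-order structure function computation underlying such analyses: average the squared differences of a wind component over all point pairs, binned by separation distance; the value extrapolated to zero separation is commonly used to estimate twice the random-error variance. The positions, bins, and toy wind field below are illustrative assumptions.

      import numpy as np

      def structure_function(x, y, u, bins):
          """Second-order structure function D(r) = <[u(p) - u(q)]^2> over all
          point pairs whose separation falls in each distance bin.
          x, y: positions (km); u: one wind component (m/s); bins: bin edges (km)."""
          n = len(u)
          seps, diffs2 = [], []
          for i in range(n):
              for j in range(i + 1, n):
                  seps.append(np.hypot(x[i] - x[j], y[i] - y[j]))
                  diffs2.append((u[i] - u[j]) ** 2)
          seps, diffs2 = np.array(seps), np.array(diffs2)
          idx = np.digitize(seps, bins)
          return np.array([diffs2[idx == k].mean() if np.any(idx == k) else np.nan
                           for k in range(1, len(bins))])

      # Toy example: 200 observation points of one wind component with a weak
      # spatial gradient plus random error.
      rng = np.random.default_rng(1)
      x, y = rng.uniform(0, 300, 200), rng.uniform(0, 300, 200)
      u = 5 + 0.01 * x + rng.normal(0, 1.0, 200)
      D = structure_function(x, y, u, bins=np.arange(0, 320, 40))
      print(np.round(D, 2))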

  17. An efficient approach for improving virtual machine placement in cloud computing environment

    NASA Astrophysics Data System (ADS)

    Ghobaei-Arani, Mostafa; Shamsi, Mahboubeh; Rahmanian, Ali A.

    2017-11-01

    The ever-increasing demand for cloud services requires more data centres. Power consumption in data centres is a challenging problem for cloud computing that has not been addressed adequately by data centre operators. Large data centres in particular struggle with power costs and greenhouse gas production, so power-efficient mechanisms are necessary to mitigate these effects. Virtual machine (VM) placement can be used as an effective method to reduce the power consumption in data centres. In this paper, by grouping both virtual and physical machines and taking the maximum absolute deviation into account during VM placement, both the power consumption and the service level agreement (SLA) violation in data centres are reduced. To this end, a best-fit decreasing algorithm is utilised in the simulation, reducing power consumption by about 5% compared with the modified best-fit decreasing algorithm while improving the SLA violation by 6%. Finally, learning automata are used to trade off power consumption reduction against SLA violation percentage.
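
    For reference, here is a minimal sketch of the baseline best-fit decreasing heuristic mentioned above (the paper's approach additionally groups virtual and physical machines and uses a maximum-absolute-deviation criterion, which is not reproduced here); the demand and capacity numbers are illustrative.

      def best_fit_decreasing(vm_demands, host_capacity):
          """Classic best-fit decreasing placement: sort VMs by demand (descending),
          place each on the active host with the least remaining capacity that still
          fits, and open a new host only when none fits. Demands and capacities are
          in the same unit (e.g. normalized CPU)."""
          hosts = []                       # remaining capacity per active host
          placement = {}                   # vm index -> host index
          order = sorted(range(len(vm_demands)), key=lambda i: vm_demands[i], reverse=True)
          for i in order:
              demand = vm_demands[i]
              best, best_left = None, None
              for h, free in enumerate(hosts):
                  left = free - demand
                  if left >= 0 and (best_left is None or left < best_left):
                      best, best_left = h, left
              if best is None:             # open a new host
                  hosts.append(host_capacity - demand)
                  placement[i] = len(hosts) - 1
              else:
                  hosts[best] = best_left
                  placement[i] = best
          return placement, len(hosts)

      placement, n_hosts = best_fit_decreasing([0.5, 0.7, 0.2, 0.4, 0.3, 0.6], host_capacity=1.0)
      print("hosts used:", n_hosts)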

  18. Evolutionary Action Score of TP53 Identifies High-Risk Mutations Associated with Decreased Survival and Increased Distant Metastases in Head and Neck Cancer.

    PubMed

    Neskey, David M; Osman, Abdullah A; Ow, Thomas J; Katsonis, Panagiotis; McDonald, Thomas; Hicks, Stephanie C; Hsu, Teng-Kuei; Pickering, Curtis R; Ward, Alexandra; Patel, Ameeta; Yordy, John S; Skinner, Heath D; Giri, Uma; Sano, Daisuke; Story, Michael D; Beadle, Beth M; El-Naggar, Adel K; Kies, Merrill S; William, William N; Caulin, Carlos; Frederick, Mitchell; Kimmel, Marek; Myers, Jeffrey N; Lichtarge, Olivier

    2015-04-01

    TP53 is the most frequently altered gene in head and neck squamous cell carcinoma, with mutations occurring in over two-thirds of cases, but the prognostic significance of these mutations remains elusive. In the current study, we evaluated a novel computational approach termed evolutionary action (EAp53) to stratify patients with tumors harboring TP53 mutations as high or low risk, and validated this system in both in vivo and in vitro models. Patients with high-risk TP53 mutations had the poorest survival outcomes and the shortest time to the development of distant metastases. Tumor cells expressing high-risk TP53 mutations were more invasive and tumorigenic and they exhibited a higher incidence of lung metastases. We also documented an association between the presence of high-risk mutations and decreased expression of TP53 target genes, highlighting key cellular pathways that are likely to be dysregulated by this subset of p53 mutations that confer particularly aggressive tumor behavior. Overall, our work validated EAp53 as a novel computational tool that may be useful in clinical prognosis of tumors harboring p53 mutations. ©2015 American Association for Cancer Research.

  19. Evolutionary Action score of TP53 (EAp53) identifies high risk mutations associated with decreased survival and increased distant metastases in head and neck cancer

    PubMed Central

    Neskey, David M.; Osman, Abdullah A.; Ow, Thomas J.; Katsonis, Panagiotis; McDonald, Thomas; Hicks, Stephanie C.; Hsu, Teng-Kuei; Pickering, Curtis R.; Ward, Alexandra; Patel, Ameeta; Yordy, John S.; Skinner, Heath D.; Giri, Uma; Sano, Daisuke; Story, Michael D.; Beadle, Beth M.; El-Naggar, Adel K.; Kies, Merrill S.; William, William N.; Caulin, Carlos; Frederick, Mitchell; Kimmel, Marek; Myers, Jeffrey N.; Lichtarge, Olivier

    2015-01-01

    TP53 is the most frequently altered gene in head and neck squamous cell carcinoma (HNSCC) with mutations occurring in over two third of cases, but the prognostic significance of these mutations remains elusive. In the current study, we evaluated a novel computational approach termed Evolutionary Action (EAp53) to stratify patients with tumors harboring TP53 mutations as high or low risk, and validated this system in both in vivo and in vitro models. Patients with high risk TP53 mutations had the poorest survival outcomes and the shortest time to the development of distant metastases. Tumor cells expressing high risk TP53 mutations were more invasive and tumorigenic and they exhibited a higher incidence of lung metastases. We also documented an association between the presence of high risk mutations and decreased expression of TP53 target genes, highlighting key cellular pathways that are likely to be dysregulated by this subset of p53 mutations which confer particularly aggressive tumor behavior. Overall, our work validated EAp53 as a novel computational tool that may be useful in clinical prognosis of tumors harboring p53 mutations. PMID:25634208

  20. A revised radiation package of G-packed McICA and two-stream approximation: Performance evaluation in a global weather forecasting model

    NASA Astrophysics Data System (ADS)

    Baek, Sunghye

    2017-07-01

    For more efficient and accurate computation of radiative flux, improvements have been achieved in two aspects: the integration of the radiative transfer equation over space and over angle. First, the treatment of the Monte Carlo independent column approximation (McICA) is modified, focusing on efficiency by using a reduced number of random samples ("G-packed") within a reconstructed and unified radiation package. The original McICA takes 20% of the radiation CPU time in the Global/Regional Integrated Model system (GRIMs). The CPU time consumption of McICA is reduced by 70% without compromising accuracy. Second, parameterizations of the shortwave two-stream approximation are revised to reduce errors with respect to the 16-stream discrete ordinate method. The delta-scaled two-stream approximation (TSA) is used almost universally in general circulation models (GCMs) but contains systematic errors that overestimate forward-peak scattering as solar elevation decreases. These errors are alleviated by adjusting the parameterizations for each scattering element: aerosol, liquid, ice, and snow cloud particles. Parameterizations are determined with 20,129 atmospheric columns of GRIMs data and tested with 13,422 independent data columns. The results show that the root-mean-square error (RMSE) over all atmospheric layers is decreased by 39% on average without a significant increase in computational time. The revised TSA, developed and validated with a separate one-dimensional model, is mounted on GRIMs for mid-term numerical weather forecasting. Monthly averaged global forecast skill scores are unchanged with the revised TSA, but the temperature at lower levels of the atmosphere (pressure ≥ 700 hPa) is slightly increased (< 0.5 K) with the corrected atmospheric absorption.

  1. Impact of Booster Breaks and Computer Prompts on Physical Activity and Sedentary Behavior Among Desk-Based Workers: A Cluster-Randomized Controlled Trial.

    PubMed

    Taylor, Wendell C; Paxton, Raheem J; Shegog, Ross; Coan, Sharon P; Dubin, Allison; Page, Timothy F; Rempel, David M

    2016-11-17

    The 15-minute work break provides an opportunity to promote health, yet few studies have examined this part of the workday. We studied physical activity and sedentary behavior among office workers and compared the results of the Booster Break program with those of a second intervention and a control group to determine whether the Booster Break program improved physical and behavioral health outcomes. We conducted a 3-arm, cluster-randomized controlled trial at 4 worksites in Texas from 2010 through 2013 to compare a group-based, structured Booster Break program to an individual-based computer-prompt intervention and a usual-break control group; we analyzed physiologic, behavioral, and employee measures such as work social support, quality of life, and perceived stress. We also identified consistent and inconsistent attendees of the Booster Break sessions. We obtained data from 175 participants (mean age, 43 y; 67% racial/ethnic minority). Compared with the other groups, the consistent Booster Break attendees had greater weekly pedometer counts (P < .001), significant decreases in sedentary behavior and self-reported leisure-time physical activity (P < .001), and a significant increase in triglyceride concentrations (P = .02) (levels remained within the normal range). Usual-break participants significantly increased their body mass index, whereas Booster Break participants maintained body mass index status during the 6 months. Overall, Booster Break participants were 6.8 and 4.3 times more likely to have decreases in BMI and weekend sedentary time, respectively, than usual-break participants. Findings varied among the 3 study groups; however, results indicate the potential for consistent attendees of the Booster Break intervention to achieve significant, positive changes related to physical activity, sedentary behavior, and body mass index.

  2. Impact of Booster Breaks and Computer Prompts on Physical Activity and Sedentary Behavior Among Desk-Based Workers: A Cluster-Randomized Controlled Trial

    PubMed Central

    Paxton, Raheem J.; Shegog, Ross; Coan, Sharon P.; Dubin, Allison; Page, Timothy F.; Rempel, David M.

    2016-01-01

    Introduction The 15-minute work break provides an opportunity to promote health, yet few studies have examined this part of the workday. We studied physical activity and sedentary behavior among office workers and compared the results of the Booster Break program with those of a second intervention and a control group to determine whether the Booster Break program improved physical and behavioral health outcomes. Methods We conducted a 3-arm, cluster-randomized controlled trial at 4 worksites in Texas from 2010 through 2013 to compare a group-based, structured Booster Break program to an individual-based computer-prompt intervention and a usual-break control group; we analyzed physiologic, behavioral, and employee measures such as work social support, quality of life, and perceived stress. We also identified consistent and inconsistent attendees of the Booster Break sessions. Results We obtained data from 175 participants (mean age, 43 y; 67% racial/ethnic minority). Compared with the other groups, the consistent Booster Break attendees had greater weekly pedometer counts (P < .001), significant decreases in sedentary behavior and self-reported leisure-time physical activity (P < .001), and a significant increase in triglyceride concentrations (P = .02) (levels remained within the normal range). Usual-break participants significantly increased their body mass index, whereas Booster Break participants maintained body mass index status during the 6 months. Overall, Booster Break participants were 6.8 and 4.3 times more likely to have decreases in BMI and weekend sedentary time, respectively, than usual-break participants. Conclusion Findings varied among the 3 study groups; however, results indicate the potential for consistent attendees of the Booster Break intervention to achieve significant, positive changes related to physical activity, sedentary behavior, and body mass index. PMID:27854422

  3. Impact of introduction of an acute surgical unit on management and outcomes of small bowel obstruction.

    PubMed

    Musiienko, Anton M; Shakerian, Rose; Gorelik, Alexandra; Thomson, Benjamin N J; Skandarajah, Anita R

    2016-10-01

    The acute surgical unit (ASU) is a recently established model of care in Australasia and worldwide. Limited data are available regarding its effect on the management of small bowel obstruction. We compared the management of small bowel obstruction before and after introduction of ASU at a major tertiary referral centre. We hypothesized that introduction of ASU would correlate with improved patient outcomes. A retrospective review of prospectively maintained databases was performed over two separate 2-year periods, before and after the introduction of ASU. Data collected included demographics, co-morbidity status, use of water-soluble contrast agent and computed tomography. Outcome measures included surgical intervention, time to surgery, hospital length of stay, complications, 30-day readmissions, use of total parenteral nutrition, intensive care unit admissions and overall mortality. Total emergency admissions to the ASU increased from 2640 to 4575 between the two time periods. A total of 481 cases were identified (225 prior and 256 after introduction of ASU). Mortality decreased from 5.8% to 2.0% (P = 0.03), which remained significant after controlling for confounders with multivariate analysis (odds ratio = 0.24, 95% confidence interval 0.08-0.73, P = 0.012). The proportion of surgically managed patients increased (20.9% versus 32.0%, P = 0.003) and more operations were performed within 5 days from presentation (76.6% versus 91.5%, P = 0.02). Fewer patients received water-soluble contrast agent (27.1% versus 18.4%, P = 0.02), but more patients were investigated with a computed tomography (70.7% versus 79.7%, P = 0.02). The ASU model of care resulted in decreased mortality, shorter time to intervention and increased surgical management. Overall complications rate and length of stay did not change. © 2015 Royal Australasian College of Surgeons.

  4. Computed radiography utilizing laser-stimulated luminescence: detectability of simulated low-contrast radiographic objects.

    PubMed

    Higashida, Y; Moribe, N; Hirata, Y; Morita, K; Doudanuki, S; Sonoda, Y; Katsuda, N; Hiai, Y; Misumi, W; Matsumoto, M

    1988-01-01

    Threshold contrasts of low-contrast objects in computed radiography (CR) images were compared with those of blue- and green-emitting screen-film systems by employing the 18-alternative forced choice (18-AFC) procedure. The dependence of the threshold contrast on the incident X-ray exposure and on the object size was studied. The results indicated that the threshold contrasts of the CR system were comparable to those of the blue and green screen-film systems, decreased with increasing object size, and increased with decreasing incident X-ray exposure. The increase in threshold contrasts was small when the relative incident exposure decreased from 1 to 1/4, and was large when incident exposure was decreased further.

  5. The UF/NCI family of hybrid computational phantoms representing the current US population of male and female children, adolescents, and adults—application to CT dosimetry

    NASA Astrophysics Data System (ADS)

    Geyer, Amy M.; O'Reilly, Shannon; Lee, Choonsik; Long, Daniel J.; Bolch, Wesley E.

    2014-09-01

    Substantial increases in pediatric and adult obesity in the US have prompted a major revision to the current UF/NCI (University of Florida/National Cancer Institute) family of hybrid computational phantoms to more accurately reflect current trends in larger body morphometry. A decision was made to construct the new library in a gridded fashion by height/weight without further reference to age-dependent weight/height percentiles as these become quickly outdated. At each height/weight combination, circumferential parameters were defined and used for phantom construction. All morphometric data for the new library were taken from the CDC NHANES survey data over the time period 1999-2006, the most recent reported survey period. A subset of the phantom library was then used in a CT organ dose sensitivity study to examine the degree to which body morphometry influences the magnitude of organ doses for patients that are underweight to morbidly obese in body size. Using primary and secondary morphometric parameters, grids containing 100 adult male height/weight bins, 93 adult female height/weight bins, 85 pediatric male height/weight bins and 73 pediatric female height/weight bins were constructed. These grids served as the blueprints for construction of a comprehensive library of patient-dependent phantoms containing 351 computational phantoms. At a given phantom standing height, normalized CT organ doses were shown to linearly decrease with increasing phantom BMI for pediatric males, while curvilinear decreases in organ dose were shown with increasing phantom BMI for adult females. These results suggest that one very useful application of the phantom library would be the construction of a pre-computed dose library for CT imaging as needed for patient dose-tracking.

  6. A compilation of rate parameters of water-mineral interaction kinetics for application to geochemical modeling

    USGS Publications Warehouse

    Palandri, James L.; Kharaka, Yousif K.

    2004-01-01

    Geochemical reaction path modeling is useful for rapidly assessing the extent of water-mineral-gas interactions both in natural systems and in industrial processes. Modeling of some systems, such as those at low temperature with relatively high hydrologic flow rates, or those perturbed by the subsurface injection of industrial waste such as CO2 or H2S, must account for the relatively slow kinetics of mineral-gas-water interactions. We have therefore compiled parameters conforming to a general Arrhenius-type rate equation, for over 70 minerals, including phases from all the major classes of silicates, most carbonates, and many other non-silicates. The compiled dissolution rate constants range from -0.21 log moles m⁻² s⁻¹ for halite to -17.44 log moles m⁻² s⁻¹ for kyanite, for conditions far from equilibrium, at 25 °C, and near-neutral pH. These data have been added to a computer code that simulates an infinitely well-stirred batch reactor, allowing computation of mass transfer as a function of time. Actual equilibration rates are expected to be much slower than those predicted by the selected computer code, primarily because actual geochemical processes commonly involve flow through porous or fractured media, wherein concentration gradients develop in the aqueous phase near mineral surfaces, resulting in decreased absolute chemical affinity and slower reaction rates. Further differences between observed and computed reaction rates may occur because of variables beyond the scope of most geochemical simulators, such as variation in grain size, aquifer heterogeneity, preferred fluid flow paths, primary and secondary mineral coatings, and secondary minerals that may lead to decreased porosity and clogged pore throats.
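
    A small sketch of an Arrhenius-type dissolution rate law of the general form used in such compilations, with separate acid, neutral, and base mechanisms; the parameter values below are placeholders for a hypothetical mineral, not values from the report.

      import numpy as np

      R = 8.314e-3      # kJ mol^-1 K^-1

      def dissolution_rate(T_kelvin, pH, mechanisms):
          """Arrhenius-type rate law of the general form
          rate = sum over mechanisms of k25 * exp[-Ea/R * (1/T - 1/298.15)] * aH+^n
          in mol m^-2 s^-1, far from equilibrium. Parameter values passed in are
          placeholders for illustration, not values from the report."""
          aH = 10.0 ** (-pH)
          rate = 0.0
          for log_k25, Ea, n in mechanisms:   # Ea in kJ/mol, n = reaction order in H+
              k = 10.0 ** log_k25 * np.exp(-Ea / R * (1.0 / T_kelvin - 1.0 / 298.15))
              rate += k * aH ** n
          return rate

      # Hypothetical mineral with acid, neutral, and base mechanisms.
      mechs = [(-9.0, 55.0, 0.5),    # acid:    log k25, Ea, n(H+)
               (-12.0, 60.0, 0.0),   # neutral
               (-11.0, 50.0, -0.3)]  # base (negative order in H+)
      print(dissolution_rate(T_kelvin=323.15, pH=5.0, mechanisms=mechs))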

  7. Probabilistic Assessment of Hypobaric Decompression Sickness Treatment Success

    NASA Technical Reports Server (NTRS)

    Conkin, Johnny; Abercromby, Andrew F. J.; Dervay, Joseph P.; Feiveson, Alan H.; Gernhardt, Michael L.; Norcross, Jason R.; Ploutz-Snyder, Robert; Wessel, James H., III

    2014-01-01

    The Hypobaric Decompression Sickness (DCS) Treatment Model links a decrease in computed bubble volume from increased pressure (ΔP), increased oxygen (O2) partial pressure, and passage of time during treatment to the probability of symptom resolution [P(symptom resolution)]. The decrease in offending volume is realized in 2 stages: a) during compression via Boyle's Law and b) during subsequent dissolution of the gas phase via the O2 window. We established an empirical model for the P(symptom resolution) while accounting for multiple symptoms within subjects. The data consisted of 154 cases of hypobaric DCS symptoms along with ancillary information from tests on 56 men and 18 women. Our best estimated model is P(symptom resolution) = 1 / (1 + exp(-(ln(ΔP) - 1.510 + 0.795×AMB - 0.00308×Ts) / 0.478)), where ΔP is the pressure difference (psid), AMB = 1 if ambulation took place during part of the altitude exposure, otherwise AMB = 0; and where Ts is the elapsed time in mins from start of the altitude exposure to recognition of a DCS symptom. To apply this model in future scenarios, values of ΔP as inputs to the model would be calculated from the Tissue Bubble Dynamics Model based on the effective treatment pressure: ΔP = P2 - P1 = P1×V1/V2 - P1, where V1 is the computed volume of a spherical bubble in a unit volume of tissue at low pressure P1 and V2 is the computed volume after a change to a higher pressure P2. If 100% ground level O2 (GLO) was breathed in place of air, then V2 continues to decrease through time at P2 at a faster rate. This calculated value of ΔP then represents the effective treatment pressure at any point in time. Simulation of a "pain-only" symptom at 203 min into an ambulatory extravehicular activity (EVA) at 4.3 psia on Mars resulted in a P(symptom resolution) of 0.49 (0.36 to 0.62, 95% confidence interval) on immediate return to 8.2 psia in the Multi-Mission Space Exploration Vehicle. The P(symptom resolution) increased to near certainty (0.99) after 2 hrs of GLO at 8.2 psia or with less certainty on immediate pressurization to 14.7 psia [0.90 (0.83 - 0.95)]. Given the low probability of DCS during EVA and the prompt treatment of a symptom with guidance from the model, it is likely that the symptom and gas phase will resolve with minimum resources and minimal impact on astronaut health, safety, and productivity.
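
    The fitted logistic model and the Boyle's-law pressure difference quoted above can be transcribed directly; the sketch below does only that, with an invented bubble-volume ratio standing in for the output of the Tissue Bubble Dynamics Model, so the printed number is illustrative rather than a reproduction of the 0.49 example.

      import math

      def p_symptom_resolution(delta_p_psid, ambulatory, t_symptom_min):
          """Logistic model quoted in the abstract:
          P = 1 / (1 + exp(-(ln(dP) - 1.510 + 0.795*AMB - 0.00308*Ts) / 0.478))
          dP: effective treatment pressure difference (psid); AMB: 1 if the altitude
          exposure was ambulatory; Ts: minutes from start of exposure to symptom."""
          amb = 1 if ambulatory else 0
          z = (math.log(delta_p_psid) - 1.510 + 0.795 * amb - 0.00308 * t_symptom_min) / 0.478
          return 1.0 / (1.0 + math.exp(-z))

      def boyle_delta_p(p1_psia, v1, v2):
          """Effective treatment pressure from Boyle's law: dP = P1*V1/V2 - P1."""
          return p1_psia * v1 / v2 - p1_psia

      # Illustrative numbers only; the volume ratio is a hypothetical placeholder.
      dp = boyle_delta_p(p1_psia=4.3, v1=1.0, v2=0.52)
      print(round(p_symptom_resolution(dp, ambulatory=True, t_symptom_min=203), 2))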

  8. Flow-induced corrosion of absorbable magnesium alloy: In-situ and real-time electrochemical study

    PubMed Central

    Wang, Juan; Jang, Yongseok; Wan, Guojiang; Giridharan, Venkataraman; Song, Guang-Ling; Xu, Zhigang; Koo, Youngmi; Qi, Pengkai; Sankar, Jagannathan; Huang, Nan; Yun, Yeoheung

    2016-01-01

    An in-situ and real-time electrochemical study in a vascular bioreactor was designed to analyze corrosion mechanism of magnesium alloy (MgZnCa) under mimetic hydrodynamic conditions. Effect of hydrodynamics on corrosion kinetics, types, rates and products was analyzed. Flow-induced shear stress (FISS) accelerated mass and electron transfer, leading to an increase in uniform and localized corrosions. FISS increased the thickness of uniform corrosion layer, but filiform corrosion decreased this layer resistance at high FISS conditions. FISS also increased the removal rate of localized corrosion products. Impedance-estimated and linear polarization-measured polarization resistances provided a consistent correlation to corrosion rate calculated by computed tomography. PMID:28626241

  9. Flow-induced corrosion of absorbable magnesium alloy: In-situ and real-time electrochemical study.

    PubMed

    Wang, Juan; Jang, Yongseok; Wan, Guojiang; Giridharan, Venkataraman; Song, Guang-Ling; Xu, Zhigang; Koo, Youngmi; Qi, Pengkai; Sankar, Jagannathan; Huang, Nan; Yun, Yeoheung

    2016-03-01

    An in-situ and real-time electrochemical study in a vascular bioreactor was designed to analyze corrosion mechanism of magnesium alloy (MgZnCa) under mimetic hydrodynamic conditions. Effect of hydrodynamics on corrosion kinetics, types, rates and products was analyzed. Flow-induced shear stress (FISS) accelerated mass and electron transfer, leading to an increase in uniform and localized corrosions. FISS increased the thickness of uniform corrosion layer, but filiform corrosion decreased this layer resistance at high FISS conditions. FISS also increased the removal rate of localized corrosion products. Impedance-estimated and linear polarization-measured polarization resistances provided a consistent correlation to corrosion rate calculated by computed tomography.

  10. Propagation of quasifracture in viscoelastic media under low-cycle repeated stressing

    NASA Technical Reports Server (NTRS)

    Liu, X. P.; Hsiao, C. C.

    1985-01-01

    The propagation of a craze as quasifracture under repeated cyclic stressing in polymeric systems has been under intensive investigation recently. Based upon a time-dependent crazing theory, the governing differential equation describing the propagation of a single craze as quasifracture in an infinite viscoelastic plate has been solved for sinusoidal stresses. Numerical methods have been employed to obtain the normalized craze length as a function of time. The computed results indicate that the length of a quasifracture may decelerate and decrease indicating that its velocity can reverse. This behavior may be consistent with the observed and much discussed craze healing and the enclosure model in fatigue and fracture of solids.

  11. Limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method for the parameter estimation on geographically weighted ordinal logistic regression model (GWOLR)

    NASA Astrophysics Data System (ADS)

    Saputro, Dewi Retno Sari; Widyaningsih, Purnami

    2017-08-01

    In general, parameter estimation for the GWOLR model uses the maximum likelihood method, but this leads to a system of nonlinear equations that is difficult to solve exactly, so an approximate numerical solution is needed. There are two popular numerical approaches: Newton's method and quasi-Newton (QN) methods. Newton's method requires substantial computation time because it evaluates the Jacobian matrix (derivatives) at every iteration. QN methods avoid this drawback by replacing the explicit derivative computation with updates built from direct function and gradient evaluations, using a Hessian approximation such as the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is a QN method that, like DFP, maintains a positive definite Hessian approximation. However, BFGS requires a large amount of memory, so an algorithm with lower memory usage is needed, namely limited-memory BFGS (L-BFGS). The purpose of this research is to assess the efficiency of the L-BFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. We found that the BFGS and L-BFGS methods require on the order of O(n²) and O(nm) arithmetic operations, respectively, where n is the number of parameters and m the number of stored correction pairs.
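
    The memory-saving idea referred to above is usually implemented with the L-BFGS two-loop recursion, which applies the inverse-Hessian approximation to a gradient using only the last m correction pairs instead of a dense n × n matrix. The sketch below shows that recursion on a toy quadratic (a stand-in for the GWOLR negative log-likelihood); the fixed step size replaces the line search a real implementation would use.

      import numpy as np

      def lbfgs_two_loop(grad, s_list, y_list):
          """L-BFGS two-loop recursion: returns the search direction -H_k * grad
          using only the last m pairs s_i = x_{i+1}-x_i and y_i = g_{i+1}-g_i, so
          the dense n x n inverse Hessian is never formed (O(m*n) work and memory)."""
          q = grad.copy()
          alphas = []
          for s, y in zip(reversed(s_list), reversed(y_list)):
              rho = 1.0 / (y @ s)
              a = rho * (s @ q)
              q -= a * y
              alphas.append((a, rho, s, y))
          if s_list:                                   # initial scaling H0 = gamma * I
              s, y = s_list[-1], y_list[-1]
              q *= (s @ y) / (y @ y)
          for a, rho, s, y in reversed(alphas):
              b = rho * (y @ q)
              q += (a - b) * s
          return -q

      # Minimize a toy quadratic with gradient g(x) = A x (minimum at the origin).
      A = np.diag([1.0, 5.0, 10.0])
      f_grad = lambda x: A @ x
      x = np.array([1.0, 1.0, 1.0])
      s_hist, y_hist, m = [], [], 5
      g = f_grad(x)
      for _ in range(50):
          d = lbfgs_two_loop(g, s_hist, y_hist)
          step = 0.1                                   # fixed step; a line search is usual
          x_new = x + step * d
          g_new = f_grad(x_new)
          s_hist.append(x_new - x); y_hist.append(g_new - g)
          s_hist, y_hist = s_hist[-m:], y_hist[-m:]    # keep only the last m pairs
          x, g = x_new, g_new
      print("approx. minimizer (true optimum is the origin):", np.round(x, 3))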

  12. Power Management and SRAM for Energy-Autonomous and Low-Power Systems

    NASA Astrophysics Data System (ADS)

    Chen, Gregory K.

    We demonstrate the first two known complete, self-powered millimeter-scale computer systems. These microsystems achieve zero-net-energy operation using solar energy harvesting and ultra-low-power circuits. A medical implant for monitoring intraocular pressure (IOP) is presented as part of a treatment for glaucoma. The 1.5 mm³ IOP monitor is easily implantable because of its small size and measures IOP with 0.5 mmHg accuracy. It wirelessly transmits data to an external wand while consuming 4.70 nJ/bit. This provides rapid feedback about treatment efficacies to decrease physician response time and potentially prevent unnecessary vision loss. A nearly-perpetual temperature sensor is presented that processes data using a 2.1 µW near-threshold ARM® Cortex-M3™ µP that provides a widely-used and trusted programming platform. Energy harvesting and power management techniques for these two microsystems enable energy-autonomous operation. The IOP monitor harvests 80 nW of solar power while consuming only 5.3 nW, extending lifetime indefinitely. This allows the device to provide medical information for extended periods of time, giving doctors time to converge upon the best glaucoma treatment. The temperature sensor uses on-demand power delivery to improve low-load dc-dc voltage conversion efficiency by 4.75x. It also performs linear regulation to deliver power with low noise, improved load regulation, and tight line regulation. Low-power high-throughput SRAM techniques help millimeter-scale microsystems meet stringent power budgets. VDD scaling in memory decreases energy per access, but also decreases stability margins. These margins can be improved using sizing, VTH selection, and assist circuits, as well as new bitcell designs. Adaptive Crosshairs modulation of SRAM power supplies fixes 70% of parametric failures. Half-differential SRAM design improves stability, reducing VMIN by 72 mV. The circuit techniques for energy autonomy presented in this dissertation enable millimeter-scale microsystems for medical implants, such as blood pressure and glucose sensors, as well as non-medical applications, such as supply chain and infrastructure monitoring. These pervasive sensors represent the continuation of Bell's Law, which accurately traces the evolution of computers as they have become smaller, more numerous, and more powerful. The development of millimeter-scale massively-deployed ubiquitous computers ensures the continued expansion and profitability of the semiconductor industry. NanoWatt circuit techniques will allow us to meet this next frontier in IC design.

  13. Investigation of Response Amplitude Operators for Floating Offshore Wind Turbines: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramachandran, G. K. V.; Robertson, A.; Jonkman, J. M.

    This paper examines the consistency between response amplitude operators (RAOs) computed from WAMIT, a linear frequency-domain tool, and RAOs derived from time-domain computations based on white-noise wave excitation using FAST, a nonlinear aero-hydro-servo-elastic tool. The RAO comparison is first made for a rigid floating wind turbine without wind excitation. The investigation is further extended to examine how these RAOs change for a flexible and operational wind turbine. The RAOs are computed for below-rated, rated, and above-rated wind conditions. The method is applied to a floating wind system composed of the OC3-Hywind spar buoy and NREL 5-MW wind turbine. The responses are compared between FAST and WAMIT to verify the FAST model and to understand the influence of structural flexibility, aerodynamic damping, control actions, and waves on the system responses. The results show that based on the RAO computation procedure implemented, the WAMIT- and FAST-computed RAOs are similar (as expected) for a rigid turbine subjected to waves only. However, WAMIT is unable to model the excitation from a flexible turbine. Further, the presence of aerodynamic damping decreased the platform surge and pitch responses, as computed by both WAMIT and FAST when wind was included. Additionally, the influence of gyroscopic excitation increased the yaw response, which was captured by both WAMIT and FAST.

  14. EIVAN - AN INTERACTIVE ORBITAL TRAJECTORY PLANNING TOOL

    NASA Technical Reports Server (NTRS)

    Brody, A. R.

    1994-01-01

    The Interactive Orbital Trajectory planning Tool, EIVAN, is a forward looking interactive orbit trajectory plotting tool for use with Proximity Operations (operations occurring within a one kilometer sphere of the space station) and other maneuvers. The result of vehicle burns on-orbit is very difficult to anticipate because of non-linearities in the equations of motion governing orbiting bodies. EIVAN was developed to plot resulting trajectories, to provide a better comprehension of orbital mechanics effects, and to help the user develop heuristics for onorbit mission planning. EIVAN comprises a worksheet and a chart from Microsoft Excel on a Macintosh computer. The orbital path for a user-specified time interval is plotted given operator burn inputs. Fuel use is also calculated. After the thrust parameters (magnitude, direction, and time) are input, EIVAN plots the resulting trajectory. Up to five burns may be inserted at any time in the mission. Twenty data points are plotted for each burn and the time interval can be varied to accommodate any desired time frame or degree of resolution. Since the number of data points for each burn is constant, the mission duration can be increased or decreased by increasing or decreasing the time interval. The EIVAN program runs with Microsoft's Excel for execution on a Macintosh running Macintosh OS. A working knowledge of Excel is helpful, but not imperative, for interacting with EIVAN. The program was developed in 1989.

  15. The value of continuity: Refined isogeometric analysis and fast direct solvers

    DOE PAGES

    Garcia, Daniel; Pardo, David; Dalcin, Lisandro; ...

    2016-08-24

    Here, we propose the use of highly continuous finite element spaces interconnected with low continuity hyperplanes to maximize the performance of direct solvers. Starting from a highly continuous Isogeometric Analysis (IGA) discretization, we introduce C0-separators to reduce the interconnection between degrees of freedom in the mesh. By doing so, both the solution time and best approximation errors are simultaneously improved. We call the resulting method “refined Isogeometric Analysis (rIGA)”. To illustrate the impact of the continuity reduction, we analyze the number of Floating Point Operations (FLOPs), computational times, and memory required to solve the linear system obtained by discretizing the Laplace problem with structured meshes and uniform polynomial orders. Theoretical estimates demonstrate that an optimal continuity reduction may decrease the total computational time by a factor between p² and p³, with p being the polynomial order of the discretization. Numerical results indicate that our proposed refined isogeometric analysis delivers a speed-up factor proportional to p². In a 2D mesh with four million elements and p=5, the linear system resulting from rIGA is solved 22 times faster than the one from highly continuous IGA. In a 3D mesh with one million elements and p=3, the linear system is solved 15 times faster for the refined than the maximum continuity isogeometric analysis.

  16. Timeliness in the German surveillance system for infectious diseases: Amendment of the infection protection act in 2013 decreased local reporting time to 1 day

    PubMed Central

    Diercke, Michaela; Salmon, Maëlle; Czogiel, Irina; Schumacher, Dirk; Claus, Hermann; Gilsdorf, Andreas

    2017-01-01

    Time needed to report surveillance data within the public health service delays public health actions. The amendment to the infection protection act (IfSG) from 29 March 2013 requires local and state public health agencies to report surveillance data within one working day instead of one week. We analysed factors associated with reporting time and evaluated the IfSG amendment. Local reporting time is the time between date of notification and date of export to the state public health agency and state reporting time is time between date of arrival at the state public health agency and the date of export. We selected cases reported between 28 March 2012 and 28 March 2014. We calculated the median local and state reporting time, stratified by potentially influential factors, computed a negative binomial regression model and assessed quality and workload parameters. Before the IfSG amendment the median local reporting time was 4 days and 1 day afterwards. The state reporting time was 0 days before and after. Influential factors are the individual local public health agency, the notified disease, the notification software and the day of the week. Data quality and workload parameters did not change. The IfSG amendment has decreased local reporting time; no relevant loss of data quality or identifiable workload increase could be detected. State reporting time is negligible. We recommend efforts to harmonise practices of local public health agencies including the exclusive use of software with fully compatible interfaces. PMID:29088243

  17. Computer task performance by subjects with Duchenne muscular dystrophy.

    PubMed

    Malheiros, Silvia Regina Pinheiro; da Silva, Talita Dias; Favero, Francis Meire; de Abreu, Luiz Carlos; Fregni, Felipe; Ribeiro, Denise Cardoso; de Mello Monteiro, Carlos Bandeira

    2016-01-01

    Two specific objectives were established to quantify computer task performance among people with Duchenne muscular dystrophy (DMD). First, we compared simple computational task performance between subjects with DMD and age-matched typically developing (TD) subjects. Second, we examined correlations between the ability of subjects with DMD to learn the computational task and their motor functionality, age, and initial task performance. The study included 84 individuals (42 with DMD, mean age of 18±5.5 years, and 42 age-matched controls). They executed a computer maze task; all participants performed the acquisition (20 attempts) and retention (five attempts) phases, repeating the same maze. A different maze was used to verify transfer performance (five attempts). The Motor Function Measure Scale was applied, and the results were compared with maze task performance. In the acquisition phase, a significant decrease was found in movement time (MT) between the first and last acquisition block, but only for the DMD group. For the DMD group, MT during transfer was shorter than during the first acquisition block, indicating improvement from the first acquisition block to transfer. In addition, the TD group showed shorter MT than the DMD group across the study. DMD participants improved their performance after practicing a computational task; however, the difference in MT was present in all attempts among DMD and control subjects. Computational task improvement was positively influenced by the initial performance of individuals with DMD. In turn, the initial performance was influenced by their distal functionality but not their age or overall functionality.

  18. A Fog Computing and Cloudlet Based Augmented Reality System for the Industry 4.0 Shipyard.

    PubMed

    Fernández-Caramés, Tiago M; Fraga-Lamas, Paula; Suárez-Albela, Manuel; Vilar-Montesinos, Miguel

    2018-06-02

    Augmented Reality (AR) is one of the key technologies pointed out by Industry 4.0 as a tool for enhancing the next generation of automated and computerized factories. AR can also help shipbuilding operators, since they usually need to interact with information (e.g., product datasheets, instructions, maintenance procedures, quality control forms) that could be handled easily and more efficiently through AR devices. This is the reason why Navantia, one of the 10 largest shipbuilders in the world, is studying the application of AR (among other technologies) in different shipyard environments in a project called "Shipyard 4.0". This article presents Navantia's industrial AR (IAR) architecture, which is based on cloudlets and on the fog computing paradigm. Both technologies are ideal for supporting physically-distributed, low-latency and QoS-aware applications that decrease the network traffic and the computational load of traditional cloud computing systems. The proposed IAR communications architecture is evaluated in real-world scenarios with payload sizes according to demanding Microsoft HoloLens applications and when using a cloud, a cloudlet and a fog computing system. The results show that, in terms of response delay, the fog computing system is the fastest when transferring small payloads (less than 128 KB), while for larger file sizes, the cloudlet solution is faster than the others. Moreover, under high loads (with many concurrent IAR clients), the cloudlet in some cases is more than four times faster than the fog computing system in terms of response delay.

  19. A Markov computer simulation model of the economics of neuromuscular blockade in patients with acute respiratory distress syndrome

    PubMed Central

    Macario, Alex; Chow, John L; Dexter, Franklin

    2006-01-01

    Background Management of acute respiratory distress syndrome (ARDS) in the intensive care unit (ICU) is clinically challenging and costly. Neuromuscular blocking agents may facilitate mechanical ventilation and improve oxygenation, but may result in prolonged recovery of neuromuscular function and acute quadriplegic myopathy syndrome (AQMS). The goal of this study was to address a hypothetical question via computer modeling: Would a reduction in intubation time of 6 hours and/or a reduction in the incidence of AQMS from 25% to 21% provide enough benefit to justify a drug with an additional expenditure of $267 (the difference in acquisition cost between a generic and brand name neuromuscular blocker)? Methods The base case was a 55-year-old man in the ICU with ARDS who receives neuromuscular blockade for 3.5 days. A Markov model was designed with hypothetical patients in 1 of 6 mutually exclusive health states: ICU-intubated, ICU-extubated, hospital ward, long-term care, home, or death, over a period of 6 months. The net monetary benefit was computed. Results Our computer simulation modeling predicted the mean cost for ARDS patients receiving standard care for 6 months to be $62,238 (5% – 95% percentiles $42,259 – $83,766), with an overall 6-month mortality of 39%. Assuming a ceiling ratio of $35,000, even if a drug (that cost $267 more) hypothetically reduced AQMS from 25% to 21% and decreased intubation time by 6 hours, the net monetary benefit would only equal $137. Conclusion ARDS patients receiving a neuromuscular blocker have a high mortality and unpredictable outcomes, which result in large variability in costs per case. If a patient dies, there is no benefit to any drug that reduces ventilation time or AQMS incidence. A prospective, randomized pharmacoeconomic study of neuromuscular blockers in the ICU to assess AQMS or intubation times is impractical because of the highly variable clinical course of patients with ARDS. PMID:16539706
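
    As an illustration of the kind of calculation described above, the hedged sketch below runs a small Markov cohort simulation with six health states over six monthly cycles. All transition probabilities and per-cycle costs are invented placeholders rather than the study's parameters; only the structure (a state-occupancy vector propagated through a transition matrix, with costs accumulated each cycle) reflects the abstract.

```python
import numpy as np

# Hypothetical Markov cohort model over 6 monthly cycles.
# States and numbers are placeholders, not the parameters used in the study.
states = ["ICU-intubated", "ICU-extubated", "ward", "long-term care", "home", "death"]
P = np.array([  # monthly transition probabilities (each row sums to 1)
    [0.30, 0.30, 0.20, 0.05, 0.05, 0.10],
    [0.05, 0.20, 0.45, 0.10, 0.15, 0.05],
    [0.02, 0.03, 0.30, 0.10, 0.50, 0.05],
    [0.01, 0.01, 0.05, 0.70, 0.18, 0.05],
    [0.00, 0.00, 0.02, 0.01, 0.95, 0.02],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],
])
cost_per_cycle = np.array([30000.0, 12000.0, 4000.0, 6000.0, 500.0, 0.0])

dist = np.array([1.0, 0, 0, 0, 0, 0])   # cohort starts ICU-intubated
total_cost = 0.0
for _ in range(6):                       # 6 monthly cycles
    total_cost += dist @ cost_per_cycle
    dist = dist @ P

print(f"expected 6-month cost: ${total_cost:,.0f}, mortality: {dist[-1]:.1%}")

# Net monetary benefit of a drug costing an extra $267 would then be
# NMB = (ceiling ratio) * (gain in effectiveness) - (incremental cost).
```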

  20. Theoretical basal Ca II fluxes for late-type stars: results from magnetic wave models with time-dependent ionization and multi-level radiation treatments

    NASA Astrophysics Data System (ADS)

    Fawzy, Diaa E.; Stȩpień, K.

    2018-03-01

    In the current study we present ab initio numerical computations of the generation and propagation of longitudinal waves in magnetic flux tubes embedded in the atmospheres of late-type stars. The interaction between convective turbulence and the magnetic structure is computed and the obtained longitudinal wave energy flux is used in a self-consistent manner to excite the small-scale magnetic flux tubes. In the current study we reduce the number of assumptions made in our previous studies by considering the full magnetic wave energy fluxes and spectra as well as time-dependent ionization (TDI) of hydrogen, employing multi-level Ca II atomic models, and taking into account departures from local thermodynamic equilibrium. Our models employ the recently confirmed value of the mixing-length parameter α=1.8. Regions with strong magnetic fields (magnetic filling factors of up to 50%) are also considered in the current study. The computed Ca II emission fluxes show a strong dependence on the magnetic filling factors, and the effect of time-dependent ionization (TDI) turns out to be very important in the atmospheres of late-type stars heated by acoustic and magnetic waves. The emitted Ca II fluxes with TDI included into the model are decreased by factors that range from 1.4 to 5.5 for G0V and M0V stars, respectively, compared to models that do not consider TDI. The results of our computations are compared with observations. Excellent agreement between the observed and predicted basal flux is obtained. The predicted trend of Ca II emission flux with magnetic filling factor and stellar surface temperature also agrees well with the observations but the calculated maximum fluxes for stars of different spectral types are about two times lower than observations. Though the longitudinal MHD waves considered here are important for chromosphere heating in high activity stars, additional heating mechanism(s) are apparently present.

  1. Acceleration of the chemistry solver for modeling DI engine combustion using dynamic adaptive chemistry (DAC) schemes

    NASA Astrophysics Data System (ADS)

    Shi, Yu; Liang, Long; Ge, Hai-Wen; Reitz, Rolf D.

    2010-03-01

    Acceleration of the chemistry solver for engine combustion is of much interest because, in practical engine simulations, extensive computational time is spent solving the fuel oxidation and emission formation chemistry. A dynamic adaptive chemistry (DAC) scheme based on a directed relation graph error propagation (DRGEP) method had previously been applied to homogeneous charge compression ignition (HCCI) engine combustion with detailed chemistry (over 500 species), using an R-value-based breadth-first search (RBFS) algorithm that significantly reduced computational times (by as much as 30-fold). The present paper extends the use of this on-the-fly kinetic mechanism reduction scheme to model combustion in direct-injection (DI) engines. It was found that the DAC scheme becomes less efficient when applied to DI engine simulations using a kinetic mechanism of relatively small size, and the accuracy of the original DAC scheme decreases for conventional non-premixed combustion engines. The present study also focuses on the determination of search-initiating species, the involvement of the NOx chemistry, the selection of a proper error tolerance, and the treatment of the interaction between chemical heat release and the fuel spray. Both DAC schemes were integrated into the ERC KIVA-3v2 code, and simulations were conducted to compare the two schemes. In general, the present DAC scheme has better efficiency and similar accuracy compared to the previous DAC scheme. The efficiency depends on the size of the chemical kinetics mechanism used and the engine operating conditions. For cases using a small n-heptane kinetic mechanism of 34 species, 30% of the computational time is saved, and 50% for a larger n-heptane kinetic mechanism of 61 species. The paper also demonstrates that, by combining the present DAC scheme with an adaptive multi-grid chemistry (AMC) solver, it is feasible to simulate a direct-injection engine using a detailed n-heptane mechanism with 543 species in practical computer time.
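
    The abstract does not give implementation details, but the core DRGEP idea can be sketched as a graph search: starting from user-chosen target ("search-initiating") species, path-dependent interaction coefficients are propagated through the species graph, and species falling below an error tolerance are dropped from the active mechanism. The sketch below is a minimal, hypothetical illustration of that propagation step only; in practice the matrix `r` of direct interaction coefficients is recomputed from local reaction rates at each cell and time step.

```python
# Minimal sketch of DRGEP-style species selection, assuming a precomputed
# matrix r[i][j] of direct interaction coefficients in [0, 1] between species.
def drgep_active_species(r, targets, tol=1e-3):
    n = len(r)
    R = [0.0] * n                       # path-dependent coefficient to any target
    for t in targets:
        R[t] = 1.0
    frontier = list(targets)
    while frontier:                     # breadth-first propagation (cf. RBFS)
        nxt = []
        for i in frontier:
            for j in range(n):
                rij = R[i] * r[i][j]    # coefficients multiply along a path
                if rij > R[j]:
                    R[j] = rij
                    nxt.append(j)
        frontier = nxt
    return {j for j in range(n) if R[j] >= tol}

# Example with 4 hypothetical species; species 0 is the search-initiating target.
r = [[0.0, 0.9, 0.01, 0.0],
     [0.5, 0.0, 0.2, 0.0],
     [0.0, 0.1, 0.0, 0.3],
     [0.0, 0.0, 0.4, 0.0]]
print(drgep_active_species(r, targets=[0], tol=1e-3))
```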

  2. A Method for Growing Bio-memristors from Slime Mold.

    PubMed

    Miranda, Eduardo Reck; Braund, Edward

    2017-11-02

    Our research is aimed at gaining a better understanding of the electronic properties of organisms in order to engineer novel bioelectronic systems and computing architectures based on biology. This specific paper focuses on harnessing the unicellular slime mold Physarum polycephalum to develop bio-memristors (or biological memristors) and bio-computing devices. The memristor is a resistor that possesses memory. It is the 4th fundamental passive circuit element (the other three are the resistor, the capacitor, and the inductor), which is paving the way for the design of new kinds of computing systems; e.g., computers that might relinquish the distinction between storage and a central processing unit. When applied with an AC voltage, the current vs. voltage characteristic of a memristor is a pinched hysteresis loop. It has been shown that P. polycephalum produces pinched hysteresis loops under AC voltages and displays adaptive behavior that is comparable with the functioning of a memristor. This paper presents the method that we developed for implementing bio-memristors with P. polycephalum and introduces the development of a receptacle to culture the organism, which facilitates its deployment as an electronic circuit component. Our method has proven to decrease growth time, increase component lifespan, and standardize electrical observations.

  3. Effect of Playing Interactive Computer Game on Distress of Insulin Injection Among Type 1 Diabetic Children

    PubMed Central

    Ebrahimpour, Fatemeh; Sadeghi, Narges; Najafi, Mostafa; Iraj, Bijan; Shahrokhi, Akram

    2015-01-01

    Background: Diabetic children and their families experience high levels of stress because of daily insulin injections. Objectives: This study was conducted to investigate the impact of an interactive computer game on behavioral distress due to insulin injection among diabetic children. Patients and Methods: In this clinical trial, thirty children (3-12 years) with type 1 diabetes who needed daily insulin injections were recruited and allocated randomly into two groups. Children in the intervention group received an interactive computer game and were asked to play it at home for a week. No special intervention was given to the control group. The behavioral distress of both groups was assessed before, during and after the intervention with the Observational Scale of Behavioral Distress-Revised (OSBD-R). Results: Repeated-measures ANOVA showed no significant change in OSBD-R scores over time in the control group (P = 0.08), but the change was significant in the study group (P = 0.001). The mean distress scores differed significantly between the two groups (P = 0.03). Conclusions: According to the findings, playing an interactive computer game can decrease behavioral distress induced by insulin injection in type 1 diabetic children. The game may be a useful adjunct to other interventions. PMID:26199708

  4. Acceleration and torque feedback for robotic control - Experimental results

    NASA Technical Reports Server (NTRS)

    McInroy, John E.; Saridis, George N.

    1990-01-01

    Gross motion control of robotic manipulators typically requires significant on-line computations to compensate for nonlinear dynamics due to gravity, Coriolis, centripetal, and friction nonlinearities. One controller proposed by Luo and Saridis avoids these computations by feeding back joint acceleration and torque. This study implements the controller on a Puma 600 robotic manipulator. Joint acceleration measurement is obtained by measuring linear accelerations of each joint, and deriving a computationally efficient transformation from the linear measurements to the angular accelerations. Torque feedback is obtained by using the previous torque sent to the joints. The implementation has stability problems on the Puma 600 due to the extremely high gains inherent in the feedback structure. Since these high gains excite frequency modes in the Puma 600, the algorithm is modified to decrease the gain inherent in the feedback structure. The resulting compensator is stable and insensitive to high frequency unmodeled dynamics. Moreover, a second compensator is proposed which uses acceleration and torque feedback, but still allows nonlinear terms to be fed forward. Thus, by feeding the increment in the easily calculated gravity terms forward, improved responses are obtained. Both proposed compensators are implemented, and the real time results are compared to those obtained with the computed torque algorithm.

  5. HiFiVS Modeling of Flow Diverter Deployment Enables Hemodynamic Characterization of Complex Intracranial Aneurysm Cases

    PubMed Central

    Xiang, Jianping; Damiano, Robert J.; Lin, Ning; Snyder, Kenneth V.; Siddiqui, Adnan H.; Levy, Elad I.; Meng, Hui

    2016-01-01

    Object Flow diversion via the Pipeline Embolization Device (PED) represents the most recent advancement in endovascular therapy of intracranial aneurysms. This exploratory study aims at a proof of concept for an advanced device-modeling tool in conjunction with computational fluid dynamics (CFD) to evaluate flow modification effects by PED in real treatment cases. Methods We performed computational modeling of three PED-treated complex aneurysm cases. Case I had a fusiform vertebral aneurysm treated with a single PED. Case II had a giant internal carotid artery (ICA) aneurysm treated with 2 PEDs. Case III consisted of two tandem ICA aneurysms (a and b) treated by a single PED. Our recently developed high fidelity virtual stenting (HiFiVS) technique was used to recapitulate the clinical deployment process of PEDs in silico for these three cases. Pre- and post-treatment aneurysmal hemodynamics were analyzed using CFD simulation. Changes in aneurysmal flow velocity, inflow rate, and wall shear stress (WSS) (quantifying flow reduction) and turnover time (quantifying stasis) were calculated and compared with clinical outcome. Results In Case I (occluded within the first 3 months), the aneurysm experienced the most drastic aneurysmal flow reduction after PED placement: the aneurysmal average velocity, inflow rate and average WSS were decreased by 76.3%, 82.5% and 74.0%, respectively, while the turnover time was increased to 572.1% of its pre-treatment value. In Case II (occluded at 6 months), aneurysmal average velocity, inflow rate and average WSS were decreased by 39.4%, 38.6%, and 59.1%, respectively, and turnover time increased to 163.0%. In Case III, Aneurysm III-a (occluded at 6 months) experienced decreases of 38.0%, 28.4%, and 50.9% in aneurysmal average velocity, inflow rate and average WSS, respectively, and an increase to 139.6% in turnover time, quite similar to Case II. Surprisingly, the adjacent Aneurysm III-b experienced more substantial flow reduction (decreases of 77.7%, 53.0%, and 84.4% in average velocity, inflow rate and average WSS, respectively, and an increase to 213.0% in turnover time) than Aneurysm III-a, which qualitatively agreed with angiographic observation at the 3-month follow-up. However, Aneurysm III-b remained patent at both 6 months and 9 months. A closer examination of the vascular anatomy of Case III revealed blood draining to the ophthalmic artery off Aneurysm III-b, which may have prevented its complete thrombosis. Conclusion This proof-of-concept study demonstrates that HiFiVS modeling of flow diverter deployment enables detailed characterization of the hemodynamic alteration produced by PED placement. Post-treatment aneurysmal flow reduction may be correlated with aneurysm occlusion outcome. However, predicting aneurysm treatment outcome by flow diverters also requires consideration of other factors, including vascular anatomy. PMID:26090829

  6. Utilization and viability of biologically-inspired algorithms in a dynamic multiagent camera surveillance system

    NASA Astrophysics Data System (ADS)

    Mundhenk, Terrell N.; Dhavale, Nitin; Marmol, Salvador; Calleja, Elizabeth; Navalpakkam, Vidhya; Bellman, Kirstie; Landauer, Chris; Arbib, Michael A.; Itti, Laurent

    2003-10-01

    In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues in a procedural manner when designing and implementing large-scale complex systems. To advance this hypothesis, we discuss a multi-agent surveillance system in which 12 agent CPUs, each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate's visual system, which allows the animal to perform a real-time selection of the few most conspicuous locations in the visual input. This is accomplished by having each camera agent utilize the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real-time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera view points or create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weight of different feature vectors in other cameras, using mechanisms similar to the excitation and suppression that have been documented in electrophysiology, psychophysics and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, the allocation of computational time is weighted based upon the history of each camera: a camera agent that has a history of seeing more salient targets is more likely to obtain computational resources. The system demonstrates the viability of biologically inspired systems in real-time tracking. In future work we plan on implementing additional biological mechanisms for cooperative management of both the sensor and processing resources in this system, including top-down biasing for target specificity as well as novelty, and the activity of the tracked object in relation to sensitive features of the environment.

  7. Improved digital filters for evaluating Fourier and Hankel transform integrals

    USGS Publications Warehouse

    Anderson, Walter L.

    1975-01-01

    New algorithms are described for evaluating Fourier (cosine, sine) and Hankel (J0, J1) transform integrals by means of digital filters. The filters have been designed with extended lengths so that a variable convolution operation can be applied to a large class of integral transforms having the same system transfer function. A lagged-convolution method is also presented to significantly decrease the computation time when computing a series of like transforms over a parameter set spaced the same as the filters. Accuracy of the new filters is comparable to Gaussian integration, provided moderate parameter ranges and well-behaved kernel functions are used. A collection of Fortran IV subprograms is included for both real and complex functions for each filter type. The algorithms have been successfully used in geophysical applications containing a wide variety of integral transforms.
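
    A minimal sketch of the digital-filter idea follows, assuming the usual formulation in which the transform value at radius r is a weighted sum of kernel samples at fixed ratios b_i/r. The abscissas and weights below are placeholders and not a usable filter; real filters such as those published in this report contain long, carefully designed coefficient tables.

```python
import numpy as np

# Sketch of digital-filter evaluation of a J0 Hankel transform,
#   F(r) = integral over lambda of f(lambda) * J0(lambda * r) * lambda
#        ~ (1/r) * sum_i W_i * f(b_i / r),
# where (b_i, W_i) are published filter abscissas and weights.
# The arrays below are placeholders, NOT a usable filter.
B = np.logspace(-3, 3, 61)        # hypothetical log-spaced abscissas
W = np.ones_like(B) / B.size      # hypothetical weights

def hankel_j0(f, r, b=B, w=W):
    """Approximate the J0 Hankel transform of kernel f at radius r."""
    lam = b / r                   # kernel sampled at fixed ratios b_i / r
    return (w @ f(lam)) / r

# Structure only -- the value is meaningless with placeholder weights.
print(hankel_j0(lambda lam: np.exp(-lam), r=2.0))

# Lagged convolution: for r values spaced like the filter abscissas, the
# sample points b_i / r overlap between neighbouring r, so each new r
# requires only one new kernel evaluation instead of a full resampling.
```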

  8. Use of application containers and workflows for genomic data analysis

    PubMed Central

    Schulz, Wade L.; Durant, Thomas J. S.; Siddon, Alexa J.; Torres, Richard

    2016-01-01

    Background: The rapid acquisition of biological data and development of computationally intensive analyses has led to a need for novel approaches to software deployment. In particular, the complexity of common analytic tools for genomics makes them difficult to deploy and decreases the reproducibility of computational experiments. Methods: Recent technologies that allow for application virtualization, such as Docker, allow developers and bioinformaticians to isolate these applications and deploy secure, scalable platforms that have the potential to dramatically increase the efficiency of big data processing. Results: While limitations exist, this study demonstrates a successful implementation of a pipeline with several discrete software applications for the analysis of next-generation sequencing (NGS) data. Conclusions: With this approach, we significantly reduced the amount of time needed to perform clonal analysis from NGS data in acute myeloid leukemia. PMID:28163975

  9. Connected health: cancer symptom and quality-of-life assessment using a tablet computer: a pilot study.

    PubMed

    Aktas, Aynur; Hullihen, Barbara; Shrotriya, Shiva; Thomas, Shirley; Walsh, Declan; Estfan, Bassam

    2015-03-01

    Incorporation of tablet computers (TCs) into patient assessment may facilitate safe and secure data collection. We evaluated the usefulness and acceptability of a TC as an electronic self-report symptom assessment instrument. Research Electronic Data Capture Web-based application supported data capture. Information was collected and disseminated in real time and a structured format. Completed questionnaires were printed and given to the physician before the patient visit. Most participants completed the survey without assistance. Completion rate was 100%. The median global quality of life was high for all. More than half reported pain. Based on Edmonton Symptom Assessment System, the top 3 most common symptoms were tiredness, anxiety, and decreased well-being. Patient and physician acceptability for these quick and useful TC-based surveys was excellent. © The Author(s) 2013.

  10. Development of 3D electromagnetic modeling tools for airborne vehicles

    NASA Technical Reports Server (NTRS)

    Volakis, John L.

    1992-01-01

    The main goal of this report is to advance the development of methodologies for scattering by airborne composite vehicles. Although the primary focus continues to be the development of a general purpose computer code for analyzing the entire structure as a single unit, a number of other tasks are also being pursued in parallel with this effort. One of these tasks, discussed within, is on new finite element formulations and mesh termination schemes. The goal here is to decrease computation time while retaining accuracy and geometric adaptability. The second task focuses on the application of wavelets to electromagnetics. Wavelet transformations are shown to be able to reduce a full matrix to a band matrix, thereby reducing the solution's memory requirements. Included within this document are two separate papers on finite element formulations and wavelets.

  11. Evaluation of user input methods for manipulating a tablet personal computer in sterile techniques.

    PubMed

    Yamada, Akira; Komatsu, Daisuke; Suzuki, Takeshi; Kurozumi, Masahiro; Fujinaga, Yasunari; Ueda, Kazuhiko; Kadoya, Masumi

    2017-02-01

    To determine a quick and accurate user input method for manipulating tablet personal computers (PCs) in sterile techniques. We evaluated three different manipulation methods, (1) Computer mouse and sterile system drape, (2) Fingers and sterile system drape, and (3) Digitizer stylus and sterile ultrasound probe cover with a pinhole, in terms of the central processing unit (CPU) performance, manipulation performance, and contactlessness. A significant decrease in CPU score ([Formula: see text]) and an increase in CPU temperature ([Formula: see text]) were observed when a system drape was used. The respective mean times taken to select a target image from an image series (ST) and the mean times for measuring points on an image (MT) were [Formula: see text] and [Formula: see text] s for the computer mouse method, [Formula: see text] and [Formula: see text] s for the finger method, and [Formula: see text] and [Formula: see text] s for the digitizer stylus method, respectively. The ST for the finger method was significantly longer than for the digitizer stylus method ([Formula: see text]). The MT for the computer mouse method was significantly longer than for the digitizer stylus method ([Formula: see text]). The mean success rate for measuring points on an image was significantly lower for the finger method when the diameter of the target was equal to or smaller than 8 mm than for the other methods. No significant difference in the adenosine triphosphate amount at the surface of the tablet PC was observed before, during, or after manipulation via the digitizer stylus method while wearing starch-powdered sterile gloves ([Formula: see text]). Quick and accurate manipulation of tablet PCs in sterile techniques without CPU load is feasible using a digitizer stylus and sterile ultrasound probe cover with a pinhole.

  12. Stochastic simulation of the spray formation assisted by a high pressure

    NASA Astrophysics Data System (ADS)

    Gorokhovski, M.; Chtab-Desportes, A.; Voloshina, I.; Askarova, A.

    2010-03-01

    The stochastic model of spray formation in the vicinity of the injector and in the far field has been described and assessed by comparison with measurements in Diesel-like conditions. In the proposed mesh-free approach, the 3D configuration of the continuous liquid core is simulated stochastically by an ensemble of spatial trajectories of specifically introduced stochastic particles. The parameters of the stochastic process are presumed from the physics of primary atomization. The spray formation model consists in computing the spatial distribution of the probability of finding the non-fragmented liquid jet in the near-to-injector region. This model is combined with the KIVA II computation of an atomizing Diesel spray in two ways. First, simultaneously with the gas-phase RANS computation, the ensemble of stochastic particles is tracked and the probability field of their positions is calculated, which is used for sampling the initial locations of primary blobs. Second, the velocity increment of the gas due to the liquid injection is computed from the mean volume fraction of the simulated liquid core. Two novelties are proposed in the secondary atomization modeling. The first is due to the unsteadiness of the injection velocity: when the injection velocity increment in time is decreasing, supplementary breakup may be induced, so the critical Weber number is based on this increment. Second, a new stochastic model of the secondary atomization is proposed, in which intermittent turbulent stretching is taken into account as the main mechanism. The measurements reported by Arcoumanis et al. (time histories of the mean axial centre-line droplet velocity and of the centre-line Sauter mean diameter) are compared with the computations.

  13. Use of water-soluble contrast medium (gastrografin) does not decrease the need for operative intervention nor the duration of hospital stay in uncomplicated acute adhesive small bowel obstruction? A multicenter, randomized, clinical trial (Adhesive Small Bowel Obstruction Study) and systematic review.

    PubMed

    Scotté, Michel; Mauvais, Francois; Bubenheim, Michael; Cossé, Cyril; Suaud, Leslie; Savoye-Collet, Celine; Plenier, Isabelle; Péquignot, Aurelien; Yzet, Thierry; Regimbeau, Jean Marc

    2017-05-01

    This study evaluated the association between oral gastrografin administration and the need for operative intervention in patients with presumed adhesive small bowel obstruction. Between October 2006 and August 2009, 242 patients with uncomplicated acute adhesive small bowel obstruction were included in a randomized, controlled trial (the Adhesive Small Bowel Obstruction Study, NCT00389116) and allocated to a gastrografin arm or a saline solution arm. The primary end point was the need for operative intervention within 48 hours of randomization. The secondary end points were the resection rate, the time interval between the initial computed tomography and operative intervention, the time interval between oral refeeding and discharge, risk factors for the failure of nonoperative management, in-hospital mortality, duration of stay, and recurrence or death after discharge. We performed a systematic review of the literature in order to evaluate the relationship between use of gastrografin as a diagnostic/therapeutic measure, the need for operative intervention, and the duration of stay. In the gastrografin and saline solution arms, the rate of operative intervention was 24% and 20%, respectively, the bowel resection rate was 8% and 4%, the time interval between the initial computed tomography and operative intervention, and the time interval between oral refeeding and discharge were similar in the 2 arms. Only age was identified as a potential risk factor for the failure of nonoperative management. The in-hospital mortality was 2.5%, the duration of stay was 3.8 days for patients in the gastrografin arm and 3.5 days for those in the saline solution arm (P = .19), and the recurrence rate of adhesive small bowel obstruction was 7%. These results and those of 10 published studies suggest that gastrografin did not decrease either the rate of operative intervention (21% in the saline solution arm vs 26% in the gastrografin arm) or the number of days from the initial computed tomography to discharge (3.5 vs 3.5; P = NS for both). The results of the present study and those of our systematic review suggest that gastrografin administration is of no benefit in patients with adhesive small bowel obstruction. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Prescription for trouble: Medicare Part D and patterns of computer and internet access among the elderly.

    PubMed

    Wright, David W; Hill, Twyla J

    2009-01-01

    The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 specifically encourages Medicare enrollees to use the Internet to obtain information regarding the new prescription drug insurance plans and to enroll in a plan. This reliance on computer technology and the Internet leads to practical questions regarding implementation of the insurance coverage. For example, it seems unlikely that all Medicare enrollees have access to computers and the Internet or that they are all computer literate. This study uses the 2003 Current Population Survey to examine the effects of disability and income on computer access and Internet use among the elderly. Internet access declines with age and is exacerbated by disabilities. Also, decreases in income lead to decreases in computer ownership and use. Therefore, providing prescription drug coverage primarily through the Internet seems likely to maintain or increase stratification of access to health care, especially for low-income, disabled elderly, who are also a group most in need of health care access.

  15. Time-domain approach for the transient responses in stratified viscoelastic Earth models

    NASA Technical Reports Server (NTRS)

    Hanyk, L.; Moser, J.; Yuen, D. A.; Matyska, C.

    1995-01-01

    We have developed the numerical algorithm for the computation of transient viscoelastic responses in the time domain for a radially stratified Earth model. Stratifications in both the elastic parameters and the viscosity profile have been considered. The particular viscosity profile employed has a viscosity maximum with a contrast of O(100) in the mid-lower mantle. The distribution of relaxation times reveals the presence of a continuous spectrum situated between O(100) and O(10(exp 4)) years. The principal mode is embedded within this continuous spectrum. From this initial-value approach we have found that for the low degree harmonics the non-modal contributions are comparable to the modal contributions. For this viscosity model the differences between the time-domain and normal-mode results are found to decrease strongly with increasing angular order. These calculations also show that a time-dependent effective relaxation time can be defined, which can be bounded by the relaxation times of the principal modes.

  16. Object recognition of real targets using modelled SAR images

    NASA Astrophysics Data System (ADS)

    Zherdev, D. A.

    2017-12-01

    In this work the problem of recognition is studied using SAR images. The recognition algorithm is based on the computation of conjugation indices with class vectors. The support subspaces for each class are constructed by excluding the most and least correlated vectors in a class. In the study we examine whether the feature vector size can be reduced significantly, which leads to a decrease in recognition time. The target images form the feature vectors, which are transformed using a pre-trained convolutional neural network (CNN).

  17. Collision rates and impact velocities in the Main Asteroid Belt

    NASA Technical Reports Server (NTRS)

    Farinella, Paolo; Davis, Donald R.

    1992-01-01

    Wetherill's (1967) algorithm is presently used to compute the mutual collision probabilities and impact velocities of a set of 682 asteroids with larger-than-50-km radii, representative of a bias-free sample of asteroid orbits. While collision probabilities are nearly independent of eccentricities, a significant decrease is associated with larger inclinations. Collisional velocities grow steeply with orbital eccentricity and inclination, but with curiously small variation across the asteroid belt. Family asteroids are noted to undergo collisions with other family members 2-3 times more often than with nonmembers.

  18. Faster Heavy Ion Transport for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.

    2013-01-01

    The deterministic particle transport code HZETRN was developed to enable fast and accurate space radiation transport through materials. As more complex transport solutions are implemented for neutrons, light ions (Z ≤ 2), mesons, and leptons, it is important to maintain overall computational efficiency. In this work, the heavy ion (Z > 2) transport algorithm in HZETRN is reviewed, and a simple modification is shown to provide an approximate 5x decrease in execution time for galactic cosmic ray transport. Convergence tests and other comparisons are carried out to verify that numerical accuracy is maintained in the new algorithm.

  19. Air Density Measurements in a Mach 10 Wake Using Iodine Cordes Bands

    NASA Technical Reports Server (NTRS)

    Balla, Robert J.; Everhart, Joel L.

    2012-01-01

    An exploratory study designed to examine the viability of making air density measurements in a Mach 10 flow using laser-induced fluorescence of the iodine Cordes bands is presented. Experiments are performed in the NASA Langley Research Center 31 in. Mach 10 air wind tunnel in the hypersonic near wake of a multipurpose crew vehicle model. To introduce iodine into the wake, a 0.5% iodine/nitrogen mixture is seeded using a pressure tap at the rear of the model. Air density was measured at 56 points along a 7 mm line and three stagnation pressures of 6.21, 8.62, and 10.0 MPa (900, 1250, and 1450 psi). Average results over time and space show rho(sub wake)/rho(sub freestream) of 0.145 plus or minus 0.010, independent of freestream air density. Average off-body results over time and space agree to better than 7.5% with computed densities from onbody pressure measurements. Densities measured during a single 60 s run at 10.0 MPa are time-dependent and steadily decrease by 15%. This decrease is attributed to model forebody heating by the flow.

  20. Trends in the Danish work environment in 1990-2000 and their associations with labor-force changes.

    PubMed

    Burr, Hermann; Bjorner, Jakob B; Kristensen, Tage S; Tüchsen, Finn; Bach, Elsa

    2003-08-01

    The aims of this study were (i) to describe the trends in the work environment in 1990-2000 among employees in Denmark and (ii) to establish whether these trends were attributable to labor-force changes. The split-panel design of the Danish Work Environment Cohort Study includes interviews with three cross-sections of 6067, 5454, and 5404 employees aged 18-59 years, each representative of the total Danish labor force in 1990, 1995 and 2000. In the cross-sections, the participation rate decreased over the period (90% in 1990, 80% in 1995, 76% in 2000). The relative differences in participation due to gender, age, and region did not change noticeably. Jobs with decreasing prevalence were clerks, cleaners, textile workers, and military personnel. Jobs with increasing prevalence were academics, computer professionals, and managers. Intense computer use, long workhours, and noise exposure increased. Job insecurity, part-time work, kneeling work posture, low job control, and skin contact with cleaning agents decreased. Labor-force changes fully explained the decline in low job control and skin contact to cleaning agents and half of the increase in long workhours, but not the other work environment changes. The work environment of Danish employees improved from 1990 to 2000, except for increases in long workhours and noise exposure. From a specific work environment intervention point of view, the development has been less encouraging because declines in low job control, as well as skin contact to cleaning agents, were explained by labor-force changes.

  1. A novel feedback algorithm for simulating controlled dynamics and confinement in the advanced reversed-field pinch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlin, J.-E.; Scheffel, J.

    2005-06-15

    In the advanced reversed-field pinch (RFP), the current density profile is externally controlled to diminish tearing instabilities. Thus the scaling of energy confinement time with plasma current and density is improved substantially as compared to the conventional RFP. This may be numerically simulated by introducing an ad hoc electric field, adjusted to generate a tearing mode stable parallel current density profile. In the present work a current profile control algorithm, based on feedback of the fluctuating electric field in Ohm's law, is introduced into the resistive magnetohydrodynamic code DEBSP [D. D. Schnack and D. C. Baxter, J. Comput. Phys. 55, 485 (1984); D. D. Schnack, D. C. Barnes, Z. Mikic, D. S. Marneal, E. J. Caramana, and R. A. Nebel, Comput. Phys. Commun. 43, 17 (1986)]. The resulting radial magnetic field is decreased considerably, causing an increase in energy confinement time and poloidal β. It is found that the parallel current density profile spontaneously becomes hollow, and that a formation, being related to persisting resistive g modes, appears close to the reversal surface.

  2. Shock interaction with deformable particles using a constrained interface reinitialization scheme

    NASA Astrophysics Data System (ADS)

    Sridharan, P.; Jackson, T. L.; Zhang, J.; Balachandar, S.; Thakur, S.

    2016-02-01

    In this paper, we present axisymmetric numerical simulations of shock propagation in nitromethane over an aluminum particle for post-shock pressures up to 10 GPa. We use the Mie-Gruneisen equation of state to describe both the medium and the particle. The numerical method is a finite-volume based solver on a Cartesian grid, that allows for multi-material interfaces and shocks, and uses a novel constrained reinitialization scheme to precisely preserve particle mass and volume. We compute the unsteady inviscid drag coefficient as a function of time, and show that when normalized by post-shock conditions, the maximum drag coefficient decreases with increasing post-shock pressure. We also compute the mass-averaged particle pressure and show that the observed oscillations inside the particle are on the particle-acoustic time scale. Finally, we present simplified point-particle models that can be used for macroscale simulations. In the Appendix, we extend the isothermal or isentropic assumption concerning the point-force models to non-ideal equations of state, thus justifying their use for the current problem.

  3. Measuring vigilance decrement using computer vision assisted eye tracking in dynamic naturalistic environments.

    PubMed

    Bodala, Indu P; Abbasi, Nida I; Yu Sun; Bezerianos, Anastasios; Al-Nashash, Hasan; Thakor, Nitish V

    2017-07-01

    Eye tracking offers a practical solution for monitoring cognitive performance in real world tasks. However, eye tracking in dynamic environments is difficult due to the high spatial and temporal variation of stimuli, and needs further, thorough investigation. In this paper, we study the possibility of developing a novel computer-vision-assisted eye tracking analysis by using fixations. Eye movement data is obtained from a long duration naturalistic driving experiment. The scale-invariant feature transform (SIFT) algorithm was implemented using the VLFeat toolbox to identify multiple areas of interest (AOIs). A new measure called 'fixation score' was defined to understand the dynamics of fixation position between the target AOI and the non-target AOIs. The fixation score is maximum when the subjects focus on the target AOI and diminishes when they gaze at the non-target AOIs. A statistically significant negative correlation was found between fixation score and reaction time data (r = -0.2253, p < 0.05). This implies that with vigilance decrement, the fixation score decreases as visual attention shifts away from the target objects, resulting in an increase in reaction time.
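
    To make the fixation-score idea concrete, the hedged sketch below computes a simple per-trial score (the fraction of fixation samples inside the target AOI, which only approximates the paper's definition) on synthetic data and correlates it with synthetic reaction times; all numbers are invented.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical illustration (not the paper's data): a per-trial fixation score
# taken as the fraction of fixation samples landing inside the target AOI,
# then correlated with reaction time.
def fixation_score(fix_xy, aoi):
    x0, y0, x1, y1 = aoi
    return float(np.mean([(x0 <= x <= x1) and (y0 <= y <= y1) for x, y in fix_xy]))

target_aoi = (100, 100, 300, 300)                 # hypothetical AOI bounding box
scores, rts = [], []
for _ in range(40):                               # 40 hypothetical trials
    drift = rng.uniform(0, 150)                   # attention drifting off target
    fix = rng.normal(200 + drift, 60, size=(30, 2))
    s = fixation_score(fix, target_aoi)
    scores.append(s)
    rts.append(0.8 - 0.3 * s + rng.normal(0, 0.05))  # slower RT when off target

r, p = pearsonr(scores, rts)
print(f"fixation score vs. reaction time: r = {r:.3f}, p = {p:.4f}")
```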

  4. Hydrodynamic response of a fringing coral reef to a rise in mean sea level

    NASA Astrophysics Data System (ADS)

    Taebi, Soheila; Pattiaratchi, Charitha

    2014-07-01

    Ningaloo Reef, located along the northwest coast of Australia, is one of the longest fringing coral reefs in the world, extending ~300 km. Similar to other fringing reefs, it consists of a barrier reef ~1-6 km offshore with occasional gaps, backed by a shallow lagoon. Wave breaking on the reef generates radiation stress gradients that produce wave setup across the reef and lagoon and mean currents across the reef. A section of Ningaloo Reef at Sandy Bay was chosen as the focus of an intense 6-week field experiment and of numerical simulation using the wave model SWAN coupled to the three-dimensional circulation model ROMS. The physics of nearshore processes such as wave breaking, wave setup and mean flow across the reef was investigated in detail by examining the various momentum balances established in the system. The magnitude of the terms in the momentum balance and the distance of their peaks from the reef edge were sensitive to changes in mean sea level; e.g., the wave forces decreased as the mean water depth increased (and hence wave breaking dissipation was reduced). This led to an increase in the wave power at the shoreline, a slight shift of the surf zone to the lee side of the reef and changes in the intensity of the circulation. The predicted hydrodynamic fields were input into a Lagrangian particle tracking model to estimate the transport time scale of the reef-lagoon system. The flushing time of the lagoon with the open ocean was computed using two definitions employed for the renewal of semi-enclosed water basins, revealing the sensitivity of this transport time scale to the chosen method. Both methods predicted an increase in the lagoon exchange rate for smaller rises in mean sea level and a decrease for larger rises.
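
    As an illustration of one common flushing-time definition, the sketch below fits an e-folding time to a synthetic decay curve of the fraction of Lagrangian particles remaining in the lagoon. The decay series is invented; the study itself compares two such renewal definitions under different mean sea levels.

```python
import numpy as np

# Synthetic fraction of particles still inside the lagoon over time.
t_days = np.arange(0, 20, 0.5)
remaining = np.exp(-t_days / 4.2) + np.random.default_rng(5).normal(0, 0.01, t_days.size)
remaining = np.clip(remaining, 1e-4, None)

# e-folding flushing time from a log-linear least-squares fit
slope, _ = np.polyfit(t_days, np.log(remaining), 1)
print(f"flushing time ~ {-1 / slope:.1f} days")
```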

  5. Using discrete event computer simulation to improve patient flow in a Ghanaian acute care hospital.

    PubMed

    Best, Allyson M; Dixon, Cinnamon A; Kelton, W David; Lindsell, Christopher J; Ward, Michael J

    2014-08-01

    Crowding and limited resources have increased the strain on acute care facilities and emergency departments worldwide. These problems are particularly prevalent in developing countries. Discrete event simulation is a computer-based tool that can be used to estimate how changes to complex health care delivery systems such as emergency departments will affect operational performance. Using this modality, our objective was to identify operational interventions that could potentially improve patient throughput of one acute care setting in a developing country. We developed a simulation model of acute care at a district level hospital in Ghana to test the effects of resource-neutral (eg, modified staff start times and roles) and resource-additional (eg, increased staff) operational interventions on patient throughput. Previously captured deidentified time-and-motion data from 487 acute care patients were used to develop and test the model. The primary outcome was the modeled effect of interventions on patient length of stay (LOS). The base-case (no change) scenario had a mean LOS of 292 minutes (95% confidence interval [CI], 291-293). In isolation, adding staffing, changing staff roles, and varying shift times did not affect overall patient LOS. Specifically, adding 2 registration workers, history takers, and physicians resulted in a 23.8-minute (95% CI, 22.3-25.3) LOS decrease. However, when shift start times were coordinated with patient arrival patterns, potential mean LOS was decreased by 96 minutes (95% CI, 94-98), and with the simultaneous combination of staff roles (registration and history taking), there was an overall mean LOS reduction of 152 minutes (95% CI, 150-154). Resource-neutral interventions identified through discrete event simulation modeling have the potential to improve acute care throughput in this Ghanaian municipal hospital. Discrete event simulation offers another approach to identifying potentially effective interventions to improve patient flow in emergency and acute care in resource-limited settings. Copyright © 2014 Elsevier Inc. All rights reserved.
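
    For readers unfamiliar with the method, the sketch below shows the general shape of such a discrete event simulation using the SimPy library, with invented arrival and service times rather than the time-and-motion data from the Ghanaian hospital. Changing the resource capacities or the arrival process is how staffing and shift-time scenarios would be compared.

```python
import random
import simpy

# Minimal discrete event simulation sketch; service times and staffing levels
# are hypothetical placeholders, not the study's data.
random.seed(1)
LOS = []

def patient(env, registration, physician):
    arrival = env.now
    with registration.request() as req:
        yield req
        yield env.timeout(random.expovariate(1 / 10))   # ~10 min registration
    with physician.request() as req:
        yield req
        yield env.timeout(random.expovariate(1 / 25))   # ~25 min consultation
    LOS.append(env.now - arrival)

def arrivals(env, registration, physician):
    while True:
        yield env.timeout(random.expovariate(1 / 8))     # a patient every ~8 min
        env.process(patient(env, registration, physician))

env = simpy.Environment()
registration = simpy.Resource(env, capacity=1)           # vary capacities and
physician = simpy.Resource(env, capacity=2)              # start times per scenario
env.process(arrivals(env, registration, physician))
env.run(until=8 * 60)                                    # one 8-hour shift

print(f"patients seen: {len(LOS)}, mean LOS: {sum(LOS) / len(LOS):.1f} min")
```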

  6. Ventricular dilation and electrical dyssynchrony synergistically increase regional mechanical nonuniformity but not mechanical dyssynchrony: a computational model.

    PubMed

    Kerckhoffs, Roy C P; Omens, Jeffrey H; McCulloch, Andrew D; Mulligan, Lawrence J

    2010-07-01

    Heart failure (HF) in combination with mechanical dyssynchrony is associated with a high mortality rate. To quantify contractile dysfunction in patients with HF, investigators have proposed several indices of mechanical dyssynchrony, including percentile range of time to peak shortening (WTpeak), circumferential uniformity ratio estimate (CURE), and internal stretch fraction (ISF). The goal of this study was to compare the sensitivity of these indices to 4 major abnormalities responsible for cardiac dysfunction in dyssynchronous HF: dilation, negative inotropy, negative lusitropy, and dyssynchronous activation. All combinations of these 4 major abnormalities were included in 3D computational models of ventricular electromechanics. Compared with a nonfailing heart model, ventricles were dilated, inotropy was reduced, twitch duration was prolonged, and activation sequence was changed from normal to left bundle branch block. In the nonfailing heart, CURE, ISF, and WTpeak were 0.97+/-0.004, 0.010+/-0.002, and 78+/-1 milliseconds, respectively. With dilation alone, CURE decreased 2.0+/-0.07%, ISF increased 58+/-47%, and WTpeak increased 31+/-3%. With dyssynchronous activation alone, CURE decreased 15+/-0.6%, ISF increased 14-fold (+/-3), and WTpeak increased 121+/-4%. With the combination of dilation and dyssynchronous activation, CURE decreased 23+/-0.8%, ISF increased 20-fold (+/-5), and WTpeak increased 147+/-5%. Dilation and left bundle branch block combined synergistically decreased regional cardiac function. CURE and ISF were sensitive to this combination, but WTpeak was not. CURE and ISF also reflected the relative nonuniform distribution of regional work better than WTpeak. These findings might explain why CURE and ISF are better predictors of reverse remodeling in cardiac resynchronization therapy.
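
    As a small illustration of one of these indices, the sketch below computes a WTpeak-like value, the percentile range of regional times to peak shortening, from synthetic regional strain traces; the percentile bounds and the traces themselves are assumptions, since the abstract does not specify them.

```python
import numpy as np

# Hypothetical regional strain traces (16 regions) within one beat.
rng = np.random.default_rng(3)
t = np.linspace(0, 800, 801)                       # ms within one beat
delays = rng.normal(300, 40, size=16)              # assumed regional delays
strain = np.array([-0.2 * np.exp(-((t - d) / 120) ** 2) for d in delays])

t_peak = t[np.argmin(strain, axis=1)]              # time of peak shortening per region
wt_peak = np.percentile(t_peak, 90) - np.percentile(t_peak, 10)  # assumed 10th-90th range
print(f"WTpeak ~ {wt_peak:.0f} ms")
```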

  7. Reaction Time Correlations during Eye-Hand Coordination: Behavior and Modeling

    PubMed Central

    Dean, Heather L.; Martí, Daniel; Tsui, Eva; Rinzel, John; Pesaran, Bijan

    2011-01-01

    During coordinated eye-hand movements, saccade reaction times (SRTs) and reach reaction times (RRTs) are correlated in humans and monkeys. Reaction times (RTs) measure the degree of movement preparation and can correlate with movement speed and accuracy. However, RTs can also reflect effector-nonspecific influences, such as motivation and arousal. We use a combination of behavioral psychophysics and computational modeling to identify plausible mechanisms for correlations in SRTs and RRTs. To disambiguate nonspecific mechanisms from mechanisms specific to movement coordination, we introduce a dual-task paradigm in which a reach and a saccade are cued with a stimulus onset asynchrony (SOA). We then develop several variants of integrate-to-threshold models of RT, which postulate that responses are initiated when the neural activity encoding effector-specific movement preparation reaches a threshold. The integrator models formalize hypotheses about RT correlations and make predictions for how each RT should vary with SOA. To test these hypotheses, we trained three monkeys to perform the eye-hand SOA task and analyzed their SRTs and RRTs. In all three subjects, RT correlations decreased with increasing SOA duration. Additionally, mean SRT decreased with decreasing SOA, revealing facilitation of saccades with simultaneous reaches, as predicted by the model. These results are not consistent with the predictions of the models with common modulation or common input but are compatible with the predictions of a model with mutual excitation between two effector-specific integrators. We propose that RT correlations are not simply attributable to motivation and arousal and are a signature of coordination. PMID:21325507
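
    A toy version of the mutual-excitation idea can be sketched as two noisy integrate-to-threshold units whose drives are offset by the SOA. The parameters below are arbitrary and this is not the authors' fitted model, but the simulation can be used to explore how the SRT-RRT correlation changes with SOA and coupling strength.

```python
import numpy as np

# Two effector-specific accumulators with mutual excitation; the reach drive
# starts only after the SOA. All parameters are illustrative assumptions.
def trial(soa, w=0.002, drift=0.004, noise=0.02, thresh=1.0, t_max=2000, rng=None):
    rng = rng or np.random.default_rng()
    eye, hand, srt, rrt = 0.0, 0.0, None, None
    for t in range(t_max):                        # 1 ms time steps
        if srt is None:
            eye += drift + w * hand + noise * rng.normal()
            if eye >= thresh:
                srt = t
        if rrt is None:
            hand += (drift if t >= soa else 0.0) + w * eye + noise * rng.normal()
            if hand >= thresh:
                rrt = t
        if srt is not None and rrt is not None:
            break
    return srt, rrt

rng = np.random.default_rng(7)
for soa in (0, 200, 400):
    pairs = [trial(soa, rng=rng) for _ in range(300)]
    pairs = [(s, r) for s, r in pairs if s is not None and r is not None]
    srt, rrt = np.array(pairs, dtype=float).T
    print(f"SOA {soa:3d} ms: corr(SRT, RRT) = {np.corrcoef(srt, rrt)[0, 1]:+.2f}")
```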

  8. Endovascular aneurysm repair simulation can lead to decreased fluoroscopy time and accurately delineate the proximal seal zone.

    PubMed

    Kim, Ann H; Kendrick, Daniel E; Moorehead, Pamela A; Nagavalli, Anil; Miller, Claire P; Liu, Nathaniel T; Wang, John C; Kashyap, Vikram S

    2016-07-01

    The use of simulators for endovascular aneurysm repair (EVAR) is not widespread. We examined whether simulation could improve procedural variables, including operative time and optimizing proximal seal. For the latter, we compared suprarenal vs infrarenal fixation endografts, right femoral vs left femoral main body access, and increasing angulation of the proximal aortic neck. Computed tomography angiography was obtained from 18 patients who underwent EVAR at a single institution. Patient cases were uploaded to the ANGIO Mentor endovascular simulator (Simbionix, Cleveland, Ohio) allowing for three-dimensional reconstruction and adapted for simulation with suprarenal fixation (Endurant II; Medtronic Inc, Minneapolis, Minn) and infrarenal fixation (C3; W. L. Gore & Associates Inc, Newark, Del) deployment systems. Three EVAR novices and three experienced surgeons performed 18 cases from each side with each device in randomized order (n = 72 simulations/participant). The cases were stratified into three groups according to the degree of infrarenal angulation: 0° to 20°, 21° to 40°, and 41° to 66°. Statistical analysis used paired t-test and one-way analysis of variance. Mean fluoroscopy time for participants decreased by 48.6% (P < .0001), and total procedure time decreased by 33.8% (P < .0001) when initial cases were compared with final cases. When stent deployment accuracy was evaluated across all cases, seal zone coverage in highly angulated aortic necks was significantly decreased. The infrarenal device resulted in mean aortic neck zone coverage of 91.9%, 89.4%, and 75.4% (P < .0001 by one-way analysis of variance), whereas the suprarenal device yielded 92.9%, 88.7%, and 71.5% (P < .0001) for the 0° to 20°, 21° to 40°, and 41° to 66° cases, respectively. Suprarenal fixation did not increase seal zone coverage. The side of femoral access for the main body did not influence proximal seal zone coverage regardless of infrarenal angulation. Simulation of EVAR leads to decreased fluoroscopy times for novice and experienced operators. Side of femoral access did not affect precision of proximal endograft landing. The angulated aortic neck leads to decreased proximal seal zone coverage regardless of infrarenal or suprarenal fixation devices. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  9. Time-dependence of the holographic spectral function: diverse routes to thermalisation

    DOE PAGES

    Banerjee, Souvik; Ishii, Takaaki; Joshi, Lata Kh; ...

    2016-08-08

    Here, we develop a new method for computing the holographic retarded propagator in generic (non-)equilibrium states using the state/geometry map. We check that our method reproduces the thermal spectral function given by the Son-Starinets prescription. The time-dependence of the spectral function of a relevant scalar operator is studied in a class of non-equilibrium states. The latter are represented by AdS-Vaidya geometries with an arbitrary parameter characterising the timescale for the dual state to transit from an initial thermal equilibrium to another due to a homogeneous quench. For long quench duration, the spectral function indeed follows the thermal form at the instantaneous effective temperature adiabatically, although with a slight initial time delay and a bit premature thermalisation. At shorter quench durations, several new non-adiabatic features appear: (i) time-dependence of the spectral function is seen much before that in the effective temperature (advanced time-dependence), (ii) a big transfer of spectral weight to frequencies greater than the initial temperature occurs at an intermediate time (kink formation) and (iii) new peaks with decreasing amplitudes but in greater numbers appear even after the effective temperature has stabilised (persistent oscillations). We find four broad routes to thermalisation for lower values of spatial momenta. At higher values of spatial momenta, kink formations and persistent oscillations are suppressed, and thermalisation time decreases. The general thermalisation pattern is globally top-down, but a closer look reveals complexities.

  10. Reducing Communication in Algebraic Multigrid Using Additive Variants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vassilevski, Panayot S.; Yang, Ulrike Meier

    Algebraic multigrid (AMG) has proven to be an effective scalable solver on many high performance computers. However, its increasing communication complexity on coarser levels has been shown to seriously impact its performance on computers with high communication cost. Moreover, additive AMG variants provide increased parallelism as well as decreased numbers of messages per cycle, but generally exhibit slower convergence. Here we present various new additive variants with convergence rates that are significantly improved compared to the classical additive algebraic multigrid method and investigate their potential for decreased communication and improved communication-computation overlap, features that are essential for good performance on future exascale architectures.

  11. Reducing Communication in Algebraic Multigrid Using Additive Variants

    DOE PAGES

    Vassilevski, Panayot S.; Yang, Ulrike Meier

    2014-02-12

    Algebraic multigrid (AMG) has proven to be an effective scalable solver on many high performance computers. However, its increasing communication complexity on coarser levels has been shown to seriously impact its performance on computers with high communication cost. Moreover, additive AMG variants provide increased parallelism as well as decreased numbers of messages per cycle, but generally exhibit slower convergence. Here we present various new additive variants with convergence rates that are significantly improved compared to the classical additive algebraic multigrid method and investigate their potential for decreased communication and improved communication-computation overlap, features that are essential for good performance on future exascale architectures.

  12. Temperature dependence of long coherence times of oxide charge qubits.

    PubMed

    Dey, A; Yarlagadda, S

    2018-02-22

    The ability to maintain coherence and control in a qubit is a major requirement for quantum computation. We show theoretically that long coherence times can be achieved at easily accessible temperatures (such as the boiling point of liquid helium) in small (i.e., ~10 nanometer) charge qubits of oxide double quantum dots when only optical phonons are the source of decoherence. In the regime of strong electron-phonon coupling and in the non-adiabatic region, we employ a duality transformation to make the problem tractable and analyze the dynamics through a non-Markovian quantum master equation. We find that the system decoheres after a long time, despite the fact that no energy is exchanged with the bath. Detuning the dots to a fraction of the optical phonon energy, increasing the electron-phonon coupling, reducing the adiabaticity, or decreasing the temperature enhances the coherence time.

  13. Comparison of holographic and field theoretic complexities for time dependent thermofield double states

    NASA Astrophysics Data System (ADS)

    Yang, Run-Qiu; Niu, Chao; Zhang, Cheng-Yong; Kim, Keun-Young

    2018-02-01

    We compute the time-dependent complexity of the thermofield double states using four different proposals: two holographic proposals based on the "complexity-action" (CA) conjecture and the "complexity-volume" (CV) conjecture, and two quantum field theoretic proposals based on the Fubini-Study metric (FS) and Finsler geometry (FG). We find that the four proposals yield both similarities and differences, which will be useful for deepening our understanding of complexity and sharpening its definition. In particular, at early times the complexity increases linearly in the CV and FG proposals, decreases linearly in the FS proposal, and does not change in the CA proposal. In the late time limit, the CA, CV and FG proposals all show that the growth rate is 2E/(πℏ), saturating Lloyd's bound, while the FS proposal shows that the growth rate is zero. It seems that the holographic CV conjecture and the field theoretic FG method are the most closely correlated.
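
    For reference, the late-time bound that the CA, CV and FG growth rates are said to saturate can be written compactly as below; this is simply the standard statement of Lloyd's bound quoted in the abstract (with E the total energy of the thermofield double state), not an additional result.

```latex
% Lloyd's bound on the growth rate of complexity, saturated at late
% times by the CA, CV and FG proposals (E = total energy of the state).
\[
  \left.\frac{d\mathcal{C}}{dt}\right|_{t\to\infty} \;\le\; \frac{2E}{\pi\hbar}
\]
```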

  14. Some single-machine scheduling problems with learning effects and two competing agents.

    PubMed

    Li, Hongjie; Li, Zeyuan; Yin, Yunqiang

    2014-01-01

    This study considers a scheduling environment with two agents and a set of jobs, each of which belongs to one of the two agents and has an actual processing time defined as a decreasing linear function of its starting time. Each agent competes to process its own jobs on a single machine and has its own scheduling objective to optimize. The goal is to sequence the jobs so that the resulting schedule performs well with respect to the objectives of both agents; the objective functions addressed include the maximum cost, the total weighted completion time, and the discounted total weighted completion time. We investigate three problems arising from different combinations of the two agents' objectives, discuss their computational complexity, and present solution algorithms where possible.
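
    As a concrete illustration of the job model, the sketch below computes completion times on a single machine when each job's actual processing time shrinks linearly with its starting time. The deterioration rate, the job data, and the clamping to a non-negative value are illustrative assumptions, and the sketch ignores the agents' competing objectives.

```python
def completion_times(sequence, rate=0.05):
    """Single-machine schedule where the actual processing time of a job
    is a decreasing linear function of its starting time:
        p_actual = max(p_normal - rate * start_time, 0).
    'sequence' is a list of (job_id, normal_processing_time) in the order
    the jobs are processed; returns {job_id: completion_time}."""
    t = 0.0
    done = {}
    for job_id, p_normal in sequence:
        p_actual = max(p_normal - rate * t, 0.0)
        t += p_actual
        done[job_id] = t
    return done

# Jobs from two agents (A and B) interleaved in one candidate sequence.
print(completion_times([("A1", 5.0), ("B1", 3.0), ("A2", 4.0)]))
```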

  15. Method and computer program product for maintenance and modernization backlogging

    DOEpatents

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
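
    A minimal sketch of the additive relation described above, with purely illustrative inputs; in the patent the three terms would themselves be computed from period-specific facility data.

```python
def future_facility_condition(maintenance_cost, modernization_factor, backlog_factor):
    """Future facility condition for one time period: the sum of the
    period-specific maintenance cost, modernization factor, and backlog
    factor, as described in the abstract."""
    return maintenance_cost + modernization_factor + backlog_factor

# Illustrative (assumed) period-specific values.
print(future_facility_condition(120_000.0, 45_000.0, 30_000.0))
```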

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riesen, Rolf E.; Bridges, Patrick G.; Stearley, Jon R.

    Next-generation exascale systems, those capable of performing a quintillion (10{sup 18}) operations per second, are expected to be delivered in the next 8-10 years. These systems, which will be 1,000 times faster than current systems, will be of unprecedented scale. As these systems continue to grow in size, faults will become increasingly common, even over the course of small calculations. Therefore, issues such as fault tolerance and reliability will limit application scalability. Current techniques for ensuring progress across faults, such as checkpoint/restart, the dominant fault tolerance mechanism for the last 25 years, are increasingly problematic at the scales of future systems due to their excessive overheads. In this work, we evaluate a number of techniques to decrease the overhead of checkpoint/restart and keep this method viable for future exascale systems. More specifically, this work evaluates state-machine replication to dramatically increase the checkpoint interval (the time between successive checkpoints) and hash-based, probabilistic incremental checkpointing using graphics processing units to decrease the checkpoint commit time (the time to save one checkpoint). Using a combination of empirical analysis, modeling, and simulation, we study the costs and benefits of these approaches over a wide range of parameters. These results, which cover a number of high-performance computing capability workloads, different failure distributions, hardware mean times to failure, and I/O bandwidths, show the potential benefits of these techniques for meeting the reliability demands of future exascale platforms.
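
    To illustrate the hash-based incremental checkpointing idea, the sketch below writes only the memory blocks whose content hash has changed since the previous checkpoint, which is what shortens the checkpoint commit time. This CPU-only sketch using SHA-256 is an assumption for illustration; the work described offloads the hashing to graphics processing units and is probabilistic in the sense that a hash collision could, with very small probability, hide a change.

```python
import hashlib

def incremental_checkpoint(memory_blocks, previous_hashes):
    """Write only blocks whose hash changed since the last checkpoint.
    memory_blocks: {block_id: bytes}; previous_hashes: {block_id: hex digest}.
    Returns (blocks_to_write, updated_hashes)."""
    to_write, new_hashes = {}, {}
    for block_id, data in memory_blocks.items():
        digest = hashlib.sha256(data).hexdigest()
        new_hashes[block_id] = digest
        if previous_hashes.get(block_id) != digest:
            to_write[block_id] = data   # dirty block: include in this checkpoint
    return to_write, new_hashes

# The first checkpoint writes everything; later ones write only changed blocks.
blocks = {0: b"state-A", 1: b"state-B"}
written, hashes = incremental_checkpoint(blocks, {})
blocks[1] = b"state-B-modified"
written, hashes = incremental_checkpoint(blocks, hashes)
print(sorted(written))   # -> [1]
```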

  17. Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.

    PubMed

    Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander

    2018-04-10

    Many models have been developed for predicting software reliability, but each is typically restricted to particular methodologies and a limited number of parameters, even though a wide range of techniques can be used for reliability prediction. Careful attention to parameter selection is therefore needed when estimating reliability: the reliability of a system may increase or decrease depending on the parameters chosen, so the factors that most heavily affect system reliability must be identified. Reusability is now widely used across research areas and is the basis of Component-Based Systems (CBS); cost, time, and human effort can be saved using Component-Based Software Engineering (CBSE) concepts, and CBSE metrics may be used to assess which techniques are most suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where accurate results are difficult to obtain due to uncertainty or randomness, and it offers several avenues for medical applications: clinical medicine makes significant use of fuzzy logic and neural-network methodologies, basic medical science most frequently uses neural-network and genetic-algorithm approaches, and medical scientists have shown strong interest in applying soft computing methodologies in genetics, physiology, radiology, cardiology, and neurology. CBSE encourages users to reuse past and existing software when building new products, providing quality while saving time, memory space, and money. This paper focuses on the assessment of commonly used soft computing techniques: Genetic Algorithm (GA), Neural Network (NN), Fuzzy Logic, Support Vector Machine (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). It describes how these techniques work, assesses their use for reliability prediction, and discusses the parameters considered when estimating and predicting reliability. The study can be used for estimating and predicting the reliability of instruments in medical systems as well as in software, computer, and mechanical engineering, and the concepts can be applied to both software and hardware when predicting reliability with CBSE.
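
    As one concrete, deliberately toy example of applying a bio-inspired technique to reliability prediction, the sketch below uses a simple genetic algorithm to fit a two-parameter Weibull reliability model to failure-time data by maximising the log-likelihood. The model choice, parameter ranges, and GA settings are assumptions for illustration and are not taken from the paper, which surveys GA, NN, fuzzy logic, SVM, ACO, PSO, and ABC without fixing a single model.

```python
import math
import random

def ga_fit_weibull(failure_times, pop_size=40, generations=150, seed=0):
    """Toy GA fitting R(t) = exp(-(t/theta)**beta) to positive failure
    times by maximising the Weibull log-likelihood. Returns (theta, beta)."""
    rng = random.Random(seed)

    def log_likelihood(theta, beta):
        # Sum of log f(t) for the Weibull density with scale theta, shape beta.
        return sum(
            math.log(beta / theta)
            + (beta - 1.0) * math.log(t / theta)
            - (t / theta) ** beta
            for t in failure_times
        )

    # Individuals are (theta, beta) pairs drawn from broad positive ranges.
    pop = [(rng.uniform(1.0, 100.0), rng.uniform(0.2, 5.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: log_likelihood(*ind), reverse=True)
        parents = pop[: pop_size // 2]              # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            # Crossover by averaging, plus small multiplicative mutation.
            children.append(((a[0] + b[0]) / 2 * rng.uniform(0.9, 1.1),
                             (a[1] + b[1]) / 2 * rng.uniform(0.9, 1.1)))
        pop = parents + children
    return max(pop, key=lambda ind: log_likelihood(*ind))

theta, beta = ga_fit_weibull([12.0, 30.0, 45.0, 80.0, 110.0])
print(theta, beta)
```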

  18. 24 CFR 180.405 - Time computations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Time computations. 180.405 Section... Hearing § 180.405 Time computations. (a) In computing time under this part, the time period begins the day... Saturday, Sunday, or legal holiday observed by the Federal Government, in which case the time period...

  19. 24 CFR 180.405 - Time computations.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 1 2012-04-01 2012-04-01 false Time computations. 180.405 Section... Hearing § 180.405 Time computations. (a) In computing time under this part, the time period begins the day... Saturday, Sunday, or legal holiday observed by the Federal Government, in which case the time period...
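
    A minimal sketch of the counting rule quoted in these excerpts, under the assumption that the period begins the day after the triggering event and rolls forward when it would otherwise end on a Saturday, Sunday, or Federal holiday. The holiday set here is a placeholder, and the full regulation contains further provisions that the truncated excerpts do not show.

```python
from datetime import date, timedelta

# Placeholder holiday calendar; a real implementation would use the
# official Federal holiday schedule for the relevant year.
FEDERAL_HOLIDAYS = {date(2012, 7, 4), date(2012, 12, 25)}

def period_end(trigger_day, period_days):
    """The period begins the day after 'trigger_day'; if the last day
    falls on a Saturday, Sunday, or Federal holiday, the period runs
    until the next day that is none of these."""
    end = trigger_day + timedelta(days=period_days)
    while end.weekday() >= 5 or end in FEDERAL_HOLIDAYS:
        end += timedelta(days=1)
    return end

print(period_end(date(2012, 6, 29), 10))   # illustrative 10-day period
```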

  20. Numerical Simulation of Desulfurization Behavior in Gas-Stirred Systems Based on Computation Fluid Dynamics-Simultaneous Reaction Model (CFD-SRM) Coupled Model

    NASA Astrophysics Data System (ADS)

    Lou, Wentao; Zhu, Miaoyong

    2014-10-01

    A computation fluid dynamics-simultaneous reaction model (CFD-SRM) coupled model has been proposed to describe the desulfurization behavior in a gas-stirred ladle. For the desulfurization thermodynamics, different models were investigated to determine the sulfide capacity and oxygen activity. For the desulfurization kinetics, the effect of bubbly plume flow, as well as oxygen absorption and oxidation reactions in the slag eyes, are considered. Thermodynamic and kinetic modification coefficients are proposed to fit the measured data. Finally, the effects of slag basicity and gas flow rate on the desulfurization efficiency are investigated. The results show that when simultaneous kinetic equilibrium of the interfacial reactions (Al2O3)-(FeO)-(SiO2)-(MnO)-[S]-[O] is adopted to determine the oxygen activity, and Young's model with a modification coefficient R_th of 1.5 is adopted to determine the slag sulfide capacity, the predicted sulfur distribution ratio L_S agrees well with the measured data. As the gas blowing time increases, the predicted desulfurization rate gradually decreases, and with a kinetic modification parameter R_k of 0.8 the predicted time evolution of the sulfur content in the ladle agrees well with the measured data. If the oxygen absorption and oxidation reactions in the slag eyes are not considered, the sulfur removal rate in the ladle is overestimated, and this effect becomes more pronounced as the gas flow rate increases and the slag layer height decreases. As the slag basicity increases, the total desulfurization ratio increases; however, it changes only weakly once the slag basicity exceeds 7. As the gas flow rate increases, the desulfurization ratio first increases and then decreases, reaching a maximum at a gas flow rate of 200 NL/min in an 80-ton gas-stirred ladle.
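
    As a simplified illustration of how a kinetic modification coefficient rescales the predicted removal rate, the sketch below integrates first-order desulfurization kinetics d[S]/dt = -R_k·k·([S] - [S]_eq) explicitly in time. The rate constant, equilibrium sulfur content, and time step are assumptions for illustration and are not values from the CFD-SRM model, which resolves the bubbly plume flow and the interfacial reactions.

```python
import numpy as np

def sulfur_history(S0=0.030, S_eq=0.002, k=2.0e-3, R_k=0.8, t_end=1800.0, dt=1.0):
    """Explicit time integration of d[S]/dt = -R_k * k * ([S] - [S]_eq),
    returning times (s) and sulfur mass fractions (wt pct)."""
    times = np.arange(0.0, t_end + dt, dt)
    S = np.empty_like(times)
    S[0] = S0
    for i in range(1, len(times)):
        S[i] = S[i - 1] - R_k * k * (S[i - 1] - S_eq) * dt
    return times, S

times, S = sulfur_history()
print(S[-1])   # predicted sulfur content after 30 minutes of gas stirring
```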
