Sample records for "computer work a comparison"

  1. Computer work duration and its dependence on the used pause definition.

    PubMed

    Richter, Janneke M; Slijper, Harm P; Over, Eelco A B; Frens, Maarten A

    2008-11-01

    Several ergonomic studies have estimated computer work duration using registration software. In these studies, an arbitrary pause definition (Pd; the minimal time between two computer events to constitute a pause) is chosen and the resulting duration of computer work is estimated. In order to uncover the relationship between the pause definition used and computer work duration (PWT), we used registration software to record usage patterns of 571 computer users across almost 60,000 working days. For a large range of Pds (1-120 s), we found a shallow, log-linear relationship between PWT and Pds. For keyboard and mouse use, a second-order function fitted the data best. We found that these relationships were dependent on the amount of computer work and subject characteristics. Comparisons of exposure duration between studies using different pause definitions should take this into account, since it could lead to misclassification. Software manufacturers and ergonomists assessing computer work duration could use the relationships found here for software design and study comparison.
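
    The abstract describes the estimate algorithmically: pick a pause definition Pd, group consecutive input events into bouts whose gaps never exceed Pd, and sum the bout lengths. The sketch below illustrates that idea only; the event timestamps, function names, and thresholds are invented, not taken from the study.

      # Minimal sketch of how the pause definition (Pd) drives the estimated computer
      # work duration: any gap between events longer than Pd ends a work bout.
      # Timestamps below are synthetic, not data from the study.
      def work_duration(event_times, pd_seconds):
          """Total time spent in bouts whose inter-event gaps never exceed pd_seconds."""
          if not event_times:
              return 0.0
          total, bout_start, prev = 0.0, event_times[0], event_times[0]
          for t in event_times[1:]:
              if t - prev > pd_seconds:      # the gap counts as a pause: close the bout
                  total += prev - bout_start
                  bout_start = t
              prev = t
          return total + (prev - bout_start)

      if __name__ == "__main__":
          events = [0, 2, 3, 9, 10, 11, 40, 41, 45]   # synthetic keyboard/mouse events (s)
          for pd in (1, 5, 30, 120):
              print(f"Pd = {pd:>3} s -> estimated work duration {work_duration(events, pd):5.1f} s")

    Running it with increasing Pd shows the monotone growth of the estimated duration that the paper models as a log-linear relationship.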

  2. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers.

    PubMed

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation.

  3. Application of Sequence Comparison Methods to Multisensor Data Fusion and Target Recognition

    DTIC Science & Technology

    1993-06-18

    linear comparison). A particularly attractive aspect of the proposed fusion scheme is that it has the potential to work for any object with (1...radar sensing is a historical custom - however, the reader should keep in mind that the fundamental issue in this research is to explore and exploit...reduce the computationally expensive need to compute partial derivatives. In usual practice, the computationally more attractive filter design is

  4. Computational Software to Fit Seismic Data Using Epidemic-Type Aftershock Sequence Models and Modeling Performance Comparisons

    NASA Astrophysics Data System (ADS)

    Chu, A.

    2016-12-01

    Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work implements three of the homogeneous ETAS models described in Ogata (1998). With a model's log-likelihood function, my software finds the Maximum-Likelihood Estimates (MLEs) of the model's parameters to estimate the homogeneous background rate and the temporal and spatial parameters that govern triggering effects. The EM algorithm is employed for its advantages of stability and robustness (Veen and Schoenberg, 2008). My work also presents comparisons among the three models in robustness, convergence speed, and implementations from theory to computing practice. Up-to-date regional seismic data from seismically active areas such as Southern California and Japan are used to demonstrate the comparisons. Data analysis has been done using the computer languages Java and R. Java has the advantages of being strongly typed and of easy control of memory resources, while R has the advantage of numerous available functions in statistical computing. Comparisons are also made between the two programming languages in convergence and stability, computational speed, and ease of implementation. Issues that may affect convergence, such as spatial shapes, are discussed.
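
    As a rough, hedged illustration of the likelihood-based fitting described here (not the author's code), the sketch below fits a purely temporal Hawkes process with an exponential triggering kernel by direct optimization. Real ETAS models use an Omori-law kernel plus spatial terms and, as the abstract notes, are often fitted with an EM algorithm; every parameter and data value below is synthetic.

      # Illustrative only: a stripped-down temporal Hawkes negative log-likelihood with an
      # exponential kernel, maximized numerically. This is a stand-in for the Omori-law
      # ETAS kernel and the EM fitting used in the paper, not a reimplementation of it.
      import numpy as np
      from scipy.optimize import minimize

      def neg_log_likelihood(params, times, horizon):
          mu, alpha, beta = np.exp(params)          # log-parameterization keeps rates positive
          nll = mu * horizon
          for i, t in enumerate(times):
              rate = mu + np.sum(alpha * beta * np.exp(-beta * (t - times[:i])))
              nll -= np.log(rate)
          nll += alpha * np.sum(1.0 - np.exp(-beta * (horizon - times)))  # triggered-part integral
          return nll

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          times = np.sort(rng.uniform(0.0, 100.0, 60))   # synthetic event times, not seismicity
          result = minimize(neg_log_likelihood, x0=np.log([0.5, 0.3, 1.0]),
                            args=(times, 100.0), method="Nelder-Mead")
          print("MLE (mu, alpha, beta):", np.exp(result.x))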

  5. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers

    PubMed Central

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation. PMID:28824513

  6. Deciphering the Landauer-Büttiker Transmission Function from Single Molecule Break Junction Experiments

    NASA Astrophysics Data System (ADS)

    Reuter, Matthew; Tschudi, Stephen

    When investigating the electrical response properties of molecules, experiments often measure conductance whereas computation predicts transmission probabilities. Although the Landauer-Büttiker theory relates the two in the limit of coherent scattering through the molecule, a direct comparison between experiment and computation can still be difficult. Experimental data (specifically that from break junctions) is statistical and computational results are deterministic. Many studies compare the most probable experimental conductance with computation, but such an analysis discards almost all of the experimental statistics. In this work we develop tools to decipher the Landauer-Büttiker transmission function directly from experimental statistics and then apply them to enable a fairer comparison between experimental and computational results.

  7. Visual comparison testing of automotive paint simulation

    NASA Astrophysics Data System (ADS)

    Meyer, Gary; Fan, Hua-Tzu; Seubert, Christopher; Evey, Curtis; Meseth, Jan; Schnackenberg, Ryan

    2015-03-01

    An experiment was performed to determine whether typical industrial automotive color paint comparisons made using real physical samples could also be carried out using a digital simulation displayed on a calibrated color television monitor. A special light booth, designed to facilitate evaluation of the car paint color with reflectance angle, was employed in both the real and virtual color comparisons. Paint samples were measured using a multi-angle spectrophotometer and were simulated using a commercially available software package. Subjects performed the test quicker using the computer graphic simulation, and results indicate that there is only a small difference between the decisions made using the light booth and the computer monitor. This outcome demonstrates the potential of employing simulations to replace some of the time consuming work with real physical samples that still characterizes material appearance work in industry.

  8. Optimal Mass Transport for Shape Matching and Comparison

    PubMed Central

    Su, Zhengyu; Wang, Yalin; Shi, Rui; Zeng, Wei; Sun, Jian; Luo, Feng; Gu, Xianfeng

    2015-01-01

    Surface-based 3D shape analysis plays a fundamental role in computer vision and medical imaging. This work proposes to use the optimal mass transport map for shape matching and comparison, focusing on two important applications: surface registration and shape space. The computation of the optimal mass transport map is based on Monge-Brenier theory; in comparison to the conventional method based on Monge-Kantorovich theory, this method significantly improves efficiency by reducing the computational complexity from O(n²) to O(n). For the surface registration problem, one commonly used approach is to use a conformal map to convert the shapes into some canonical space. Although conformal mappings have small angle distortions, they may introduce large area distortions, which are likely to cause numerical instability and thus failures of shape analysis. This work proposes to compose the conformal map with the optimal mass transport map to obtain a unique area-preserving map, which is intrinsic to the Riemannian metric and diffeomorphic. For the shape space study, this work introduces a novel Riemannian framework, Conformal Wasserstein Shape Space, by combining conformal geometry and optimal mass transport theory. In our work, all metric surfaces with the disk topology are mapped to the unit planar disk by a conformal mapping, which pushes the area element on the surface to a probability measure on the disk. The optimal mass transport provides a map from the shape space of all topological disks with metrics to the Wasserstein space of the disk, and the pullback Wasserstein metric equips the shape space with a Riemannian metric. We validate our work by numerous experiments and comparisons with prior approaches, and the experimental results demonstrate the efficiency and efficacy of our proposed approach. PMID:26440265

  9. A comparison between computer-controlled and set work rate exercise based on target heart rate

    NASA Technical Reports Server (NTRS)

    Pratt, Wanda M.; Siconolfi, Steven F.; Webster, Laurie; Hayes, Judith C.; Mazzocca, Augustus D.; Harris, Bernard A., Jr.

    1991-01-01

    Two methods are compared for observing the heart rate (HR), metabolic equivalents, and time in target HR zone (defined as the target HR ± 5 bpm) during 20 min of exercise at a prescribed intensity of the maximum working capacity. In one method, called set-work rate exercise, the information from a graded exercise test is used to select a target HR and to calculate a corresponding constant work rate that should induce the desired HR. In the other method, the work rate is controlled by a computer algorithm to achieve and maintain a prescribed target HR. It is shown that computer-controlled exercise is an effective alternative to the traditional set work rate exercise, particularly when tight control of cardiovascular responses is necessary.
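
    The abstract does not spell out the control algorithm, so the sketch below only illustrates the general idea of computer-controlled exercise: a proportional correction nudges the work rate until heart rate settles into the target zone (target ± 5 bpm). The first-order heart-rate model, the gain, and every number are invented for illustration.

      # Toy feedback loop: adjust work rate in proportion to the gap between measured and
      # target heart rate. The HR response model and gain are assumptions, not NASA's method.
      def simulate(target_hr=140.0, minutes=20, kp=1.5):
          hr, work_rate, rest_hr = 70.0, 100.0, 70.0          # bpm, watts, bpm
          in_zone = 0
          for minute in range(1, minutes + 1):
              hr += 0.5 * ((rest_hr + 0.5 * work_rate) - hr)  # HR drifts toward a steady state
              work_rate += kp * (target_hr - hr)              # proportional correction
              in_zone += abs(hr - target_hr) <= 5.0
              print(f"min {minute:2d}: work rate {work_rate:6.1f} W, HR {hr:5.1f} bpm")
          print(f"time in target zone: {in_zone} of {minutes} min")

      if __name__ == "__main__":
          simulate()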

  10. Comparison of the Hamstring Muscle Activity and Flexion-Relaxation Ratio between Asymptomatic Persons and Computer Work-related Low Back Pain Sufferers.

    PubMed

    Kim, Min-Hee; Yoo, Won-Gyu

    2013-05-01

    [Purpose] The purpose of this study was to compare the hamstring muscle (HAM) activities and flexion-relaxation ratios of an asymptomatic group and a computer work-related low back pain (LBP) group. [Subjects] For this study, we recruited 10 asymptomatic computer workers and 10 computer workers with work-related LBP. [Methods] We measured the RMS activity of each phase (flexion, full-flexion, and re-extension phase) of trunk flexion and calculated the flexion-relaxation (FR) ratio of the muscle activities of the flexion and full-flexion phases. [Results] In the computer work-related LBP group, the HAM muscle activity increased during the full-flexion phase compared to the asymptomatic group, and the FR ratio was also significantly higher. [Conclusion] We suggest that prolonged sitting by computer workers might cause this change in their HAM muscle activity pattern.
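
    For readers unfamiliar with the measure, a flexion-relaxation ratio of this kind is simply the RMS EMG of one phase divided by that of another. The sketch below assumes the flexion-phase RMS in the numerator and the full-flexion RMS in the denominator; the study's exact windowing, normalization, and sample values are not given, so everything here is illustrative.

      # Minimal flexion-relaxation (FR) ratio from per-phase surface-EMG samples.
      # Sample values are invented; the numerator/denominator convention is an assumption.
      import math

      def rms(samples):
          return math.sqrt(sum(x * x for x in samples) / len(samples))

      def fr_ratio(flexion_emg, full_flexion_emg):
          return rms(flexion_emg) / rms(full_flexion_emg)

      if __name__ == "__main__":
          flexion      = [0.42, 0.51, 0.47, 0.55]   # mV, trunk-flexion phase (synthetic)
          full_flexion = [0.12, 0.10, 0.15, 0.11]   # mV, full-flexion phase (synthetic)
          print(f"FR ratio = {fr_ratio(flexion, full_flexion):.2f}")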

  11. Test-retest reliability and concurrent validity of a web-based questionnaire measuring workstation and individual correlates of work postures during computer work.

    PubMed

    IJmker, Stefan; Mikkers, Janneke; Blatter, Birgitte M; van der Beek, Allard J; van Mechelen, Willem; Bongers, Paulien M

    2008-11-01

    "Ergonomic" questionnaires are widely used in epidemiological field studies to study the association between workstation characteristics, work posture and musculoskeletal disorders among office workers. Findings have been inconsistent regarding the putative adverse effect of work postures. Underestimation of the true association might be present in studies due to misclassification of subjects to risk (i.e. exposed to non-neutral working postures) and no-risk categories (i.e. not exposed to non-neutral working postures) based on questionnaire responses. The objective of this study was to estimate the amount of misclassification resulting from the use of questionnaires. Test-retest reliability and concurrent validity of a newly developed questionnaire were assessed. This questionnaire collects data on workstation characteristics and on individual characteristics during computer work (i.e. work postures, movements and habits). Pictures were added where possible to provide visual guidance. The study population consisted of 84 office workers of a research department. They filled out the questionnaire on the Internet twice, with an in-between period of 2 weeks. For a subgroup of workers (n=38), additional on-site observations and multiple manual goniometer measurements were performed. Percentage agreement ranged between 71% and 100% for the test-retest analysis, between 31% and 100% for the comparison between questionnaire and on-site observation, and between 26% and 71% for the comparison between questionnaire and manual goniometer measurements. For 9 out of 12 tested items, the percentage agreement between questionnaire and manual goniometer measurements was below 50%. The questionnaire collects reliable data on workstation characteristics and some individual characteristics during computer work (i.e. work movements and habits), but does not seem to be useful to collect data on work postures during computer work in epidemiological field studies among office workers.

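    The agreement statistic used throughout this abstract is just the share of items on which two measurements give the same category. A minimal sketch, with made-up item labels and categories:

      # Percentage agreement between two sets of categorical answers, e.g. questionnaire
      # responses versus on-site observations for the same items (labels are invented).
      def percent_agreement(ratings_a, ratings_b):
          matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
          return 100.0 * matches / len(ratings_a)

      if __name__ == "__main__":
          questionnaire = ["neutral", "non-neutral", "neutral", "neutral", "non-neutral"]
          observation   = ["neutral", "neutral",     "neutral", "neutral", "non-neutral"]
          print(f"agreement: {percent_agreement(questionnaire, observation):.0f}%")   # 80%
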
  12. A novel anisotropic fast marching method and its application to blood flow computation in phase-contrast MRI.

    PubMed

    Schwenke, M; Hennemuth, A; Fischer, B; Friman, O

    2012-01-01

    Phase-contrast MRI (PC MRI) can be used to assess blood flow dynamics noninvasively inside the human body. The acquired images can be reconstructed into flow vector fields. Traditionally, streamlines can be computed based on the vector fields to visualize flow patterns and particle trajectories. The traditional methods may give a false impression of precision, as they do not consider the measurement uncertainty in the PC MRI images. In our prior work, we incorporated the uncertainty of the measurement into the computation of particle trajectories. As a major part of the contribution, a novel numerical scheme for solving the anisotropic Fast Marching problem is presented. A computing time comparison to state-of-the-art methods is conducted on artificial tensor fields. A visual comparison of healthy to pathological blood flow patterns is given. The comparison shows that the novel anisotropic Fast Marching solver outperforms previous schemes in terms of computing time. The visual comparison of flow patterns directly visualizes large deviations of pathological flow from healthy flow. The novel anisotropic Fast Marching solver efficiently resolves even strongly anisotropic path costs. The visualization method enables the user to assess the uncertainty of particle trajectories derived from PC MRI images.

  13. Working conditions, visual fatigue, and mental health among systems analysts in São Paulo, Brazil

    PubMed Central

    Rocha, L; Debert-Ribeiro, M

    2004-01-01

    Aims: To evaluate the association between working conditions and visual fatigue and mental health among systems analysts living in São Paulo, Brazil. Methods: A cross sectional study was carried out by a multidisciplinary team. It included: ergonomic analysis of work, individual and group interviews, and 553 self applied questionnaires in two enterprises. The comparison population numbered 136 workers in different occupations. Results: The study population mainly comprised young males. Among systems analysts, visual fatigue was associated with mental workload, inadequate equipment and workstation, low level of worker participation, being a woman, and subject's attitude of fascination by the computer. Nervousness and intellectual performance were associated with mental workload, inadequate equipment, work environment, and tools. Continuing education and leisure were protective factors. Work interfering in family life was associated with mental workload, difficulties with clients, strict deadlines, subject's attitude of fascination by the computer, and finding solutions of work problems outside work. Family support, satisfaction in life and work, and adequate work environment and tools were protective factors. Work interfering in personal life was associated with subject's attitude of fascination by the computer, strict deadlines, inadequate equipment, and high level of work participation. Satisfaction in life and work and continuing education were protective factors. The comparison population did not share common working factors with the systems analysts in the regression analysis. Conclusions: The main health effects of systems analysts' work were expressed by machine anthropomorphism, being very demanding, mental acceleration, mental absorption, and difficulty in dealing with emotions. PMID:14691269

  14. Computationally efficient multibody simulations

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Jayant; Kumar, Manoj

    1994-01-01

    Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.

  15. Comparison of Computer-Based Versus Counselor-Based Occupational Information Systems with Disadvantaged Vocational Students

    ERIC Educational Resources Information Center

    Maola, Joseph; Kane, Gary

    1976-01-01

    Subjects, who were Occupational Work Experience students, were randomly assigned to individual guidance from either a computerized occupational information system or a counselor-based information system, or to a control group. Results demonstrate a hierarchical learning effect: The computer group learned more than the counseled group, which…

  16. Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics.

    PubMed

    Patrizi, Alfredo; Pennestrì, Ettore; Valentini, Pier Paolo

    2016-01-01

    The paper deals with the comparison between a high-end marker-based acquisition system and a low-cost marker-less methodology for the assessment of the human posture during working tasks. The low-cost methodology is based on the use of a single Microsoft Kinect V1 device. The high-end acquisition system is the BTS SMART that requires the use of reflective markers to be placed on the subject's body. Three practical working activities involving object lifting and displacement have been investigated. The operational risk has been evaluated according to the lifting equation proposed by the American National Institute for Occupational Safety and Health. The results of the study show that the risk multipliers computed from the two acquisition methodologies are very close for all the analysed activities. In agreement to this outcome, the marker-less methodology based on the Microsoft Kinect V1 device seems very promising to promote the dissemination of computer-aided assessment of ergonomics while maintaining good accuracy and affordable costs. PRACTITIONER’S SUMMARY: The study is motivated by the increasing interest for on-site working ergonomics assessment. We compared a low-cost marker-less methodology with a high-end marker-based system. We tested them on three different working tasks, assessing the working risk of lifting loads. The two methodologies showed comparable precision in all the investigations.
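
    The risk measure referred to here is the revised NIOSH lifting equation, RWL = LC × HM × VM × DM × AM × FM × CM, with the lifting index LI = load / RWL. The sketch below uses the commonly published metric multipliers; it is a simplification (the frequency and coupling multipliers are table lookups passed in directly), and all example numbers are invented rather than taken from the study.

      # Hedged sketch of the revised NIOSH lifting equation (metric form). FM and CM come
      # from published lookup tables and are supplied directly; example values are invented.
      def recommended_weight_limit(h_cm, v_cm, d_cm, a_deg, fm=1.0, cm=1.0):
          lc = 23.0                              # load constant, kg
          hm = min(1.0, 25.0 / h_cm)             # horizontal multiplier
          vm = 1.0 - 0.003 * abs(v_cm - 75.0)    # vertical multiplier
          dm = min(1.0, 0.82 + 4.5 / d_cm)       # distance multiplier
          am = 1.0 - 0.0032 * a_deg              # asymmetry multiplier
          return lc * hm * vm * dm * am * fm * cm

      if __name__ == "__main__":
          rwl = recommended_weight_limit(h_cm=40, v_cm=60, d_cm=50, a_deg=30, fm=0.94, cm=0.95)
          load_kg = 12.0
          print(f"RWL = {rwl:.1f} kg, lifting index = {load_kg / rwl:.2f}")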

  17. Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades.

    PubMed

    Orchard, Garrick; Jayawant, Ajinkya; Cohen, Gregory K; Thakor, Nitish

    2015-01-01

    Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labeling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.

  18. Evidence-based ergonomics education: Promoting risk factor awareness among office computer workers.

    PubMed

    Mani, Karthik; Provident, Ingrid; Eckel, Emily

    2016-01-01

    Work-related musculoskeletal disorders (WMSDs) related to computer work have become a serious public health concern. Literature revealed a positive association between computer use and WMSDs. The purpose of this evidence-based pilot project was to provide a series of evidence-based educational sessions on ergonomics to office computer workers to enhance the awareness of risk factors of WMSDs. Seventeen office computer workers who work for the National Board of Certification in Occupational Therapy volunteered for this project. Each participant completed a baseline and post-intervention ergonomics questionnaire and attended six educational sessions. The Rapid Office Strain Assessment and an ergonomics questionnaire were used for data collection. The post-intervention data revealed that 89% of participants were able to identify a greater number of risk factors and answer more questions correctly in knowledge tests of the ergonomics questionnaire. Pre- and post-intervention comparisons showed changes in work posture and behaviors (taking rest breaks, participating in exercise, adjusting workstation) of participants. The findings have implications for injury prevention in office settings and suggest that ergonomics education may yield positive knowledge and behavioral changes among computer workers.

  19. GPU-based cloud service for Smith-Waterman algorithm using frequency distance filtration scheme.

    PubMed

    Lee, Sheng-Ta; Lin, Chun-Yuan; Hung, Che Lun

    2013-01-01

    As the conventional means of analyzing the similarity between a query sequence and database sequences, the Smith-Waterman algorithm is feasible for a database search owing to its high sensitivity. However, this algorithm is still quite time consuming. CUDA programming can improve computations efficiently by using the computational power of massive computing hardware such as graphics processing units (GPUs). This work presents a novel Smith-Waterman algorithm with a frequency-based filtration method on GPUs, rather than merely accelerating the comparisons while still expending computational resources on unnecessary ones. A user-friendly interface is also designed for potential cloud server applications with GPUs. Additionally, two data sets, H1N1 protein sequences (query sequence set) and the human protein database (database set), are selected, followed by a comparison of CUDA-SW and CUDA-SW with the filtration method, referred to herein as CUDA-SWf. Experimental results indicate that reducing unnecessary sequence alignments can improve the computational time by up to 41%. Importantly, by using CUDA-SWf as a cloud service, this application can be accessed from any computing environment of a device with an Internet connection without time constraints.
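
    For reference, the dynamic-programming core that the CUDA versions accelerate looks like the plain Smith-Waterman recurrence below (scoring only, arbitrary match/mismatch/gap values, and without the frequency-distance filtration step described in the abstract).

      # Plain Smith-Waterman local alignment score for two short sequences; a serial
      # reference point for what CUDA-SW/CUDA-SWf accelerate. Scores are arbitrary.
      def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
          rows, cols = len(a) + 1, len(b) + 1
          h = [[0] * cols for _ in range(rows)]
          best = 0
          for i in range(1, rows):
              for j in range(1, cols):
                  diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                  h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
                  best = max(best, h[i][j])
          return best

      if __name__ == "__main__":
          print(smith_waterman_score("MKVLA", "MKQLA"))   # toy sequences, prints 7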

  20. A Comparison of Numerical Problem Solving under Three Types of Calculation Conditions.

    ERIC Educational Resources Information Center

    Roberts, Dennis M.; Glynn, Shawn M.

    1978-01-01

    The study reported is the first in a series of investigations designed to empirically test the hypothesis that calculators reduce quantitative working time and increase computational accuracy, and to examine the relative magnitude of benefit that accompanies utilizing calculators compared to manual work. (MN)

  1. Evaluating High-Degree-and-Order Gravitational Harmonics and its Application to the State Predictions of a Lunar Orbiting Satellite

    NASA Astrophysics Data System (ADS)

    Song, Young-Joo; Kim, Bang-Yeop

    2015-09-01

    In this work, an efficient method with which to evaluate the high-degree-and-order gravitational harmonics of the nonsphericity of a central body is described and applied to state predictions of a lunar orbiter. Unlike the work of Song et al. (2010), which used a conventional computation method to process gravitational harmonic coefficients, the current work adapted a well-known recursion formula that directly uses fully normalized associated Legendre functions to compute the acceleration due to the non-sphericity of the moon. With the formulated algorithms, the states of a lunar orbiting satellite are predicted and the performance is validated in comparisons with solutions obtained from STK/Astrogator. The predicted differences in orbital states between STK/Astrogator and the current work remain below 1 m in position and below 1 mm/s in velocity, even for different orbital inclinations. The effectiveness of the current algorithm, in terms of both computation time and the degree of accuracy degradation, is also shown in comparisons with results obtained from earlier work. It is expected that the proposed algorithm can be used as a foundation for the development of an operational flight dynamics subsystem for future lunar exploration missions by Korea. It can also be used to analyze missions that require operations very close to the moon.

  2. Are specialist physicians missing out on the e-Health boat?

    PubMed

    Osborn, M; Day, R; Westbrook, J

    2009-10-01

    Nationally, health systems are making increasing investments in the use of clinical information systems. Little is known about current computer use by specialist physicians, particularly outside the hospital setting. The aim was to identify the extent to which, and the reasons why, physician Fellows of the Royal Australasian College of Physicians (RACP) use computers in their work. In 2007, a self-administered survey was emailed from the RACP to all practising physicians living in Australia and New Zealand who had consented to email contact with the College. The survey was sent to a total of 7445 eligible physicians; 2328 physicians responded (31.3% response rate), but only 1266 responses (21.0%) could be analysed. Most (97.5%) had access to computers at work and 96.5% used home computers for work purposes. Physicians in public hospitals (72.6%) were more likely to use computers for work (65.6%) than those in private hospitals (12.6%) or consulting rooms (27.3%). Overall, physicians working in public hospitals used a wider range of applications, with 70.5% using their computers for searching the internet, 53.7% for receiving results and 52.7% for specific educational activities. Physicians working from their consulting rooms (33.6%) were more likely to use electronic prescribing (11%) than physicians working in public hospitals (5.7%). Fellows have not incorporated computers into their consulting rooms, over which they have control. This is in contrast to general practitioners, who have embraced computers after the provision of various incentives. The rate of use of computers by physicians for electronic prescribing in consulting rooms (11%) is very low in comparison with general practitioners (98%). One reason may be that physicians work in multiple locations whereas general practitioners are more likely to work from one location.

  3. Talking Physics: Two Case Studies on Short Answers and Self-explanation in Learning Physics

    NASA Astrophysics Data System (ADS)

    Badeau, Ryan C.

    This thesis explores two case studies into the use of short answers and self-explanation to improve student learning in physics. The first set of experiments focuses on the role of short answer questions in the context of computer-based instruction. Through a series of six experiments, we compare and evaluate the performance of computer-assessed short answer questions versus multiple choice for training conceptual topics in physics, controlling for feedback between the two formats. In addition to finding overall similar improvements on subsequent student performance and retention, we identify unique differences in how students interact with the treatments in terms of time spent on feedback and performance on follow-up short answer assessment. In addition, we identify interactions between the level of interactivity of the training, question format, and student attitudinal ratings of each respective training. The second case study focuses on the use of worked examples in the context of multi-concept physics problems - which we call "synthesis problems." For this part of the thesis, four experiments were designed to evaluate the effectiveness of two instructional methods employing worked examples on student performance with synthesis problems; these instructional techniques, analogical comparison and self-explanation, have previously been studied primarily in the context of single-concept problems. As such, the work presented here represents a novel focus on extending these two techniques to this class of more complicated physics problem. Across the four experiments, both self-explanation and certain kinds of analogical comparison of worked examples significantly improved student performance on a target synthesis problem, with distinct improvements in recognition of the relevant concepts. More specifically, analogical comparison significantly improved student performance when the comparisons were invoked between worked synthesis examples. In contrast, similar comparisons between corresponding pairs of worked single-concept examples did not significantly improve performance. On a more complicated synthesis problem, self-explanation was significantly more effective than analogical comparison, potentially due to differences in how successfully students encoded the full structure of the worked examples. Finally, we find that the two techniques can be combined for additional benefit, with the trade-off of slightly more time-on-task.

  4. Technologies That Assist in Online Group Work: A Comparison of Synchronous and Asynchronous Computer Mediated Communication Technologies on Students' Learning and Community

    ERIC Educational Resources Information Center

    Rockinson-Szapkiw, Amanda; Wendt, Jillian

    2015-01-01

    While the benefits of online group work completed using asynchronous CMC technology are documented, researchers have identified a number of challenges that result in ineffective and unsuccessful online group work. Fewer channels of communication and a lack of immediacy when compared to face-to-face group work are a few of the noted limitations. Thus,…

  5. The Use of Austpac at Deakin University for Distance Education. Computer Communications Working Group Report No. 1.

    ERIC Educational Resources Information Center

    Castro, Angela; Stirzaker, Lee

    This paper discusses Australia's two packet-switching networks, AUSTPAC and MIDAS, which are used for data transmissions with other computers located within Australia or between overseas destinations. Although the facilities of both are similar, a comparison of their services based on traffic volume, connection, registration, and rental charges is…

  6. Initial comparison of single cylinder Stirling engine computer model predictions with test results

    NASA Technical Reports Server (NTRS)

    Tew, R. C., Jr.; Thieme, L. G.; Miao, D.

    1979-01-01

    A NASA-developed digital computer code for a Stirling engine, modelling the performance of a single cylinder rhombic drive ground performance unit (GPU), is presented and its predictions are compared to test results. The GPU engine incorporates eight regenerator/cooler units and the engine working space is modelled by thirteen control volumes. The model calculates indicated power and efficiency for a given engine speed, mean pressure, heater and expansion space metal temperatures, and cooler water inlet temperature and flow rate. Comparison of predicted and observed powers implies that the reference pressure drop calculations underestimate actual pressure drop, possibly due to oil contamination in the regenerator/cooler units, methane contamination in the working gas, or the underestimation of mechanical loss. For a working gas of hydrogen, the predicted values of brake power are 0 to 6% higher than experimental values and brake efficiency is 6 to 16% higher, while for helium the predicted brake power and efficiency are 2 to 15% higher than the experimental values.

  7. The impact of computer display height and desk design on muscle activity during information technology work by young adults.

    PubMed

    Straker, L; Pollock, C; Burgess-Limerick, R; Skoss, R; Coleman, J

    2008-08-01

    Computer display height and desk design are believed to be important workstation features and are included in international standards and guidelines. However, the evidence base for these guidelines lacks a comparison of neck/shoulder muscle activity during computer and paper tasks, and of whether forearm support can be provided by desk design. This study measured the spinal and upper limb muscle activity in 36 young adults whilst they worked in different computer display, book and desk conditions. Display height affected spinal muscle activity, with paper tasks resulting in greater mean spinal and upper limb muscle activity. A curved desk resulted in increased proximal muscle activity. There was no substantial interaction between display and desk.

  8. When static media promote active learning: annotated illustrations versus narrated animations in multimedia instruction.

    PubMed

    Mayer, Richard E; Hegarty, Mary; Mayer, Sarah; Campbell, Julie

    2005-12-01

    In 4 experiments, students received a lesson consisting of computer-based animation and narration or a lesson consisting of paper-based static diagrams and text. The lessons used the same words and graphics in the paper-based and computer-based versions to explain the process of lightning formation (Experiment 1), how a toilet tank works (Experiment 2), how ocean waves work (Experiment 3), and how a car's braking system works (Experiment 4). On subsequent retention and transfer tests, the paper group performed significantly better than the computer group on 4 of 8 comparisons, and there was no significant difference on the rest. These results support the static media hypothesis, in which static illustrations with printed text reduce extraneous processing and promote germane processing as compared with narrated animations.

  9. Letter regarding 'Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics' by Patrizi et al. and research reproducibility.

    PubMed

    2017-04-01

    The reporting of research in a manner that allows reproduction in subsequent investigations is important for scientific progress. Several details of the recent study by Patrizi et al., 'Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics', are absent from the published manuscript and make reproduction of findings impossible. As new and complex technologies with great promise for ergonomics develop, new but surmountable challenges for reporting investigations using these technologies in a reproducible manner arise. Practitioner Summary: As with traditional methods, scientific reporting of new and complex ergonomics technologies should be performed in a manner that allows reproduction in subsequent investigations and supports scientific advancement.

  10. Use of GLONASS at the BIPM

    DTIC Science & Technology

    2009-11-01

    TWSTFT links. This paper reports the latest results from the BIPM. INTRODUCTION Over the last 50 years, the accuracy...WORK Our work continues with the following projects: use of TWSTFT data for further link comparison studies (AOS, OP, and PTB); computation of

  11. Comparisons of some large scientific computers

    NASA Technical Reports Server (NTRS)

    Credeur, K. R.

    1981-01-01

    In 1975, the National Aeronautics and Space Administration (NASA) began studies to assess the technical and economic feasibility of developing a computer having sustained computational speed of one billion floating point operations per second and a working memory of at least 240 million words. Such a powerful computer would allow computational aerodynamics to play a major role in aeronautical design and advanced fluid dynamics research. Based on favorable results from these studies, NASA proceeded with developmental plans. The computer was named the Numerical Aerodynamic Simulator (NAS). To help ensure that the estimated cost, schedule, and technical scope were realistic, a brief study was made of past large scientific computers. Large discrepancies between inception and operation in scope, cost, or schedule were studied so that they could be minimized with NASA's proposed new computer. The main computers studied were the ILLIAC IV, STAR 100, Parallel Element Processor Ensemble (PEPE), and Shuttle Mission Simulator (SMS) computer. Comparison data on memory and speed were also obtained on the IBM 650, 704, 7090, 360-50, 360-67, 360-91, and 370-195; the CDC 6400, 6600, 7600, CYBER 203, and CYBER 205; CRAY 1; and the Advanced Scientific Computer (ASC). A few lessons learned conclude the report.

  12. Comparison of Hydrodynamic Load Predictions Between Engineering Models and Computational Fluid Dynamics for the OC4-DeepCwind Semi-Submersible: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benitz, M. A.; Schmidt, D. P.; Lackner, M. A.

    Hydrodynamic loads on the platforms of floating offshore wind turbines are often predicted with computer-aided engineering tools that employ Morison's equation and/or potential-flow theory. This work compares results from one such tool, FAST, NREL's wind turbine computer-aided engineering tool, and the computational fluid dynamics package, OpenFOAM, for the OC4-DeepCwind semi-submersible analyzed in the International Energy Agency Wind Task 30 project. Load predictions from HydroDyn, the offshore hydrodynamics module of FAST, are compared with high-fidelity results from OpenFOAM. HydroDyn uses a combination of Morison's equations and potential flow to predict the hydrodynamic forces on the structure. The implications of the assumptions in HydroDyn are evaluated based on this code-to-code comparison.

  13. Model to Test Electric Field Comparisons in a Composite Fairing Cavity

    NASA Technical Reports Server (NTRS)

    Trout, Dawn; Burford, Janessa

    2012-01-01

    Evaluating the impact of radio frequency transmission in vehicle fairings is important to sensitive spacecraft. This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full wave modeling and laboratory testing. This work is an extension of the bare aluminum fairing perfect electric conductor (PEC) model. Test and model data correlation is shown.

  14. Model to Test Electric Field Comparisons in a Composite Fairing Cavity

    NASA Technical Reports Server (NTRS)

    Trout, Dawn H.; Burford, Janessa

    2013-01-01

    Evaluating the impact of radio frequency transmission in vehicle fairings is important to sensitive spacecraft. This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full wave modeling and laboratory testing. This work is an extension of the bare aluminum fairing perfect electric conductor (PEC) model. Test and model data correlation is shown.

  15. A Computational Framework for Bioimaging Simulation.

    PubMed

    Watabe, Masaki; Arjunan, Satya N V; Fukushima, Seiya; Iwamoto, Kazunari; Kozuka, Jun; Matsuoka, Satomi; Shindo, Yuki; Ueda, Masahiro; Takahashi, Koichi

    2015-01-01

    Using bioimaging technology, biologists have attempted to identify and document analytical interpretations that underlie biological phenomena in biological cells. Theoretical biology aims at distilling those interpretations into knowledge in the mathematical form of biochemical reaction networks and understanding how higher level functions emerge from the combined action of biomolecules. However, there still remain formidable challenges in bridging the gap between bioimaging and mathematical modeling. Generally, measurements using fluorescence microscopy systems are influenced by systematic effects that arise from stochastic nature of biological cells, the imaging apparatus, and optical physics. Such systematic effects are always present in all bioimaging systems and hinder quantitative comparison between the cell model and bioimages. Computational tools for such a comparison are still unavailable. Thus, in this work, we present a computational framework for handling the parameters of the cell models and the optical physics governing bioimaging systems. Simulation using this framework can generate digital images of cell simulation results after accounting for the systematic effects. We then demonstrate that such a framework enables comparison at the level of photon-counting units.
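
    As a rough, hedged illustration of the "systematic effects" the framework accounts for (not the authors' implementation), the sketch below blurs a noiseless scene with a Gaussian point-spread function, scales it to an expected photon budget, and applies Poisson shot noise to obtain a photon-counting image. PSF width, photon counts, and the toy scene are all invented.

      # Toy rendering of a simulated fluorophore map into a photon-counting image:
      # optics (Gaussian PSF blur), exposure (photon budget), and shot noise (Poisson).
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def render(truth, psf_sigma=2.0, photons_per_unit=50, seed=0):
          rng = np.random.default_rng(seed)
          blurred = gaussian_filter(truth.astype(float), sigma=psf_sigma)   # optics
          expected = blurred * photons_per_unit                             # exposure
          return rng.poisson(expected)                                      # shot noise

      if __name__ == "__main__":
          scene = np.zeros((64, 64))
          scene[20, 20] = scene[40, 45] = 1.0        # two point emitters (synthetic)
          image = render(scene)
          print(image.shape, int(image.sum()), "photons detected")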

  16. Research of the effectiveness of parallel multithreaded realizations of interpolation methods for scaling raster images

    NASA Astrophysics Data System (ADS)

    Vnukov, A. A.; Shershnev, M. B.

    2018-01-01

    The aim of this work is the software implementation of three image scaling algorithms using parallel computations, as well as the development of an application with a graphical user interface for the Windows operating system to demonstrate the operation of algorithms and to study the relationship between system performance, algorithm execution time and the degree of parallelization of computations. Three methods of interpolation were studied, formalized and adapted to scale images. The result of the work is a program for scaling images by different methods. Comparison of the quality of scaling by different methods is given.
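
    One of the interpolation kernels typically studied for image scaling is plain bilinear resampling, sketched serially below. This is only a reference implementation of the kernel itself; the paper's contribution, the multithreaded parallelization and its timing study, is not reproduced here, and the toy image is invented.

      # Serial bilinear resampling of a 2-D grayscale array; the kind of per-pixel kernel
      # whose parallelization the paper investigates. Unoptimized on purpose.
      import numpy as np

      def scale_bilinear(img, new_h, new_w):
          h, w = img.shape
          out = np.empty((new_h, new_w), dtype=float)
          for y in range(new_h):
              for x in range(new_w):
                  src_y = y * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
                  src_x = x * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
                  y0, x0 = int(src_y), int(src_x)
                  y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
                  dy, dx = src_y - y0, src_x - x0
                  top = img[y0, x0] * (1 - dx) + img[y0, x1] * dx
                  bottom = img[y1, x0] * (1 - dx) + img[y1, x1] * dx
                  out[y, x] = top * (1 - dy) + bottom * dy
          return out

      if __name__ == "__main__":
          img = np.arange(16, dtype=float).reshape(4, 4)   # tiny synthetic image
          print(scale_bilinear(img, 8, 8).round(2))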

  17. Computational Study of Hypersonic Boundary Layer Stability on Cones

    NASA Astrophysics Data System (ADS)

    Gronvall, Joel Edwin

    Due to the complex nature of boundary layer laminar-turbulent transition in hypersonic flows and the resultant effect on the design of re-entry vehicles, there remains considerable interest in developing a deeper understanding of the underlying physics. To that end, the use of experimental observations and computational analysis in a complementary manner will provide the greatest insights. It is the intent of this work to provide such an analysis for two ongoing experimental investigations. The first focuses on the hypersonic boundary layer transition experiments for a slender cone that are being conducted at JAXA's free-piston shock tunnel HIEST facility. Of particular interest are the measurements of disturbance frequencies associated with transition at high enthalpies. The computational analysis provided for these cases included two-dimensional CFD mean flow solutions for use in boundary layer stability analyses. The disturbances in the boundary layer were calculated using the linear parabolized stability equations. Estimates for transition locations, comparisons of measured disturbance frequencies and computed frequencies, and a determination of the type of disturbances present were made. It was found that, for the cases where the disturbances were measured at locations where the flow was still laminar but nearly transitional, the highly amplified disturbances showed reasonable agreement with the computations. Additionally, an investigation of the effects of finite-rate chemistry and vibrational excitation on flows over cones was conducted for a set of theoretical operational conditions at the HIEST facility. The second study focuses on transition in three-dimensional hypersonic boundary layers, and for this the cone at angle of attack experiments being conducted at the Boeing/AFOSR Mach-6 quiet tunnel at Purdue University were examined. Specifically, the effect of surface roughness on the development of the stationary crossflow instability is investigated in this work. One standard mean flow solution and two direct numerical simulations of a slender cone at an angle of attack were computed. The direct numerical simulations included a digitally-filtered, randomly distributed surface roughness and were performed using a high-order, low-dissipation numerical scheme on appropriately resolved grids. Comparisons with experimental observations showed excellent qualitative agreement. Comparisons with similar previous computational work were also made and showed agreement in the wavenumber range of the most unstable crossflow modes.

  18. Preliminary topical report on comparison reactor disassembly calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLaughlin, T.P.

    1975-11-01

    Preliminary results of comparison disassembly calculations for a representative LMFBR model (2100-l voided core) and arbitrary accident conditions are described. The analytical methods employed were the computer programs: FX2-POOL, PAD, and VENUS-II. The calculated fission energy depositions are in good agreement, as are measures of the destructive potential of the excursions, kinetic energy, and work. However, in some cases the resulting fuel temperatures are substantially divergent. Differences in the fission energy deposition appear to be attributable to residual inconsistencies in specifying the comparison cases. In contrast, temperature discrepancies probably stem from basic differences in the energy partition models inherent in the codes. Although explanations of the discrepancies are being pursued, the preliminary results indicate that all three computational methods provide a consistent, global characterization of the contrived disassembly accident. (auth)

  19. A Comparison of Computation Span and Reading Span Working Memory Measures' Relations With Problem-Solving Criteria.

    PubMed

    Perlow, Richard; Jattuso, Mia

    2018-06-01

    Researchers have operationalized working memory in different ways and although working memory-performance relationships are well documented, there has been relatively less attention devoted to determining whether seemingly similar measures yield comparable relations with performance outcomes. Our objective is to assess whether two working memory measures deploying the same processes but different item content yield different relations with two problem-solving criteria. Participants completed a computation-based working memory measure and a reading-based measure prior to performing a computerized simulation. Results reveal differential relations with one of the two criteria and support the notion that the two working memory measures tap working memory capacity and other cognitive abilities. One implication for theory development is that researchers should consider incorporating other cognitive abilities in their working memory models and that the selection of those abilities should correspond to the criterion of interest. One practical implication is that researchers and practitioners shouldn't automatically assume that different phonological loop-based working memory scales are interchangeable.

  20. Aeroelasticity in Turbomachines. Comparison of Theoretical and Experimental Cascade Results.

    DTIC Science & Technology

    1986-01-01

    It should be noted here that, in computing the blade surface pressure distribution, only components, and not amplitudes or phase angles...oscillation, done on the system is obtained by computing...(12) Expressed in this way, the aerodynamic work coefficients...predictions), so the aerodynamic damping coefficient can easily be computed and plotted. This information is useful to the turbomachine designer for

  1. Towards Image Documentation of Grave Coverings and Epitaphs for Exhibition Purposes

    NASA Astrophysics Data System (ADS)

    Pomaska, G.; Dementiev, N.

    2015-08-01

    Epitaphs and memorials, as immovable items in sacred spaces, provide with their inscriptions valuable documents of history. Today, photographs are no longer the only suitable presentation material for cultural assets in museums. Computer vision and photogrammetry provide methods for recording, 3D modelling, and rendering under artificial light conditions, as well as further options for analysis and investigation of artistry. For exhibition purposes, epitaphs have been recorded by the structure-from-motion method. A comparison of different kinds of SFM software distributions could be worked out. The suitability of open source software in the mesh processing chain, from modelling up to displaying on computer monitors, should be answered. Raspberry Pi, a computer in SoC technology, works as a media server under Linux, running Python scripts. Will the little computer meet the requirements for a museum, and is the handling comfortable enough for staff and visitors? This contribution reports on this case study.

  2. Multiple-grid convergence acceleration of viscous and inviscid flow computations

    NASA Technical Reports Server (NTRS)

    Johnson, G. M.

    1983-01-01

    A multiple-grid algorithm for use in efficiently obtaining steady solutions to the Euler and Navier-Stokes equations is presented. The convergence of a simple, explicit fine-grid solution procedure is accelerated on a sequence of successively coarser grids by a coarse-grid information propagation method which rapidly eliminates transients from the computational domain. This use of multiple gridding to increase the convergence rate results in substantially reduced work requirements for the numerical solution of a wide range of flow problems. Computational results are presented for subsonic and transonic inviscid flows and for laminar and turbulent, attached and separated, subsonic viscous flows. Work reduction factors as large as eight, in comparison to the basic fine-grid algorithm, were obtained. Possibilities for further performance improvement are discussed.
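
    The coarse-grid idea can be illustrated on a much simpler problem than the Euler or Navier-Stokes equations. The hedged sketch below applies one two-grid correction cycle (weighted-Jacobi smoothing, full-weighting restriction, exact coarse solve, linear-interpolation prolongation) to the 1-D Poisson equation; it shows only the basic mechanism, not the report's algorithm, and all parameters are illustrative.

      # Toy two-grid correction for -u'' = f on (0,1) with zero boundary values:
      # smooth on the fine grid, remove the remaining smooth error on a coarser grid.
      import numpy as np

      def poisson_matrix(n, h):
          return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
                  - np.diag(np.ones(n - 1), -1)) / h**2

      def jacobi(a, u, f, sweeps=3, omega=0.8):
          d = np.diag(a)
          for _ in range(sweeps):
              u = u + omega * (f - a @ u) / d
          return u

      def two_grid(f_fine, n_fine):
          h = 1.0 / (n_fine + 1)
          a_fine = poisson_matrix(n_fine, h)
          u = jacobi(a_fine, np.zeros(n_fine), f_fine)                # pre-smoothing
          r = f_fine - a_fine @ u                                     # fine-grid residual
          r_coarse = 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])    # full-weighting restriction
          e_coarse = np.linalg.solve(poisson_matrix(r_coarse.size, 2 * h), r_coarse)
          e_fine = np.zeros(n_fine)                                   # linear-interpolation prolongation
          e_fine[1::2] = e_coarse
          e_fine[0:-2:2] += 0.5 * e_coarse
          e_fine[2::2] += 0.5 * e_coarse
          return jacobi(a_fine, u + e_fine, f_fine)                   # post-smoothing

      if __name__ == "__main__":
          n = 31                                     # interior points (odd, so the grids nest)
          x = np.linspace(0.0, 1.0, n + 2)[1:-1]
          f = np.pi**2 * np.sin(np.pi * x)           # exact solution u = sin(pi x)
          u = two_grid(f, n)
          print("max error after one two-grid cycle:", float(np.abs(u - np.sin(np.pi * x)).max()))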

  3. A Computational Framework for Bioimaging Simulation

    PubMed Central

    Watabe, Masaki; Arjunan, Satya N. V.; Fukushima, Seiya; Iwamoto, Kazunari; Kozuka, Jun; Matsuoka, Satomi; Shindo, Yuki; Ueda, Masahiro; Takahashi, Koichi

    2015-01-01

    Using bioimaging technology, biologists have attempted to identify and document analytical interpretations that underlie biological phenomena in biological cells. Theoretical biology aims at distilling those interpretations into knowledge in the mathematical form of biochemical reaction networks and understanding how higher level functions emerge from the combined action of biomolecules. However, there still remain formidable challenges in bridging the gap between bioimaging and mathematical modeling. Generally, measurements using fluorescence microscopy systems are influenced by systematic effects that arise from stochastic nature of biological cells, the imaging apparatus, and optical physics. Such systematic effects are always present in all bioimaging systems and hinder quantitative comparison between the cell model and bioimages. Computational tools for such a comparison are still unavailable. Thus, in this work, we present a computational framework for handling the parameters of the cell models and the optical physics governing bioimaging systems. Simulation using this framework can generate digital images of cell simulation results after accounting for the systematic effects. We then demonstrate that such a framework enables comparison at the level of photon-counting units. PMID:26147508

  4. Novel Image Encryption based on Quantum Walks

    PubMed Central

    Yang, Yu-Guang; Pan, Qing-Xiang; Sun, Si-Jia; Xu, Peng

    2015-01-01

    Quantum computation has achieved a tremendous success during the last decades. In this paper, we investigate the potential application of a famous quantum computation model, i.e., quantum walks (QW) in image encryption. It is found that QW can serve as an excellent key generator thanks to its inherent nonlinear chaotic dynamic behavior. Furthermore, we construct a novel QW-based image encryption algorithm. Simulations and performance comparisons show that the proposal is secure enough for image encryption and outperforms prior works. It also opens the door towards introducing quantum computation into image encryption and promotes the convergence between quantum computation and image processing. PMID:25586889
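
    As a hedged illustration of the key-generating dynamics only (the paper's actual encryption construction is not reproduced), the sketch below runs a standard 1-D discrete-time Hadamard quantum walk and returns its position probability distribution; the step count and initial coin state are arbitrary choices.

      # Minimal 1-D discrete-time (Hadamard) quantum walk: coin toss at every site, then a
      # coin-conditioned shift. Returns the position probability distribution after `steps`.
      import numpy as np

      def hadamard_walk(steps):
          n = 2 * steps + 1                       # positions -steps .. +steps
          amp = np.zeros((n, 2), dtype=complex)   # amplitude[position, coin]
          amp[steps, 0] = 1.0                     # start at the origin with coin state |0>
          h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
          for _ in range(steps):
              amp = amp @ h.T                     # apply the Hadamard coin at every position
              shifted = np.zeros_like(amp)
              shifted[1:, 0] = amp[:-1, 0]        # coin |0> moves one site to the right
              shifted[:-1, 1] = amp[1:, 1]        # coin |1> moves one site to the left
              amp = shifted
          return (np.abs(amp) ** 2).sum(axis=1)

      if __name__ == "__main__":
          probs = hadamard_walk(50)
          print("total probability:", round(float(probs.sum()), 6))        # ~1.0 (unitary)
          print("most likely offsets from origin:", np.argsort(probs)[-2:] - 50)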

  5. Light aircraft lift, drag, and moment prediction: A review and analysis

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.; Summey, D. C.; Smith, N. S.; Carden, R. K.

    1975-01-01

    The historical development of analytical methods for predicting the lift, drag, and pitching moment of complete light aircraft configurations in cruising flight is reviewed. Theoretical methods, based in part on techniques described in the literature and in part on original work, are developed. These methods form the basis for understanding the computer programs given to: (1) compute the lift, drag, and moment of conventional airfoils, (2) extend these two-dimensional characteristics to three dimensions for moderate-to-high aspect ratio unswept wings, (3) plot complete configurations, (4) convert the fuselage geometric data to the correct input format, (5) compute the fuselage lift and drag, (6) compute the lift and moment of symmetrical airfoils to M = 1.0 by a simplified semi-empirical procedure, and (7) compute, in closed form, the pressure distribution over a prolate spheroid at alpha = 0. Comparisons of the predictions with experiment indicate excellent lift and drag agreement for conventional airfoils and wings. Limited comparisons of body-alone drag characteristics yield reasonable agreement. Also included are discussions for interference effects and techniques for summing the results above to obtain predictions for complete configurations.

  6. Computational Study of Primary Electrons in the Cusp Region of an Ion Engine's Discharge Chamber

    NASA Technical Reports Server (NTRS)

    Stueber, Thomas J. (Technical Monitor); Deshpande, Shirin S.; Mahalingam, Sudhakar; Menart, James A.

    2004-01-01

    In this work a computer code called PRIMA is used to study the motion of primary electrons in the magnetic cusp region of the discharge chamber of an ion engine. Even though the amount of wall area covered by the cusps is very small, the cusp regions are important because prior computational analyses have indicated that most primary electrons leave the discharge chamber through the cusps. The analysis presented here focuses on the cusp region only. The effects of the shape and size of the cusp region on primary electron travel are studied, as well as the angle and location at which the electron enters the cusp region. These effects are quantified using the confinement length and the number density distributions of the primary electrons. In addition to these results, comparisons of the results from PRIMA are made to experimental results for a cylindrical discharge chamber with two magnetic rings. These comparisons indicate the validity of the computer code called PRIMA.

  7. Comparison of a 3-D GPU-Assisted Maxwell Code and Ray Tracing for Reflectometry on ITER

    NASA Astrophysics Data System (ADS)

    Gady, Sarah; Kubota, Shigeyuki; Johnson, Irena

    2015-11-01

    Electromagnetic wave propagation and scattering in magnetized plasmas are important diagnostics for high temperature plasmas. 1-D and 2-D full-wave codes are standard tools for measurements of the electron density profile and fluctuations; however, ray tracing results have shown that beam propagation in tokamak plasmas is inherently a 3-D problem. The GPU-Assisted Maxwell Code utilizes the FDTD (Finite-Difference Time-Domain) method for solving the Maxwell equations with the cold plasma approximation in a 3-D geometry. Parallel processing with GPGPU (General-Purpose computing on Graphics Processing Units) is used to accelerate the computation. Previously, we reported on initial comparisons of the code results to 1-D numerical and analytical solutions, where the size of the computational grid was limited by the on-board memory of the GPU. In the current study, this limitation is overcome by using domain decomposition and an additional GPU. As a practical application, this code is used to study the current design of the ITER Low Field Side Reflectometer (LSFR) for the Equatorial Port Plug 11 (EPP11). A detailed examination of Gaussian beam propagation in the ITER edge plasma will be presented, as well as comparisons with ray tracing. This work was made possible by funding from the Department of Energy for the Summer Undergraduate Laboratory Internship (SULI) program. This work is supported by the US DOE Contract No. DE-AC02-09CH11466 and DE-FG02-99-ER54527.
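
    For readers unfamiliar with the numerical scheme mentioned, the sketch below shows a minimal 1-D vacuum FDTD (Yee leapfrog) update in Python. It only illustrates the time-stepping idea; the cold-plasma current terms, 3-D geometry, GPU parallelization, and domain decomposition of the actual code are omitted, and all parameters are arbitrary.

```python
import numpy as np

# Minimal 1-D vacuum FDTD (Yee leapfrog) in normalized units (c = 1, dx = 1).
nx, nt, courant = 400, 600, 0.5
ez = np.zeros(nx)          # E_z at integer grid points
hy = np.zeros(nx - 1)      # H_y at half-integer grid points

for n in range(nt):
    hy += courant * np.diff(ez)                       # update H from curl of E
    ez[1:-1] += courant * np.diff(hy)                 # update E from curl of H
    ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)    # soft Gaussian source

print("peak |Ez| after propagation:", np.abs(ez).max())
```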

  8. SPAN: Ocean science

    NASA Technical Reports Server (NTRS)

    Thomas, Valerie L.; Koblinsky, Chester J.; Webster, Ferris; Zlotnicki, Victor; Green, James L.

    1987-01-01

    The Space Physics Analysis Network (SPAN) is a multi-mission, correlative data comparison network which links space and Earth science research and data analysis computers. It provides a common working environment for sharing computer resources, sharing computer peripherals, solving proprietary problems, and providing the potential for significant time and cost savings for correlative data analysis. This is one of a series of discipline-specific SPAN documents which are intended to complement the SPAN primer and SPAN Management documents. Their purpose is to provide the discipline scientists with a comprehensive set of documents to assist in the use of SPAN for discipline specific scientific research.

  9. Secure distributed genome analysis for GWAS and sequence comparison computation.

    PubMed

    Zhang, Yihua; Blanton, Marina; Almashaqbeh, Ghada

    2015-01-01

    The rapid increase in the availability and volume of genomic data makes significant advances in biomedical research possible, but sharing of genomic data poses challenges due to the highly sensitive nature of such data. To address the challenges, a competition for secure distributed processing of genomic data was organized by the iDASH research center. In this work we propose techniques for securing computation with real-life genomic data for minor allele frequency and chi-squared statistics computation, as well as distance computation between two genomic sequences, as specified by the iDASH competition tasks. We put forward novel optimizations, including a generalization of a version of mergesort, which might be of independent interest. We provide implementation results of our techniques based on secret sharing that demonstrate practicality of the suggested protocols and also report on performance improvements due to our optimization techniques. This work describes our techniques, findings, and experimental results developed and obtained as part of iDASH 2015 research competition to secure real-life genomic computations and shows feasibility of securely computing with genomic data in practice.
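
    The statistics named in the task description have simple cleartext forms; the sketch below computes a minor allele frequency and an allelic chi-squared statistic from assumed genotype counts. It shows only the plain computations, not the secret-sharing protocols that are the paper's contribution.

```python
def minor_allele_frequency(n_aa, n_ab, n_bb):
    """MAF from genotype counts (AA, AB, BB) at one biallelic site."""
    a_alleles = 2 * n_aa + n_ab
    b_alleles = 2 * n_bb + n_ab
    return min(a_alleles, b_alleles) / (a_alleles + b_alleles)

def allelic_chi_squared(case_counts, control_counts):
    """2x2 allelic chi-squared statistic from (AA, AB, BB) counts in cases and controls."""
    def allele_counts(n_aa, n_ab, n_bb):
        return 2 * n_aa + n_ab, 2 * n_bb + n_ab
    a1, b1 = allele_counts(*case_counts)
    a2, b2 = allele_counts(*control_counts)
    n = a1 + b1 + a2 + b2
    chi2 = 0.0
    for observed, row, col in [(a1, a1 + b1, a1 + a2), (b1, a1 + b1, b1 + b2),
                               (a2, a2 + b2, a1 + a2), (b2, a2 + b2, b1 + b2)]:
        expected = row * col / n            # expected count under independence
        chi2 += (observed - expected) ** 2 / expected
    return chi2

# hypothetical genotype counts
print(minor_allele_frequency(120, 60, 20))
print(allelic_chi_squared((120, 60, 20), (100, 80, 20)))
```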

  10. Secure distributed genome analysis for GWAS and sequence comparison computation

    PubMed Central

    2015-01-01

    Background The rapid increase in the availability and volume of genomic data makes significant advances in biomedical research possible, but sharing of genomic data poses challenges due to the highly sensitive nature of such data. To address the challenges, a competition for secure distributed processing of genomic data was organized by the iDASH research center. Methods In this work we propose techniques for securing computation with real-life genomic data for minor allele frequency and chi-squared statistics computation, as well as distance computation between two genomic sequences, as specified by the iDASH competition tasks. We put forward novel optimizations, including a generalization of a version of mergesort, which might be of independent interest. Results We provide implementation results of our techniques based on secret sharing that demonstrate practicality of the suggested protocols and also report on performance improvements due to our optimization techniques. Conclusions This work describes our techniques, findings, and experimental results developed and obtained as part of iDASH 2015 research competition to secure real-life genomic computations and shows feasibility of securely computing with genomic data in practice. PMID:26733307

  11. A Methodology for Model Comparison Using the Theater Simulation of Airbase Resources and All Mobile Tactical Air Force Models

    DTIC Science & Technology

    1992-09-01

    ease with which a model is employed may depend on several factors, among them the users’ past experience in modeling, preferences for menu driven ... partially on our knowledge of important logistics factors, partially on the past work of Diener (12), and partially on the assumption that comparison of ... flexibility in output report selection. The minimum output was used in each instance to conserve computer storage and to minimize the consumption of paper

  12. Acceleration of Binding Site Comparisons by Graph Partitioning.

    PubMed

    Krotzky, Timo; Klebe, Gerhard

    2015-08-01

    The comparison of protein binding sites is a prominent task in computational chemistry and has been studied in many different ways. For the automatic detection and comparison of putative binding cavities the Cavbase system has been developed, which uses a coarse-grained set of pseudocenters to represent the physicochemical properties of a binding site and employs a graph-based procedure to calculate similarities between two binding sites. However, the comparison of two graphs is computationally quite demanding, which makes large-scale studies such as the rapid screening of entire databases hardly feasible. In a recent work, we proposed the method Local Cliques (LC) for the efficient comparison of Cavbase binding sites. It employs a clique heuristic to detect the maximum common subgraph of two binding sites and an extended graph model to additionally compare the shape of individual surface patches. In this study, we present an alternative to further accelerate the LC method by partitioning the binding-site graphs into disjoint components prior to their comparisons. The pseudocenter sets are split with regard to their assigned physicochemical type, which leads to seven much smaller graphs than the original one. Applying this approach to the same test scenarios as in the former, comprehensive approach results in a significant speed-up without sacrificing accuracy. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
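
    The central idea, splitting the pseudocenter set by physicochemical type and comparing much smaller per-type structures, can be illustrated with a toy sketch. The type labels, the point-set representation, and the crude per-type similarity score below are placeholders and do not reflect the Cavbase/LC graph matching.

```python
from collections import defaultdict
import math

# Placeholder pseudocenter type labels (not the actual Cavbase set).
TYPES = ["donor", "acceptor", "donor_acceptor", "aromatic",
         "aliphatic", "pi", "metal"]

def partition_by_type(pseudocenters):
    """Split a list of (type, (x, y, z)) pseudocenters into per-type sub-sets."""
    parts = defaultdict(list)
    for t, xyz in pseudocenters:
        parts[t].append(xyz)
    return parts

def toy_type_similarity(points_a, points_b, tol=1.0):
    """Crude per-type score: fraction of centers in A with a partner in B within tol."""
    if not points_a or not points_b:
        return 0.0
    hits = sum(1 for a in points_a
               if any(math.dist(a, b) <= tol for b in points_b))
    return hits / max(len(points_a), len(points_b))

def compare_sites(site_a, site_b):
    """Compare two binding sites type by type instead of as one large graph."""
    parts_a, parts_b = partition_by_type(site_a), partition_by_type(site_b)
    return {t: toy_type_similarity(parts_a.get(t, []), parts_b.get(t, []))
            for t in TYPES}

site1 = [("donor", (0.0, 0.0, 0.0)), ("aromatic", (3.0, 1.0, 0.0))]
site2 = [("donor", (0.2, 0.1, 0.0)), ("aromatic", (6.0, 1.0, 0.0))]
print(compare_sites(site1, site2))
```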

  13. Current Grid Generation Strategies and Future Requirements in Hypersonic Vehicle Design, Analysis and Testing

    NASA Technical Reports Server (NTRS)

    Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)

    1998-01-01

    Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. In strategizing topological constructs and blocking structures, the factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results of the aeroheating environment experienced by the X-38 vehicle. Special topics on grid generation strategies are also addressed, including the modeling of control surface deflections and material mapping.

  14. A Comparison of Three Navier-Stokes Solvers for Exhaust Nozzle Flowfields

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Yoder, Dennis A.; Debonis, James R.

    1999-01-01

    A comparison of the NPARC, PAB, and WIND (previously known as NASTD) Navier-Stokes solvers is made for two flow cases with turbulent mixing as the dominant flow characteristic, a two-dimensional ejector nozzle and a Mach 1.5 elliptic jet. The objective of the work is to determine if comparable predictions of nozzle flows can be obtained from different Navier-Stokes codes employed in a multiple site research program. A single computational grid was constructed for each of the two flows and used for all of the Navier-Stokes solvers. In addition, similar k-e based turbulence models were employed in each code, and boundary conditions were specified as similarly as possible across the codes. Comparisons of mass flow rates, velocity profiles, and turbulence model quantities are made between the computations and experimental data. The computational cost of obtaining converged solutions with each of the codes is also documented. Results indicate that all of the codes provided similar predictions for the two nozzle flows. Agreement of the Navier-Stokes calculations with experimental data was good for the ejector nozzle. However, for the Mach 1.5 elliptic jet, the calculations were unable to accurately capture the development of the three dimensional elliptic mixing layer.

  15. The impact of computer display height and desk design on 3D posture during information technology work by young adults.

    PubMed

    Straker, L; Burgess-Limerick, R; Pollock, C; Murray, K; Netto, K; Coleman, J; Skoss, R

    2008-04-01

    Computer display height and desk design to allow forearm support are two critical design features of workstations for information technology tasks. However there is currently no 3D description of head and neck posture with different computer display heights and no direct comparison to paper based information technology tasks. There is also inconsistent evidence on the effect of forearm support on posture and no evidence on whether these features interact. This study compared the 3D head, neck and upper limb postures of 18 male and 18 female young adults whilst working with different display and desk design conditions. There was no substantial interaction between display height and desk design. Lower display heights increased head and neck flexion with more spinal asymmetry when working with paper. The curved desk, designed to provide forearm support, increased scapula elevation/protraction and shoulder flexion/abduction.

  16. Development of flight testing techniques

    NASA Technical Reports Server (NTRS)

    Sandlin, D. R.

    1984-01-01

    A list of students involved in research on flight analysis and development is given along with abstracts of their work. The following is a listing of the titles of each work: Longitudinal stability and control derivatives obtained from flight data of a PA-30 aircraft; Aerodynamic drag reduction tests on a box shaped vehicle; A microprocessor based anti-aliasing filter for a PCM system; Flutter prediction of a wing with active aileron control; Comparison of theoretical and flight measured local flow aerodynamics for a low aspect ratio fin; In flight thrust determination on a real time basis; A comparison of computer generated lift and drag polars for a Wortmann airfoil to flight and wind tunnel results; and Deep stall flight testing of the NASA SGS 1-36.

  17. Magneto-optical Phase Transition in a Nanostructured Co/Pd Thin Film

    NASA Astrophysics Data System (ADS)

    Nwokoye, Chidubem; Bennett, Lawrence; Della Torre, Edward; Siddique, Abid; Zhang, Ming; Wagner, Michael; Narducci, Frank

    Interest in the study of magnetism in nanostructures at low temperatures is growing. We report work that extends earlier magnetics experiments that studied Bose-Einstein Condensation (BEC) of magnons in confined nanostructures. We report experimental investigation of the magneto-optical properties, influenced by photon-magnon interactions, of a Co/Pd thin film below and above the magnon BEC temperature. Comparison of results from SQUID and MOKE experiments revealed a phase transition temperature in both magnetic and magneto-optical properties of the material that is attributed to the magnon BEC. Recent research in magnonics has provided a realization scheme for developing magnon BEC qubit gates for a quantum computing processor. Future research work will explore this technology and find ways to apply quantum computing to address some computational challenges in communication systems. We acknowledge financial support from the Naval Air Systems Command Section 219 grant.

  18. Model Comparison for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Moses, Gregory; Chenhall, Jeffrey; Cao, Duc; Delettrez, Jacques

    2015-11-01

    Four electron thermal transport models are compared for their ability to accurately and efficiently model non-local behavior in ICF simulations. Goncharov's transport model has accurately predicted shock timing in implosion simulations but is computationally slow and limited to 1D. The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. uses multigroup diffusion to speed up the calculation. Chenhall has extended the iSNB diffusion model to a higher-order simplified P3 approximation and a Monte Carlo transport model, to bridge the gap between the iSNB and Goncharov models while maintaining computational efficiency. Comparisons of the above models for several test problems will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  19. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  20. Parallelization of the FLAPW method and comparison with the PPW method

    NASA Astrophysics Data System (ADS)

    Canning, Andrew; Mannstadt, Wolfgang; Freeman, Arthur

    2000-03-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. In the past the FLAPW method has been limited to systems of about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell running on up to 512 processors on a Cray T3E parallel supercomputer. Some results will also be presented on a comparison of the plane-wave pseudopotential method and the FLAPW method on large systems.

  1. On the laminar-turbulent transition in the boundary layer of streamwise corner

    NASA Astrophysics Data System (ADS)

    Kirilovskiy, S. V.; Boiko, A. V.; Poplavskaya, T. V.

    2017-10-01

    The work is aimed at developing methods of numerical simulation of incompressible non-symmetric flow in a streamwise corner by solving the Navier-Stokes equations with ANSYS Fluent and the self-similar equations of boundary-layer type. A comparison of the computations with each other and with experimental data is provided.

  2. Promoting Continuing Computer Science Education through a Massively Open Online Course

    ERIC Educational Resources Information Center

    Oliver, Kevin

    2016-01-01

    This paper presents the results of a comparison study between graduate students taking a software security course at an American university and international working professionals taking a version of the same course online through a free massive open online course (MOOC) created in the Google CourseBuilder learning environment. A goal of the study…

  3. Microstructure-Based Computational Modeling of Mechanical Behavior of Polymer Micro/Nano Composites

    DTIC Science & Technology

    2013-12-01

    [The retrieved snippet consists of figure-list fragments from the report: Fig. 5.11, comparison between experimental data and calibrated numerical models for displacement-control tests; damage results for all mesh densities, for both work-conjugate and non-work-conjugate formulations; Fig. 9.3, damage under large-deformation experimental tests, accepting the non-uniformity of the strain field.]

  4. A comparison of approaches for finding minimum identifying codes on graphs

    NASA Astrophysics Data System (ADS)

    Horan, Victoria; Adachi, Steve; Bak, Stanley

    2016-05-01

    In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard and the computational complexity makes this research approach difficult using a standard brute force approach on a typical computer. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, a variety of methods are explored and consist of a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and lastly using satisfiability modulo theory (SMT) and corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.
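
    For reference, a subset C of vertices is an identifying code if, for every vertex v, the intersection of v's closed neighborhood with C is nonempty and distinct from that of every other vertex. The brute-force search below illustrates the definition on a tiny graph in plain Python; it is not the MATLAB, D-Wave, or SMT formulation used in the paper and scales only to very small instances.

```python
from itertools import combinations

def closed_neighborhood(graph, v):
    """Closed neighborhood N[v] = {v} plus v's neighbors."""
    return frozenset(graph[v]) | {v}

def is_identifying_code(graph, code):
    """Check that N[v] & code is nonempty and unique for every vertex v."""
    signatures = {}
    for v in graph:
        sig = closed_neighborhood(graph, v) & code
        if not sig or sig in signatures.values():
            return False
        signatures[v] = sig
    return True

def minimum_identifying_code(graph):
    """Brute-force smallest identifying code (exponential; tiny graphs only)."""
    vertices = list(graph)
    for k in range(1, len(vertices) + 1):
        for subset in combinations(vertices, k):
            if is_identifying_code(graph, frozenset(subset)):
                return set(subset)
    return None   # e.g. graphs with twin vertices admit no identifying code

# 5-cycle as a toy example
cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(minimum_identifying_code(cycle5))
```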

  5. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU and memory intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector Supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 Supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  6. Three-dimensional inviscid analysis of radial-turbine flow and a limited comparison with experimental data

    NASA Technical Reports Server (NTRS)

    Choo, Y. K.; Civinskas, K. C.

    1985-01-01

    The three-dimensional inviscid DENTON code is used to analyze flow through a radial-inflow turbine rotor. Experimental data from the rotor are compared with analytical results obtained by using the code. The experimental data available for comparison are the radial distributions of circumferentially averaged values of absolute flow angle and total pressure downstream of the rotor exit. The computed rotor-exit flow angles are generally underturned relative to the experimental values, which reflect the boundary-layer separation at the trailing edge and the development of wakes downstream of the rotor. The experimental rotor is designed for a higher-than-optimum work factor of 1.126 resulting in a nonoptimum positive incidence and causing a region of rapid flow adjustment and large velocity gradients. For this experimental rotor, the computed radial distribution of rotor-exit to turbine-inlet total pressure ratios are underpredicted due to the errors in the finite-difference approximations in the regions of rapid flow adjustment, and due to using the relatively coarser grids in the middle of the blade region where the flow passage is highly three-dimensional. Additional results obtained from the three-dimensional inviscid computation are also presented, but without comparison due to the lack of experimental data. These include quasi-secondary velocity vectors on cross-channel surfaces, velocity components on the meridional and blade-to-blade surfaces, and blade surface loading diagrams. Computed results show the evolution of a passage vortex and large streamline deviations from the computational streamwise grid lines. Experience gained from applying the code to a radial turbine geometry is also discussed.

  7. Three-dimensional inviscid analysis of radial turbine flow and a limited comparison with experimental data

    NASA Technical Reports Server (NTRS)

    Choo, Y. K.; Civinskas, K. C.

    1985-01-01

    The three-dimensional inviscid DENTON code is used to analyze flow through a radial-inflow turbine rotor. Experimental data from the rotor are compared with analytical results obtained by using the code. The experimental data available for comparison are the radial distributions of circumferentially averaged values of absolute flow angle and total pressure downstream of the rotor exit. The computed rotor-exit flow angles are generally underturned relative to the experimental values, which reflect the boundary-layer separation at the trailing edge and the development of wakes downstream of the rotor. The experimental rotor is designed for a higher-than-optimum work factor of 1.126 resulting in a nonoptimum positive incidence and causing a region of rapid flow adjustment and large velocity gradients. For this experimental rotor, the computed radial distribution of rotor-exit to turbine-inlet total pressure ratios are underpredicted due to the errors in the finite-difference approximations in the regions of rapid flow adjustment, and due to using the relatively coarser grids in the middle of the blade region where the flow passage is highly three-dimensional. Additional results obtained from the three-dimensional inviscid computation are also presented, but without comparison due to the lack of experimental data. These include quasi-secondary velocity vectors on cross-channel surfaces, velocity components on the meridional and blade-to-blade surfaces, and blade surface loading diagrams. Computed results show the evolution of a passage vortex and large streamline deviations from the computational streamwise grid lines. Experience gained from applying the code to a radial turbine geometry is also discussed.

  8. Evidence-based ergonomics. A comparison of Japanese and American office layouts.

    PubMed

    Noro, Kageyu; Fujimaki, Goroh; Kishi, Shinsuke

    2003-01-01

    There is a variety of alternatives in office layouts. Yet the theoretical basis and criteria for predicting how well these layouts accommodate employees are poorly understood. The objective of this study was to evaluate criteria for selecting office layouts. Intensive computer workers worked in simulated office layouts in a controlled experimental laboratory. Eye movement measures indicate that knowledge work requires both concentration and interaction. Findings pointed to one layout as providing optimum balance between these 2 requirements. Recommendations for establishing a theoretical basis and design criteria for selecting office layouts based on work style are suggested.

  9. Evaluation of a modified Fitts law brain-computer interface target acquisition task in able and motor disabled individuals

    NASA Astrophysics Data System (ADS)

    Felton, E. A.; Radwin, R. G.; Wilson, J. A.; Williams, J. C.

    2009-10-01

    A brain-computer interface (BCI) is a communication system that takes recorded brain signals and translates them into real-time actions, in this case movement of a cursor on a computer screen. This work applied Fitts' law to the evaluation of performance on a target acquisition task during sensorimotor rhythm-based BCI training. Fitts' law, which has been used as a predictor of movement time in studies of human movement, was used here to determine the information transfer rate, which was based on target acquisition time and target difficulty. The information transfer rate was used to make comparisons between control modalities and subject groups on the same task. Data were analyzed from eight able-bodied and five motor disabled participants who wore an electrode cap that recorded and translated their electroencephalogram (EEG) signals into computer cursor movements. Direct comparisons were made between able-bodied and disabled subjects, and between EEG and joystick cursor control in able-bodied subjects. Fitts' law aptly described the relationship between movement time and index of difficulty for each task movement direction when evaluated separately and averaged together. This study showed that Fitts' law can be successfully applied to computer cursor movement controlled by neural signals.
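
    As a reminder of the quantities involved, Fitts' law treats movement time as a linear function of an index of difficulty, and the information transfer rate (throughput) is the ratio of difficulty to acquisition time. The sketch below uses the Shannon formulation of the index of difficulty; whether the study used this exact formulation, and the trial values shown, are assumptions.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation: ID = log2(D / W + 1), in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time_s):
    """Information transfer rate in bits per second: ID / MT."""
    return index_of_difficulty(distance, width) / movement_time_s

# hypothetical cursor trials: (distance to target, target width, acquisition time in s)
trials = [(400, 50, 1.9), (400, 25, 2.6), (200, 50, 1.2)]
rates = [throughput(d, w, t) for d, w, t in trials]
print("mean throughput (bits/s):", sum(rates) / len(rates))
```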

  10. Comparison of Monoenergetic Photon Organ Dose Rate Coefficients for the Female Stylized and Voxel Phantoms Submerged in Air

    DOE PAGES

    Hiller, Mauritius; Dewji, Shaheen Azim

    2017-02-16

    Dose rate coefficients computed using the International Commission on Radiological Protection (ICRP) reference adult female voxel phantom were compared with values computed using the Oak Ridge National Laboratory (ORNL) adult female stylized phantom in an air submersion exposure geometry. This is a continuation of previous work comparing monoenergetic organ dose rate coefficients for the male adult phantoms. With both the male and female data computed, effective dose rate as defined by ICRP Publication 103 was compared for both phantoms. Organ dose rate coefficients for the female phantom and ratios of organ dose rates for the voxel and stylized phantoms are provided in the energy range from 30 keV to 5 MeV. Analysis of the contribution of the organs to effective dose is also provided. Lastly, comparison of effective dose rates between the voxel and stylized phantoms was within 8% at 100 keV and <5% between 200 and 5000 keV.

  11. Comparison of Monoenergetic Photon Organ Dose Rate Coefficients for the Female Stylized and Voxel Phantoms Submerged in Air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiller, Mauritius; Dewji, Shaheen Azim

    Dose rate coefficients computed using the International Commission on Radiological Protection (ICRP) reference adult female voxel phantom were compared with values computed using the Oak Ridge National Laboratory (ORNL) adult female stylized phantom in an air submersion exposure geometry. This is a continuation of previous work comparing monoenergetic organ dose rate coefficients for the male adult phantoms. With both the male and female data computed, effective dose rate as defined by ICRP Publication 103 was compared for both phantoms. Organ dose rate coefficients for the female phantom and ratios of organ dose rates for the voxel and stylized phantoms are provided in the energy range from 30 keV to 5 MeV. Analysis of the contribution of the organs to effective dose is also provided. Lastly, comparison of effective dose rates between the voxel and stylized phantoms was within 8% at 100 keV and <5% between 200 and 5000 keV.

  12. Differences between postmortem computed tomography and conventional autopsy in a stabbing murder case

    PubMed Central

    Zerbini, Talita; da Silva, Luiz Fernando Ferraz; Ferro, Antonio Carlos Gonçalves; Kay, Fernando Uliana; Junior, Edson Amaro; Pasqualucci, Carlos Augusto Gonçalves; do Nascimento Saldiva, Paulo Hilario

    2014-01-01

    OBJECTIVE: The aim of the present work is to analyze the differences and similarities between the elements of a conventional autopsy and images obtained from postmortem computed tomography in a case of a homicide stab wound. METHOD: Comparison between the findings of different methods: autopsy and postmortem computed tomography. RESULTS: In some aspects, autopsy is still superior to imaging, especially in relation to external examination and the description of lesion vitality. However, the findings of gas embolism, pneumothorax and pulmonary emphysema and the relationship between the internal path of the instrument of aggression and the entry wound are better demonstrated by postmortem computed tomography. CONCLUSIONS: Although multislice computed tomography has greater accuracy than autopsy, we believe that the conventional autopsy method is fundamental for providing evidence in criminal investigations. PMID:25518020

  13. NASA Rotor 37 CFD Code Validation: Glenn-HT Code

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2010-01-01

    In order to advance the goals of NASA aeronautics programs, it is necessary to continuously evaluate and improve the computational tools used for research and design at NASA. One such code is the Glenn-HT code which is used at NASA Glenn Research Center (GRC) for turbomachinery computations. Although the code has been thoroughly validated for turbine heat transfer computations, it has not been utilized for compressors. In this work, Glenn-HT was used to compute the flow in a transonic compressor and comparisons were made to experimental data. The results presented here are in good agreement with this data. Most of the measures of performance are well within the measurement uncertainties and the exit profiles of interest agree with the experimental measurements.

  14. Folding Proteins at 500 ns/hour with Work Queue.

    PubMed

    Abdul-Wahid, Badi'; Yu, Li; Rajan, Dinesh; Feng, Haoyun; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A

    2012-10-01

    Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods is proposed that requires only a large number of short calculations, and for which minimal communication between computer nodes is required. We considered one of the more accurate variants called Accelerated Weighted Ensemble Dynamics (AWE) and for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, grids, on multiple architectures (CPU/GPU, 32/64bit), and in a dynamic environment in which processes are regularly added or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour.
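
    Work Queue is a task-farming framework from the cctools suite; rather than reproduce its API from memory, the sketch below uses Python's standard concurrent.futures as a stand-in to show the same master/worker pattern the abstract describes: many short, independent simulation segments farmed out and aggregated as they complete. The run_segment function, segment count, and worker count are purely illustrative.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed
import random

def run_segment(segment_id, length_ns=0.01):
    """Stand-in for one short MD segment; returns (segment_id, sampled ns).
    A real AWE worker would launch an MD engine here and return trajectory data."""
    random.seed(segment_id)
    return segment_id, length_ns * random.uniform(0.9, 1.1)

if __name__ == "__main__":
    n_segments, total_ns = 200, 0.0
    with ProcessPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(run_segment, i) for i in range(n_segments)]
        for fut in as_completed(futures):
            seg, ns = fut.result()
            total_ns += ns          # master aggregates results as workers finish
    print(f"aggregate sampling from {n_segments} short segments: {total_ns:.3f} ns")
```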

  15. Folding Proteins at 500 ns/hour with Work Queue

    PubMed Central

    Abdul-Wahid, Badi’; Yu, Li; Rajan, Dinesh; Feng, Haoyun; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A.

    2014-01-01

    Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods is proposed that requires only a large number of short calculations, and for which minimal communication between computer nodes is required. We considered one of the more accurate variants called Accelerated Weighted Ensemble Dynamics (AWE) and for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, grids, on multiple architectures (CPU/GPU, 32/64bit), and in a dynamic environment in which processes are regularly added or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour. PMID:25540799

  16. A Comparison, for Teaching Purposes, of Three Data-Acquisition Systems for the Macintosh.

    ERIC Educational Resources Information Center

    Swanson, Harold D.

    1990-01-01

    Three commercial products for data acquisition with the Macintosh computer, known by the trade names of LabVIEW, Analog Connection WorkBench, and MacLab were reviewed and compared, on the basis of actual trials, for their suitability in physiological and biological teaching laboratories. Suggestions for using these software packages are provided.…

  17. Comparisons of Particle Tracking Techniques and Galerkin Finite Element Methods in Flow Simulations on Watershed Scales

    NASA Astrophysics Data System (ADS)

    Shih, D.; Yeh, G.

    2009-12-01

    This paper applies two numerical approximations, the particle tracking technique and the Galerkin finite element method, to solve the diffusive wave equation in both one-dimensional and two-dimensional flow simulations. The finite element method is one of the most commonly used approaches in numerical problems. It can obtain accurate solutions, but calculation times may be rather extensive. The particle tracking technique, using either single-velocity or average-velocity tracks to efficiently perform advective transport, could use larger time-step sizes than the finite element method to significantly save computational time. Comparisons of the alternative approximations are examined in this poster. We use the WASH123D model to carry out this work. WASH123D is an integrated multimedia, multi-process, physics-based computational model suitable for various spatial-temporal scales; it was first developed by Yeh et al. in 1998. The model has evolved in design capability and flexibility, and has been used for model calibrations and validations over the course of many years. In order to deliver a local hydrological model for Taiwan, the Taiwan Typhoon and Flood Research Institute (TTFRI) is working with Prof. Yeh to develop the next version of WASH123D. So, the work of our preliminary cooperation is also sketched in this poster.

  18. A new graph-based method for pairwise global network alignment

    PubMed Central

    Klau, Gunnar W

    2009-01-01

    Background In addition to component-based comparative approaches, network alignments provide the means to study conserved network topology such as common pathways and more complex network motifs. Yet, unlike in classical sequence alignment, the comparison of networks becomes computationally more challenging, as most meaningful assumptions instantly lead to NP-hard problems. Most previous algorithmic work on network alignments is heuristic in nature. Results We introduce the graph-based maximum structural matching formulation for pairwise global network alignment. We relate the formulation to previous work and prove NP-hardness of the problem. Based on the new formulation we build upon recent results in computational structural biology and present a novel Lagrangian relaxation approach that, in combination with a branch-and-bound method, computes provably optimal network alignments. The Lagrangian algorithm alone is a powerful heuristic method, which produces solutions that are often near-optimal and – unlike those computed by pure heuristics – come with a quality guarantee. Conclusion Computational experiments on the alignment of protein-protein interaction networks and on the classification of metabolic subnetworks demonstrate that the new method is reasonably fast and has advantages over pure heuristics. Our software tool is freely available as part of the LISA library. PMID:19208162

  19. Distributed Coordinated Control of Large-Scale Nonlinear Networks

    DOE PAGES

    Kundu, Soumya; Anghel, Marian

    2015-11-08

    We provide a distributed coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems by using a vector Lyapunov functions approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based, completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.

  20. The effectiveness of various computer-based interventions for patients with chronic pain or functional somatic syndromes: A systematic review and meta-analysis.

    PubMed

    Vugts, Miel A P; Joosen, Margot C W; van der Geer, Jessica E; Zedlitz, Aglaia M E E; Vrijhoef, Hubertus J M

    2018-01-01

    Computer-based interventions target improvement of physical and emotional functioning in patients with chronic pain and functional somatic syndromes. However, it is unclear to what extent which interventions work and for whom. This systematic review and meta-analysis (registered at PROSPERO, 2016: CRD42016050839) assesses efficacy relative to passive and active control conditions, and explores patient and intervention factors. Controlled studies were identified from MEDLINE, EMBASE, PsychInfo, Web of Science, and Cochrane Library. Pooled standardized mean differences by comparison type, and somatic symptom, health-related quality of life, functional interference, catastrophizing, and depression outcomes were calculated at post-treatment and at 6 or more months follow-up. Risk of bias was assessed. Sub-group analyses were performed by patient and intervention characteristics when heterogeneous outcomes were observed. At most, 30 out of 46 eligible studies and 3,387 participants were included per meta-analysis. Mostly, internet-based cognitive behavioral therapies were identified. Significantly higher patient-reported outcomes were found in comparisons with passive control groups (standardized mean differences ranged between -.41 and -.18), but not in comparisons with active control groups (SMD = -.26 to -.14). For some outcomes, significant heterogeneity related to patient and intervention characteristics. To conclude, there is a minority of good quality evidence for small positive average effects of computer-based (cognitive) behavior change interventions, similar to traditional modes. These effects may be sustainable. Indications were found as to which interventions work better or more consistently across outcomes, and for which patients. Future process analyses are recommended with the aim of better understanding individual chances of clinically relevant outcomes.

  1. Comparison between various patch wise strategies for reconstruction of ultra-spectral cubes captured with a compressive sensing system

    NASA Astrophysics Data System (ADS)

    Oiknine, Yaniv; August, Isaac Y.; Revah, Liat; Stern, Adrian

    2016-05-01

    Recently we introduced a Compressive Sensing Miniature Ultra-Spectral Imaging (CS-MUSI) system. The system is based on a single Liquid Crystal (LC) cell and a parallel sensor array, where the liquid crystal cell performs spectral encoding. Within the framework of compressive sensing, the CS-MUSI system is able to reconstruct ultra-spectral cubes from only ~10% of the samples required by a conventional system. Despite the compression, the technique is computationally very demanding, because reconstruction of ultra-spectral images requires processing huge data cubes of Gigavoxel size. Fortunately, the computational effort can be alleviated by using separable operations. An additional way to reduce the reconstruction effort is to perform the reconstructions on patches. In this work, we consider processing on various patch shapes. We present an experimental comparison between various patch shapes chosen to process the ultra-spectral data captured with the CS-MUSI system. The patches may be one-dimensional (1D), for which the reconstruction is carried out spatially pixel-wise; two-dimensional (2D), working on spatial rows/columns of the ultra-spectral cube; or three-dimensional (3D).

  2. An improved exploratory search technique for pure integer linear programming problems

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1990-01-01

    The development is documented of a heuristic method for the solution of pure integer linear programming problems. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since this method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions to a given set of constraints, it facilitates significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighted scheme for comparing computational effort involved in an algorithm, a comparison of this algorithm is made to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC compatible Pascal, is also presented and discussed.
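
    A stripped-down illustration of this kind of procedure is sketched below: round the LP-relaxation solution, then repeatedly try unit moves on each variable, keeping feasible moves that improve the objective. This is a generic neighborhood-search sketch, not Fogle's Pascal implementation; the repair step and stopping rule are assumptions.

```python
import numpy as np

def feasible(A, b, x):
    """Check A @ x <= b and x >= 0 for a maximization problem in <= form."""
    return np.all(A @ x <= b + 1e-9) and np.all(x >= 0)

def exploratory_ilp_search(c, A, b, x_relaxed, max_passes=50):
    """Round the LP-relaxation solution, then repeat +/-1 moves on each variable,
    accepting any feasible move that improves c @ x (a simple neighborhood search)."""
    x = np.round(x_relaxed).astype(int)
    if not feasible(A, b, x):               # crude repair: step variables down until feasible
        while not feasible(A, b, x) and x.sum() > 0:
            x[np.argmax(x)] -= 1
    best = c @ x
    for _ in range(max_passes):
        improved = False
        for i in range(len(x)):
            for step in (+1, -1):
                trial = x.copy()
                trial[i] += step
                if feasible(A, b, trial) and c @ trial > best:
                    x, best, improved = trial, c @ trial, True
        if not improved:
            break
    return x, best

# toy problem: maximize 5x + 4y  s.t.  6x + 4y <= 24, x + 2y <= 6, x, y >= 0
c = np.array([5, 4])
A = np.array([[6, 4], [1, 2]])
b = np.array([24, 6])
print(exploratory_ilp_search(c, A, b, x_relaxed=np.array([3.0, 1.5])))
```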

  3. Automatic violence detection in digital movies

    NASA Astrophysics Data System (ADS)

    Fischer, Stephan

    1996-11-01

    Research on computer-based recognition of violence is scant. We are working on the automatic recognition of violence in digital movies, a first step towards the goal of a computer-assisted system capable of protecting children against TV programs containing a great deal of violence. In the video domain a collision detection and a model-mapping to locate human figures are run, while the creation and comparison of fingerprints to find certain events are run in the audio domain. This article centers on the recognition of fist-fights in the video domain and on the recognition of shots, explosions and cries in the audio domain.

  4. Baseline performance of the GPU 3 Stirling engine

    NASA Technical Reports Server (NTRS)

    Thieme, L. G.; Tew, R. C., Jr.

    1978-01-01

    A 10 horsepower single-cylinder rhombic-drive Stirling engine was converted to a research configuration to obtain data for validation of Stirling computer simulations. The engine was originally built by General Motors Research Laboratories for the U.S. Army in 1965 as part of a 3 kW engine-generator set, designated the GPU 3 (Ground Power Unit). This report presents test results for a range of heater gas temperatures, mean compression-space pressures, and engine speeds with both helium and hydrogen as the working fluids. Also shown are initial data comparisons with computer simulation predictions.

  5. Advanced ballistic range technology

    NASA Technical Reports Server (NTRS)

    Yates, Leslie A.

    1993-01-01

    Optical images, such as experimental interferograms, schlieren, and shadowgraphs, are routinely used to identify and locate features in experimental flow fields and for validating computational fluid dynamics (CFD) codes. Interferograms can also be used for comparing experimental and computed integrated densities. By constructing these optical images from flow-field simulations, one-to-one comparisons of computation and experiment are possible. During the period from February 1, 1992, to November 30, 1992, work has continued on the development of CISS (Constructed Interferograms, Schlieren, and Shadowgraphs), a code that constructs images from ideal- and real-gas flow-field simulations. In addition, research connected with the automated film-reading system and the proposed reactivation of the radiation facility has continued.

  6. A comparison of methods for computing the sigma-coordinate pressure gradient force for flow over sloped terrain in a hybrid theta-sigma model

    NASA Technical Reports Server (NTRS)

    Johnson, D. R.; Uccellini, L. W.

    1983-01-01

    In connection with the employment of the sigma coordinates introduced by Phillips (1957), problems can arise regarding an accurate finite-difference computation of the pressure gradient force. Over steeply sloped terrain, the calculation of the sigma-coordinate pressure gradient force involves computing the difference between two large terms of opposite sign, which results in large truncation error. To reduce the truncation error, several finite-difference methods have been designed and implemented. The objective of the present investigation is to provide another method of computing the sigma-coordinate pressure gradient force. Phillips' method for the elimination of a hydrostatic component is applied to a flux formulation. The new technique is compared with four other methods for computing the pressure gradient force. The work is motivated by the desire to use an isentropic and sigma-coordinate hybrid model for experiments designed to study flow near mountainous terrain.
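
    For readers unfamiliar with the cancellation problem, the horizontal pressure gradient force in terrain-following coordinates is commonly written as the sum of two terms that are individually large and of opposite sign over steep slopes. A standard textbook form, assuming the simple coordinate sigma = p/p_s rather than the hybrid coordinate of the paper, is:

```latex
% Horizontal pressure gradient force on a sigma surface (sigma = p / p_s):
\begin{equation}
  -\nabla_p \Phi \;=\; -\nabla_\sigma \Phi \;-\; R T \,\nabla_\sigma \ln p_s .
\end{equation}
% Over steep terrain, \nabla_\sigma \Phi and R T \nabla_\sigma \ln p_s are both large
% and nearly cancel, so small relative errors in either finite-difference term
% produce large truncation errors in their sum.
```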

  7. Visual perception can account for the close relation between numerosity processing and computational fluency.

    PubMed

    Zhou, Xinlin; Wei, Wei; Zhang, Yiyun; Cui, Jiaxin; Chen, Chuansheng

    2015-01-01

    Studies have shown that numerosity processing (e.g., comparison of numbers of dots in two dot arrays) is significantly correlated with arithmetic performance. Researchers have attributed this association to the fact that both tasks share magnitude processing. The current investigation tested an alternative hypothesis, which states that visual perceptual ability (as measured by a figure-matching task) can account for the close relation between numerosity processing and arithmetic performance (computational fluency). Four hundred and twenty four third- to fifth-grade children (220 boys and 204 girls, 8.0-11.0 years old; 120 third graders, 146 fourth graders, and 158 fifth graders) were recruited from two schools (one urban and one suburban) in Beijing, China. Six classes were randomly selected from each school, and all students in each selected class participated in the study. All children were given a series of cognitive and mathematical tests, including numerosity comparison, figure matching, forward verbal working memory, visual tracing, non-verbal matrices reasoning, mental rotation, choice reaction time, arithmetic tests and curriculum-based mathematical achievement test. Results showed that figure-matching ability had higher correlations with numerosity processing and computational fluency than did other cognitive factors (e.g., forward verbal working memory, visual tracing, non-verbal matrix reasoning, mental rotation, and choice reaction time). More important, hierarchical multiple regression showed that figure matching ability accounted for the well-established association between numerosity processing and computational fluency. In support of the visual perception hypothesis, the results suggest that visual perceptual ability, rather than magnitude processing, may be the shared component of numerosity processing and arithmetic performance.

  8. Visual perception can account for the close relation between numerosity processing and computational fluency

    PubMed Central

    Zhou, Xinlin; Wei, Wei; Zhang, Yiyun; Cui, Jiaxin; Chen, Chuansheng

    2015-01-01

    Studies have shown that numerosity processing (e.g., comparison of numbers of dots in two dot arrays) is significantly correlated with arithmetic performance. Researchers have attributed this association to the fact that both tasks share magnitude processing. The current investigation tested an alternative hypothesis, which states that visual perceptual ability (as measured by a figure-matching task) can account for the close relation between numerosity processing and arithmetic performance (computational fluency). Four hundred and twenty four third- to fifth-grade children (220 boys and 204 girls, 8.0–11.0 years old; 120 third graders, 146 fourth graders, and 158 fifth graders) were recruited from two schools (one urban and one suburban) in Beijing, China. Six classes were randomly selected from each school, and all students in each selected class participated in the study. All children were given a series of cognitive and mathematical tests, including numerosity comparison, figure matching, forward verbal working memory, visual tracing, non-verbal matrices reasoning, mental rotation, choice reaction time, arithmetic tests and curriculum-based mathematical achievement test. Results showed that figure-matching ability had higher correlations with numerosity processing and computational fluency than did other cognitive factors (e.g., forward verbal working memory, visual tracing, non-verbal matrix reasoning, mental rotation, and choice reaction time). More important, hierarchical multiple regression showed that figure matching ability accounted for the well-established association between numerosity processing and computational fluency. In support of the visual perception hypothesis, the results suggest that visual perceptual ability, rather than magnitude processing, may be the shared component of numerosity processing and arithmetic performance. PMID:26441740

  9. Contribution of the computed tomography of the anatomical aspects of the sphenoid sinuses to forensic identification.

    PubMed

    Auffret, Mathieu; Garetier, Marc; Diallo, Idris; Aho, Serge; Ben Salem, Douraied

    2016-12-01

    Body identification is the cornerstone of forensic investigation. It can be performed using radiographic techniques, if antemortem images are available. This study was designed to assess the value of visual comparison of the computed tomography (CT) anatomical aspects of the sphenoid sinuses, in forensic individual identification, especially if antemortem dental records, fingerprints or DNA samples are not available. This retrospective work took place in a French university hospital. The supervisor of this study randomly selected from the picture archiving and communication system (PACS), 58 patients who underwent one (16 patients) or two (42 patients) head CT in various neurological contexts. To avoid bias, those studies were prepared (anonymized, and all the head structures but the sphenoid sinuses were excluded), and used to constitute two working lists of 50 (42+8) CT studies of the sphenoid sinuses. An anatomical classification system of the sphenoid sinuses anatomical variations was created based on the anatomical and surgical literature. In these two working lists, three blinded readers had to identify, using the anatomical system and subjective visual comparison, 42 pairs of matched studies, and 16 unmatched studies. Readers were blinded from the exact numbers of matching studies. Each reader correctly identified the 42 pairs of CT with a concordance of 100% [97.5% confidence interval: 91-100%], and the 16 unmatched CT with a concordance of 100% [97.5% confidence interval: 79-100%]. Overall accuracy was 100%. Our study shows that establishing the anatomical concordance of the sphenoid sinuses by visual comparison could be used in personal identification. This easy method, based on a frequently and increasingly prescribed exam, still needs to be assessed on a postmortem cohort. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  10. Theoretical Characterization of Visual Signatures (Muzzle Flash)

    NASA Astrophysics Data System (ADS)

    Kashinski, D. O.; Scales, A. N.; Vanderley, D. L.; Chase, G. M.; di Nallo, O. E.; Byrd, E. F. C.

    2014-05-01

    We are investigating the accuracy of theoretical models used to predict the visible, ultraviolet and infrared spectra of product materials ejected from the muzzle of currently fielded systems. Recent advances in solid propellants have made the management of muzzle signature (flash) a principal issue in weapons development across the calibers. A priori prediction of the electromagnetic spectra of formulations will allow researchers to tailor blends that yield desired signatures and determine spectrographic detection ranges. We are currently employing quantum chemistry methods at various levels of sophistication to optimize molecular geometries, compute vibrational frequencies, and determine the optical spectra of specific gas-phase molecules and radicals of interest. Electronic excitations are being computed using Time Dependent Density Functional Theory (TD-DFT). A comparison of computational results to experimental values found in the literature is used to assess the effect of basis set and functional choice on calculation accuracy. The current status of this work will be presented at the conference. Work supported by the ARL and USMA.

  11. Implementing Molecular Dynamics on Hybrid High Performance Computers - Particle-Particle Particle-Mesh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, W Michael; Kohlmeyer, Axel; Plimpton, Steven J

    The use of accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. In this paper, we present a continuation of previous work implementing algorithms for using accelerators in the LAMMPS molecular dynamics software for distributed memory parallel hybrid machines. In our previous work, we focused on acceleration for short-range models with an approach intended to harness the processing power of both the accelerator and (multi-core) CPUs. To augment the existing implementations, we present an efficient implementation of long-range electrostatic force calculation for molecular dynamics. Specifically, we present an implementation of the particle-particle particle-mesh method based on the work by Harvey and De Fabritiis. We present benchmark results on the Keeneland InfiniBand GPU cluster. We provide a performance comparison of the same kernels compiled with both CUDA and OpenCL. We discuss limitations to parallel efficiency and future directions for improving performance on hybrid or heterogeneous computers.
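
    As a toy illustration of the mesh half of the particle-particle particle-mesh idea (charge assignment to a grid followed by an FFT Poisson solve), a periodic numpy sketch follows. It is not the LAMMPS or Harvey/De Fabritiis implementation: nearest-grid-point assignment, Gaussian units, and the omission of the short-range particle-particle correction are all simplifications.

```python
import numpy as np

def pm_potential(positions, charges, box_length, n_grid=32):
    """Mesh part of a P3M step: NGP charge assignment + FFT Poisson solve
    (periodic box, Gaussian units, no short-range particle-particle correction)."""
    h = box_length / n_grid
    rho = np.zeros((n_grid,) * 3)
    # nearest-grid-point charge assignment
    idx = np.floor(positions / h).astype(int) % n_grid
    for (i, j, k), q in zip(idx, charges):
        rho[i, j, k] += q / h**3
    # solve laplacian(phi) = -4 pi rho in Fourier space: phi_k = 4 pi rho_k / k^2
    kfreq = 2.0 * np.pi * np.fft.fftfreq(n_grid, d=h)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    rho_k = np.fft.fftn(rho)
    phi_k = np.zeros_like(rho_k)
    nonzero = k2 > 0
    phi_k[nonzero] = 4.0 * np.pi * rho_k[nonzero] / k2[nonzero]   # k = 0 mode dropped
    return np.real(np.fft.ifftn(phi_k))

# two opposite charges in a periodic box (illustrative only)
pos = np.array([[2.0, 5.0, 5.0], [8.0, 5.0, 5.0]])
phi = pm_potential(pos, charges=np.array([1.0, -1.0]), box_length=10.0)
print("potential extremes on the mesh:", phi.min(), phi.max())
```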

  12. Improved atomistic simulation of diffusion and sorption in metal oxides

    NASA Astrophysics Data System (ADS)

    Skouras, E. D.; Burganos, V. N.; Payatakes, A. C.

    2001-01-01

    Gas diffusion and sorption on the surface of metal oxides are investigated using atomistic simulations that make use of two different force fields for the description of the intramolecular and intermolecular interactions. MD and MC computations are presented and estimates of the mean residence time, Henry's constant, and the heat of adsorption are provided for various common gases (CO, CO2, O2, CH4, Xe), and semiconducting substrates that hold promise for gas sensor applications (SnO2, BaTiO3). Comparison is made between the performance of a simple, first generation force field (Universal) and a more detailed, second generation field (COMPASS) under the same conditions and the same assumptions regarding the generation of the working configurations. It is found that the two force fields yield qualitatively similar results in all cases examined here. However, direct comparison with experimental data reveals that the accuracy of the COMPASS-based computations is not only higher than that of the first generation force field but exceeds even that of published specialized methods based on ab initio computations.

  13. Composite Failures: A Comparison of Experimental Test Results and Computational Analysis Using XFEM

    DTIC Science & Technology

    2016-09-30

    NUWC-NPT Technical Report 12,218, 30 September 2016: Composite Failures: A Comparison of Experimental Test Results and Computational Analysis Using XFEM. … availability of measurement techniques, experimental testing of composite materials has largely outpaced the computational modeling ability, forcing …

  14. Validations of CFD against detailed velocity and pressure measurements in water turbine runner flow

    NASA Astrophysics Data System (ADS)

    Nilsson, H.; Davidson, L.

    2003-03-01

    This work compares CFD results with experimental results of the flow in two different kinds of water turbine runners. The runners studied are the GAMM Francis runner and the Hölleforsen Kaplan runner. The GAMM Francis runner was used as a test case in the 1989 GAMM Workshop on 3D Computation of Incompressible Internal Flows where the geometry and detailed best efficiency measurements were made available. In addition to the best efficiency measurements, four off-design operating condition measurements are used for the comparisons in this work. The Hölleforsen Kaplan runner was used at the 1999 Turbine 99 and 2001 Turbine 99 - II workshops on draft tube flow, where detailed measurements made after the runner were used as inlet boundary conditions for the draft tube computations. The measurements are used here to validate computations of the flow in the runner. The computations are made in a single runner blade passage, where the inlet boundary conditions are obtained from an extrapolation of detailed measurements (GAMM) or from separate guide vane computations (Hölleforsen). The steady flow in a rotating co-ordinate system is computed. The effects of turbulence are modelled by a low-Reynolds-number k-ω turbulence model, which removes some of the assumptions of the commonly used wall function approach and brings the computations one step further.

  15. Research Needs for Human Factors

    DTIC Science & Technology

    1983-01-01

    … their relative merits. Until such comparisons are made, practitioners will continue to advocate their own products without a basis for choice among … judgments among a group of experts; (2) formulating questions for experts in a way that is compatible with their mental structures or "cognitive … system". Typically the operators work in teams and control computers, which in turn mediate information flow among various automatic components. Other …

  16. Energetics of Single Substitutional Impurities in NiTi

    NASA Technical Reports Server (NTRS)

    Good, Brian S.; Noebe, Ronald

    2003-01-01

    Shape-memory alloys are of considerable current interest, with applications ranging from stents to Mars rover components. In this work, we present results on the energetics of single substitutional impurities in B2 NiTi. Specifically, energies of Pd, Pt, Zr and Hf impurities at both Ni and Ti sites are computed. All energies are computed using the CASTEP ab initio code, and, for comparison, using the quantum approximate energy method of Bozzolo, Ferrante and Smith. Atomistic relaxation in the vicinity of the impurities is investigated via quantum approximate Monte Carlo simulation, and in cases where the relaxation is found to be important, the resulting relaxations are applied to the ab initio calculations. We compare our results with available experimental work.
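
    One common way to express such a substitutional-impurity energy is as a difference of total energies and chemical potentials: E_sub = E(cell with impurity) - E(perfect cell) - mu(added atom) + mu(removed atom). A hedged sketch of that bookkeeping, with placeholder numbers rather than values from the CASTEP or quantum approximate calculations:

```python
# Hedged sketch of the bookkeeping behind a substitutional-impurity energy:
# E_sub = E(defected cell) - E(perfect cell) - mu(atom added) + mu(atom removed).
# The numbers below are placeholders, not results from the calculations described above.
def substitution_energy(e_defect, e_perfect, mu_added, mu_removed):
    return e_defect - e_perfect - mu_added + mu_removed

# e.g. a Pd atom replacing one Ni atom in a B2 NiTi supercell (illustrative values, eV)
print(substitution_energy(e_defect=-1250.40, e_perfect=-1248.10,
                          mu_added=-3.90, mu_removed=-5.50))
```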

  17. FY2017 Report on NISC Measurements and Detector Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Madison Theresa; Meierbachtol, Krista Cruse; Jordan, Tyler Alexander

    FY17 work focused on automation, both of the measurement analysis and of the comparison to simulations. The experimental apparatus was relocated, and weeks of continuous measurements of the spontaneous fission source 252Cf were performed. Programs were developed to automate the conversion of measurements into ROOT data framework files with a simple terminal input. The complete analysis of the measurement (which includes energy calibration and the identification of correlated counts) can now be completed with a documented process which involves one simple execution line as well. Finally, the hurdles of slow MCNP simulations resulting in low simulation statistics have been overcome with the generation of multi-run suites which make use of the high-performance computing resources at LANL. Preliminary comparisons of measurements and simulations have been performed and will be the focus of FY18 work.

  18. Monitoring SLAC High Performance UNIX Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lettsome, Annette K. (Bethune-Cookman Coll.; SLAC)

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.

  19. Prediction of Business Jet Airloads Using The Overflow Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Bounajem, Elias; Buning, Pieter G.

    2001-01-01

    The objective of this work is to evaluate the application of Navier-Stokes computational fluid dynamics technology for the purpose of predicting off-design condition airloads on a business jet configuration in the transonic regime. The NASA Navier-Stokes flow solver OVERFLOW, with its Chimera overset grid capability, several available numerical schemes, and convergence acceleration techniques, was selected for this work. A set of scripts compiled to reduce the time required for the grid generation process is described. Several turbulence models are evaluated in the presence of separated flow regions on the wing. Computed results are compared to available wind tunnel data for two Mach numbers and a range of angles-of-attack. Comparisons of wing surface pressure from numerical simulation and wind tunnel measurements show good agreement up to fairly high angles-of-attack.

  20. Comparing solar energy alternatives

    NASA Astrophysics Data System (ADS)

    White, J. R.

    1984-03-01

    This paper outlines a computational procedure for comparing the merits of alternative processes to convert solar radiation to heat, electrical power, or chemical energy. The procedure uses the ratio of equipment investment to useful work as an index. Comparisons with conversion counterparts based on conventional fuels are also facilitated by examining this index. The procedure is illustrated by comparisons of (1) photovoltaic converters of differing efficiencies; (2) photovoltaic converters with and without focusing concentrators; (3) photovoltaic conversion plus electrolysis vs photocatalysis for the production of hydrogen; (4) photovoltaic conversion plus plasma arcs vs photocatalysis for nitrogen fixation. Estimates for conventionally-fuelled processes are included for comparison. The reasons why solar-based concepts fare poorly in such comparisons are traced to the low energy density of solar radiation and its low stream time factor resulting from the limited number of daylight hours available and clouds obscuring the sun.
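
    The index outlined above is capital outlay divided by useful work delivered, so alternatives can be ranked with a few lines of arithmetic. A hedged sketch with invented numbers (not figures from the paper), in which the low stream time factor of solar conversion drives the index up:

```python
# Illustrative sketch of the paper's figure of merit: equipment investment per unit of
# useful annual work. All numbers below are placeholders, not values from the study.
def investment_index(capital_cost, rated_power_kw, capacity_factor, efficiency=1.0):
    """Cost per kWh of useful output delivered in one year."""
    useful_kwh = rated_power_kw * capacity_factor * 8760.0 * efficiency
    return capital_cost / useful_kwh

alternatives = {
    "PV, no concentrator":    investment_index(15000.0, 5.0, 0.20),
    "PV + concentrator":      investment_index(18000.0, 5.0, 0.24),
    "Conventionally fuelled": investment_index(6000.0, 5.0, 0.85),
}
for name, idx in sorted(alternatives.items(), key=lambda kv: kv[1]):
    print(f"{name}: {idx:.2f} $ per annual kWh")
```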

  1. Sinc-interpolants in the energy plane for regular solution, Jost function, and its zeros of quantum scattering

    NASA Astrophysics Data System (ADS)

    Annaby, M. H.; Asharabi, R. M.

    2018-01-01

    In a remarkable note of Chadan [Il Nuovo Cimento 39, 697-703 (1965)], the author expanded both the regular wave function and the Jost function of the quantum scattering problem using an interpolation theorem of Valiron [Bull. Sci. Math. 49, 181-192 (1925)]. These expansions have a very slow rate of convergence, and applying them to compute the zeros of the Jost function, which lead to the important bound states, gives poor convergence rates. It is our objective in this paper to introduce several efficient interpolation techniques to compute the regular wave solution as well as the Jost function and its zeros approximately. This work continues and improves the results of Chadan and other related studies remarkably. Several worked examples are given with illustrations and comparisons with existing methods.
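
    For reference, the plain cardinal (sinc) series that such interpolants refine reconstructs a band-limited function from uniform samples. A minimal sketch (a textbook Whittaker-Shannon sum, not the accelerated sinc-interpolants developed in the paper):

```python
# Generic Whittaker-Shannon (sinc) interpolation from uniform samples -- a plain
# cardinal series, not the improved interpolants of the paper.
import numpy as np

def sinc_interpolate(t, samples, dt):
    """Reconstruct f(t) from samples f(n*dt), n = 0..N-1, via the cardinal series."""
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc((t - n * dt) / dt))

dt = 0.5
samples = np.sin(2 * np.pi * 0.3 * np.arange(20) * dt)   # band-limited test signal
# interpolated value vs. true value (close, up to truncation of the series)
print(sinc_interpolate(1.23, samples, dt), np.sin(2 * np.pi * 0.3 * 1.23))
```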

  2. Student Engagement in a Computer Rich Science Classroom

    NASA Astrophysics Data System (ADS)

    Hunter, Jeffrey C.

    The purpose of this study was to examine the student lived experience when using computers in a rural science classroom. The overarching question the project sought to examine was: How do rural students relate to computers as a learning tool in comparison to a traditional science classroom? Participant data were collected using a pre-study survey, Experience Sampling during class and post-study interviews. Students want to use computers in their classrooms. Students shared that they overwhelmingly (75%) preferred a computer rich classroom to a traditional classroom (25%). Students reported a higher level of engagement in classes that use technology/computers (83%) versus those that do not use computers (17%). A computer rich classroom increased student control and motivation as reflected by a participant who shared; "by using computers I was more motivated to get the work done" (Maggie, April 25, 2014, survey). The researcher explored a rural school environment. Rural populations represent a large number of students and appear to be underrepresented in current research. The participants, tenth grade Biology students, were sampled in a traditional teacher-led class without computers for one week followed by a week using computers daily. The data supported the existence of a new gap that separates students: a device divide. This divide separates those who have access to devices that are robust enough to do high level class work from those who do not. Although cellular phones have reduced the number of students who cannot access the Internet, they may have created a false feeling that access to a computer is no longer necessary at home. As this study shows, although most students have Internet access, fewer have access to a device that enables them to complete rigorous class work at home. Participants received little or no training at school in proper, safe use of a computer and the Internet. It is clear that the majority of students are self-taught or receive guidance from peers, resulting in lower self-confidence or the development of misconceptions of their skill or ability.

  3. Validation of US3D for Capsule Aerodynamics using 05-CA Wind Tunnel Test Data

    NASA Technical Reports Server (NTRS)

    Schwing, Alan

    2012-01-01

    Several comparisons of computational fluid dynamics to wind tunnel test data are shown for the purpose of code validation. The wind tunnel test, 05-CA, uses a 7.66% model of NASA's Multi-Purpose Crew Vehicle in the 11-foot test section of the Ames Unitary Plan Wind tunnel. A variety of freestream conditions over four Mach numbers and three angles of attack are considered. Test data comparisons include time-averaged integrated forces and moments, time-averaged static pressure ports on the surface, and Strouhal Number. The applicability of the US3D code to subsonic and transonic flow over a bluff body is assessed on a comprehensive data set. With close comparison, this work validates US3D for highly separated flows similar to those examined here.

  4. A conceptual study of the rotor systems research aircraft

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The analytical comparison of the two candidate Rotor Systems Research Aircraft (RSRA) configurations selected by the Government at the completion of Part 1 of the RSRA Conceptual Predesign Study is presented. The purpose of the comparison was to determine the relative suitability of both vehicles for the RSRA missions described in the Government Statement of Work, and to assess their versatility in the testing of new rotor concepts. The analytical comparison was performed primarily with regard to performance and stability and control. A weights, center-of-gravity, and inertia computation was performed for each iteration in the analysis process. The dynamics investigation was not concerned so much with a comparison of the two vehicles, but explored the dynamic problems attending operation of any RSRA operating with large rotor RPM and diameter ranges over large forward speed ranges. Several means of isolating in- and out-of-plane rotor vibrations were analyzed. An optimum isolation scheme was selected.

  5. Distribution of Primary School Enrollments in Eastern Africa. World Bank Staff Working Papers Number 511.

    ERIC Educational Resources Information Center

    Maas, Jacob van Lutsenburg; Criel, Geert

    Distribution of primary school enrollments within and among 15 countries of the Eastern African region was examined by drawing exclusively on routine annual statistics and by emphasizing simple computer-generated indicators. In its first phase, the study made inter-country comparisons that indicated which countries and which areas in the region…

  6. Efficient calculation of general Voigt profiles

    NASA Astrophysics Data System (ADS)

    Cope, D.; Khoury, R.; Lovett, R. J.

    1988-02-01

    An accurate and efficient program is presented for the computation of OIL profiles, generalizations of the Voigt profile resulting from the one-interacting-level model of Ward et al. (1974). These profiles have speed dependent shift and width functions and have asymmetric shapes. The program contains an adjustable error control parameter and includes the Voigt profile as a special case, although the general nature of this program renders it slower than a specialized Voigt profile method. Results on accuracy and computation time are presented for a broad set of test parameters, and a comparison is made with previous work on the asymptotic behavior of general Voigt profiles.
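
    The Voigt special case mentioned above can be evaluated directly from the Faddeeva function; the generalized OIL profiles require the fuller treatment described in the paper. A minimal sketch assuming SciPy (not the authors' program):

```python
# Ordinary Voigt profile (the special case noted in the abstract) computed from the
# Faddeeva function w(z); the generalized OIL profiles of the paper need more machinery.
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Voigt profile: convolution of a Gaussian (sigma) and a Lorentzian (gamma)."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-5, 5, 11)
print(voigt(x, sigma=1.0, gamma=0.5))
```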

  7. Non-parallel processing: Gendered attrition in academic computer science

    NASA Astrophysics Data System (ADS)

    Cohoon, Joanne Louise Mcgrath

    2000-10-01

    This dissertation addresses the issue of disproportionate female attrition from computer science as an instance of gender segregation in higher education. By adopting a theoretical framework from organizational sociology, it demonstrates that the characteristics and processes of computer science departments strongly influence female retention. The empirical data identifies conditions under which women are retained in the computer science major at comparable rates to men. The research for this dissertation began with interviews of students, faculty, and chairpersons from five computer science departments. These exploratory interviews led to a survey of faculty and chairpersons at computer science and biology departments in Virginia. The data from these surveys are used in comparisons of the computer science and biology disciplines, and for statistical analyses that identify which departmental characteristics promote equal attrition for male and female undergraduates in computer science. This three-pronged methodological approach of interviews, discipline comparisons, and statistical analyses shows that departmental variation in gendered attrition rates can be explained largely by access to opportunity, relative numbers, and other characteristics of the learning environment. Using these concepts, this research identifies nine factors that affect the differential attrition of women from CS departments. These factors are: (1) The gender composition of enrolled students and faculty; (2) Faculty turnover; (3) Institutional support for the department; (4) Preferential attitudes toward female students; (5) Mentoring and supervising by faculty; (6) The local job market, starting salaries, and competitiveness of graduates; (7) Emphasis on teaching; and (8) Joint efforts for student success. This work contributes to our understanding of the gender segregation process in higher education. In addition, it contributes information that can lead to effective solutions for an economically significant issue in modern American society---gender equality in computer science.

  8. Experimental realization of nondestructive discrimination of Bell states using a five-qubit quantum computer

    NASA Astrophysics Data System (ADS)

    Sisodia, Mitali; Shukla, Abhishek; Pathak, Anirban

    2017-12-01

    A scheme for distributed quantum measurement that allows nondestructive or indirect Bell measurement was proposed by Gupta et al. [1]. In the present work, Gupta et al.'s scheme is experimentally realized using the five-qubit superconductivity-based quantum computer, which has recently been placed in the cloud by IBM Corporation. The experiment confirmed that the Bell state can be constructed and measured in a nondestructive manner with a reasonably high fidelity. A comparison of the outcomes of this study and the results obtained earlier in an NMR-based experiment (Samal et al. (2010) [10]) has also been performed. The study indicates that to make a scalable SQUID-based quantum computer, errors introduced by the gates (in the present technology) have to be reduced considerably.
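
    A rough Qiskit sketch of the general idea of indirect Bell measurement: the ZZ and XX parities of the pair are copied onto ancilla qubits and only the ancillas are read out, leaving the Bell pair unmeasured. This illustrates the concept, not the specific circuit of Gupta et al. or the runs reported here:

```python
# Rough sketch of ancilla-based (indirect) Bell-state readout: ancilla 2 records the ZZ
# parity of qubits 0-1, ancilla 3 records the XX parity, and only the ancillas are measured.
from qiskit import QuantumCircuit

qc = QuantumCircuit(4, 2)
qc.h(0); qc.cx(0, 1)                          # prepare the Bell state (|00> + |11>)/sqrt(2)

qc.cx(0, 2); qc.cx(1, 2)                      # ancilla 2: ZZ parity
qc.h(3); qc.cx(3, 0); qc.cx(3, 1); qc.h(3)    # ancilla 3: XX parity via phase kickback

qc.measure(2, 0); qc.measure(3, 1)            # the Bell pair itself is left unmeasured
print(qc.draw())
```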

  9. Job Scheduling in a Heterogeneous Grid Environment

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak

    2004-01-01

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
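
    The migration policies described weigh system availability and performance, inter-site bandwidth, and job data volume. A hypothetical sketch of such a scoring rule (field names and numbers are invented, not taken from the paper):

```python
# Hypothetical sketch of a job-migration score of the kind described: estimated turnaround
# at a site = queue wait + compute time + time to move input/output data over the network.
# All fields and numbers are invented for illustration.
def estimated_turnaround(site, job):
    transfer_s = (job["input_gb"] + job["output_gb"]) * 8.0 / site["bandwidth_gbps"]
    compute_s = job["work_node_hours"] * 3600.0 / (site["free_nodes"] * site["speed_factor"])
    return site["queue_wait_s"] + compute_s + transfer_s

job = {"input_gb": 50.0, "output_gb": 10.0, "work_node_hours": 128.0}
sites = {
    "local":  {"queue_wait_s": 7200.0, "free_nodes": 16, "speed_factor": 1.0, "bandwidth_gbps": 100.0},
    "remote": {"queue_wait_s": 600.0,  "free_nodes": 64, "speed_factor": 0.8, "bandwidth_gbps": 2.0},
}
best = min(sites, key=lambda name: estimated_turnaround(sites[name], job))
print(best)   # the site with the smallest estimated turnaround wins the job
```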

  10. Experience in using commercial clouds in CMS

    NASA Astrophysics Data System (ADS)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration

    2017-10-01

    Historically high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.

  11. Phase Field Modeling of Microstructure Development in Microgravity

    NASA Technical Reports Server (NTRS)

    Dantzig, Jonathan A.; Goldenfeld, Nigel

    2001-01-01

    This newly funded project seeks to extend our NASA-sponsored project on modeling of dendritic microstructures to facilitate collaboration between our research group and those of other NASA investigators. In our ongoing program, we have applied advanced computational techniques to study microstructural evolution in dendritic solidification, for both pure isolated dendrites and directionally solidified alloys. This work has enabled us to compute dendritic microstructures using both realistic material parameters and experimentally relevant processing conditions, thus allowing for the first time direct comparison of phase field computations with laboratory observations. This work has been well received by the materials science and physics communities, and has led to several opportunities for collaboration with scientists working on experimental investigations of pattern selection and segregation in solidification. While we have been able to pursue these collaborations to a limited extent, with some important findings, this project focuses specifically on those collaborations. We have two target collaborations: with Prof. Glicksman's group working on the Isothermal Dendritic Growth Experiment (IDGE), and with Prof. Poirier's group studying directional solidification in Pb-Sb alloys. These two space experiments match well with our two thrusts in modeling, one for pure materials, as in the IDGE, and the other directional solidification. Such collaboration will benefit all of the research groups involved, and will provide for rapid dissemination of the results of our work where it will have significant impact.

  12. Automatic Organ Localization for Adaptive Radiation Therapy for Prostate Cancer

    DTIC Science & Technology

    2005-05-01

    … and provides a framework for task 3. Key Research Accomplishments: comparison of manual segmentation with our automatic method, using several … as well as manual segmentations by a different rater; computation of the actual cumulative dose delivered to both the cancerous and critical healthy … adaptive treatment of prostate or other cancer. As a result, all such work must be done manually. However, manual segmentation of the tumor and neighboring …

  13. College students and computers: assessment of usage patterns and musculoskeletal discomfort.

    PubMed

    Noack-Cooper, Karen L; Sommerich, Carolyn M; Mirka, Gary A

    2009-01-01

    A limited number of studies have focused on computer-use-related MSDs in college students, though risk factor exposure may be similar to that of workers who use computers. This study examined computer use patterns of college students and made comparisons to a group of previously studied computer-using professionals. 234 students completed a web-based questionnaire concerning computer use habits and physical discomfort that respondents specifically associated with computer use. As a group, students reported their computer use to be at least 'Somewhat likely' 18 out of 24 h/day, compared to 12 h for the professionals. Students reported more uninterrupted work behaviours than the professionals. Younger graduate students reported 33.7 average weekly computing hours, similar to hours reported by younger professionals. Students generally reported more frequent upper extremity discomfort than the professionals. Frequent assumption of awkward postures was associated with frequent discomfort. The findings signal a need for intervention, including training and education, prior to entry into the workforce. Students are future workers, so it is important to determine whether their increasing exposure to computers, prior to entering the workforce, may mean that they enter the workforce already injured or do not enter their chosen profession due to upper extremity MSDs.

  14. Comparison of Building Loads Analysis and System Thermodynamics (BLAST) Computer Program Simulations and Measured Energy Use for Army Buildings.

    DTIC Science & Technology

    1980-05-01

    … Building Loads Analysis and System Thermodynamics (BLAST) computer program. A dental clinic and a battalion headquarters and classroom building were … Building and HVAC System Data; Computer Simulation; Comparison of Actual and Simulated Results; Analysis and Findings.

  15. Experience in using commercial clouds in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.

    Historically high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.

  16. Comparison of nonmesonic hypernuclear decay rates computed in laboratory and center-of-mass coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Conti, C.; Barbero, C.; Galeão, A. P.

    In this work we compute the one-nucleon-induced nonmesonic hypernuclear decay rates of ⁵ΛHe, ¹²ΛC and ¹³ΛC using a formalism based on the independent particle shell model in terms of laboratory coordinates. To ascertain the correctness and precision of the method, these results are compared with those obtained using a formalism in terms of center-of-mass coordinates, which has been previously reported in the literature. The formalism in terms of laboratory coordinates will be useful in the shell-model approach to two-nucleon-induced transitions.

  17. Recent Performance Results of VPIC on Trinity

    NASA Astrophysics Data System (ADS)

    Nystrom, W. D.; Bergen, B.; Bird, R. F.; Bowers, K. J.; Daughton, W. S.; Guo, F.; Le, A.; Li, H.; Nam, H.; Pang, X.; Stark, D. J.; Rust, W. N., III; Yin, L.; Albright, B. J.

    2017-10-01

    Trinity is a new DOE compute resource now in production at Los Alamos National Laboratory. Trinity has several new and unique features including two compute partitions, one with dual socket Intel Haswell Xeon compute nodes and one with Intel Knights Landing (KNL) Xeon Phi compute nodes, use of on package high bandwidth memory (HBM) for KNL nodes, ability to configure KNL nodes with respect to HBM model and on die network topology in a variety of operational modes at run time, and use of solid state storage via burst buffer technology to reduce time required to perform I/O. An effort is in progress to optimize VPIC on Trinity by taking advantage of these new architectural features. Results of work will be presented on performance of VPIC on Haswell and KNL partitions for single node runs and runs at scale. Results include use of burst buffers at scale to optimize I/O, comparison of strategies for using MPI and threads, performance benefits using HBM and effectiveness of using intrinsics for vectorization. Work performed under auspices of U.S. Dept. of Energy by Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by LANL LDRD program.

  18. Analysis methodology and development of a statistical tool for biodistribution data from internal contamination with actinides.

    PubMed

    Lamart, Stephanie; Griffiths, Nina M; Tchitchek, Nicolas; Angulo, Jaime F; Van der Meeren, Anne

    2017-03-01

    The aim of this work was to develop a computational tool that integrates several statistical analysis features for biodistribution data from internal contamination experiments. These data represent actinide levels in biological compartments as a function of time and are derived from activity measurements in tissues and excreta. These experiments aim at assessing the influence of different contamination conditions (e.g. intake route or radioelement) on the biological behavior of the contaminant. The ever-increasing number of datasets and diversity of experimental conditions make the handling and analysis of biodistribution data difficult. This work sought to facilitate the statistical analysis of a large number of datasets and the comparison of results from diverse experimental conditions. Functional modules were developed using the open-source programming language R to facilitate specific operations: descriptive statistics, visual comparison, curve fitting, and implementation of biokinetic models. In addition, the structure of the datasets was harmonized using the same table format. Analysis outputs can be written in text files and updated data can be written in the consistent table format. Hence, a data repository is built progressively, which is essential for the optimal use of animal data. Graphical representations can be automatically generated and saved as image files. The resulting computational tool was applied using data derived from wound contamination experiments conducted under different conditions. In facilitating biodistribution data handling and statistical analyses, this computational tool ensures faster analyses and better reproducibility compared with the use of multiple office software applications. Furthermore, re-analysis of archival data and comparison of data from different sources is made much easier. Hence this tool will help to better understand the influence of contamination characteristics on actinide biokinetics. Our approach can aid the optimization of treatment protocols and therefore contribute to the improvement of the medical response after internal contamination with actinides.

  19. The demodulated band transform

    PubMed Central

    Kovach, Christopher K.; Gander, Phillip E.

    2016-01-01

    Background Windowed Fourier decompositions (WFD) are widely used in measuring stationary and non-stationary spectral phenomena and in describing pairwise relationships among multiple signals. Although a variety of WFDs see frequent application in electrophysiological research, including the short-time Fourier transform, continuous wavelets, band-pass filtering and multitaper-based approaches, each carries certain drawbacks related to computational efficiency and spectral leakage. This work surveys the advantages of a WFD not previously applied in electrophysiological settings. New Methods A computationally efficient form of complex demodulation, the demodulated band transform (DBT), is described. Results DBT is shown to provide an efficient approach to spectral estimation with minimal susceptibility to spectral leakage. In addition, it lends itself well to adaptive filtering of non-stationary narrowband noise. Comparison with existing methods A detailed comparison with alternative WFDs is offered, with an emphasis on the relationship between DBT and Thomson's multitaper. DBT is shown to perform favorably in combining computational efficiency with minimal introduction of spectral leakage. Conclusion DBT is ideally suited to efficient estimation of both stationary and non-stationary spectral and cross-spectral statistics with minimal susceptibility to spectral leakage. These qualities are broadly desirable in many settings. PMID:26711370
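
    The complex-demodulation step that the DBT builds on can be illustrated generically: multiply the signal by a complex exponential at the band centre, low-pass filter, and read off amplitude and phase. A minimal sketch assuming SciPy (not the authors' exact transform):

```python
# Generic complex-demodulation step of the kind the DBT builds on (illustration only):
# mix the band of interest down to baseband, then low-pass to get the complex envelope.
import numpy as np
from scipy.signal import butter, filtfilt

fs, f0, bandwidth = 1000.0, 40.0, 8.0                # Hz; arbitrary example values
t = np.arange(0, 2.0, 1.0 / fs)
x = np.cos(2 * np.pi * 42.0 * t) + 0.3 * np.random.randn(t.size)

mixed = x * np.exp(-2j * np.pi * f0 * t)             # shift the 40 Hz band to 0 Hz
b, a = butter(4, bandwidth / 2.0, fs=fs)             # keep +/- 4 Hz around the carrier
analytic = 2.0 * filtfilt(b, a, mixed)               # complex envelope of that band

amplitude, phase = np.abs(analytic), np.angle(analytic)
print(amplitude.mean())                              # ~1 for the unit-amplitude tone
```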

  20. Comparison of Thermodynamic and Transport Property Models for Computing Equilibrium High Enthalpy Flows

    NASA Astrophysics Data System (ADS)

    Ramasahayam, Veda Krishna Vyas; Diwakar, Anant; Bodi, Kowsik

    2017-11-01

    To study the flow of high temperature air in vibrational and chemical equilibrium, accurate models for the thermodynamic state and transport phenomena are required. In the present work, the performance of a state equation model and two mixing rules for determining equilibrium air thermodynamic and transport properties is compared with that of curve fits. The thermodynamic state model considers 11 species and computes the flow chemistry by an iterative process; the mixing rules considered for viscosity are those of Wilke and Armaly-Sutton. The curve fits of Srinivasan, which are based on Grabau-type transition functions, are chosen for comparison. A two-dimensional Navier-Stokes solver is developed to simulate high enthalpy flows, with numerical fluxes computed by AUSM+-up. The accuracy of the state equation model and curve fits for thermodynamic properties is determined using hypersonic inviscid flow over a circular cylinder. The performance of the mixing rules and curve fits for viscosity is compared using hypersonic laminar boundary layer prediction on a flat plate. It is observed that steady state solutions from the state equation model and curve fits match each other. Though the curve fits are significantly faster, the state equation model is more general and can be adapted to any flow composition.
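
    Of the two mixing rules compared, Wilke's rule has a compact closed form: mu_mix = sum_i x_i mu_i / sum_j x_j phi_ij, with phi_ij built from the species viscosities and molar masses. A standalone sketch (species data below are illustrative placeholders, not values from the paper):

```python
# Standalone sketch of Wilke's mixing rule for the viscosity of a gas mixture, one of the
# two rules compared in the paper.
import numpy as np

def wilke_viscosity(x, mu, M):
    """x: mole fractions, mu: pure-species viscosities, M: molar masses."""
    x, mu, M = map(np.asarray, (x, mu, M))
    # phi[i, j] = [1 + (mu_i/mu_j)^1/2 (M_j/M_i)^1/4]^2 / sqrt(8 (1 + M_i/M_j))
    phi = (1.0 + np.sqrt(np.outer(mu, 1.0 / mu)) * np.outer(1.0 / M, M) ** 0.25) ** 2 \
          / np.sqrt(8.0 * (1.0 + np.outer(M, 1.0 / M)))
    return np.sum(x * mu / (phi @ x))

# roughly an N2/O2 mixture at ambient conditions (Pa*s, g/mol)
print(wilke_viscosity(x=[0.79, 0.21], mu=[1.78e-5, 2.07e-5], M=[28.0, 32.0]))  # ~1.8e-5
```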

  1. Radiation calculation in non-equilibrium shock layer

    NASA Astrophysics Data System (ADS)

    Dubois, Joanne

    2005-05-01

    The purpose of the work was to investigate confidence in radiation predictions on an entry probe body in high temperature conditions, taking the Huygens probe as an example. Existing engineering flowfield codes for shock tube and blunt body simulations were used and updated when necessary to compute species molar fractions and flow field parameters. An interface to the PARADE radiation code allowed radiative emission estimates to the body surface to be made. A validation of the radiative models in equilibrium conditions was first made with published data and by comparison with shock tube test case data from the IUSTI TCM2 facility with a Titan-like atmosphere test gas. Further verifications were made in non-equilibrium with published computations. These comparisons were initially made using a Boltzmann assumption for the electronic states of CN. An attempt was also made to use pseudo species for the individual electronic states of CN. Assumptions made in this analysis are described and a further comparison with shock tube data undertaken. Several CN radiation datasets have been used, and while improvements to the modelling tools have been made, it seems that considerable uncertainty remains in the modelling of the non-equilibrium emission using simple engineering methods.

  2. The width-tapered double cantilever beam for interlaminar fracture testing

    NASA Technical Reports Server (NTRS)

    Bascom, W. D.; Jensen, R. M.; Bullman, G. W.; Hunston, D. L.

    1984-01-01

    The width-tapered double-cantilever-beam (WTDCB) specimen configuration used to determine the Mode-I interlaminar fracture energy (IFE) of composites has special advantages for routine development work and for quality-assurance purposes. These advantages come primarily from the simplicity of testing and the fact that the specimen is designed so that the change in compliance with crack length is constant, so that the computation of Mode-I IFE is independent of crack length. In this paper, a simplified technique for fabricating and testing WTDCB specimens is described. Also presented are the effects of fiber orientation and specimen dimensions, a comparison of data obtained using the WTDCB specimens and other specimen geometries, and a comparison of data obtained at different laboratories. It is concluded that the WTDCB gives interlaminar Mode-I IFE essentially equal to that of other specimen types, and that it can be used for rapid screening in resin-development work and for quality assurance of composite materials.

  3. A Synthesis of Hybrid RANS/LES CFD Results for F-16XL Aircraft Aerodynamics

    NASA Technical Reports Server (NTRS)

    Luckring, James M.; Park, Michael A.; Hitzel, Stephan M.; Jirasek, Adam; Lofthouse, Andrew J.; Morton, Scott A.; McDaniel, David R.; Rizzi, Arthur M.

    2015-01-01

    A synthesis is presented of recent numerical predictions for the F-16XL aircraft flow fields and aerodynamics. The computational results were all performed with hybrid RANS/LES formulations, with an emphasis on unsteady flows and subsequent aerodynamics, and results from five computational methods are included. The work was focused on one particular low-speed, high angle-of-attack flight test condition, and comparisons against flight-test data are included. This work represents the third coordinated effort using the F-16XL aircraft, and a unique flight-test data set, to advance our knowledge of slender airframe aerodynamics as well as our capability for predicting these aerodynamics with advanced CFD formulations. The prior efforts were identified as Cranked Arrow Wing Aerodynamics Project International, with the acronyms CAWAPI and CAWAPI-2. All information in this paper is in the public domain.

  4. Computational Modeling of Aerosol Hazard Arising from the Opening of an Anthrax Letter in an Open-Office Complex

    NASA Astrophysics Data System (ADS)

    Lien, F. S.; Ji, H.; Yee, E.

    Early experimental work, conducted at Defence R&D Canada – Suffield, measured and characterized the personal and environmental contamination associated with the simulated opening of anthrax-tainted letters under a number of different scenarios. A better understanding of the physical and biological processes involved is of considerable significance for detecting, assessing, and formulating potential mitigation strategies for managing these risks. These preliminary experimental investigations have been extended to simulate the contamination from the opening of anthrax-tainted letters in an Open-Office environment using Computational Fluid Dynamics (CFD). Bacillus globigii (BG) was used as a biological simulant for anthrax, with 0.1 gram of the simulant released from opened letters in the experiments conducted. The accuracy of the model for prediction of the spatial distribution of BG spores in the office is first assessed quantitatively by comparison with measured SF6 concentrations (the baseline experiment), and then qualitatively by comparison with measured BG concentrations obtained under a number of scenarios, some involving people moving within various offices.

  5. Parallel reduced-instruction-set-computer architecture for real-time symbolic pattern matching

    NASA Astrophysics Data System (ADS)

    Parson, Dale E.

    1991-03-01

    This report discusses ongoing work on a parallel reduced-instruction-set-computer (RISC) architecture for automatic production matching. The PRIOPS compiler takes advantage of the memoryless character of automatic processing by translating a program's collection of automatic production tests into an equivalent combinational circuit, a digital circuit without memory whose outputs are immediate functions of its inputs. The circuit provides a highly parallel, fine-grain model of automatic matching. The compiler then maps the combinational circuit onto RISC hardware. The heart of the processor is an array of comparators capable of testing production conditions in parallel. Each comparator attaches to private memory that contains virtual circuit nodes: records of the current state of nodes and busses in the combinational circuit. All comparator memories hold identical information, allowing simultaneous update for a single changing circuit node and simultaneous retrieval of different circuit nodes by different comparators. Along with the comparator-based logic unit is a sequencer that determines the current combination of production-derived comparisons to try, based on the combined success and failure of previous combinations of comparisons. The memoryless nature of automatic matching allows the compiler to designate invariant memory addresses for virtual circuit nodes, and to generate the most effective sequences of comparison test combinations. The result is maximal utilization of parallel hardware, indicating speed increases and scalability beyond that found for coarse-grain, multiprocessor approaches to concurrent Rete matching. Future work will consider application of this RISC architecture to the standard (controlled) Rete algorithm, where search through memory dominates portions of matching.

  6. Assessment of flat rolling theories for the use in a model-based controller for high-precision rolling applications

    NASA Astrophysics Data System (ADS)

    Stockert, Sven; Wehr, Matthias; Lohmar, Johannes; Abel, Dirk; Hirt, Gerhard

    2017-10-01

    In the electrical and medical industries the trend towards further miniaturization of devices is accompanied by the demand for smaller manufacturing tolerances. Such industries use a plenitude of small and narrow cold rolled metal strips with high thickness accuracy. Conventional rolling mills can hardly achieve further improvement of these tolerances. However, a model-based controller in combination with an additional piezoelectric actuator for high dynamic roll adjustment is expected to enable the production of the required metal strips with a thickness tolerance of +/-1 µm. The model-based controller has to be based on a rolling theory which can describe the rolling process very accurately. Additionally, the required computing time has to be low in order to predict the rolling process in real-time. In this work, four rolling theories from the literature with different levels of complexity are tested for their suitability for the predictive controller. Rolling theories of von Kármán, Siebel, Bland & Ford and Alexander are implemented in Matlab and afterwards transferred to the real-time computer used for the controller. The prediction accuracy of these theories is validated using rolling trials with different thickness reductions and a comparison to the calculated results. Furthermore, the required computing time on the real-time computer is measured. Adequate results with respect to prediction accuracy can be achieved with the rolling theories developed by Bland & Ford and Alexander. A comparison of the computing time of those two theories reveals that Alexander's theory exceeds the sample rate of 1 kHz of the real-time computer.

  7. Theoretical Characterization of Visual Signatures

    NASA Astrophysics Data System (ADS)

    Kashinski, D. O.; Chase, G. M.; di Nallo, O. E.; Scales, A. N.; Vanderley, D. L.; Byrd, E. F. C.

    2015-05-01

    We are investigating the accuracy of theoretical models used to predict the visible, ultraviolet, and infrared spectra, as well as other properties, of product materials ejected from the muzzle of currently fielded systems. Recent advances in solid propellants have made the management of muzzle signature (flash) a principal issue in weapons development across the calibers. A priori prediction of the electromagnetic spectra of formulations will allow researchers to tailor blends that yield desired signatures and determine spectrographic detection ranges. Quantum chemistry methods at various levels of sophistication have been employed to optimize molecular geometries, compute unscaled vibrational frequencies, and determine the optical spectra of specific gas-phase species. Electronic excitations are being computed using Time Dependent Density Functional Theory (TD-DFT). A full statistical analysis and reliability assessment of computational results is currently underway. A comparison of theoretical results to experimental values found in the literature is used to assess any effects of functional choice and basis set on calculation accuracy. The status of this work will be presented at the conference. Work supported by the ARL, DoD HPCMP, and USMA.

  8. Enabling a Scientific Cloud Marketplace: VGL (Invited)

    NASA Astrophysics Data System (ADS)

    Fraser, R.; Woodcock, R.; Wyborn, L. A.; Vote, J.; Rankine, T.; Cox, S. J.

    2013-12-01

    The Virtual Geophysics Laboratory (VGL) provides a flexible, web based environment where researchers can browse data and use a variety of scientific software packaged into tool kits that run in the Cloud. Both data and tool kits are published by multiple researchers and registered with the VGL infrastructure forming a data and application marketplace. The VGL provides the basic work flow of Discovery and Access to the disparate data sources and a Library for tool kits and scripting to drive the scientific codes. Computation is then performed on the Research or Commercial Clouds. Provenance information is collected throughout the work flow and can be published alongside the results allowing for experiment comparison and sharing with other researchers. VGL's "mix and match" approach to data, computational resources and scientific codes, enables a dynamic approach to scientific collaboration. VGL allows scientists to publish their specific contribution, be it data, code, compute or work flow, knowing the VGL framework will provide other components needed for a complete application. Other scientists can choose the pieces that suit them best to assemble an experiment. The coarse grain workflow of the VGL framework combined with the flexibility of the scripting library and computational toolkits allows for significant customisation and sharing amongst the community. The VGL utilises the cloud computational and storage resources from the Australian academic research cloud provided by the NeCTAR initiative and a large variety of data accessible from national and state agencies via the Spatial Information Services Stack (SISS - http://siss.auscope.org). VGL v1.2 screenshot - http://vgl.auscope.org

  9. Application of electron closures in extended MHD

    NASA Astrophysics Data System (ADS)

    Held, Eric; Adair, Brett; Taylor, Trevor

    2017-10-01

    Rigorous closure of the extended MHD equations in plasma fluid codes includes the effects of electron heat conduction along perturbed magnetic fields and contributions of the electron collisional friction and stress to the extended Ohm's law. In this work we discuss application of a continuum numerical solution to the Chapman-Enskog-like electron drift kinetic equation using the NIMROD code. The implementation is a tightly-coupled fluid/kinetic system that carefully addresses time-centering in the advance of the fluid variables with their kinetically-computed closures. Comparisons of spatial accuracy, computational efficiency and required velocity space resolution are presented for applications involving growing magnetic islands in cylindrical and toroidal geometry. The reduction in parallel heat conduction due to particle trapping in toroidal geometry is emphasized. Work supported by DOE under Grant Nos. DE-FC02-08ER54973 and DE-FG02-04ER54746.

  10. A Comparison of Computational Aeroacoustic Prediction Methods for Transonic Rotor Noise

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.; Lyrintzis, Anastasios; Koutsavdis, Evangelos K.

    1996-01-01

    This paper compares two methods for predicting transonic rotor noise for helicopters in hover and forward flight. Both methods rely on a computational fluid dynamics (CFD) solution as input to predict the acoustic near and far fields. For this work, the same full-potential rotor code has been used to compute the CFD solution for both acoustic methods. The first method employs the acoustic analogy as embodied in the Ffowcs Williams-Hawkings (FW-H) equation, including the quadrupole term. The second method uses a rotating Kirchhoff formulation. Computed results from both methods are compared with one another and with experimental data for both hover and advancing rotor cases. The results are quite good for all cases tested. The sensitivity of both methods to CFD grid resolution and to the choice of the integration surface/volume is investigated. The computational requirements of both methods are comparable; in both cases these requirements are much less than the requirements for the CFD solution.

  11. Interaction sorting method for molecular dynamics on multi-core SIMD CPU architecture.

    PubMed

    Matvienko, Sergey; Alemasov, Nikolay; Fomin, Eduard

    2015-02-01

    Molecular dynamics (MD) is widely used in computational biology for studying binding mechanisms of molecules, molecular transport, conformational transitions, protein folding, etc. The method is computationally expensive; thus, the demand for the development of novel, much more efficient algorithms is still high. Therefore, the new algorithm designed in 2007 and called interaction sorting (IS) clearly attracted interest, as it outperformed the most efficient MD algorithms. In this work, a new IS modification is proposed which allows the algorithm to utilize SIMD processor instructions. This paper shows that the improvement provides an additional gain in performance, 9% to 45% in comparison to the original IS method.

  12. Work and power analysis of the golf swing.

    PubMed

    Nesbit, Steven M; Serrano, Monika

    2005-12-01

    A work and power (energy) analysis of the golf swing is presented as a method for evaluating the mechanics of the golf swing. Two computer models were used to estimate the energy production, transfers, and conversions within the body and the golf club by employing standard methods of mechanics to calculate work of forces and torques, kinetic energies, strain energies, and power during the golf swing. A detailed model of the golf club determined the energy transfers and conversions within the club during the downswing. A full-body computer model of the golfer determined the internal work produced at the body joints during the downswing. Four diverse amateur subjects were analyzed and compared using these two models. The energy approach yielded new information on swing mechanics, determined the force and torque components that accelerated the club, illustrated which segments of the body produced work, determined the timing of internal work generation, measured swing efficiencies, calculated shaft energy storage and release, and proved that forces and range of motion were equally important in developing club head velocity. A more comprehensive description of the downswing emerged from information derived from an energy based analysis. Key Points: Full-Body Model of the golf swing. Energy analysis of the golf swing. Work of the body joints during the golf swing. Comparisons of subject work and power characteristics.
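
    The core bookkeeping in such an energy analysis is that the work done at a joint is the time integral of joint torque times angular velocity, and the instantaneous joint power is their product. A minimal sketch with synthetic data (not the subjects or models of the study):

```python
# Sketch of the basic bookkeeping behind a joint work/power analysis: mechanical work of a
# joint torque is the time integral of torque times angular velocity. Data are synthetic.
import numpy as np

t = np.linspace(0.0, 0.3, 301)                       # a ~0.3 s downswing
torque = 80.0 * np.sin(np.pi * t / 0.3)              # N*m, made-up joint torque history
omega = 20.0 * (t / 0.3) ** 2                        # rad/s, made-up angular velocity

power = torque * omega                               # instantaneous joint power, W
work = np.sum(0.5 * (power[1:] + power[:-1]) * np.diff(t))   # trapezoidal integral, J
print(f"peak power {power.max():.0f} W, work {work:.1f} J")
```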

  13. Work and Power Analysis of the Golf Swing

    PubMed Central

    Nesbit, Steven M.; Serrano, Monika

    2005-01-01

    A work and power (energy) analysis of the golf swing is presented as a method for evaluating the mechanics of the golf swing. Two computer models were used to estimate the energy production, transfers, and conversions within the body and the golf club by employing standard methods of mechanics to calculate work of forces and torques, kinetic energies, strain energies, and power during the golf swing. A detailed model of the golf club determined the energy transfers and conversions within the club during the downswing. A full-body computer model of the golfer determined the internal work produced at the body joints during the downswing. Four diverse amateur subjects were analyzed and compared using these two models. The energy approach yielded new information on swing mechanics, determined the force and torque components that accelerated the club, illustrated which segments of the body produced work, determined the timing of internal work generation, measured swing efficiencies, calculated shaft energy storage and release, and proved that forces and range of motion were equally important in developing club head velocity. A more comprehensive description of the downswing emerged from information derived from an energy based analysis. Key Points Full-Body Model of the golf swing. Energy analysis of the golf swing. Work of the body joints during the golf swing. Comparisons of subject work and power characteristics. PMID:24627666

  14. Electron Scale Turbulence and Transport in an NSTX H-mode Plasma Using a Synthetic Diagnostic for High-k Scattering Measurements

    NASA Astrophysics Data System (ADS)

    Ruiz Ruiz, Juan; Guttenfelder, Walter; Loureiro, Nuno; Ren, Yang; White, Anne; MIT/PPPL Collaboration

    2017-10-01

    Turbulent fluctuations on the electron gyro-radius length scale are thought to cause anomalous transport of electron energy in spherical tokamaks such as NSTX and MAST in some parametric regimes. In NSTX, electron-scale turbulence is studied through a combination of experimental measurements from a high-k scattering system and gyrokinetic simulations. Until now most comparisons between experiment and simulation of electron scale turbulence have been qualitative, with recent work expanding to more quantitative comparisons via synthetic diagnostic development. In this new work, we propose two alternate, complementary ways to perform a synthetic diagnostic using the gyrokinetic code GYRO. The first approach builds on previous work and is based on the traditional selection of wavenumbers using a wavenumber filter, for which a new wavenumber mapping was implemented for general axisymmetric geometry. A second alternate approach selects wavenumbers in real-space to compute the power spectra. These approaches are complementary, and recent results from both synthetic diagnostic approaches applied to NSTX plasmas will be presented. Work supported by U.S. DOE contracts DE-AC02-09CH11466 and DE-AC02-05CH11231.

  15. Theory and Experiment of Multielement Airfoils: A Comparison

    NASA Technical Reports Server (NTRS)

    Czerwiec, Ryan; Edwards, J. R.; Rumsey, C. L.; Hassan, H. A.

    2000-01-01

    A detailed comparison of computed and measured pressure distributions, velocity profiles, transition onset, and Reynolds shear stresses for multi-element airfoils is presented. It is shown that the transitional k-zeta model, which is implemented into CFL3D, does a good job of predicting pressure distributions, transition onset, and velocity profiles with the exception of velocities in the slat wake region. Considering the fact that the hot wire used was not fine enough to resolve Reynolds stresses in the boundary layer, comparisons of turbulence stresses varied from good to fair. It is suggested that the effects of unsteadiness be thoroughly evaluated before more complicated transition/turbulence models are used. Further, it is concluded that the present work presents a viable and economical method for calculating laminar/transitional/turbulent flows over complex shapes without user interface.

  16. A comparison of radiosity with current methods of sound level prediction in commercial spaces

    NASA Astrophysics Data System (ADS)

    Beamer, C. Walter, IV; Muehleisen, Ralph T.

    2002-11-01

    The ray tracing and image methods (and variations thereof) are widely used for the computation of sound fields in architectural spaces. The ray tracing and image methods are best suited for spaces with mostly specular reflecting surfaces. The radiosity method, a method based on solving a system of energy balance equations, is best applied to spaces with mainly diffusely reflective surfaces. Because very few spaces are either purely specular or purely diffuse, all methods must deal with both types of reflecting surfaces. A comparison of the radiosity method to other methods for the prediction of sound levels in commercial environments is presented. [Work supported by NSF.]
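
    Concretely, the radiosity method reduces to a linear energy-balance system: the energy leaving each surface patch equals its direct source contribution plus the diffusely reflected portion of the energy arriving from all other patches. A small sketch of that solve (the form factors, absorption coefficients, and source terms are made-up placeholders, not values from the paper):

```python
import numpy as np

def solve_radiosity(F, absorption, source):
    """Solve B = E + diag(1 - alpha) @ F @ B for the patch radiosities B.

    F          : (n, n) form-factor matrix (fraction of energy leaving patch j that reaches patch i)
    absorption : (n,) absorption coefficients alpha of each patch
    source     : (n,) direct energy input E to each patch
    """
    n = len(source)
    reflectance = np.diag(1.0 - absorption)
    A = np.eye(n) - reflectance @ F          # energy-balance system matrix
    return np.linalg.solve(A, source)

# Three-patch toy enclosure (all values illustrative only)
F = np.array([[0.0, 0.6, 0.4],
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])
alpha = np.array([0.2, 0.3, 0.1])
E = np.array([1.0, 0.0, 0.0])
print(solve_radiosity(F, alpha, E))
```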

  17. Infrared Spectra and Optical Constants of Astronomical Ices: I. Amorphous and Crystalline Acetylene

    NASA Technical Reports Server (NTRS)

    Hudson, R. L.; Ferrante, R. F.; Moore, M. H.

    2013-01-01

    Here we report recent measurements on acetylene (C2H2) ices at temperatures applicable to the outer Solar System and the interstellar medium. New near- and mid-infrared data, including optical constants (n, k), absorption coefficients (alpha), and absolute band strengths (A), are presented for both amorphous and crystalline phases of C2H2 that exist below 70 K. Comparisons are made to earlier work. Electronic versions of the data are made available, as is a computer routine to use our reported n and k values to simulate the observed IR spectra. Suggestions are given for the use of the data and a comparison to a spectrum of Makemake is made.

  18. Statistical process control based chart for information systems security

    NASA Astrophysics Data System (ADS)

    Khan, Mansoor S.; Cui, Lirong

    2015-07-01

    Intrusion detection systems have a highly significant role in securing computer networks and information systems. To assure the reliability and quality of computer networks and information systems, it is highly desirable to develop techniques that detect intrusions into information systems. We put forward the concept of statistical process control (SPC) in computer networks and information systems intrusions. In this article we propose an exponentially weighted moving average (EWMA) type quality monitoring scheme. Our proposed scheme has only one parameter, which differentiates it from the past versions. We construct the control limits for the proposed scheme and investigate their effectiveness. We provide an industrial example for the sake of clarity for practitioners. We give a comparison of the proposed scheme with existing EWMA schemes and the p chart; finally, we provide some recommendations for future work.
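
    For orientation, the mechanics of a conventional EWMA chart, the smoothed statistic and its time-varying control limits, can be written in a few lines. The sketch below is the textbook scheme, not the authors' one-parameter variant, and the smoothing constant, in-control mean, and simulated alert counts are assumed placeholders:

```python
import numpy as np

def ewma_chart(x, lam=0.2, mu0=0.0, sigma=1.0, L=3.0):
    """Classic EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1} with time-varying control limits."""
    z = np.empty(len(x))
    prev = mu0
    for t, xt in enumerate(x):
        prev = lam * xt + (1.0 - lam) * prev
        z[t] = prev
    t = np.arange(1, len(x) + 1)
    width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    return z, mu0 + width, mu0 - width

# Simulated intrusion-alert counts per hour (placeholder data with a mean shift at t = 30)
rng = np.random.default_rng(0)
counts = np.r_[rng.normal(5, 1, 30), rng.normal(8, 1, 10)]
z, ucl, lcl = ewma_chart(counts, lam=0.2, mu0=5.0, sigma=1.0)
print("first out-of-control sample:", int(np.argmax(z > ucl)))
```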

  19. Norwood with right ventricle-to-pulmonary artery conduit is more effective than Norwood with Blalock-Taussig shunt for hypoplastic left heart syndrome: mathematic modeling of hemodynamics.

    PubMed

    Mroczek, Tomasz; Małota, Zbigniew; Wójcik, Elżbieta; Nawrat, Zbigniew; Skalski, Janusz

    2011-12-01

    The introduction of right ventricle to pulmonary artery (RV-PA) conduit in the Norwood procedure for hypoplastic left heart syndrome resulted in a higher survival rate in many centers. A higher diastolic aortic pressure and a higher mean coronary perfusion pressure were suggested as the hemodynamic advantage of this source of pulmonary blood flow. The main objective of this study was the comparison of two models of Norwood physiology with different types of pulmonary blood flow sources and their hemodynamics. Based on anatomic details obtained from echocardiographic assessment and angiographic studies, two three-dimensional computer models of post-Norwood physiology were developed. The finite-element method was applied for computational hemodynamic simulations. Norwood physiology with RV-PA 5-mm conduit and Blalock-Taussig shunt (BTS) 3.5-mm shunt were compared. Right ventricle work, wall stress, flow velocity, shear rate stress, energy loss and turbulence eddy dissipation were analyzed in both models. The total work of the right ventricle after Norwood procedure with the 5-mm RV-PA conduit was lower in comparison to the 3.5-mm BTS while establishing an identical systemic blood flow. The Qp/Qs ratio was higher in the BTS group. Hemodynamic performance after Norwood with the RV-PA conduit is more effective than after Norwood with BTS. Computer simulations of complicated hemodynamics after the Norwood procedure could be helpful in establishing optimal post-Norwood physiology. Copyright © 2011 European Association for Cardio-Thoracic Surgery. Published by Elsevier B.V. All rights reserved.

  20. Comparison of BRDF-Predicted and Observed Light Curves of GEO Satellites

    DTIC Science & Technology

    2015-10-18

    to validate the BRDF models. 7. ACKNOWLEDGEMENTS This work was partially funded by a Phase II SBIR (FA9453-14-C-029) from the AFRL Space... Bidirectional Reflectance Distribution Function (BRDF) models. These BRDF models have generally come from researchers in computer graphics and machine... characterization, there is a lack of research on the validation of BRDFs with regards to real data. In this paper, we compared telescope data provided by the

  1. Comparison of two methods to determine fan performance curves using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Onma, Patinya; Chantrasmi, Tonkid

    2018-01-01

    This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The generated performance curves from both methods are compared with measurements from an experimental setup built in accordance with the AMCA fan performance testing standard.

  2. MDTS: automatic complex materials design using Monte Carlo tree search.

    PubMed

    M Dieb, Thaer; Ju, Shenghong; Yoshizoe, Kazuki; Hou, Zhufeng; Shiomi, Junichiro; Tsuda, Koji

    2017-01-01

    Complex materials design is often represented as a black-box combinatorial optimization problem. In this paper, we present a novel python library called MDTS (Materials Design using Tree Search). Our algorithm employs a Monte Carlo tree search approach, which has shown exceptional performance in the game of computer Go. Unlike evolutionary algorithms that require user intervention to set parameters appropriately, MDTS has no tuning parameters and works autonomously on a variety of problems. In comparison to a Bayesian optimization package, our algorithm showed competitive search efficiency and superior scalability. We succeeded in designing large Silicon-Germanium (Si-Ge) alloy structures that Bayesian optimization could not deal with due to excessive computational cost. MDTS is available at https://github.com/tsudalab/MDTS.
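
    As a purely hypothetical illustration of the underlying idea (this is not the MDTS API; the site count, scoring function, and all names below are invented), a black-box combinatorial structure search with upper-confidence tree search might be sketched as follows:

```python
import math, random

N = 8                                    # hypothetical number of lattice sites (0 or 1 per site)
def score(structure):                    # black-box "property" to maximize (placeholder objective)
    return -abs(sum(structure) - N // 2) + random.gauss(0, 0.01)

class Node:
    def __init__(self, state):
        self.state, self.children = state, {}
        self.visits, self.value = 0, 0.0

def uct_select(node, c=1.4):
    # pick the child maximizing mean value plus an exploration bonus
    return max(node.children.items(),
               key=lambda kv: kv[1].value / kv[1].visits
               + c * math.sqrt(math.log(node.visits) / kv[1].visits))

def rollout(state):
    # complete the partial structure at random and evaluate the black box
    while len(state) < N:
        state = state + (random.randint(0, 1),)
    return score(state)

def mcts(iterations=3000):
    root = Node(())
    for _ in range(iterations):
        node, path = root, [root]
        # selection: descend while the node is fully expanded and non-terminal
        while len(node.state) < N and len(node.children) == 2:
            _, node = uct_select(node)
            path.append(node)
        # expansion: add one untried assignment
        if len(node.state) < N:
            a = 0 if 0 not in node.children else 1
            node.children[a] = Node(node.state + (a,))
            node = node.children[a]
            path.append(node)
        # simulation + backpropagation
        r = rollout(node.state)
        for n in path:
            n.visits += 1
            n.value += r
    # greedy decode of the most-visited path
    node = root
    while node.children:
        _, node = max(node.children.items(), key=lambda kv: kv[1].visits)
    return node.state

print("best structure (most-visited path):", mcts())
```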

  3. Influence of wall couple stress in MHD flow of a micropolar fluid in a porous medium with energy and concentration transfer

    NASA Astrophysics Data System (ADS)

    Khalid, Asma; Khan, Ilyas; Khan, Arshad; Shafie, Sharidan

    2018-06-01

    The intention here is to investigate the effects of wall couple stress with energy and concentration transfer in magnetohydrodynamic (MHD) flow of a micropolar fluid embedded in a porous medium. The mathematical model contains the set of linear conservation forms of partial differential equations. Laplace transforms and convolution technique are used for computation of exact solutions of velocity, microrotations, temperature and concentration equations. Numerical values of skin friction, couple wall stress, Nusselt and Sherwood numbers are also computed. Characteristics for the significant variables on the physical quantities are graphically discussed. Comparison with previously published work in limiting sense shows an excellent agreement.

  4. MDTS: automatic complex materials design using Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Dieb, Thaer M.; Ju, Shenghong; Yoshizoe, Kazuki; Hou, Zhufeng; Shiomi, Junichiro; Tsuda, Koji

    2017-12-01

    Complex materials design is often represented as a black-box combinatorial optimization problem. In this paper, we present a novel python library called MDTS (Materials Design using Tree Search). Our algorithm employs a Monte Carlo tree search approach, which has shown exceptional performance in the game of computer Go. Unlike evolutionary algorithms that require user intervention to set parameters appropriately, MDTS has no tuning parameters and works autonomously on a variety of problems. In comparison to a Bayesian optimization package, our algorithm showed competitive search efficiency and superior scalability. We succeeded in designing large Silicon-Germanium (Si-Ge) alloy structures that Bayesian optimization could not deal with due to excessive computational cost. MDTS is available at https://github.com/tsudalab/MDTS.

  5. Summary Report of Working Group 2: Computation

    NASA Astrophysics Data System (ADS)

    Stoltz, P. H.; Tsung, R. S.

    2009-01-01

    The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time by orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite element particle-in-cell (PIC) code, a many-order-of-magnitude speedup of, and details of porting, the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one-trillion-particle simulations, a sustained performance of 0.3 petaflops, and an eight-times speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using finite difference and finite element approaches.

  6. Summary Report of Working Group 2: Computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoltz, P. H.; Tsung, R. S.

    2009-01-22

    The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time by orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite element particle-in-cell (PIC) code, a many-order-of-magnitude speedup of, and details of porting, the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one-trillion-particle simulations, a sustained performance of 0.3 petaflops, and an eight-times speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using finite difference and finite element approaches.

  7. Wind turbine design codes: A preliminary comparison of the aerodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buhl, M.L. Jr.; Wright, A.D.; Tangler, J.L.

    1997-12-01

    The National Wind Technology Center of the National Renewable Energy Laboratory is comparing several computer codes used to design and analyze wind turbines. The first part of this comparison is to determine how well the programs predict the aerodynamic behavior of turbines with no structural degrees of freedom. Without general agreement on the aerodynamics, it is futile to try to compare the structural response due to the aerodynamic input. In this paper, the authors compare the aerodynamic loads for three programs: Garrad Hassan's BLADED, their own WT-PERF, and the University of Utah's YawDyn. This report documents a work in progress and compares only two-bladed, downwind turbines.

  8. ARBAN-A new method for analysis of ergonomic effort.

    PubMed

    Holzmann, P

    1982-06-01

    ARBAN is a method for the ergonomic analysis of work, including work situations which involve widely differing body postures and loads. The idea of the method is that all phases of the analysis process that imply specific knowledge on ergonomics are taken over by filming equipment and a computer routine. All tasks that must be carried out by the investigator in the process of analysis are so designed that they appear as evident by the use of systematic common sense. The ARBAN analysis method contains four steps: 1. Recording of the workplace situation on video or film. 2. Coding the posture and load situation at a number of closely spaced 'frozen' situations. 3. Computerisation. 4. Evaluation of the results. The computer calculates figures for the total ergonomic stress on the whole body as well as on different parts of the body separately. They are presented as 'Ergonomic stress/time curves', where the heavy load situations occur as peaks of the curve. The work cycle may also be divided into different tasks, where the stress and duration patterns can be compared. The integrals of the curves are calculated for single-figure comparison of different tasks as well as different work situations.
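
    The single-figure comparison in the final step is simply the area under the ergonomic stress/time curve; a minimal sketch (the stress samples below are invented stand-ins for the coded 'frozen'-frame scores):

```python
import numpy as np

def stress_integral(times, stress):
    """Single-figure ergonomic load: area under the stress/time curve (trapezoidal rule)."""
    return np.trapz(stress, times)

# Hypothetical coded stress scores sampled at 'frozen' frames, 2 s apart, over one minute
t = np.arange(0, 60, 2.0)
lifting_task = 3.0 + 2.0 * np.sin(2 * np.pi * t / 20) ** 2   # peaks during the lifts
seated_task = np.full_like(t, 1.5)                           # steady low load

print("lifting task load:", stress_integral(t, lifting_task))
print("seated task load :", stress_integral(t, seated_task))
```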

  9. Computers and the internet: tools for youth empowerment.

    PubMed

    Valaitis, Ruta K

    2005-10-04

    Youth are often disenfranchised in their communities and may feel they have little voice. Since computers are an important aspect of youth culture, they may offer solutions to increasing youth participation in communities. This qualitative case study investigated the perceptions of 19 (predominantly female) inner-city school youth about their use of computers and the Internet in a school-based community development project. Youth working with public health nurses in a school-based community development project communicated with local community members using computer-mediated communication, surveyed peers online, built websites, searched for information online, and prepared project materials using computers and the Internet. Participant observation, semistructured interviews, analysis of online messages, and online- and paper-based surveys were used to gather data about youth's and adults' perceptions and use of the technologies. Constant comparison method and between-method triangulation were used in the analysis to satisfy the existence of themes. Not all youth were interested in working with computers. Some electronic messages from adults were perceived to be critical, and writing to adults was intimidating for some youth. In addition, technical problems were experienced. Despite these barriers, most youth perceived that using computers and the Internet reduced their anxiety concerning communication with adults, increased their control when dealing with adults, raised their perception of their social status, increased participation within the community, supported reflective thought, increased efficiency, and improved their access to resources. Overall, youth perceived computers and the Internet to be empowering tools, and they should be encouraged to use such technology to support them in community initiatives.

  10. Computers and the Internet: Tools for Youth Empowerment

    PubMed Central

    2005-01-01

    Background Youth are often disenfranchised in their communities and may feel they have little voice. Since computers are an important aspect of youth culture, they may offer solutions to increasing youth participation in communities. Objective This qualitative case study investigated the perceptions of 19 (predominantly female) inner-city school youth about their use of computers and the Internet in a school-based community development project. Methods Youth working with public health nurses in a school-based community development project communicated with local community members using computer-mediated communication, surveyed peers online, built websites, searched for information online, and prepared project materials using computers and the Internet. Participant observation, semistructured interviews, analysis of online messages, and online- and paper-based surveys were used to gather data about youth’s and adults’ perceptions and use of the technologies. Constant comparison method and between-method triangulation were used in the analysis to satisfy the existence of themes. Results Not all youth were interested in working with computers. Some electronic messages from adults were perceived to be critical, and writing to adults was intimidating for some youth. In addition, technical problems were experienced. Despite these barriers, most youth perceived that using computers and the Internet reduced their anxiety concerning communication with adults, increased their control when dealing with adults, raised their perception of their social status, increased participation within the community, supported reflective thought, increased efficiency, and improved their access to resources. Conclusions Overall, youth perceived computers and the Internet to be empowering tools, and they should be encouraged to use such technology to support them in community initiatives. PMID:16403715

  11. A comparison of several computational auditory scene analysis (CASA) techniques for monaural speech segregation.

    PubMed

    Zeremdini, Jihen; Ben Messaoud, Mohamed Anouar; Bouzid, Aicha

    2015-09-01

    Humans have the ability to easily separate composed speech and to form perceptual representations of the constituent sources in an acoustic mixture thanks to their ears. In recent years, researchers have attempted to build computer models of high-level functions of the auditory system. Composed speech segregation remains a very challenging problem for these researchers. In our case, we are interested in approaches that address monaural speech segregation. For this purpose, we study in this paper the computational auditory scene analysis (CASA) approach to segregate speech from monaural mixtures. CASA is the reproduction of the source organization achieved by listeners. It is based on two main stages: segmentation and grouping. In this work, we present and compare several studies that have used CASA for speech separation and recognition.

  12. Unsteady Aero Computation of a 1 1/2 Stage Large Scale Rotating Turbine

    NASA Technical Reports Server (NTRS)

    To, Wai-Ming

    2012-01-01

    This report is the documentation of the work performed for the Subsonic Rotary Wing Project under NASA's Fundamental Aeronautics Program. It was funded through Task Number NNC10E420T under GESS-2 Contract NNC06BA07B in the period of 10/1/2010 to 8/31/2011. The objective of the task is to provide support for the development of variable speed power turbine technology through application of computational fluid dynamics analyses. This includes work elements in mesh generation, multistage URANS simulations, and post-processing of the simulation results for comparison with the experimental data. The unsteady CFD calculations were performed with the TURBO code running in multistage single passage (phase lag) mode. Meshes for the blade rows were generated with the NASA developed TCGRID code. The CFD performance is assessed and improvements are recommended for future research in this area. For that, the United Technologies Research Center's 1 1/2 stage Large Scale Rotating Turbine was selected to be the candidate engine configuration for this computational effort because of the completeness and availability of the data.

  13. Feasibility of MHD submarine propulsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doss, E.D.; Sikes, W.C.

    1992-09-01

    This report describes the work performed during Phase 1 and Phase 2 of the collaborative research program established between Argonne National Laboratory (ANL) and Newport News Shipbuilding and Dry Dock Company (NNS). Phase 1 of the program focused on the development of computer models for Magnetohydrodynamic (MHD) propulsion. Phase 2 focused on the experimental validation of the thruster performance models and the identification, through testing, of any phenomena which may impact the attractiveness of this propulsion system for shipboard applications. The report discusses in detail the work performed in Phase 2 of the program. In Phase 2, a two-Tesla test facility was designed, built, and operated. The facility test loop, its components, and their design are presented. The test matrix and its rationale are discussed. Representative experimental results of the test program are presented, and are compared to computer model predictions. In general, the results of the tests and their comparison with the predictions indicate that the phenomena affecting the performance of MHD seawater thrusters are well understood and can be accurately predicted with the developed thruster computer models.

  14. Comparison of two computer programs by predicting turbulent mixing of helium in a ducted supersonic airstream

    NASA Technical Reports Server (NTRS)

    Pan, Y. S.; Drummond, J. P.; Mcclinton, C. R.

    1978-01-01

    Two parabolic flow computer programs, SHIP (a finite-difference program) and COMOC (a finite-element program), are used for predicting three-dimensional turbulent reacting flow fields in supersonic combustors. The theoretical foundations of the two computer programs are described, and then the programs are applied to a three-dimensional turbulent mixing experiment. The cold (nonreacting) flow experiment was performed to study the mixing of helium jets with a supersonic airstream in a rectangular duct. Surveys of the flow field at an upstream station were used as the initial data by the programs; surveys at a downstream station provided comparison to assess program accuracy. Both computer programs predicted the experimental results and data trends reasonably well. However, the comparison between the computations from the two programs indicated that SHIP was more accurate in computation and more efficient in both computer storage and computing time than COMOC.

  15. Comparison of digital intraoral scanners by single-image capture system and full-color movie system.

    PubMed

    Yamamoto, Meguru; Kataoka, Yu; Manabe, Atsufumi

    2017-01-01

    The use of dental computer-aided design/computer-aided manufacturing (CAD/CAM) restoration is rapidly increasing. This study was performed to evaluate the marginal and internal cement thickness and the adhesive gap of internal cavities comprising CAD/CAM materials using two digital impression acquisition methods and micro-computed tomography. Images obtained by a single-image acquisition system (Bluecam Ver. 4.0) and a full-color video acquisition system (Omnicam Ver. 4.2) were divided into the BL and OM groups, respectively. Silicone impressions were prepared from an ISO-standard metal mold, and CEREC Stone BC and New Fuji Rock IMP were used to create working models (n=20) in the BL and OM groups (n=10 per group), respectively. Individual inlays were designed in a conventional manner using designated software, and all restorations were prepared using CEREC inLab MC XL. These were assembled with the corresponding working models used for measurement, and the level of fit was examined by three-dimensional analysis based on micro-computed tomography. Significant differences in the marginal and internal cement thickness and adhesive gap spacing were found between the OM and BL groups. The full-color movie capture system appears to be a more optimal restoration system than the single-image capture system.

  16. A Comparison of Monte Carlo Tree Search and Rolling Horizon Optimization for Large Scale Dynamic Resource Allocation Problems

    DTIC Science & Technology

    2015-05-01

    decisions on the fly in an online retail environment. Tech. rep., Working Paper, Massachusetts Institute of Technology, Boston, MA. Arneson, Broderick, Ryan... Hayward, Philip Henderson. 2009. MoHex wins Hex tournament. International Computer Games Association Journal 32, 114–116. Arneson, Broderick, Ryan B... Combinatorial Search. Enzenberger, Markus, Martin Muller, Broderick Arneson, Richard Segal. 2010. Fuego—an open-source framework for board games and

  17. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    PubMed

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
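
    For reference, the quantity being estimated, the (log-)determinant of a covariance estimate, can be computed stably via a Cholesky factorization. The sketch below uses a simple linear shrinkage estimate; the shrinkage intensity and simulated data are assumed placeholders and not one of the eight methods compared in the paper:

```python
import numpy as np

def log_det_cov(X, shrinkage=0.1):
    """Log-determinant of a linear shrinkage covariance estimate, computed via Cholesky.

    S_hat = (1 - shrinkage) * S + shrinkage * mean(diag(S)) * I
    """
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    target = np.mean(np.diag(S)) * np.eye(p)
    S_hat = (1.0 - shrinkage) * S + shrinkage * target
    L = np.linalg.cholesky(S_hat)                 # requires S_hat to be positive definite
    return 2.0 * np.sum(np.log(np.diag(L)))

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 60))                     # n = 80 samples, p = 60 dimensions
print("shrinkage log-det:", log_det_cov(X, 0.1))
# With shrinkage = 0 the sample covariance is nearly singular for p close to n,
# and the log-determinant becomes strongly negatively biased.
```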

  18. Elucidation of Heterogeneous Processes Controlling Boost Phase Signatures

    DTIC Science & Technology

    1990-09-12

    three-year research program to develop efficient theoretical methods to study collisional processes involved in radiative signature modeling. The... Marlboro, MD 20772 I. Statement of Problem For strategic defense, it is important to be able to effectively model radiative signatures arising from... Thus our computational work was on problems or models for which exact results for making comparisons were available. Our key validations were

  19. Simulating smokers' acceptance of modifications in a cessation program.

    PubMed Central

    Spoth, R

    1992-01-01

    Recent research has underscored the importance of assessing barriers to smokers' acceptance of cessation programs. This paper illustrates the use of computer simulations to gauge smokers' response to program modifications which may produce barriers to participation. It also highlights methodological issues encountered in conducting this work. Computer simulations were based on conjoint analysis, a consumer research method which enables measurement of smokers' relative preference for various modifications of cessation programs. Results from two studies are presented in this paper. The primary study used a randomly selected sample of 218 adult smokers who participated in a computer-assisted phone interview. Initially, the study assessed smokers' relative utility rating of 30 features of cessation programs. Utility data were used in computer-simulated comparisons of a low-cost, self-help oriented program under development and five other existing programs. A baseline version of the program under development and two modifications (for example, use of a support group with a higher level of cost) were simulated. Both the baseline version and modifications received a favorable response vis-à-vis comparison programs. Modifications requiring higher program costs were, however, associated with moderately reduced levels of favorable consumer response. The second study used a sample of 70 smokers who responded to an expanded set of smoking cessation program features focusing on program packaging. This secondary study incorporated in-person, computer-assisted interviews at a shopping mall, with smokers viewing an artist's mock-up of various program options on display. A similar pattern of responses to simulated program modifications emerged, with monetary cost apparently playing a key role. The significance of conjoint-based computer simulation as a tool in program development or dissemination, salient methodological issues, and implications for further research are discussed. PMID:1738813
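
    Conjoint-based simulation of this kind amounts to summing part-worth utilities for each program profile and tallying respondents' first choices; a toy sketch (the utilities, features, and program profiles below are invented, not the study's data):

```python
import numpy as np

# Hypothetical part-worth utilities per respondent for three program features
# columns: low cost, support group, self-help format
utilities = np.array([[1.2, 0.4, 0.8],
                      [0.3, 1.1, 0.2],
                      [0.9, 0.1, 1.0],
                      [0.5, 0.7, 0.6]])

# Feature profiles of competing programs (1 = feature present)
programs = {"baseline self-help program": np.array([1, 0, 1]),
            "modified program with support group": np.array([0, 1, 1]),
            "existing clinic program": np.array([0, 1, 0])}

# First-choice rule: each respondent "chooses" the profile with the highest total utility
names = list(programs)
totals = np.column_stack([utilities @ programs[name] for name in names])
choices = np.argmax(totals, axis=1)
for i, name in enumerate(names):
    print(f"{name}: simulated share {np.mean(choices == i):.0%}")
```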

  20. Simulating smokers' acceptance of modifications in a cessation program.

    PubMed

    Spoth, R

    1992-01-01

    Recent research has underscored the importance of assessing barriers to smokers' acceptance of cessation programs. This paper illustrates the use of computer simulations to gauge smokers' response to program modifications which may produce barriers to participation. It also highlights methodological issues encountered in conducting this work. Computer simulations were based on conjoint analysis, a consumer research method which enables measurement of smokers' relative preference for various modifications of cessation programs. Results from two studies are presented in this paper. The primary study used a randomly selected sample of 218 adult smokers who participated in a computer-assisted phone interview. Initially, the study assessed smokers' relative utility rating of 30 features of cessation programs. Utility data were used in computer-simulated comparisons of a low-cost, self-help oriented program under development and five other existing programs. A baseline version of the program under development and two modifications (for example, use of a support group with a higher level of cost) were simulated. Both the baseline version and modifications received a favorable response vis-à-vis comparison programs. Modifications requiring higher program costs were, however, associated with moderately reduced levels of favorable consumer response. The second study used a sample of 70 smokers who responded to an expanded set of smoking cessation program features focusing on program packaging. This secondary study incorporated in-person, computer-assisted interviews at a shopping mall, with smokers viewing an artist's mock-up of various program options on display. A similar pattern of responses to simulated program modifications emerged, with monetary cost apparently playing a key role. The significance of conjoint-based computer simulation as a tool in program development or dissemination, salient methodological issues, and implications for further research are discussed.

  1. Numerical solutions of acoustic wave propagation problems using Euler computations

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.

    1984-01-01

    This paper reports solution procedures for problems arising from the study of engine inlet wave propagation. The first problem is the study of sound waves radiated from cylindrical inlets. The second one is a quasi-one-dimensional problem to study the effect of nonlinearities and the third one is the study of nonlinearities in two dimensions. In all three problems Euler computations are done with a fourth-order explicit scheme. For the first problem results are shown in agreement with experimental data and for the second problem comparisons are made with an existing asymptotic theory. The third problem is part of an ongoing work and preliminary results are presented for this case.

  2. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    2013-12-01

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that "spin-neurons" (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of future.

  3. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that “spin-neurons” (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of future.

  4. To address accuracy and precision using methods from analytical chemistry and computational physics.

    PubMed

    Kozmutza, Cornelia; Picó, Yolanda

    2009-04-01

    In this work, the pesticides were determined by liquid chromatography-mass spectrometry (LC-MS). In the present study, the occurrence of imidacloprid in 343 samples of oranges, tangerines, date plum, and watermelons from the Valencian Community (Spain) has been investigated. The nine additional pesticides were chosen as they have been recommended for orchard treatment together with imidacloprid. The Mulliken population analysis has been applied to present the charge distribution in imidacloprid. Partitioned energy terms and the virial ratios have been calculated for certain molecules entering into interaction. A new technique based on the comparison of the decomposed total energy terms at various configurations is demonstrated in this work. The interaction ability could be established correctly in the studied case. An attempt is also made in this work to address accuracy and precision. These quantities are well-known in experimental measurements. If a precise theoretical description is achieved for the contributing monomers and also for the interacting complex structure, some properties of the latter system can be predicted to quite good accuracy. Based on simple hypothetical considerations we estimate the impact of applying computations on reducing the amount of analytical work.

  5. Three dimensional viscous analysis of a hypersonic inlet

    NASA Technical Reports Server (NTRS)

    Reddy, D. R.; Smith, G. E.; Liou, M.-F.; Benson, Thomas J.

    1989-01-01

    The flow fields in supersonic/hypersonic inlets are currently being studied at NASA Lewis Research Center using 2- and 3-D full Navier-Stokes and Parabolized Navier-Stokes solvers. These tools have been used to analyze the flow through the McDonnell Douglas Option 2 inlet which has been tested at Calspan in support of the National Aerospace Plane Program. Comparisons between the computational and experimental results are presented. These comparisons lead to better overall understanding of the complex flows present in this class of inlets. The aspects of the flow field emphasized in this work are the 3-D effects, the transition from laminar to turbulent flow, and the strong nonuniformities generated within the inlet.

  6. How parents can affect excessive spending of time on screen-based activities.

    PubMed

    Brindova, Daniela; Pavelka, Jan; Ševčikova, Anna; Žežula, Ivan; van Dijk, Jitse P; Reijneveld, Sijmen A; Geckova, Andrea Madarasova

    2014-12-12

    The aim of this study is to explore the association between family-related factors and excessive time spent on screen-based activities among school-aged children. A cross-sectional survey using the methodology of the Health Behaviour in School-aged Children study was performed in 2013, with data collected from Slovak (n = 258) and Czech (n = 406) 11- and 15-year-old children. The effects of age, gender, availability of a TV or computer in the bedroom, parental rules on time spent watching TV or working on a computer, parental rules on the content of TV programmes and computer work and watching TV together with parents on excessive time spent with screen-based activities were explored using logistic regression models. Two-thirds of respondents watch TV or play computer games at least two hours a day. Older children have a 1.80-times higher chance of excessive TV watching (CI: 1.30-2.51) and a 3.91-times higher chance of excessive computer use (CI: 2.82-5.43) in comparison with younger children. More than half of children have a TV (53%) and a computer (73%) available in their bedroom, which increases the chance of excessive TV watching by 1.59 times (CI: 1.17-2.16) and of computer use by 2.25 times (CI: 1.59-3.20). More than half of parents rarely or never apply rules on the length of TV watching (64%) or time spent on computer work (56%), and their children have a 1.76-times higher chance of excessive TV watching (CI: 1.26-2.46) and a 1.50-times greater chance of excessive computer use (CI: 1.07-2.08). A quarter of children reported that they are used to watching TV together with their parents every day, and these have a 1.84-times higher chance of excessive TV watching (1.25-2.70). Reducing time spent watching TV by applying parental rules or a parental role model might help prevent excessive time spent on screen-based activities.
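
    The reported chances are odds ratios, i.e. exponentiated logistic-regression coefficients. A minimal sketch of that calculation on a fabricated dataset (variable names, effect sizes, and sample size are assumptions for illustration only):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "older": rng.integers(0, 2, n),              # 1 = 15-year-old, 0 = 11-year-old
    "tv_in_bedroom": rng.integers(0, 2, n),
    "no_parental_rules": rng.integers(0, 2, n),
})
# Fabricated outcome: excessive screen time, with each covariate raising the odds
logit = -0.8 + 0.6 * df.older + 0.5 * df.tv_in_bedroom + 0.6 * df.no_parental_rules
df["excessive"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["older", "tv_in_bedroom", "no_parental_rules"]])
model = sm.Logit(df["excessive"], X).fit(disp=0)
odds_ratios = np.exp(model.params)                         # exponentiated coefficients
ci = np.exp(model.conf_int())                              # 95% confidence intervals
print(pd.concat([odds_ratios.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```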

  7. Deep Learning for Magnetic Resonance Fingerprinting: A New Approach for Predicting Quantitative Parameter Values from Time Series.

    PubMed

    Hoppe, Elisabeth; Körzdörfer, Gregor; Würfl, Tobias; Wetzl, Jens; Lugauer, Felix; Pfeuffer, Josef; Maier, Andreas

    2017-01-01

    The purpose of this work is to evaluate methods from deep learning for application to Magnetic Resonance Fingerprinting (MRF). MRF is a recently proposed measurement technique for generating quantitative parameter maps. In MRF a non-steady state signal is generated by a pseudo-random excitation pattern. A comparison of the measured signal in each voxel with the physical model yields quantitative parameter maps. Currently, the comparison is done by matching a dictionary of simulated signals to the acquired signals. To accelerate the computation of quantitative maps we train a Convolutional Neural Network (CNN) on simulated dictionary data. As a proof of principle we show that the neural network implicitly encodes the dictionary and can replace the matching process.
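
    The dictionary-matching step that the CNN is intended to accelerate is essentially a normalized inner-product search: each measured fingerprint is compared against every simulated signal, and the parameters of the best-matching entry are assigned to the voxel. A toy sketch of that baseline (the dictionary size, signal model, and parameter grid are placeholders):

```python
import numpy as np

def match_fingerprints(signals, dictionary, params):
    """Assign each voxel the parameters of its best-matching dictionary entry.

    signals    : (n_voxels, n_timepoints) measured fingerprints
    dictionary : (n_entries, n_timepoints) simulated signals
    params     : (n_entries, n_params) parameters (e.g. T1, T2) used to simulate each entry
    """
    # Normalize so that the best match is the maximum inner product (cosine similarity)
    s = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    best = np.argmax(np.abs(s @ d.T), axis=1)
    return params[best]

# Placeholder dictionary: exponential decays over a hypothetical T2 grid
t = np.linspace(0, 1, 200)
t2_grid = np.linspace(0.02, 0.3, 500)
dictionary = np.exp(-t[None, :] / t2_grid[:, None])
params = t2_grid[:, None]

measured = np.exp(-t[None, :] / np.array([[0.05], [0.2]])) + 0.01 * np.random.randn(2, 200)
print(match_fingerprints(measured, dictionary, params))
```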

  8. Forensic facial comparison in South Africa: State of the science.

    PubMed

    Steyn, M; Pretorius, M; Briers, N; Bacci, N; Johnson, A; Houlton, T M R

    2018-06-01

    Forensic facial comparison (FFC) is a scientific technique used to link suspects to a crime scene based on the analysis of photos or video recordings from that scene. While basic guidelines on practice and training are provided by the Facial Identification Scientific Working Group, details of how these are applied across the world are scarce. FFC is frequently used in South Africa, with more than 700 comparisons conducted in the last two years alone. In this paper the standards of practice are outlined, with new proposed levels of agreement/conclusions. We outline three levels of training that were established, with training in facial anatomy, terminology, principles of image comparison, image science, facial recognition and computer skills being aimed at developing general competency. Training in generating court charts and understanding court case proceedings are being specifically developed for the South African context. Various shortcomings still exist, specifically with regard to knowledge of the reliability of the technique. These need to be addressed in future research. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Gaze data reveal distinct choice processes underlying model-based and model-free reinforcement learning

    PubMed Central

    Konovalov, Arkady; Krajbich, Ian

    2016-01-01

    Organisms appear to learn and make decisions using different strategies known as model-free and model-based learning; the former is mere reinforcement of previously rewarded actions and the latter is a forward-looking strategy that involves evaluation of action-state transition probabilities. Prior work has used neural data to argue that both model-based and model-free learners implement a value comparison process at trial onset, but model-based learners assign more weight to forward-looking computations. Here using eye-tracking, we report evidence for a different interpretation of prior results: model-based subjects make their choices prior to trial onset. In contrast, model-free subjects tend to ignore model-based aspects of the task and instead seem to treat the decision problem as a simple comparison process between two differentially valued items, consistent with previous work on sequential-sampling models of decision making. These findings illustrate a problem with assuming that experimental subjects make their decisions at the same prescribed time. PMID:27511383

  10. Railroads and the Environment : Estimation of Fuel Consumption in Rail Transportation : Volume 3. Comparison of Computer Simulations with Field Measurements

    DOT National Transportation Integrated Search

    1978-09-01

    This report documents comparisons between extensive rail freight service measurements (previously presented in Volume II) and simulations of the same operations using a sophisticated train performance calculator computer program. The comparisons cove...

  11. Comparison of Computational-Model and Experimental-Example Trained Neural Networks for Processing Speckled Fringe Patterns

    NASA Technical Reports Server (NTRS)

    Decker, A. J.; Fite, E. B.; Thorp, S. A.; Mehmed, O.

    1998-01-01

    The responses of artificial neural networks to experimental and model-generated inputs are compared for detection of damage in twisted fan blades using electronic holography. The training-set inputs, for this work, are experimentally generated characteristic patterns of the vibrating blades. The outputs are damage-flag indicators or second derivatives of the sensitivity-vector-projected displacement vectors from a finite element model. Artificial neural networks have been trained in the past with computational-model-generated training sets. This approach avoids the difficult inverse calculations traditionally used to compare interference fringes with the models. But the high modeling standards are hard to achieve, even with fan-blade finite-element models.

  12. Comparison of Computational, Model and Experimental, Example Trained Neural Networks for Processing Speckled Fringe Patterns

    NASA Technical Reports Server (NTRS)

    Decker, A. J.; Fite, E. B.; Thorp, S. A.; Mehmed, O.

    1998-01-01

    The responses of artificial neural networks to experimental and model-generated inputs are compared for detection of damage in twisted fan blades using electronic holography. The training-set inputs, for this work, are experimentally generated characteristic patterns of the vibrating blades. The outputs are damage-flag indicators or second derivatives of the sensitivity-vector-projected displacement vectors from a finite element model. Artificial neural networks have been trained in the past with computational-model-generated training sets. This approach avoids the difficult inverse calculations traditionally used to compare interference fringes with the models. But the high modeling standards are hard to achieve, even with fan-blade finite-element models.

  13. Backscattering and extinction cross sections of two swimbladdered fishes at the lowest resonance, as modeled by the boundary-element method

    NASA Astrophysics Data System (ADS)

    Foote, Kenneth G.; Francis, David T. I.

    2003-04-01

    The boundary-element method has been applied to backscattering and extinction of sound by swimbladdered fish at the lowest, breathing-mode resonance. Corresponding cross sections have been computed for specimens of two representative kinds of swimbladder-bearing fish, namely physostomes and physoclists, which, respectively, possess and lack an external duct. The respective fishes are herring (Clupea harengus) and pollack (Pollachius pollachius), for which swimbladder morphometric data are available. The depth dependences of the cross sections are computed over the range 0-500 m. Comparisons are made with measurements and other modeled results for a number of species. [Work supported by ONR.]

  14. A comparison between state-specific and linear-response formalisms for the calculation of vertical electronic transition energy in solution with the CCSD-PCM method.

    PubMed

    Caricato, Marco

    2013-07-28

    The calculation of vertical electronic transition energies of molecular systems in solution with accurate quantum mechanical methods requires the use of approximate and yet reliable models to describe the effect of the solvent on the electronic structure of the solute. The polarizable continuum model (PCM) of solvation represents a computationally efficient way to describe this effect, especially when combined with coupled cluster (CC) methods. Two formalisms are available to compute transition energies within the PCM framework: State-Specific (SS) and Linear-Response (LR). The former provides a more complete account of the solute-solvent polarization in the excited states, while the latter is computationally very efficient (i.e., comparable to gas phase) and transition properties are well defined. In this work, I review the theory for the two formalisms within CC theory with a focus on their computational requirements, and present the first implementation of the LR-PCM formalism with the coupled cluster singles and doubles method (CCSD). Transition energies computed with LR- and SS-CCSD-PCM are presented, as well as a comparison between solvation models in the LR approach. The numerical results show that the two formalisms provide different absolute values of transition energy, but similar relative solvatochromic shifts (from nonpolar to polar solvents). The LR formalism may then be used to explore the solvent effect on multiple states and evaluate transition probabilities, while the SS formalism may be used to refine the description of specific states and for the exploration of excited state potential energy surfaces of solvated systems.

  15. Comparison of Turbulent Heat-Transfer Results for Uniform Wall Heat Flux and Uniform Wall Temperature

    NASA Technical Reports Server (NTRS)

    Siegel, R.; Sparrow, E. M.

    1960-01-01

    The purpose of this note is to examine in a more precise way how the Nusselt numbers for turbulent heat transfer in both the fully developed and thermal entrance regions of a circular tube are affected by two different wall boundary conditions. The comparisons are made for: (a) uniform wall temperature (UWT) and (b) uniform wall heat flux (UHF). Several papers which have been concerned with the turbulent thermal entrance region problem are given [1]. Although these analyses have all utilized an eigenvalue formulation for the thermal entrance region, there were differences in the choices of eddy diffusivity expressions, velocity distributions, and methods for carrying out the numerical solutions. These differences were also found in the fully developed analyses. Hence, when making a comparison of the analytical results for uniform wall temperature and uniform wall heat flux, it was not known if differences in the Nusselt numbers could be wholly attributed to the difference in wall boundary conditions, since all the analytical results were not obtained in a consistent way. To have results which could be directly compared, computations were carried out for the uniform wall temperature case, using the same eddy diffusivity, velocity distribution, and digital computer program employed for uniform wall heat flux. In addition, the previous work was extended to a lower Reynolds number range so that comparisons could be made over a wide range of both Reynolds and Prandtl numbers.

  16. Test and evaluation of a multifunction keyboard and a dedicated keyboard for control of a flight management computer

    NASA Technical Reports Server (NTRS)

    Crane, J. M.; Boucek, G. P., Jr.; Smith, W. D.

    1986-01-01

    A flight management computer (FMC) control display unit (CDU) test was conducted to compare two types of input devices: a fixed legend (dedicated) keyboard and a programmable legend (multifunction) keyboard. The task used for comparison was operation of the flight management computer for the Boeing 737-300. The same tasks were performed by twelve pilots on the FMC control display unit configured with a programmable legend keyboard and with the currently used B737-300 dedicated keyboard. Flight simulator work activity levels and input task complexity were varied during each pilot session. Half of the pilots tested were previously familiar with the B737-300 dedicated keyboard CDU and half had no prior experience with it. The data collected included simulator flight parameters, keystroke time and sequences, and pilot questionnaire responses. A timeline analysis was also used for evaluation of the two keyboard concepts.

  17. Substantial Fast-Wave Power Flux in the SOL of a Cylindrical Model; Comparison with Coaxial Modes

    NASA Astrophysics Data System (ADS)

    Perkins, R. J.; Bertelli, N.; Hosea, J. C.; Phillips, C. K.; Taylor, G.; Wilson, J. R.

    2015-11-01

    The NSTX high-harmonic fast-wave (HHFW) heating system can lose a significant amount of power along magnetic field lines in the SOL to the divertor regions under certain conditions. A cylindrical cold-plasma model, with parameters resembling those of NSTX, shows the existence of modes with relatively large RF field amplitudes in the low-density annulus, similar to recent results found with the full-wave simulation AORSA. Here, we compare and contrast these modes against "coaxial modes," modes that resemble TEM modes found in coaxial cables. We also compute the 3D Poynting flux as a function of length along the cylinder for comparison to NSTX. Such work is part of an effort to include the proper edge damping into full-wave codes so that they can reproduce the losses observed in NSTX and predict their importance for ITER. This work was supported by DOE Contract No. DE-AC02-09CH11466.

  18. CFD Modeling of Free-Piston Stirling Engines

    NASA Technical Reports Server (NTRS)

    Ibrahim, Mounir B.; Zhang, Zhi-Guo; Tew, Roy C., Jr.; Gedeon, David; Simon, Terrence W.

    2001-01-01

    NASA Glenn Research Center (GRC) is funding Cleveland State University (CSU) to develop a reliable Computational Fluid Dynamics (CFD) code that can predict engine performance with the goal of significant improvements in accuracy when compared to one-dimensional (1-D) design code predictions. The funding also includes conducting code validation experiments at both the University of Minnesota (UMN) and CSU. In this paper a brief description of the work-in-progress is provided in the two areas (CFD and Experiments). Also, previous test results are compared with computational data obtained using (1) a 2-D CFD code obtained from Dr. Georg Scheuerer and further developed at CSU and (2) a multidimensional commercial code CFD-ACE+. The test data and computational results are for (1) a gas spring and (2) a single piston/cylinder with attached annular heat exchanger. The comparisons among the codes are discussed. The paper also discusses plans for conducting code validation experiments at CSU and UMN.

  19. Multigrid optimal mass transport for image registration and morphing

    NASA Astrophysics Data System (ADS)

    Rehman, Tauseef ur; Tannenbaum, Allen

    2007-02-01

    In this paper we present a computationally efficient Optimal Mass Transport algorithm. This method is based on the Monge-Kantorovich theory and is used for computing elastic registration and warping maps in image registration and morphing applications. This is a parameter-free method which utilizes all of the grayscale data in an image pair in a symmetric fashion. No landmarks need to be specified for correspondence. In our work, we demonstrate significant improvement in computation time when our algorithm is applied as compared to the originally proposed method by Haker et al. [1]. The original algorithm was based on a gradient descent method for removing the curl from an initial mass-preserving map regarded as a 2D vector field. This involves inverting the Laplacian in each iteration, which is now computed using a full multigrid technique, resulting in an improvement in computational time by a factor of two. Greater improvement is achieved by decimating the curl in a multi-resolutional framework. The algorithm was applied to 2D short-axis cardiac MRI images and brain MRI images for testing and comparison.
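
    The cost of the original approach is dominated by the Laplacian inversion performed at every curl-removal iteration, which the authors replace with a full multigrid cycle. As a hedged illustration of that inner step only, a spectral Poisson solve for periodic boundaries (an assumption; the multigrid solver and the surrounding mass-transport iteration are not shown) can be written as:

```python
import numpy as np

def poisson_periodic(f, h=1.0):
    """Solve laplacian(u) = f on a periodic grid via FFT (f should have zero mean)."""
    ny, nx = f.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=h)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=h)
    KX, KY = np.meshgrid(kx, ky)
    denom = -(KX ** 2 + KY ** 2)
    denom[0, 0] = 1.0                       # avoid division by zero; mean mode zeroed below
    u_hat = np.fft.fft2(f) / denom
    u_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(u_hat))

# Quick self-check with a known solution u = sin(x) * cos(2y), so laplacian(u) = -5u
n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x)
u_true = np.sin(X) * np.cos(2 * Y)
u = poisson_periodic(-5.0 * u_true, h=2 * np.pi / n)
print("max error:", np.max(np.abs(u - u_true)))
```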

  20. INTERNATIONAL COMPARISON EXERCISE ON NEUTRON SPECTRA UNFOLDING IN BONNER SPHERES SPECTROMETRY: PROBLEM DESCRIPTION AND PRELIMINARY ANALYSIS.

    PubMed

    Gómez-Ros, J M; Bedogni, R; Domingo, C; Eakins, J S; Roberts, N; Tanner, R J

    2018-01-29

    This article describes the purpose, the proposed problems and the reference solutions of an international comparison on neutron spectra unfolding in Bonner spheres spectrometry, organised within the activities of EURADOS working group 6: computational dosimetry. The exercise considered four realistic situations: a medical accelerator, a workplace field, an irradiation room and a skyshine scenario. Although a detailed analysis of the submitted solutions is under preparation, the preliminary discussion of some physical aspects of the problem, e.g. the changes in the unfolding results due to the perturbation of the neutron field by the Bonner spheres, is presented.

  1. A comparative analysis of support vector machines and extreme learning machines.

    PubMed

    Liu, Xueyi; Gao, Chuanhou; Li, Ping

    2012-09-01

    The theory of extreme learning machines (ELMs) has recently become increasingly popular. As a new learning algorithm for single-hidden-layer feed-forward neural networks, an ELM offers the advantages of low computational cost, good generalization ability, and ease of implementation. Hence the comparison and model selection between ELMs and other kinds of state-of-the-art machine learning approaches has become significant and has attracted many research efforts. This paper performs a comparative analysis of the basic ELMs and support vector machines (SVMs) from two viewpoints that are different from previous works: one is the Vapnik-Chervonenkis (VC) dimension, and the other is their performance under different training sample sizes. It is shown that the VC dimension of an ELM is equal to the number of hidden nodes of the ELM with probability one. Additionally, their generalization ability and computational complexity are exhibited with changing training sample size. ELMs have weaker generalization ability than SVMs for small samples but can generalize as well as SVMs for large samples. Remarkably, great superiority in computational speed, especially for large-scale sample problems, is found in ELMs. The results obtained can provide insight into the essential relationship between them, and can also serve as complementary knowledge for their past experimental and theoretical comparisons.
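
    As context for the comparison, a basic single-hidden-layer ELM is simple to state: the hidden-layer weights are drawn at random and only the output weights are fitted by least squares, which is the source of its low training cost. The sketch below is a minimal NumPy illustration with made-up data; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy regression data (illustrative only)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

n_hidden = 50                                    # number of hidden nodes
W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
b = rng.standard_normal(n_hidden)                # random biases

H = np.tanh(X @ W + b)                           # hidden-layer output matrix
beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights via least squares

y_hat = np.tanh(X @ W + b) @ beta                # predictions on the training data
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```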

  2. Flow in curved ducts of varying cross-section

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, F.; Patel, V. C.

    1992-07-01

    Two numerical methods for solving the incompressible Navier-Stokes equations are compared with each other by applying them to calculate laminar and turbulent flows through curved ducts of regular cross-section. Detailed comparisons, between the computed solutions and experimental data, are carried out in order to validate the two methods and to identify their relative merits and disadvantages. Based on the conclusions of this comparative study a numerical method is developed for simulating viscous flows through curved ducts of varying cross-sections. The proposed method is capable of simulating the near-wall turbulence using fine computational meshes across the sublayer in conjunction with a two-layer k-epsilon model. Numerical solutions are obtained for: (1) a straight transition duct geometry, and (2) a hydroturbine draft-tube configuration at model scale Reynolds number for various inlet swirl intensities. The report also provides a detailed literature survey that summarizes all the experimental and computational work in the area of duct flows.

  3. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing

    PubMed Central

    Dondo Gazzano, Julio; Sanchez Molina, Francisco; Rincon, Fernando; López, Juan Carlos

    2015-01-01

    FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors that they are able to achieve, the reduced power consumption, and the ease and flexibility of the design process with fast iterations between consecutive versions are examples of benefits obtained with their use. However, there are still some difficulties when using reconfigurable platforms as accelerators that need to be addressed: the need for an in-depth application study to identify potential acceleration, the lack of tools for the deployment of computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. Besides, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process. PMID:25874241

  4. Data Quality Assessment of In Situ and Altimeter Observations Through Two-Way Intercomparison Methods

    NASA Astrophysics Data System (ADS)

    Guinehut, Stephanie; Valladeau, Guillaume; Legeais, Jean-Francois; Rio, Marie-Helene; Ablain, Michael; Larnicol, Gilles

    2013-09-01

    This paper presents an overview of the two-way inter-comparison activities performed at CLS for both space and in situ observation agencies and why this activity is a required step to obtain accurate and homogeneous data sets that can then be used together for climate studies or in assimilation/validation tools. We first describe the work performed in the frame of the SALP program to assess the stability of altimeter missions through SSH comparisons with tide gauges (GLOSS/CLIVAR network). Then, we show how the SSH comparison between the Argo array and altimeter time series allows the detection of drifts or jumps in altimeters (SALP program) but also for some Argo floats (Ifremer/Coriolis center). Lastly, we describe how the combined use of altimeter and wind observations helps the detection of drogue loss of surface drifting buoys (GDP network) and allows the computation of a correction term for wind slippage.
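
    As a schematic of the kind of drift detection described here, the sketch below fits a linear trend to a time series of altimeter-minus-in-situ SSH differences and flags a possible drift when the slope exceeds a threshold; the variable names, threshold, and synthetic data are assumptions, not the CLS processing chain.

```python
import numpy as np

def detect_drift(t_years, ssh_diff_m, threshold_mm_per_yr=1.0):
    """Fit a linear trend to altimeter-minus-in-situ SSH differences (metres)
    and flag a possible instrumental drift if it exceeds the threshold."""
    slope_m_per_yr, _ = np.polyfit(t_years, ssh_diff_m, 1)
    slope_mm_per_yr = 1000.0 * slope_m_per_yr
    return slope_mm_per_yr, abs(slope_mm_per_yr) > threshold_mm_per_yr

# synthetic example: 5 years of weekly differences with a 2 mm/yr drift
t = np.arange(0, 5, 1 / 52)
diff = 0.002 * t + 0.01 * np.random.default_rng(1).standard_normal(t.size)
print(detect_drift(t, diff))
```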

  5. Euler equation computations for the flow over a hovering helicopter rotor

    NASA Technical Reports Server (NTRS)

    Roberts, Thomas Wesley

    1988-01-01

    A numerical solution technique is developed for computing the flow field around an isolated helicopter rotor in hover. The flow is governed by the compressible Euler equations which are integrated using a finite volume approach. The Euler equations are coupled to a free wake model of the rotary wing vortical wake. This wake model is incorporated into the finite volume solver using a prescribed flow, or perturbation, technique which eliminates the numerical diffusion of vorticity due to the artificial viscosity of the scheme. The work is divided into three major parts: (1) comparisons of Euler solutions to experimental data for the flow around isolated wings show good agreement with the surface pressures, but poor agreement with the vortical wake structure; (2) the perturbation method is developed and used to compute the interaction of a streamwise vortex with a semispan wing. The rapid diffusion of the vortex when only the basic Euler solver is used is illustrated, and excellent agreement with experimental section lift coefficients is demonstrated when using the perturbation approach; and (3) the free wake solution technique is described and the coupling of the wake to the Euler solver for an isolated rotor is presented. Comparisons with experimental blade load data for several cases show good agreement, with discrepancies largely attributable to the neglect of viscous effects. The computed wake geometries agree less well with experiment, the primary difference being that too rapid a wake contraction is predicted for all the cases.

  6. Three-dimensional computational fluid dynamics modelling and experimental validation of the Jülich Mark-F solid oxide fuel cell stack

    NASA Astrophysics Data System (ADS)

    Nishida, R. T.; Beale, S. B.; Pharoah, J. G.; de Haart, L. G. J.; Blum, L.

    2018-01-01

    This work is among the first where the results of an extensive experimental research programme are compared to performance calculations of a comprehensive computational fluid dynamics model for a solid oxide fuel cell stack. The model, which combines electrochemical reactions with momentum, heat, and mass transport, is used to obtain results for an established industrial-scale fuel cell stack design with complex manifolds. To validate the model, comparisons with experimentally gathered voltage and temperature data are made for the Jülich Mark-F, 18-cell stack operating in a test furnace. Good agreement is obtained between the model and experiment results for cell voltages and temperature distributions, confirming the validity of the computational methodology for stack design. The transient effects during ramp up of current in the experiment may explain a lower average voltage than model predictions for the power curve.

  7. Computer Based Collaborative Problem Solving for Introductory Courses in Physics

    NASA Astrophysics Data System (ADS)

    Ilie, Carolina; Lee, Kevin

    2010-03-01

    We discuss computer-based collaborative problem solving in a recitation-style setting. The course is designed by Lee [1], and the idea was proposed before by Christian, Belloni and Titus [2,3]. The students find the problems on a web page containing simulations (physlets) and they write the solutions on an accompanying worksheet after discussing them with a classmate. Physlets have the advantage of being much more like real-world problems than textbook problems. We also compare two protocols for web-based instruction using simulations in an introductory physics class [1]. The inquiry protocol allowed students to control input parameters while the worked example protocol did not. We will discuss which of the two methods is more efficient in relation to Scientific Discovery Learning and Cognitive Load Theory. 1. Lee, Kevin M., Nicoll, Gayle and Brooks, Dave W. (2004). ``A Comparison of Inquiry and Worked Example Web-Based Instruction Using Physlets'', Journal of Science Education and Technology 13, No. 1: 81-88. 2. Christian, W., and Belloni, M. (2001). Physlets: Teaching Physics With Interactive Curricular Material, Prentice Hall, Englewood Cliffs, NJ. 3. Christian, W., and Titus, A. (1998). ``Developing web-based curricula using Java Physlets.'' Computers in Physics 12: 227-232.

  8. Implementation of a Pseudo-Bending Seismic Travel-Time Calculator in a Distributed Parallel Computing Environment

    DTIC Science & Technology

    2008-09-01

    Algorithms that have been proposed to accomplish it fall into three broad categories. Eikonal solvers (e.g., Vidale, 1988, 1990; Podvin and Lecomte, 1991) ... difference eikonal solvers, the FMM algorithm works by following a wavefront as it moves across a volume of grid points, updating the travel times in the grid according to the eikonal differential equation, using a second-order finite-difference scheme. We chose to use FMM for our comparison because ...
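
    For orientation, the core of a fast-marching travel-time calculator is a local update that solves the discretized eikonal equation |grad T| = slowness from the smallest upwind neighbour times. The sketch below shows a first-order version of that single-node update (the report cited above uses a second-order scheme); the grid spacing and neighbour values are illustrative assumptions.

```python
import math

def eikonal_update(t_x, t_y, slowness, h):
    """First-order fast-marching update for one grid node in 2D.

    t_x, t_y : smallest upwind travel times of the x- and y-neighbours
    slowness : 1/velocity at the node,  h : grid spacing
    Solves (T - t_x)^2 + (T - t_y)^2 = (h * slowness)^2 when both neighbours
    contribute, otherwise falls back to the one-dimensional update.
    """
    a, b, f = t_x, t_y, h * slowness
    if abs(a - b) >= f:                 # causality: only one neighbour contributes
        return min(a, b) + f
    # quadratic solve: 2T^2 - 2(a+b)T + a^2 + b^2 - f^2 = 0, take the larger root
    disc = 2 * f * f - (a - b) ** 2
    return 0.5 * (a + b + math.sqrt(disc))

# example: neighbours at 1.0 s and 1.2 s, slowness 1 s/km, 0.5 km grid spacing
print(eikonal_update(1.0, 1.2, 1.0, 0.5))
```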

  9. System identification using Nuclear Norm & Tabu Search optimization

    NASA Astrophysics Data System (ADS)

    Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.

    2018-01-01

    In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors’ knowledge, as of now, there has been no work reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual - measured - output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. TS algorithm based SI is compared with the iterative Alternating Direction Method of Multipliers (ADMM) line search optimization based NN-SI. For comparison, several different benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
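
    To make the objects in this formulation concrete, the sketch below builds a Hankel matrix from an output record and evaluates the nuclear norm (the sum of singular values) that the identification cost penalizes; the data and Hankel depth are illustrative, and the sketch omits the ADMM or Tabu Search minimization itself.

```python
import numpy as np

def block_hankel(signal, depth):
    """Stack a 1D signal into a depth x (N - depth + 1) Hankel matrix."""
    n_cols = signal.size - depth + 1
    return np.array([signal[i:i + n_cols] for i in range(depth)])

def nuclear_norm(matrix):
    """Sum of singular values, the convex surrogate for matrix rank."""
    return np.linalg.svd(matrix, compute_uv=False).sum()

# illustrative output record of a second-order discrete-time system
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(2, 200):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.5 * u[k - 1]

H = block_hankel(y, depth=10)
print(H.shape, "nuclear norm:", round(nuclear_norm(H), 2))
```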

  10. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software for solving large-scale acoustic problems arising from the unified framework of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective processor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetrical and unsymmetrical) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.

  11. Extraction and analysis of the image in the sight field of comparison goniometer to measure IR mirrors assembly

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-shan; Zhao, Yue-jin; Li, Zhuo; Dong, Liquan; Chu, Xuhong; Li, Ping

    2010-11-01

    The comparison goniometer is widely used to measure and inspect small angles, angle differences, and the parallelism of two surfaces. However, a comparison goniometer is commonly read by the operator inspecting its ocular with one eye, and reading an old goniometer equipped with only one adjustable ocular is difficult. In the fabrication of an IR reflecting mirrors assembly, a common comparison goniometer is used to measure the angle errors between two neighboring assembled mirrors. In this paper, a quick image-based reading technique for the comparison goniometer used to inspect the parallelism of mirrors in a mirrors assembly is proposed. A digital camera, a comparison goniometer, and a computer are used to construct a reading system; the image of the sight field in the comparison goniometer is extracted and recognized to obtain the angle positions of the reflection surfaces to be measured. In order to obtain the interval distance between the scale lines, a particular technique, the left-peak-first method, based on the local peak values of intensity in the true-color image, is proposed. A program written in VC++6.0 has been developed to perform the color digital image processing.
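
    A rough sketch of the kind of scale-line localisation described here is given below: an intensity profile across the scale image is searched for local peaks, the left-most peak is taken as the reference, and the spacings to the following peaks give the scale interval. The thresholds and the synthetic profile are assumptions; the paper's exact left-peak-first procedure is not reproduced.

```python
import numpy as np
from scipy.signal import find_peaks

def scale_line_positions(profile, min_height=0.5, min_distance=5):
    """Locate scale-line candidates in a 1D intensity profile and return
    the left-most peak plus the pixel spacings to the following peaks."""
    peaks, _ = find_peaks(profile, height=min_height, distance=min_distance)
    if peaks.size == 0:
        return None, np.array([])
    left_peak = peaks[0]                 # "left peak first"
    return left_peak, np.diff(peaks)     # intervals between scale lines

# synthetic profile with Gaussian scale lines every 20 pixels
x = np.arange(200)
profile = np.exp(-0.5 * ((x[:, None] - np.arange(10, 200, 20)) / 1.5) ** 2).sum(axis=1)
print(scale_line_positions(profile))
```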

  12. Results of comparative RBMK neutron computation using VNIIEF codes (cell computation, 3D statics, 3D kinetics). Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.

    1995-12-31

    In conformity with the protocol of the Workshop under Contract "Assessment of RBMK reactor safety using modern Western Codes", VNIIEF performed a neutronics computation series to compare western and VNIIEF codes and assess whether VNIIEF codes are suitable for RBMK type reactor safety assessment computation. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS, EKRAN codes (improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), covering cell, polycell, and burnup computation; (2) 3D computation of static states with the KORAT-3D and NEU codes and comparison with results of computation with the NESTLE code (USA); the computations were performed in the geometry and using the neutron constants presented by the American party; (3) 3D computation of neutron kinetics with the KORAT-3D and NEU codes. These computations were performed in two formulations, both being developed in collaboration with NIKIET. The formulation of the first problem agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through a core. The second problem is a model of the RBMK as a whole with imitation of control and protection system (CPS) control movement in a core.

  13. Soft drink effects on sensorimotor rhythm brain computer interface performance and resting-state spectral power.

    PubMed

    Mundahl, John; Jianjun Meng; He, Jeffrey; Bin He

    2016-08-01

    Brain-computer interface (BCI) systems allow users to directly control computers and other machines by modulating their brain waves. In the present study, we investigated the effect of soft drinks on resting state (RS) EEG signals and BCI control. Eight healthy human volunteers each participated in three sessions of BCI cursor tasks and resting state EEG. During each session, the subjects drank an unlabeled soft drink with either sugar, caffeine, or neither ingredient. A comparison of resting state spectral power shows a substantial decrease in alpha and beta power after caffeine consumption relative to control. Despite attenuation of the frequency range used for the control signal, average BCI performance after caffeine consumption was the same as control. Our work provides a useful characterization of the effect of caffeine, the world's most popular stimulant, on brain signal frequencies and on BCI performance.
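
    As a concrete illustration of the resting-state spectral comparison, the sketch below estimates alpha (8-13 Hz) and beta (13-30 Hz) band power from a single EEG channel with Welch's method; the sampling rate, band edges, and synthetic signal are assumptions rather than the study's processing pipeline.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, f_lo, f_hi):
    """Integrate the Welch power spectral density between f_lo and f_hi (Hz)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.trapz(psd[mask], freqs[mask])

# synthetic 60 s EEG-like channel sampled at 250 Hz with a 10 Hz alpha rhythm
fs = 250
t = np.arange(0, 60, 1 / fs)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.default_rng(2).standard_normal(t.size)

print("alpha power:", band_power(eeg, fs, 8, 13))
print("beta power: ", band_power(eeg, fs, 13, 30))
```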

  14. Navier-Stokes simulation of external/internal transonic flow on the forebody/inlet of the AV-8B Harrier II

    NASA Technical Reports Server (NTRS)

    Mysko, Stephen J.; Chyu, Wei J.; Stortz, Michael W.; Chow, Chuen-Yen

    1993-01-01

    In this work, the computation of combined external/internal transonic flow on the complex forebody/inlet configuration of the AV-8B Harrier II is performed. The actual aircraft has been measured, and its surface and surrounding domain, in which the fuselage and inlet have a common wall, have been described using structured grids. The 'thin-layer' Navier-Stokes equations were used to model the flow along with the Chimera embedded multi-block technique. A fully conservative, alternating direction implicit (ADI), approximately factored, partially flux-split algorithm was employed to perform the computation. Comparisons to some experimental wind tunnel data yielded good agreement for flow at zero incidence and angle of attack. The aim of this paper is to provide a methodology or computational tool for the numerical solution of complex external/internal flows.

  15. A Comparison of FPGA and GPGPU Designs for Bayesian Occupancy Filters.

    PubMed

    Medina, Luis; Diez-Ochoa, Miguel; Correal, Raul; Cuenca-Asensi, Sergio; Serrano, Alejandro; Godoy, Jorge; Martínez-Álvarez, Antonio; Villagra, Jorge

    2017-11-11

    Grid-based perception techniques in the automotive sector based on fusing information from different sensors and their robust perceptions of the environment are proliferating in the industry. However, one of the main drawbacks of these techniques is the traditionally prohibitive, high computing performance that is required for embedded automotive systems. In this work, the capabilities of new computing architectures that embed these algorithms are assessed in a real car. The paper compares two ad hoc optimized designs of the Bayesian Occupancy Filter; one for General Purpose Graphics Processing Unit (GPGPU) and the other for Field-Programmable Gate Array (FPGA). The resulting implementations are compared in terms of development effort, accuracy and performance, using datasets from a realistic simulator and from a real automated vehicle.
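
    For reference, the basic operation that both implementations accelerate is a per-cell Bayesian update of occupancy probabilities from sensor evidence. The sketch below shows a conventional log-odds occupancy-grid update in NumPy; it is a generic illustration of grid-based Bayesian filtering, not the Bayesian Occupancy Filter variant benchmarked in the paper (which also tracks cell velocities).

```python
import numpy as np

def log_odds(p):
    return np.log(p / (1.0 - p))

def update_grid(grid_log_odds, meas_prob):
    """Fuse one frame of per-cell occupancy evidence into the grid (log-odds form)."""
    return grid_log_odds + log_odds(meas_prob)

def to_probability(grid_log_odds):
    return 1.0 - 1.0 / (1.0 + np.exp(grid_log_odds))

# 4 x 4 grid, initially unknown (p = 0.5 -> log-odds 0)
grid = np.zeros((4, 4))
measurement = np.full((4, 4), 0.4)      # mostly "free" evidence
measurement[1, 2] = 0.9                 # one cell observed as occupied
grid = update_grid(grid, measurement)
print(np.round(to_probability(grid), 2))
```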

  16. Measurements and Computations of Flow in an Urban Street System

    NASA Astrophysics Data System (ADS)

    Castro, Ian P.; Xie, Zheng-Tong; Fuka, V.; Robins, Alan G.; Carpentieri, M.; Hayden, P.; Hertwig, D.; Coceal, O.

    2017-02-01

    We present results from laboratory and computational experiments on the turbulent flow over an array of rectangular blocks modelling a typical, asymmetric urban canopy at various orientations to the approach flow. The work forms part of a larger study on dispersion within such arrays (project DIPLOS) and concentrates on the nature of the mean flow and turbulence fields within the canopy region, recognising that unless the flow field is adequately represented in computational models there is no reason to expect realistic simulations of the nature of the dispersion of pollutants emitted within the canopy. Comparisons between the experimental data and those obtained from both large-eddy simulation (LES) and direct numerical simulation (DNS) are shown and it is concluded that careful use of LES can produce generally excellent agreement with laboratory and DNS results, lending further confidence in the use of LES for such situations. Various crucial issues are discussed and advice offered to both experimentalists and those seeking to compute canopy flows with turbulence resolving models.

  17. Final Technical Report for Department of Energy award number DE-FG02-06ER54882, Revised

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eggleston, Dennis L.

    The research reported here involves studies of radial particle transport in a cylindrical, low-density Malmberg-Penning non-neutral plasma trap. The research is primarily experimental but involves careful comparisons to analytical theory and includes the results of a single-particle computer code. The transport is produced by applied electric fields that break the cylindrical symmetry of the trap, hence the term ``asymmetry-induced transport.'' Our computer studies have revealed the importance of a previously ignored class of particles that become trapped in the asymmetry potential. In many common situations these particles exhibit large radial excursions and dominate the radial transport. On the experimental side, we have developed new data analysis techniques that allowed us to determine the magnetic field dependence of the transport and to place empirical constraints on the form of the transport equation. Experiments designed to test the computer code results gave varying degrees of agreement, with further work being necessary to understand the results. This work expands our knowledge of the varied mechanisms of cross-magnetic-field transport and should be of use to other workers studying plasma confinement.

  18. Extension of a Kinetic Approach to Chemical Reactions to Electronic Energy Levels and Reactions Involving Charged Species with Application to DSMC Simulations

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2014-01-01

    The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. Recently introduced molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties are extended in the current work to include electronic energy level transitions and reactions involving charged particles. These extensions are shown to agree favorably with reported transition and reaction rates from the literature for near-equilibrium conditions. Also, the extensions are applied to the second flight of the Project FIRE flight experiment at 1634 seconds with a Knudsen number of 0.001 at an altitude of 76.4 km. In order to accomplish this, NASA's direct simulation Monte Carlo code DAC was rewritten to include the ability to simulate charge-neutral ionized flows, take advantage of the recently introduced chemistry model, and to include the extensions presented in this work. The 1634 second data point was chosen for comparisons to be made in order to include a CFD solution. The Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid because, although near-transitional, the flow is still considered to be continuum. It is shown that the inclusion of electronic energy levels in the DSMC simulation is necessary for flows of this nature and is required for comparison to the CFD solution. The flow field solutions are also post-processed by the nonequilibrium radiation code HARA to compute the radiative portion.

  19. Air Intakes for High Speed Vehicles (Prises d’Air pour Vehicules a Grande Vitesse)

    DTIC Science & Technology

    1991-09-01

    The Working Group wishes to express its sincere thanks to those contributors ... a number of test cases for which rather detailed experimental data were available ... larger group. 3.3.6.3 CFD TECHNIQUES: This test case was attempted by six different research groups, using seven different codes, as noted ... Figs. 3.6.2 and 3.6.3 show a comparison of the computed and experimental static pressure distributions on the ramp and cowl of the intake. Experimental data is shown ...

  20. Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo

    2017-10-01

    Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser-plasma interaction. Being computationally intensive, these codes require large scale HPC systems and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts in deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed evaluation of code performance in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.

  1. Role of Soft Computing Approaches in HealthCare Domain: A Mini Review.

    PubMed

    Gambhir, Shalini; Malik, Sanjay Kumar; Kumar, Yugal

    2016-12-01

    In the present era, soft computing approaches play a vital role in solving different kinds of problems and provide promising solutions. Due to their popularity, soft computing approaches have also been applied to healthcare data for effectively diagnosing diseases and obtaining better results in comparison to traditional approaches. Soft computing approaches have the ability to adapt themselves to the problem domain. Another aspect is a good balance between the exploration and exploitation processes. These characteristics make soft computing approaches powerful, reliable, and efficient, and therefore suitable and competent for health care data. The first objective of this review paper is to identify the various soft computing approaches which are used for diagnosing and predicting diseases. The second objective is to identify the various diseases to which these approaches are applied. The third objective is to categorize the soft computing approaches for clinical support systems. In the literature, it is found that a large number of soft computing approaches have been applied for effectively diagnosing and predicting diseases from healthcare data, including particle swarm optimization, genetic algorithms, artificial neural networks, and support vector machines. A detailed discussion of these approaches is presented in the literature section. This work summarizes various soft computing approaches used in the healthcare domain in the last decade. These approaches are categorized into five categories based on methodology: classification-model-based systems, expert systems, fuzzy and neuro-fuzzy systems, rule-based systems, and case-based systems. Many techniques are discussed in the above-mentioned categories, and all discussed techniques are also summarized in tables. This work also considers the accuracy rates of the soft computing techniques, and tabular information is provided for each category including author details, technique, disease, and utility/accuracy.

  2. On the utility of threads for data parallel programming

    NASA Technical Reports Server (NTRS)

    Fahringer, Thomas; Haines, Matthew; Mehrotra, Piyush

    1995-01-01

    Threads provide a useful programming model for asynchronous behavior because of their ability to encapsulate units of work that can then be scheduled for execution at runtime, based on the dynamic state of a system. Recently, the threaded model has been applied to the domain of data parallel scientific codes, and initial reports indicate that the threaded model can produce performance gains over non-threaded approaches, primarily through the use of overlapping useful computation with communication latency. However, overlapping computation with communication is possible without the benefit of threads if the communication system supports asynchronous primitives, and this comparison has not been made in previous papers. This paper provides a critical look at the utility of lightweight threads as applied to data parallel scientific programming.
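
    The comparison suggested here, overlapping computation with communication through asynchronous message-passing primitives rather than threads, can be sketched as below with mpi4py; the array sizes and neighbour ranks are illustrative, and the snippet assumes it is launched under mpirun with at least two ranks.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.full(1_000_000, float(rank))
halo = np.empty(1_000_000)
partner = (rank + 1) % size

# post a non-blocking exchange with the neighbouring rank ...
send_req = comm.Isend(local, dest=partner, tag=0)
recv_req = comm.Irecv(halo, source=(rank - 1) % size, tag=0)

# ... and overlap useful interior computation with the communication
interior_sum = np.sum(local ** 2)

MPI.Request.Waitall([send_req, recv_req])   # complete the exchange before using halo
total = interior_sum + halo[0]
if rank == 0:
    print("overlapped result on rank 0:", total)
```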

  3. Computer modelling of the optical behaviour of rare earth dopants in BaY2F8

    NASA Astrophysics Data System (ADS)

    Jackson, R. A.; Valerio, M. E. G.; Couto Dos Santos, M. A.; Amaral, J. B.

    2005-01-01

    BaY2F8, when doped with rare earth elements is a material of interest in the development of solid-state laser systems, especially for use in the infrared region. This paper presents the application of a new computational technique, which combines atomistic modelling and crystal field calculations in a study of rare earth doping of the material. Atomistic modelling is used to calculate the symmetry and detailed geometry of the dopant ion-host lattice system, and this information is then used to calculate the crystal field parameters, which are an important indicator in assessing the optical behaviour of the dopant-crystal system. Comparisons with the results of recent experimental work on this material are made.

  4. Direct Numerical Simulation of Liquid Nozzle Spray with Comparison to Shadowgraphy and X-Ray Computed Tomography Experimental Results

    NASA Astrophysics Data System (ADS)

    van Poppel, Bret; Owkes, Mark; Nelson, Thomas; Lee, Zachary; Sowell, Tyler; Benson, Michael; Vasquez Guzman, Pablo; Fahrig, Rebecca; Eaton, John; Kurman, Matthew; Kweon, Chol-Bum; Bravo, Luis

    2014-11-01

    In this work, we present high-fidelity Computational Fluid Dynamics (CFD) results of liquid fuel injection from a pressure-swirl atomizer and compare the simulations to experimental results obtained using both shadowgraphy and phase-averaged X-ray computed tomography (CT) scans. The CFD and experimental results focus on the dense near-nozzle region to identify the dominant mechanisms of breakup during primary atomization. Simulations are performed using the NGA code of Desjardins et al (JCP 227 (2008)) and employ the volume of fluid (VOF) method proposed by Owkes and Desjardins (JCP 270 (2013)), a second order accurate, un-split, conservative, three-dimensional VOF scheme providing second order density fluxes and capable of robust and accurate high density ratio simulations. Qualitative features and quantitative statistics are assessed and compared for the simulation and experimental results, including the onset of atomization, spray cone angle, and drop size and distribution.

  5. Experimental investigation and numerical modelling of 3D radial compressor stage and influence of the technological holes on the working characteristics

    NASA Astrophysics Data System (ADS)

    Matas, Richard; Syka, Tomáš; Hurda, Lukáš

    2018-06-01

    The article deals with a description of results from research and development of a radial compressor stage with 3D rotor blades. The experimental facility and the measurement and evaluation process is described briefly in the first part. The comparison of measured and computed characteristics can be found in the second part. The last part of this contribution is the evaluation of the rotor blades technological holes influence on the compressor stage characteristics.

  6. Supercomputer modelling of an electronic structure for KCl nanocrystal with edge dislocation with the use of semiempirical and nonempirical models

    NASA Astrophysics Data System (ADS)

    Timoshenko, Yu K.; Shunina, V. A.; Shashkin, A. I.

    2018-03-01

    In the present work we used semiempirical and non-empirical models of the electronic states of a KCl nanocrystal containing an edge dislocation and compared the results obtained. Electronic levels and local densities of states were calculated. We found a reasonable qualitative correlation between the semiempirical and non-empirical results. Using the results of the computer modelling, we discuss the problem of the localization of electronic states near the line of the edge dislocation.

  7. Feasibility of MHD submarine propulsion. Phase II, MHD propulsion: Testing in a two Tesla test facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doss, E.D.; Sikes, W.C.

    1992-09-01

    This report describes the work performed during Phase 1 and Phase 2 of the collaborative research program established between Argonne National Laboratory (ANL) and Newport News Shipbuilding and Dry Dock Company (NNS). Phase 1 of the program focused on the development of computer models for Magnetohydrodynamic (MHD) propulsion. Phase 2 focused on the experimental validation of the thruster performance models and the identification, through testing, of any phenomena which may impact the attractiveness of this propulsion system for shipboard applications. The report discusses in detail the work performed in Phase 2 of the program. In Phase 2, a two Tesla test facility was designed, built, and operated. The facility test loop, its components, and their design are presented. The test matrix and its rationale are discussed. Representative experimental results of the test program are presented and are compared to computer model predictions. In general, the results of the tests and their comparison with the predictions indicate that the phenomena affecting the performance of MHD seawater thrusters are well understood and can be accurately predicted with the developed thruster computer models.

  8. Acoustical qualification of Teatro Nuovo in Spoleto before refurbishing works

    NASA Astrophysics Data System (ADS)

    Cocchi, Alessandro; Cesare Consumi, Marco; Shimokura, Ryota

    2004-05-01

    To qualify the acoustical quality of an opera house two different approaches are now available: one is based on responses of qualified listeners (subjective judgments) compared with objective values of selected parameters, the other on comparison tests conducted in suited rooms and on a model of the auditory brain system (preference). On the occasion of the refurbishment of an opera house known for the Two Worlds Festival founded by the Italian composer G. C. Menotti, a large number of measurements were taken with different techniques, so it is possible to compare the different methods and also to compare the results with some geometrical criteria based on the simplest rules of musical harmony, now neglected as attention has shifted to computer simulations, computer-aided measurement techniques, and similar modern methods. From this work some links between well-known acoustical parameters (not known at the time when architects sketched the shape of ancient opera houses) and geometrical criteria (well known at the time when ancient opera houses were built) will be shown.

  9. A verification of the gyrokinetic microstability codes GEM, GYRO, and GS2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bravenec, R. V.; Chen, Y.; Wan, W.

    2013-10-15

    A previous publication [R. V. Bravenec et al., Phys. Plasmas 18, 122505 (2011)] presented favorable comparisons of linear frequencies and nonlinear fluxes from the Eulerian gyrokinetic codes GYRO [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] and GS2 [W. Dorland et al., Phys. Rev. Lett. 85, 5579 (2000)]. The motivation was to verify the codes, i.e., demonstrate that they correctly solve the gyrokinetic-Maxwell equations. The premise was that it is highly unlikely for both codes to yield the same incorrect results. In this work, we add the Lagrangian particle-in-cell code GEM [Y. Chen and S. Parker, J. Comput. Phys. 220, 839 (2007)] to the comparisons, not simply to add another code, but also to demonstrate that the codes' algorithms do not matter. We find good agreement of GEM with GYRO and GS2 for the plasma conditions considered earlier, thus establishing confidence that the codes are verified and that ongoing validation efforts for these plasma parameters are warranted.

  10. A Computational and Experimental Investigation of a Delta Wing with Vertical Tails

    NASA Technical Reports Server (NTRS)

    Krist, Sherrie L.; Washburn, Anthony E.; Visser, Kenneth D.

    2004-01-01

    The flow over an aspect ratio 1 delta wing with twin vertical tails is studied in a combined computational and experimental investigation. This research is conducted in an effort to understand the vortex and fin interaction process. The computational algorithm used solves both the thin-layer Navier-Stokes and the inviscid Euler equations and utilizes a chimera grid-overlapping technique. The results are compared with data obtained from a detailed experimental investigation. The laminar case presented is for an angle of attack of 20 deg and a Reynolds number of 500,000. Good agreement is observed for the physics of the flow field, as evidenced by comparisons of computational pressure contours with experimental flow-visualization images, as well as by comparisons of vortex-core trajectories. While comparisons of the vorticity magnitudes indicate that the computations underpredict the magnitude in the wing primary-vortex-core region, grid embedding improves the computational prediction.

  11. A computational and experimental investigation of a delta wing with vertical tails

    NASA Technical Reports Server (NTRS)

    Krist, Sherrie L.; Washburn, Anthony E.; Visser, Kenneth D.

    1993-01-01

    The flow over an aspect ratio 1 delta wing with twin vertical tails is studied in a combined computational and experimental investigation. This research is conducted in an effort to understand the vortex and fin interaction process. The computational algorithm used solves both the thin-layer Navier-Stokes and the inviscid Euler equations and utilizes a chimera grid-overlapping technique. The results are compared with data obtained from a detailed experimental investigation. The laminar case presented is for an angle of attack of 20 deg and a Reynolds number of 500,000. Good agreement is observed for the physics of the flow field, as evidenced by comparisons of computational pressure contours with experimental flow-visualization images, as well as by comparisons of vortex-core trajectories. While comparisons of the vorticity magnitudes indicate that the computations underpredict the magnitude in the wing primary-vortex-core region, grid embedding improves the computational prediction.

  12. Experimental Comparison of Two Quantum Computing Architectures

    DTIC Science & Technology

    2017-03-28

    Inaugural Article, Computer Sciences: Experimental comparison of two quantum computing architectures. Norbert M. Linke, Dmitri ... the vast computing power a universal quantum computer could offer, several candidate systems are being explored. They have allowed experimental ... existing systems and the role of architecture in quantum computer design. These will be crucial for the realization of more advanced future incarna ...

  13. Estimating direct, diffuse, and global solar radiation for various cities in Iran by two methods and their comparison with the measured data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashjaee, M.; Roomina, M.R.; Ghafouri-Azar, R.

    1993-05-01

    Two computational methods for calculating hourly, daily, and monthly average values of direct, diffuse, and global solar radiation on horizontal collectors are presented in this article for locations with different latitudes, altitudes, and atmospheric conditions in Iran. These methods were developed using two different independent sets of measured data from the Iranian Meteorological Organization (IMO) for two cities in Iran (Tehran and Isfahan), covering 14 years of measurements for Tehran and 4 years for Isfahan. Comparison of the calculated monthly average global solar radiation, using the two models for Tehran and Isfahan, with measured data from the IMO has indicated good agreement between them. The developed methods were then extended to another location (the city of Bandar-Abbas), where measured data are not available but for which the work of Daneshyar predicts the monthly global radiation. A maximum discrepancy of 7% between the developed models and the work of Daneshyar was observed.
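
    The abstract does not give the models' equations, but a common starting point for such correlations is the Angström-Prescott form, in which monthly average global radiation is estimated from the sunshine fraction; the sketch below uses generic textbook coefficients, not those fitted from the Tehran or Isfahan data.

```python
def monthly_global_radiation(h0_extraterrestrial, sunshine_hours, day_length_hours,
                             a=0.25, b=0.50):
    """Angstrom-Prescott estimate of monthly average daily global radiation.

    H / H0 = a + b * (n / N), with H0 the extraterrestrial radiation,
    n the measured bright-sunshine hours and N the maximum possible day length.
    The coefficients a and b are site-specific; 0.25 / 0.50 are generic defaults.
    """
    return h0_extraterrestrial * (a + b * sunshine_hours / day_length_hours)

# illustrative month: H0 = 35 MJ/m^2/day, 9 h of sunshine out of a 13 h day
print(monthly_global_radiation(35.0, 9.0, 13.0), "MJ/m^2/day")
```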

  14. Computation of Cosmic Ray Ionization and Dose at Mars: a Comparison of HZETRN and Planetocosmics for Proton and Alpha Particles

    NASA Technical Reports Server (NTRS)

    Gronoff, Guillaume; Norman, Ryan B.; Mertens, Christopher J.

    2014-01-01

    The ability to evaluate the cosmic ray environment at Mars is of interest for future manned exploration. To support exploration, tools must be developed to accurately assess the radiation environment in both free space and on planetary surfaces. The primary tool NASA uses to quantify radiation exposure behind shielding materials is the space radiation transport code, HZETRN. In order to build confidence in HZETRN, code benchmarking against Monte Carlo radiation transport codes is often used. This work compares the dose calculations at Mars by HZETRN and the Geant4 application Planetocosmics. The dose at ground level and the energy deposited in the atmosphere by galactic cosmic ray protons and alpha particles have been calculated for the Curiosity landing conditions. In addition, this work has considered Solar Energetic Particle events, allowing for the comparison of varying input radiation environments. The results for protons and alpha particles show very good agreement between HZETRN and Planetocosmics.

  15. Integrated Computational System for Aerodynamic Steering and Visualization

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus

    1999-01-01

    In February of 1994, an effort involving the Fluid Dynamics and Information Sciences Divisions at NASA Ames Research Center, McDonnell Douglas Aerospace Company, and Stanford University was initiated to develop, demonstrate, validate and disseminate automated software for numerical aerodynamic simulation. The goal of the initiative was to develop a tri-discipline approach encompassing CFD, Intelligent Systems, and Automated Flow Feature Recognition to improve the utility of CFD in the design cycle. This approach would then be represented through an intelligent computational system which could accept an engineer's definition of a problem and construct an optimal and reliable CFD solution. Stanford University's role focused on developing technologies that advance visualization capabilities for analysis of CFD data, extract specific flow features useful for the design process, and compare CFD data with experimental data. During the years 1995-1997, Stanford University focused on developing techniques in the area of tensor visualization and flow feature extraction. Software libraries were created enabling feature extraction and exploration of tensor fields. As a proof of concept, a prototype system called the Integrated Computational System (ICS) was developed to demonstrate the CFD design cycle. The current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching and vector alignment are not needed in the comparison; this is often a problem with many data comparison techniques. In addition, since only topology-based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will (1) briefly review the technologies developed during 1995-1997, (2) describe current technologies in the area of comparison techniques, (3) describe the theory of our new method researched during the grant year, (4) summarize a few of the results, and finally (5) discuss work within the last 6 months that is a direct extension of the grant.

  16. Impacts of Mobile Computing on Student Learning in the University: A Comparison of Course Assessment Data

    ERIC Educational Resources Information Center

    Hawkes, Mark; Hategekimana, Claver

    2010-01-01

    This study focuses on the impact of wireless, mobile computing tools on student assessment outcomes. In a campus-wide wireless, mobile computing environment at an upper Midwest university, an empirical analysis is applied to understand the relationship between student performance and Tablet PC use. An experimental/control group comparison of…

  17. Formal Solutions for Polarized Radiative Transfer. II. High-order Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janett, Gioele; Steiner, Oskar; Belluzzi, Luca, E-mail: gioele.janett@irsol.ch

    When integrating the radiative transfer equation for polarized light, the necessity of high-order numerical methods is well known. In fact, well-performing high-order formal solvers enable higher accuracy and the use of coarser spatial grids. Aiming to provide a clear comparison between formal solvers, this work presents different high-order numerical schemes and applies the systematic analysis proposed by Janett et al., emphasizing their advantages and drawbacks in terms of order of accuracy, stability, and computational cost.

  18. Work-in-Progress Presented at the Army Symposium on Solid Mechanics, 1980 - Designing for Extremes: Environment, Loading, and Structural Behavior Held at Cape Cod, Massachusetts, 29 September-2 October 1980

    DTIC Science & Technology

    1980-09-01

    relating x’and y’ Figure 2: Basic Laboratory Simulation Model 73 COMPARISON OF COMPUTED AND MEASURED ACCELERATIONS IN A DYNAMICALLY LOADED TACTICAL...Survival (General) Displacements Mines (Ordnance) Telemeter Systems Dynamic Response Models Temperatures Dynamics Moisture Thermal Stresses Energy...probabilistic reliability model for the XM 753 projectile rocket motor to bulkhead joint under extreme loading conditions is constructed. The reliability

  19. Computational gestalts and perception thresholds.

    PubMed

    Desolneux, Agnès; Moisan, Lionel; Morel, Jean-Michel

    2003-01-01

    In 1923, Max Wertheimer proposed a research programme and method in visual perception. He conjectured the existence of a small set of geometric grouping laws governing the perceptual synthesis of phenomenal objects, or "gestalt" from the atomic retina input. In this paper, we review this set of geometric grouping laws, using the works of Metzger, Kanizsa and their schools. In continuation, we explain why the Gestalt theory research programme can be translated into a Computer Vision programme. This translation is not straightforward, since Gestalt theory never addressed two fundamental matters: image sampling and image information measurements. Using these advances, we shall show that gestalt grouping laws can be translated into quantitative laws allowing the automatic computation of gestalts in digital images. From the psychophysical viewpoint, a main issue is raised: the computer vision gestalt detection methods deliver predictable perception thresholds. Thus, we are set in a position where we can build artificial images and check whether some kind of agreement can be found between the computationally predicted thresholds and the psychophysical ones. We describe and discuss two preliminary sets of experiments, where we compared the gestalt detection performance of several subjects with the predictable detection curve. In our opinion, the results of this experimental comparison support the idea of a much more systematic interaction between computational predictions in Computer Vision and psychophysical experiments.
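
    The detection thresholds referred to here come from the a contrario framework of the same authors, in which a candidate group is kept when its expected number of accidental occurrences (number of false alarms, NFA) under a background model falls below a small constant. The sketch below evaluates that criterion for an aligned-segment-style test using a binomial tail; the number of tests and the probabilities are illustrative assumptions.

```python
from scipy.stats import binom

def nfa(n_tests, n_points, n_aligned, p_alignment):
    """Number of false alarms: expected count of groups at least this strong
    arising by accident under the background (noise) model."""
    return n_tests * binom.sf(n_aligned - 1, n_points, p_alignment)

def is_meaningful(n_tests, n_points, n_aligned, p_alignment, epsilon=1.0):
    """A gestalt is epsilon-meaningful when its NFA is below epsilon."""
    return nfa(n_tests, n_points, n_aligned, p_alignment) < epsilon

# illustrative: 1e6 candidate segments, 40 of 50 points aligned, alignment prob 1/16
print(nfa(1e6, 50, 40, 1 / 16))
print(is_meaningful(1e6, 50, 40, 1 / 16))
```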

  20. Factors Affecting Career Choice: Comparison Between Students from Computer and Other Disciplines

    NASA Astrophysics Data System (ADS)

    Alexander, P. M.; Holmner, M.; Lotriet, H. H.; Matthee, M. C.; Pieterse, H. V.; Naidoo, S.; Twinomurinzi, H.; Jordaan, D.

    2011-06-01

    The number of student enrolments in computer-related courses remains a serious concern worldwide with far reaching consequences. This paper reports on an extensive survey about career choice and associated motivational factors amongst new students, only some of whom intend to major in computer-related courses, at two South African universities. The data were analyzed using some components of Social Cognitive Career Theory, namely external influences, self-efficacy beliefs and outcome expectations. The research suggests the need for new strategies for marketing computer-related courses and the avenues through which they are marketed. This can to some extent be achieved by studying strategies used by other (non-computer) university courses, and their professional bodies. However, there are also distinct differences, related to self-efficacy and career outcomes, between the computer majors and the `other' group and these need to be explored further in order to find strategies that work well for this group. It is not entirely clear what the underlying reasons are for these differences but it is noteworthy that the perceived importance of "Interest in the career field" when choosing a career remains very high for both groups of students.

  1. Numerical solution of the incompressible Navier-Stokes equations. Ph.D. Thesis - Stanford Univ., Mar. 1989

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.

    1990-01-01

    The current work is initiated in an effort to obtain an efficient, accurate, and robust algorithm for the numerical solution of the incompressible Navier-Stokes equations in two- and three-dimensional generalized curvilinear coordinates for both steady-state and time-dependent flow problems. This is accomplished with the use of the method of artificial compressibility and a high-order flux-difference splitting technique for the differencing of the convective terms. Time accuracy is obtained in the numerical solutions by subiterating the equations in pseudo-time for each physical time step. The system of equations is solved with a line-relaxation scheme which allows the use of very large pseudo-time steps, leading to fast convergence for steady-state problems as well as for the subiterations of time-dependent problems. Numerous laminar test flow problems are computed and presented with a comparison against analytically known solutions or experimental results. These include the flow in a driven cavity, the flow over a backward-facing step, the steady and unsteady flow over a circular cylinder, flow over an oscillating plate, flow through a one-dimensional inviscid channel with oscillating back pressure, the steady-state flow through a square duct with a 90 degree bend, and the flow through an artificial heart configuration with moving boundaries. An adequate comparison with the analytical or experimental results is obtained in all cases. Numerical comparisons of the upwind differencing with central differencing plus artificial dissipation indicate that the upwind differencing provides a much more robust algorithm, which requires significantly less computing time. The time-dependent problems require on the order of 10 to 20 subiterations, indicating that the elliptical nature of the problem does require a substantial amount of computing effort.
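
    For readers unfamiliar with the method of artificial compressibility mentioned here, the continuity equation is augmented with a pseudo-time pressure derivative so that the incompressible system becomes hyperbolic in pseudo-time and can be marched with the subiterations described above; a standard statement of the modified equations (in generic notation, an assumption rather than the thesis's exact form) is:

```latex
\begin{aligned}
\frac{\partial p}{\partial \tau} + \beta\,\frac{\partial u_j}{\partial x_j} &= 0,\\
\frac{\partial u_i}{\partial \tau} + \frac{\partial u_i}{\partial t}
  + \frac{\partial (u_i u_j)}{\partial x_j}
  &= -\frac{\partial p}{\partial x_i}
   + \nu\,\frac{\partial^2 u_i}{\partial x_j \partial x_j},
\end{aligned}
```

    where tau is pseudo-time and beta is the artificial compressibility parameter; the divergence-free constraint is recovered as the pseudo-time derivative of pressure vanishes at convergence of the subiterations within each physical time step.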

  2. Empirical potential influence and effect of temperature on the mechanical properties of pristine and defective hexagonal boron nitride

    NASA Astrophysics Data System (ADS)

    Thomas, Siby; Ajith, K. M.; Valsakumar, M. C.

    2017-06-01

    The major objective of this work is to present results of a classical molecular dynamics study investigating the effect of changing the cut-off distance in the empirical potential on the stress-strain relation and on the temperature-dependent Young's modulus of pristine and defective hexagonal boron nitride. As the temperature increases, the computed Young's modulus shows a significant decrease along both the armchair and zigzag directions. The computed Young's modulus shows a trend in keeping with the structural anisotropy of h-BN. The variation of Young's modulus with system size is elucidated. The observed mechanical strength of h-BN is significantly affected by vacancy and Stone-Wales type defects. The computed room temperature Young's modulus of pristine h-BN is 755 GPa and 769 GPa along the armchair and zigzag directions, respectively. The decrease of Young's modulus with increasing temperature has been analyzed, and the results show that the system with a zigzag edge has a higher Young's modulus than that with an armchair edge. As the temperature increases, the computed stiffness decreases, and the system with a zigzag edge possesses a higher stiffness than the armchair counterpart; this behaviour is consistent with the variation of Young's modulus. The defect analysis shows that the presence of vacancy type defects leads to a higher Young's modulus, over the studied range of defect concentrations, in comparison with Stone-Wales defects. The variations in the peak position of the computed radial distribution function reveal the changes in the structural features of systems with zigzag and armchair edges in the presence of applied stress.
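
    As a concrete note on how a Young's modulus is typically extracted from such MD simulations, the slope of the linear (small-strain) portion of the stress-strain curve is fitted; the sketch below does this with a simple least-squares fit over an assumed elastic range, with synthetic data standing in for the MD output.

```python
import numpy as np

def youngs_modulus(strain, stress_gpa, elastic_limit=0.02):
    """Fit the initial linear region of a stress-strain curve (stress in GPa)."""
    mask = strain <= elastic_limit
    slope, _ = np.polyfit(strain[mask], stress_gpa[mask], 1)
    return slope          # GPa, since strain is dimensionless

# synthetic curve with an elastic slope of roughly 760 GPa and mild softening
strain = np.linspace(0.0, 0.1, 101)
stress = 760.0 * strain - 400.0 * strain ** 2
print(round(youngs_modulus(strain, stress), 1), "GPa")
```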

  3. Enhanced Flexibility of the O2 + N2 Interaction and Its Effect on Collisional Vibrational Energy Exchange.

    PubMed

    Garcia, E; Laganà, A; Pirani, F; Bartolomei, M; Cacciatore, M; Kurnosov, A

    2016-07-14

    Prompted by a comparison of measured and computed rate coefficients of Vibration-to-Vibration and Vibration-to-Translation energy transfer in O2 + N2 non-reactive collisions, extended semiclassical calculations of the related cross sections were performed to rationalize the role played by attractive and repulsive components of the interaction on two different potential energy surfaces. By exploiting the distributed concurrent scheme of the Grid Empowered Molecular Simulator we extended the computational work to quasiclassical techniques, investigated in this way more in detail the underlying microscopic mechanisms, singled out the interaction components facilitating the energy transfer, improved the formulation of the potential, and performed additional calculations that confirmed the effectiveness of the improvement introduced.

  4. Calculation of inviscid flow over shuttle-like vehicles at high angles of attack and comparisons with experimental data

    NASA Technical Reports Server (NTRS)

    Weilmuenster, K. J.; Hamilton, H. H., II

    1983-01-01

    A computer code, HALIS, designed to compute the three-dimensional flow about shuttle-like configurations at angles of attack greater than 25 deg, is described. Results from HALIS are compared where possible with an existing flow field code; such comparisons show excellent agreement. Also, HALIS results are compared with experimental pressure distributions on shuttle models over a wide range of angles of attack; these comparisons are also excellent. It is demonstrated that the HALIS code can incorporate equilibrium air chemistry in flow field computations.

  5. A comparison of computer-generated lift and drag polars for a Wortmann airfoil to flight and wind tunnel results

    NASA Technical Reports Server (NTRS)

    Bowers, A. H.; Sandlin, D. R.

    1984-01-01

    Computations of drag polars for a low-speed Wortmann sailplane airfoil are compared to both wind tunnel and flight results. Excellent correlation is shown to exist between computations and flight results except when separated flow regimes were encountered. Wind tunnel transition locations are shown to agree with computed predictions. Smoothness of the input coordinates to the PROFILE airfoil analysis computer program was found to be essential to obtain accurate comparisons of drag polars or transition location to either the flight or wind tunnel results.

  6. Comparison of different approaches of modelling in a masonry building

    NASA Astrophysics Data System (ADS)

    Saba, M.; Meloni, D.

    2017-12-01

    The present work has the objective of modelling a simple masonry building through two different modelling methods in order to assess their validity in terms of the evaluation of static stresses. Two of the most widely used commercial software packages for this kind of problem were chosen: 3Muri by S.T.A. Data S.r.l. and Sismicad12 by Concrete S.r.l. While the 3Muri software adopts the Frame by Macro Elements method (FME), which should be more schematic and more efficient, the Sismicad12 software uses the Finite Element Method (FEM), which guarantees accurate results at a greater computational burden. Remarkable differences in the static stresses between the two approaches have been found for such a simple structure, and an interesting comparison and analysis of the reasons is proposed.

  7. Comparison of modeled and experimental PV array temperature profiles for accurate interpretation of module performance and degradation

    NASA Astrophysics Data System (ADS)

    Elwood, Teri; Simmons-Potter, Kelly

    2017-08-01

    Quantification of the effect of temperature on photovoltaic (PV) module efficiency is vital to the correct interpretation of PV module performance under varied environmental conditions. However, previous work has demonstrated that PV module arrays in the field are subject to significant location-based temperature variations associated with, for example, local heating/cooling and array edge effects. Such thermal non-uniformity can potentially lead to under-prediction or over-prediction of PV array performance due to an incorrect interpretation of individual module temperature de-rating. In the current work, a simulation-based method for modeling the thermal profile of an extended PV array has been investigated through extensive computational modeling utilizing ANSYS, a high-performance computational fluid dynamics (CFD) software tool. Using the local wind speed as an input, simulations were run to determine the velocity at particular points along module strings corresponding to the locations of temperature sensors along strings in the field. The point velocities were used along with laminar flow theory to calculate the Nusselt number at each point. These calculations produced a heat flux profile which, when combined with local thermal and solar radiation profiles, was used as input to an ANSYS Thermal Transient model that generated a string operating temperature profile. A comparison of the data collected during field testing and the data generated by the ANSYS simulations will be discussed in order to validate the accuracy of the model.
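
    The step described above, converting point velocities into a convective boundary condition through the Nusselt number, can be sketched with the classical laminar flat-plate correlation Nu = 0.664 Re^(1/2) Pr^(1/3). The correlation choice, air properties, and module length below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Illustrative air properties near ambient temperature (assumed values).
nu = 1.6e-5       # kinematic viscosity, m^2/s
k = 0.026         # thermal conductivity, W/(m K)
Pr = 0.71         # Prandtl number
L = 1.0           # characteristic module length, m (assumption)

def convective_coefficient(point_velocity):
    """Laminar flat-plate correlation: Nu = 0.664 * Re**0.5 * Pr**(1/3)."""
    Re = point_velocity * L / nu
    Nu = 0.664 * np.sqrt(Re) * Pr ** (1.0 / 3.0)
    return Nu * k / L          # convective coefficient h, W/(m^2 K)

# Point velocities sampled along a module string (placeholder values, m/s).
velocities = np.array([0.8, 1.2, 2.5, 3.1])
print(convective_coefficient(velocities))
```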

  8. Probability distributions of molecular observables computed from Markov models. II. Uncertainties in observables and their time-evolution

    NASA Astrophysics Data System (ADS)

    Chodera, John D.; Noé, Frank

    2010-09-01

    Discrete-state Markov (or master equation) models provide a useful simplified representation for characterizing the long-time statistical evolution of biomolecules in a manner that allows direct comparison with experiments as well as the elucidation of mechanistic pathways for an inherently stochastic process. A vital part of meaningful comparison with experiment is the characterization of the statistical uncertainty in the predicted experimental measurement, which may take the form of an equilibrium measurement of some spectroscopic signal, the time-evolution of this signal following a perturbation, or the observation of some statistic (such as the correlation function) of the equilibrium dynamics of a single molecule. Without meaningful error bars (which arise from both approximation and statistical error), there is no way to determine whether the deviations between model and experiment are statistically meaningful. Previous work has demonstrated that a Bayesian method that enforces microscopic reversibility can be used to characterize the statistical component of correlated uncertainties in state-to-state transition probabilities (and functions thereof) for a model inferred from molecular simulation data. Here, we extend this approach to include the uncertainty in observables that are functions of molecular conformation (such as surrogate spectroscopic signals) characterizing each state, permitting the full statistical uncertainty in computed spectroscopic experiments to be assessed. We test the approach in a simple model system to demonstrate that the computed uncertainties provide a useful indicator of statistical variation, and then apply it to the computation of the fluorescence autocorrelation function measured for a dye-labeled peptide previously studied by both experiment and simulation.

  9. Experimental and analytical study of close-coupled ventral nozzles for ASTOVL aircraft

    NASA Technical Reports Server (NTRS)

    Mcardle, Jack G.; Smith, C. Frederic

    1990-01-01

    Flow in a generic ventral nozzle system was studied experimentally and analytically with a block version of the PARC3D computational fluid dynamics program (a full Navier-Stokes equation solver) in order to evaluate the program's ability to predict system performance and internal flow patterns. For the experimental work a one-third-size model tailpipe with a single large rectangular ventral nozzle mounted normal to the tailpipe axis was tested with unheated air at steady-state pressure ratios up to 4.0. The end of the tailpipe was closed to simulate a blocked exhaust nozzle. Measurements showed about 5 1/2 percent flow-turning loss, reasonable nozzle performance coefficients, and a significant aftward axial component of thrust due to the flow turning more than 90 deg. Flow behavior into and through the ventral duct is discussed and illustrated with paint streak flow visualization photographs. For the analytical work the same ventral system configuration was modeled with two computational grids to evaluate the effect of grid density. Both grids gave good results. The finer-grid solution produced more detailed flow patterns and predicted performance parameters, such as thrust and discharge coefficient, within 1 percent of the measured values. PARC3D flow visualization images are shown for comparison with the paint streak photographs. Modeling and computational issues encountered in the analytical work are discussed.

  10. A Comparison of FPGA and GPGPU Designs for Bayesian Occupancy Filters

    PubMed Central

    Medina, Luis; Diez-Ochoa, Miguel; Correal, Raul; Cuenca-Asensi, Sergio; Godoy, Jorge; Martínez-Álvarez, Antonio

    2017-01-01

    Grid-based perception techniques in the automotive sector, based on fusing information from different sensors to obtain a robust perception of the environment, are proliferating in the industry. However, one of the main drawbacks of these techniques is the high computing performance they require, which has traditionally been prohibitive for embedded automotive systems. In this work, the capabilities of new computing architectures that embed these algorithms are assessed in a real car. The paper compares two ad hoc optimized designs of the Bayesian Occupancy Filter; one for General Purpose Graphics Processing Unit (GPGPU) and the other for Field-Programmable Gate Array (FPGA). The resulting implementations are compared in terms of development effort, accuracy and performance, using datasets from a realistic simulator and from a real automated vehicle. PMID:29137137

  11. Comparison of tests of accommodation for computer users.

    PubMed

    Kolker, David; Hutchinson, Robert; Nilsen, Erik

    2002-04-01

    With the increased use of computers in the workplace and at home, optometrists are finding more patients presenting with symptoms of Computer Vision Syndrome. Among these symptomatic individuals, research supports that accommodative disorders are the most common vision finding. A prepresbyopic group (N= 30) and a presbyopic group (N = 30) were selected from a private practice. Assignment to a group was determined by age, accommodative amplitude, and near visual acuity with their distance prescription. Each subject was given a thorough vision and ocular health examination, then administered several nearpoint tests of accommodation at a computer working distance. All the tests produced similar results in the presbyopic group. For the prepresbyopic group, the tests yielded very different results. To effectively treat symptomatic VDT users, optometrists must assess the accommodative system along with the binocular and refractive status. For presbyopic patients, all nearpoint tests studied will yield virtually the same result. However, the method of testing accommodation, as well as the test stimulus presented, will yield significantly different responses for prepresbyopic patients. Previous research indicates that a majority of patients prefer the higher plus prescription yielded by the Gaussian image test.

  12. Thermodynamic properties of oxygen and nitrogen III

    NASA Technical Reports Server (NTRS)

    Stewart, R. B.; Jacobsen, R. T.; Myers, A. F.

    1972-01-01

    The final equation for nitrogen was determined. In the work on the equation of state for nitrogen, coefficients were determined by constraining the critical point to selected critical point parameters. Comparisons of this equation with all the P-density-T data were made, as well as comparisons to all other thermodynamic data reported in the literature. The extrapolation of the equation of state was studied for vapor to higher temperatures and lower temperatures, and for the liquid surface to the saturated liquid and the fusion lines. A new vapor pressure equation was also determined which was constrained to the same critical temperature, pressure, and slope (dP/dT) as the equation of state. Work on the equation of state for oxygen included studies for improving the equation at the critical point. Comparisons of velocity of sound data for oxygen were also made between values calculated with a preliminary equation of state and experimental data. Functions for the calculation of the derived thermodynamic properties using the equation of state are given, together with the derivative and integral functions for the calculation of the thermodynamic properties using the equations of state. Summary tables of the thermodynamic properties of nitrogen and oxygen are also included to serve as a check for those preparing computer programs using the equations of state.

  13. GWAS with longitudinal phenotypes: performance of approximate procedures

    PubMed Central

    Sikorska, Karolina; Montazeri, Nahid Mostafavi; Uitterlinden, André; Rivadeneira, Fernando; Eilers, Paul HC; Lesaffre, Emmanuel

    2015-01-01

    Analysis of genome-wide association studies with longitudinal data using standard procedures, such as linear mixed model (LMM) fitting, leads to discouragingly long computation times. There is a need to speed up the computations significantly. In our previous work (Sikorska et al: Fast linear mixed model computations for genome-wide association studies with longitudinal data. Stat Med 2012; 32.1: 165–180), we proposed the conditional two-step (CTS) approach as a fast method providing an approximation to the P-value for the longitudinal single-nucleotide polymorphism (SNP) effect. In the first step a reduced conditional LMM is fit, omitting all the SNP terms. In the second step, the estimated random slopes are regressed on SNPs. The CTS has been applied to the bone mineral density data from the Rotterdam Study and proved to work very well even in unbalanced situations. In another article (Sikorska et al: GWAS on your notebook: fast semi-parallel linear and logistic regression for genome-wide association studies. BMC Bioinformatics 2013; 14: 166), we suggested semi-parallel computations, greatly speeding up fitting many linear regressions. Combining CTS with fast linear regression reduces the computation time from several weeks to a few minutes on a single computer. Here, we explore further the properties of the CTS both analytically and by simulations. We investigate the performance of our proposal in comparison with a related but different approach, the two-step procedure. It is analytically shown that for the balanced case, under mild assumptions, the P-value provided by the CTS is the same as from the LMM. For unbalanced data and in realistic situations, simulations show that the CTS method does not inflate the type I error rate and implies only a minimal loss of power. PMID:25712081
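
    The semi-parallel regression idea referred to above, regressing a single outcome (here, the estimated random slopes from the reduced LMM) on many SNPs one at a time, can be vectorised into a few matrix operations. A minimal sketch under simplifying assumptions (slopes already estimated, no covariates, outcome and SNPs centred); all data are simulated placeholders:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_snps = 1000, 5000

# Placeholder inputs: estimated random slopes (step 1 of the CTS) and a SNP
# matrix coded 0/1/2.  In practice the slopes come from the reduced mixed model.
slopes = rng.normal(size=n_subjects)
snps = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)

# Centre outcome and predictors so each per-SNP regression reduces to a ratio.
y = slopes - slopes.mean()
X = snps - snps.mean(axis=0)

sxx = (X ** 2).sum(axis=0)                  # per-SNP sum of squares
beta = X.T @ y / sxx                        # per-SNP slope estimates
rss = (y @ y) - beta ** 2 * sxx             # per-SNP residual sum of squares
se = np.sqrt(rss / (n_subjects - 2) / sxx)  # standard errors
t = beta / se
p = 2 * stats.t.sf(np.abs(t), df=n_subjects - 2)
print(p[:5])
```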

  14. Evaluation of direct and indirect additive manufacture of maxillofacial prostheses.

    PubMed

    Eggbeer, Dominic; Bibb, Richard; Evans, Peter; Ji, Lu

    2012-09-01

    The efficacy of computer-aided technologies in the design and manufacture of maxillofacial prostheses has not been fully proven. This paper presents research into the evaluation of direct and indirect additive manufacture of a maxillofacial prosthesis against conventional laboratory-based techniques. An implant/magnet-retained nasal prosthesis case from a UK maxillofacial unit was selected as a case study. A benchmark prosthesis was fabricated using conventional laboratory-based techniques for comparison against additive manufactured prostheses. For the computer-aided workflow, photogrammetry, computer-aided design and additive manufacture (AM) methods were evaluated in direct prosthesis body fabrication and indirect production using an additively manufactured mould. Qualitative analysis of position, shape, colour and edge quality was undertaken. Mechanical testing to ISO standards was also used to compare the silicone rubber used in the conventional prosthesis with the AM material. Critical evaluation has shown that utilising a computer-aided work-flow can produce a prosthesis body that is comparable to that produced using existing best practice. Technical limitations currently prevent the direct fabrication method demonstrated in this paper from being clinically viable. This research helps prosthesis providers understand the application of a computer-aided approach and guides technology developers and researchers to address the limitations identified.

  15. Estimation of the influence of tool wear on force signals: A finite element approach in AISI 1045 orthogonal cutting

    NASA Astrophysics Data System (ADS)

    Equeter, Lucas; Ducobu, François; Rivière-Lorphèvre, Edouard; Abouridouane, Mustapha; Klocke, Fritz; Dehombreux, Pierre

    2018-05-01

    Industrial concerns arise regarding the significant cost of cutting tools in the machining process. In particular, an improper replacement policy can lead either to scrap parts or to early tool replacements, which would waste fine tools. ISO 3685 provides the flank wear end-of-life criterion. Flank wear is also the nominal type of wear for the longest tool lifetimes under optimal cutting conditions. Its consequences include poor surface roughness and dimensional discrepancies. In order to aid the replacement decision process, several tool condition monitoring techniques are suggested. Force signals were shown in the literature to be strongly linked with tool flank wear. It can therefore be assumed that force signals are highly relevant for monitoring the condition of cutting tools and providing decision-aid information in the framework of their maintenance and replacement. The objective of this work is to correlate tool flank wear with numerically computed force signals. The present work uses a Finite Element Model with a Coupled Eulerian-Lagrangian approach. The geometry of the tool is changed for different runs of the model, in order to obtain results that are specific to a certain level of wear. The model is assessed by comparison with experimental data gathered earlier on fresh tools. Using the model at constant cutting parameters, force signals are computed for each studied tool geometry, i.e., for each tool wear state. These signals are qualitatively compared with relevant data from the literature. At this point, no quantitative comparison could be performed on worn tools because the reviewed literature failed to provide similar studies in this material, either numerical or experimental. Therefore, further development of this work should include experimental campaigns aiming at collecting cutting force signals and assessing the numerical results that were achieved through this work.

  16. Toward a generalized equation for the Reynolds stress: Turbulence momentum balance in non-canonical flows

    NASA Astrophysics Data System (ADS)

    Lee, T.-W.

    2017-11-01

    Recently, we developed a theoretical basis for determination of the Reynolds stress in canonical flows. Writing the momentum balance for a control volume moving at the local mean velocity, along with a differential transform ∂/∂x = C₁U ∂/∂y, a turbulence momentum balance is discovered which includes the Reynolds stress as a function of root turbulence parameters: ∂(u'v')/∂y = -C₁U ∂(u'²)/∂y + ν_m ∂²u'_rms/∂y². The Reynolds stress can then simply be computed by integrating the right-hand side (RHS) in the y-direction. This is obviously a far simplification of complex modeling of the Reynolds stress, but contains the correct physics, as borne out by comparisons with experimental and DNS data in canonical flows in our earlier works (e.g. in APS 2016). The RHS contains only two parameters, U and u'. In this work, we seek extensions of this solution to non-canonical flows such as wakes, flow over a step, and mixing layers. Comparisons with experimental and DNS data will be presented.
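
    Given profiles of U(y) and u'(y), the Reynolds stress then follows from a single integration of the right-hand side in y. A minimal numerical sketch of that integration; the profiles, C₁, and ν_m below are placeholders, not values from the work.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Placeholder near-wall profiles (not from the paper): mean velocity U(y) and rms fluctuation u'(y).
y = np.linspace(0.0, 1.0, 201)
U = np.tanh(5.0 * y)                    # illustrative mean-velocity profile
u_rms = 0.3 * y * np.exp(-3.0 * y)      # illustrative u'_rms profile

C1 = 0.4        # assumed transform constant
nu_m = 1e-3     # assumed (modified) viscosity

# Right-hand side of the turbulence momentum balance:
#   d(u'v')/dy = -C1 * U * d(u'^2)/dy + nu_m * d^2(u'_rms)/dy^2
rhs = -C1 * U * np.gradient(u_rms ** 2, y) + nu_m * np.gradient(np.gradient(u_rms, y), y)

# Reynolds stress by cumulative integration in y (u'v' = 0 at the wall).
uv = cumulative_trapezoid(rhs, y, initial=0.0)
print(uv[-1])
```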

  17. Performance Comparison of Big Data Analytics With NEXUS and Giovanni

    NASA Astrophysics Data System (ADS)

    Jacob, J. C.; Huang, T.; Lynnes, C.

    2016-12-01

    NEXUS is an emerging data-intensive analysis framework developed with a new approach for handling science data that enables large-scale data analysis. It is available through open source. We compare performance of NEXUS and Giovanni for 3 statistics algorithms applied to NASA datasets. Giovanni is a statistics web service at NASA Distributed Active Archive Centers (DAACs). NEXUS is a cloud-computing environment developed at JPL and built on Apache Solr, Cassandra, and Spark. We compute global time-averaged map, correlation map, and area-averaged time series. The first two algorithms average over time to produce a value for each pixel in a 2-D map. The third algorithm averages spatially to produce a single value for each time step. This talk is our report on benchmark comparison findings that indicate 15x speedup with NEXUS over Giovanni to compute area-averaged time series of daily precipitation rate for the Tropical Rainfall Measuring Mission (TRMM with 0.25 degree spatial resolution) for the Continental United States over 14 years (2000-2014) with 64-way parallelism and 545 tiles per granule. 16-way parallelism with 16 tiles per granule worked best with NEXUS for computing an 18-year (1998-2015) TRMM daily precipitation global time averaged map (2.5 times speedup) and 18-year global map of correlation between TRMM daily precipitation and TRMM real time daily precipitation (7x speedup). These and other benchmark results will be presented along with key lessons learned in applying the NEXUS tiling approach to big data analytics in the cloud.

  18. Tensor scale: An analytic approach with efficient computation and applications

    PubMed Central

    Xu, Ziyue; Saha, Punam K.; Dasgupta, Soura

    2015-01-01

    Scale is a widely used notion in computer vision and image understanding that evolved in the form of scale-space theory, where the key idea is to represent and analyze an image at various resolutions. Recently, we introduced a notion of local morphometric scale referred to as "tensor scale" using an ellipsoidal model that yields a unified representation of structure size, orientation and anisotropy. In the previous work, tensor scale was described using a 2-D algorithmic approach and a precise analytic definition was missing. Also, the application of tensor scale in 3-D using the previous framework is not practical due to high computational complexity. In this paper, an analytic definition of tensor scale is formulated for n-dimensional (n-D) images that captures local structure size, orientation and anisotropy. An efficient computational solution in 2- and 3-D using several novel differential geometric approaches is presented and the accuracy of results is experimentally examined. A matrix representation of tensor scale is also derived, facilitating several operations including tensor field smoothing to capture larger contextual knowledge. Finally, the applications of tensor scale in image filtering and n-linear interpolation are presented and their performance is examined in comparison with respective state-of-the-art methods. Specifically, the performance of tensor scale based image filtering is compared with gradient and Weickert's structure tensor based diffusive filtering algorithms, and the performance of tensor scale based n-linear interpolation is evaluated in comparison with standard n-linear and windowed-sinc interpolation methods. PMID:26236148

  19. SU-E-J-120: Comparing 4D CT Computed Ventilation to Lung Function Measured with Hyperpolarized Xenon-129 MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neal, B; Chen, Q

    2015-06-15

    Purpose: To correlate ventilation parameters computed from 4D CT to ventilation, perfusion, and gas exchange measured with hyperpolarized Xenon-129 MRI for a set of lung cancer patients. Methods: Hyperpolarized Xe-129 MRI lung scans were acquired for lung cancer patients, before and after radiation therapy, measuring ventilation, perfusion, and gas exchange. In the standard clinical workflow, these patients also received 4D CT scans before treatment. Ventilation was computed from 4D CT using deformable image registration (DIR). All phases of the 4D CT scan were registered using a B-spline deformable registration. Ventilation at the voxel level was then computed for each phase based on a Jacobian volume expansion metric, yielding phase-sorted ventilation images. Ventilation based upon 4D CT and Xe-129 MRI were co-registered, allowing qualitative visual comparison and quantitative comparison via the Pearson correlation coefficient. Results: Analysis shows a weak correlation between hyperpolarized Xe-129 MRI and 4D CT DIR ventilation, with a Pearson correlation coefficient of 0.17 to 0.22. Further work will refine the DIR parameters to optimize the correlation. The weak correlation could be due to the limitations of 4D CT, registration algorithms, or the Xe-129 MRI imaging. Conclusion: Current analysis yields a minimal correlation between 4D CT DIR and Xe-129 MRI ventilation. Funding provided by the 2014 George Amorino Pilot Grant in Radiation Oncology at the University of Virginia.
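
    The Jacobian volume expansion metric mentioned above can be sketched as follows: given the displacement field from the deformable registration between two 4D CT phases, the voxel-wise determinant of the deformation gradient measures local volume change, and a Pearson coefficient compares it against a co-registered Xe-129 ventilation map. The array layout, spacing, and data below are illustrative assumptions only.

```python
import numpy as np
from scipy.stats import pearsonr

def jacobian_ventilation(disp, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise volume change from a displacement field.

    disp: array of shape (3, nx, ny, nz) holding the x, y, z displacement
    components (illustrative layout).  Returns det(I + grad(d)) - 1, the
    Jacobian-based expansion used as a ventilation surrogate.
    """
    grads = np.empty(disp.shape[:1] + (3,) + disp.shape[1:])
    for i in range(3):                               # gradient of each component
        grads[i] = np.stack(np.gradient(disp[i], *spacing))
    F = grads.transpose(2, 3, 4, 0, 1) + np.eye(3)   # deformation gradient per voxel
    return np.linalg.det(F) - 1.0

# Placeholder data: a random displacement field and a fake Xe-129 ventilation map.
rng = np.random.default_rng(1)
disp = 0.01 * rng.normal(size=(3, 32, 32, 32)).cumsum(axis=1)
vent_ct = jacobian_ventilation(disp)
vent_mri = vent_ct + 0.5 * rng.normal(size=vent_ct.shape)   # noisy surrogate

r, p = pearsonr(vent_ct.ravel(), vent_mri.ravel())
print(f"Pearson r = {r:.2f}")
```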

  20. Comparison of different models for non-invasive FFR estimation

    NASA Astrophysics Data System (ADS)

    Mirramezani, Mehran; Shadden, Shawn

    2017-11-01

    Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.
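
    As an example of the cheapest model class in the comparison, a reduced-order algebraic stenosis relation (of the Young-Tsai type) writes the trans-stenotic pressure drop as a viscous term plus an expansion-loss term, from which a non-invasive FFR estimate follows as the ratio of distal to aortic pressure at hyperemic flow. The coefficients and geometry below are illustrative assumptions, not the calibrated model used in the study.

```python
# Illustrative algebraic (reduced-order) stenosis model of Young-Tsai type:
#   dP = Kv * (mu / D0) * V  +  Kt * (rho / 2) * (A0/As - 1)**2 * V * |V|
# All coefficients and dimensions below are assumptions for this sketch.

rho = 1060.0          # blood density, kg/m^3
mu = 3.5e-3           # blood viscosity, Pa s
D0 = 3.0e-3           # healthy lumen diameter, m
A0 = 3.14159 * (D0 / 2) ** 2
As = 0.4 * A0         # 60% area stenosis
Kv, Kt = 1000.0, 1.52 # assumed viscous and expansion-loss coefficients

def pressure_drop(Q):
    """Pressure drop (Pa) across the stenosis for volumetric flow Q (m^3/s)."""
    V = Q / A0                                   # velocity in the healthy segment
    viscous = Kv * mu / D0 * V
    expansion = Kt * rho / 2.0 * (A0 / As - 1.0) ** 2 * V * abs(V)
    return viscous + expansion

Pa = 93.0 * 133.322                              # mean aortic pressure, 93 mmHg in Pa
Q_hyperemic = 3.0e-6                             # assumed hyperemic coronary flow, m^3/s
FFR = (Pa - pressure_drop(Q_hyperemic)) / Pa
print(f"Estimated FFR = {FFR:.2f}")
```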

  1. Two-dimensional simulation of quantum reflection

    NASA Astrophysics Data System (ADS)

    Galiffi, Emanuele; Sünderhauf, Christoph; DeKieviet, Maarten; Wimberger, Sandro

    2017-05-01

    A propagation method for the scattering of a quantum wave packet from a potential surface is presented. It is used to model the quantum reflection of single atoms from a corrugated (metallic) surface. Our numerical procedure works well in two spatial dimensions requiring only reasonable amounts of memory and computing time. The effects of the surface corrugation on the reflectivity are investigated via simulations with a paradigm potential. These indicate that our approach should allow for future tests of realistic, effective potentials obtained from theory in a quantitative comparison to experimental data.

  2. Preliminary validation of computational model for neutron flux prediction of Thai Research Reactor (TRR-1/M1)

    NASA Astrophysics Data System (ADS)

    Sabaibang, S.; Lekchaum, S.; Tipayakul, C.

    2015-05-01

    This study is a part of an on-going work to develop a computational model of the Thai Research Reactor (TRR-1/M1) which is capable of accurately predicting the neutron flux level and spectrum. The computational model was created with the MCNPX program, and the CT (Central Thimble) in-core irradiation facility was selected as the location for validation. The comparison was performed with the typical flux measurement method routinely practiced at TRR-1/M1, that is, the foil activation technique. In this technique, a gold foil is irradiated for a certain period of time and the activity of the irradiated target is measured to derive the thermal neutron flux. Additionally, a flux measurement with an SPND (self-powered neutron detector) was also performed for comparison. The thermal neutron flux from the MCNPX simulation was found to be 1.79×10¹³ neutron/cm²·s, while that from the foil activation measurement was 4.68×10¹³ neutron/cm²·s. On the other hand, the thermal neutron flux from the measurement using the SPND was 2.47×10¹³ neutron/cm²·s. An assessment of the differences among the three methods was done. The difference of the MCNPX result from the foil activation technique was found to be 67.8%, and the difference of the MCNPX result from the SPND was found to be 27.8%.
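
    The foil activation technique referred to above infers the thermal flux from the measured activity of the irradiated gold foil through the activation equation phi = A / (N sigma (1 - exp(-lambda t_irr))). A sketch with illustrative foil and irradiation parameters; the activity, foil mass, and times are assumptions, not the TRR-1/M1 measurement.

```python
import numpy as np

# Activation equation: A(t_irr) = N * sigma * phi * (1 - exp(-lambda * t_irr)),
# so  phi = A / (N * sigma * (1 - exp(-lambda * t_irr))).
# All numbers below are illustrative placeholders.

N_A = 6.022e23
m_foil = 0.05                    # g of Au-197 (assumption)
M_Au = 196.97                    # g/mol
sigma = 98.65e-24                # thermal activation cross section of Au-197, cm^2
half_life = 2.695 * 24 * 3600    # Au-198 half-life, s
lam = np.log(2) / half_life

N = m_foil / M_Au * N_A          # number of Au-197 atoms in the foil
t_irr = 3600.0                   # irradiation time, s (assumption)
A_end = 5.0e7                    # activity at end of irradiation, Bq (assumption)

phi = A_end / (N * sigma * (1.0 - np.exp(-lam * t_irr)))
print(f"Thermal flux ~ {phi:.2e} n/cm^2/s")
```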

  3. Students' Comparison of Their Trigonometric Answers with the Answers of a Computer Algebra System in Terms of Equivalence and Correctness

    ERIC Educational Resources Information Center

    Tonisson, Eno; Lepp, Marina

    2015-01-01

    The answers offered by computer algebra systems (CAS) can sometimes differ from those expected by the students or teachers. The comparison of the students' answers and CAS answers could provide ground for discussion about equivalence and correctness. Investigating the students' comparison of the answers gives the possibility to study different…

  4. Recent advances in analytical satellite theory

    NASA Technical Reports Server (NTRS)

    Gaposchkin, E. M.

    1978-01-01

    Recent work on analytical satellite perturbation theory has involved the completion of a revision to 4th order for zonal harmonics, the addition of a treatment for ocean tides, an extension of the treatment for the noninertial reference system, and the completion of a theory for direct solar-radiation pressure and earth-albedo pressure. Combined with a theory for tesseral-harmonics, lunisolar, and body-tide perturbations, these formulations provide a comprehensive orbit-computation program. Detailed comparisons with numerical integration and observations are presented to assess the accuracy of each theoretical development.

  5. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    NASA Astrophysics Data System (ADS)

    Evans, D.; Fisk, I.; Holzman, B.; Melo, A.; Metson, S.; Pordes, R.; Sheldon, P.; Tiradani, A.

    2011-12-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost-structure and transient nature of EC2 services makes them inappropriate for some CMS production services and functions. We also found that the resources are not truly "on-demand" as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a University, and conclude that it is most cost effective to purchase dedicated resources for the "base-line" needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.

  6. Application of the control volume mixed finite element method to a triangular discretization

    USGS Publications Warehouse

    Naff, R.L.

    2012-01-01

    A two-dimensional control volume mixed finite element method is applied to the elliptic equation. Discretization of the computational domain is based on triangular elements. Shape functions and test functions are formulated on the basis of an equilateral reference triangle with unit edges. A pressure support based on the linear interpolation of elemental edge pressures is used in this formulation. Comparisons are made between results from the standard mixed finite element method and this control volume mixed finite element method. Published 2011. This article is a US Government work and is in the public domain in the USA.

  7. Large eddy simulation for atmospheric boundary layer flow over flat and complex terrains

    NASA Astrophysics Data System (ADS)

    Han, Yi; Stoellinger, Michael; Naughton, Jonathan

    2016-09-01

    In this work, we present Large Eddy Simulation (LES) results of atmospheric boundary layer (ABL) flow over complex terrain with neutral stratification using the OpenFOAM-based simulator for on/offshore wind farm applications (SOWFA). The complete work flow to investigate the LES for the ABL over real complex terrain is described including meteorological-tower data analysis, mesh generation and case set-up. New boundary conditions for the lateral and top boundaries are developed and validated to allow inflow and outflow as required in complex terrain simulations. The turbulent inflow data for the terrain simulation is generated using a precursor simulation of a flat and neutral ABL. Conditionally averaged met-tower data is used to specify the conditions for the flat precursor simulation and is also used for comparison with the simulation results of the terrain LES. A qualitative analysis of the simulation results reveals boundary layer separation and recirculation downstream of a prominent ridge that runs across the simulation domain. Comparisons of mean wind speed, standard deviation and direction between the computed results and the conditionally averaged tower data show a reasonable agreement.

  8. A comparison of Wortmann airfoil computer-generated lift and drag polars with flight and wind tunnel results

    NASA Technical Reports Server (NTRS)

    Bowers, A. H.; Sim, A. G.

    1984-01-01

    Computations of drag polars for a low-speed Wortmann sailplane airfoil are compared with both wind tunnel and flight test results. Excellent correlation was shown to exist between computations and flight results except when separated flow regimes were encountered. Smoothness of the input coordinates to the PROFILE computer program was found to be essential to obtain accurate comparisons of drag polars or transition location to either the flight or wind tunnel results.

  9. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-05-04

    This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.

  10. Computer Simulation of Biological Ageing-A Bird's-Eye View

    NASA Astrophysics Data System (ADS)

    Dasgupta, Subinay

    For living organisms, the process of ageing consists of acquiring good and bad genetic mutations, which respectively increase and decrease the survival probability. When a child is born, the hereditary mutations of the parents are transmitted to the offspring. Such stochastic processes seem to be amenable to computer simulation. Over the last 10 years, simulation studies of this sort have been done in different parts of the globe to explain ageing. The objective of these studies has been to attempt an explanation of demographic data and of natural phenomena such as nature's preference for sexual reproduction (in comparison with asexual reproduction). Here we shall attempt to discuss briefly the principles and the results of these works, with an emphasis on what is called the Penna bit-string model.

  11. Comparison between measured and computed magnetic flux density distribution of simulated transformer core joints assembled from grain-oriented and non-oriented electrical steel

    NASA Astrophysics Data System (ADS)

    Shahrouzi, Hamid; Moses, Anthony J.; Anderson, Philip I.; Li, Guobao; Hu, Zhuochao

    2018-04-01

    The flux distribution in an overlapped linear joint constructed in the central region of an Epstein square was studied experimentally, and the results were compared with those obtained using a computational magnetic field solver. High permeability grain-oriented (GO) and low permeability non-oriented (NO) electrical steels were compared at a nominal core flux density of 1.60 T at 50 Hz. It was found that the experimental and computed results only agreed well at flux densities at which the reluctances of the different flux paths are similar. It was also revealed that the flux becomes more uniform when the working point of the electrical steel is close to the knee point of the B-H curve of the steel.

  12. Low-Pressure Turbine Separation Control: Comparison With Experimental Data

    NASA Technical Reports Server (NTRS)

    Garg, Vijay K.

    2002-01-01

    The present work details a computational study, using the Glenn HT code, that analyzes the use of vortex generator jets (VGJs) to control separation on a low-pressure turbine (LPT) blade at low Reynolds numbers. The computational results are also compared with the experimental data for steady VGJs. It is found that the code determines the proper location of the separation point on the suction surface of the baseline blade (without any VGJ) for Reynolds numbers of 50,000 or less. Also, the code finds that the separated region on the suction surface of the blade vanishes with the use of VGJs. However, the separated region and the wake characteristics are not well predicted. The wake width is generally over-predicted while the wake depth is under-predicted.

  13. A comparison of acceleration methods for solving the neutron transport k-eigenvalue problem

    NASA Astrophysics Data System (ADS)

    Willert, Jeffrey; Park, H.; Knoll, D. A.

    2014-10-01

    Over the past several years a number of papers have been written describing modern techniques for numerically computing the dominant eigenvalue of the neutron transport criticality problem. These methods fall into two distinct categories. The first category of methods rewrite the multi-group k-eigenvalue problem as a nonlinear system of equations and solve the resulting system using either a Jacobian-Free Newton-Krylov (JFNK) method or Nonlinear Krylov Acceleration (NKA), a variant of Anderson Acceleration. These methods are generally successful in significantly reducing the number of transport sweeps required to compute the dominant eigenvalue. The second category of methods utilize Moment-Based Acceleration (or High-Order/Low-Order (HOLO) Acceleration). These methods solve a sequence of modified diffusion eigenvalue problems whose solutions converge to the solution of the original transport eigenvalue problem. This second class of methods is, in our experience, always superior to the first, as most of the computational work is eliminated by the acceleration from the LO diffusion system. In this paper, we review each of these methods. Our computational results support our claim that the choice of which nonlinear solver to use, JFNK or NKA, should be secondary. The primary computational savings result from the implementation of a HOLO algorithm. We display computational results for a series of challenging multi-dimensional test problems.
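
    For orientation, the fixed point that all of these schemes accelerate is the classical (unaccelerated) power iteration on the k-eigenvalue problem M phi = (1/k) F phi. A minimal sketch with arbitrary 2x2 illustrative operators in place of a real transport discretisation:

```python
import numpy as np

# Unaccelerated power iteration for the k-eigenvalue problem  M phi = (1/k) F phi,
# i.e. the fixed point that JFNK/NKA or HOLO schemes accelerate.  The 2x2 operators
# below are arbitrary illustrative matrices, not a real transport discretisation.

M = np.array([[1.0, -0.2],
              [-0.1, 0.8]])     # loss (streaming + removal) operator
F = np.array([[0.6, 0.3],
              [0.0, 0.0]])      # fission production operator

phi = np.ones(2)
k = 1.0
for it in range(200):
    src = F @ phi
    phi_new = np.linalg.solve(M, src / k)             # one "transport sweep"
    k_new = k * np.sum(F @ phi_new) / np.sum(src)     # eigenvalue update
    if abs(k_new - k) < 1e-10:
        k = k_new
        break
    phi, k = phi_new / np.linalg.norm(phi_new), k_new
print(f"k-effective = {k:.6f} after {it + 1} iterations")
```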

  14. Numerical simulations of the flow with the prescribed displacement of the airfoil and comparison with experiment

    NASA Astrophysics Data System (ADS)

    Řidký, V.; Šidlof, P.; Vlček, V.

    2013-04-01

    The work is devoted to comparing measured data with the results of numerical simulations. A mathematical model without a turbulence model (laminar incompressible flow) was used. In the experiment, the behavior of the designed NACA0015 airfoil in an airflow was observed. For the numerical solution the OpenFOAM computational package was used; this is open-source software based on the finite volume method. In the numerical solution the displacement of the airfoil is prescribed so that it corresponds to the experiment. The velocity at a point close to the airfoil surface is compared with the experimental data obtained from interferographic measurements of the velocity field. The numerical solution is computed on a 3D mesh composed of about 1 million orthogonal hexahedron elements. The time step is limited by the Courant number. Parallel computations are run on supercomputers of the CIV at the Technical University in Prague (HAL and FOX) and on a computer cluster of the Faculty of Mechatronics in Liberec (HYDRA). The run time is fixed at five periods; the results from the fifth period and the average over all periods are then compared with the experiment.

  15. Streamtube expansion effects on the Darrieus wind turbine

    NASA Astrophysics Data System (ADS)

    Paraschivoiu, I.; Fraunie, P.; Beguier, C.

    1985-04-01

    The purpose of the work described in this paper was to determine the aerodynamic loads and performance of a Darrieus wind turbine by including the expansion effects of the streamtubes through the rotor. The double-multiple streamtube model with variable interference factors was used to estimate the induced velocities with a modified CARDAAV computer code. Comparison with measured data and predictions shows that the stream-tube expansion effects are relatively significant at high tip-speed ratios, allowing a more realistic modeling of the upwind/downwind flowfield asymmetries inherent in the Darrieus rotor.

  16. A general software reliability process simulation technique

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.

    1991-01-01

    The structure and rationale of the generalized software reliability process, together with the design and implementation of a computer program that simulates this process are described. Given assumed parameters of a particular project, the users of this program are able to generate simulated status timelines of work products, numbers of injected anomalies, and the progress of testing, fault isolation, repair, validation, and retest. Such timelines are useful in comparison with actual timeline data, for validating the project input parameters, and for providing data for researchers in reliability prediction modeling.

  17. M18. Lack of Generalization From a High-Dose, Well-Powered Randomized Controlled Trial of Working Memory-Focused Training for Schizophrenia

    PubMed Central

    Nienow, Tasha; MacDonald, Angus

    2017-01-01

    Abstract Background: Cognitive deficits contribute to the functional disability associated with schizophrenia. Cognitive training has shown promise as a method of intervention; however, there is considerable variability in the implementation of this approach. The aim of this study was to test the efficacy of a high dose of cognitive training that targeted working memory-related functions. Methods: A randomized, double blind, active placebo-controlled, clinical trial was conducted with 80 outpatients with schizophrenia (mean age 46.44 years, 25% female). Patients were randomized to either working memory-based cognitive training or a computer skills training course that taught computer applications. In both conditions, participants received an average of 3 hours of training weekly for 16 weeks. Cognitive and functional outcomes were assessed with the MATRICS Consensus Cognitive Battery, N-Back performance, 2 measures of functional capacity (UPSA and SSPA) and a measure of community functioning, the Social Functioning Scale. Results: An intent-to-treat analysis found that patients who received cognitive training demonstrated significantly greater change on a trained task (Word N-Back), F(78) = 21.69, P < .0001, and a novel version of a trained task (Picture N-Back) as compared to those in the comparison condition, F(78) = 13.59, P = .002. However, only very modest support was found for generalization of training gains. A trend for an interaction was found on the MCCB Attention Domain score, F(78) = 2.56, P = .12. Participants who received cognitive training demonstrated significantly improved performance, t(39) = 3.79, P = .001, while those in computer skills did not, t(39) = 1.07, P = .37. Conclusion: A well-powered, high-dose, working memory focused, computer-based, cognitive training protocol produced only a small effect in patients with schizophrenia. Results indicate the importance of measuring generalization from training tasks in cognitive remediation studies. Computer-based training was not an effective method of producing change in cognition in patients with schizophrenia.

  18. Automated quantification of pulmonary emphysema from computed tomography scans: comparison of variation and correlation of common measures in a large cohort

    NASA Astrophysics Data System (ADS)

    Keller, Brad M.; Reeves, Anthony P.; Yankelevitz, David F.; Henschke, Claudia I.

    2010-03-01

    The purpose of this work was to retrospectively investigate the variation of standard indices of pulmonary emphysema from helical computed tomographic (CT) scans as related to inspiration differences over a 1 year interval and determine the strength of the relationship between these measures in a large cohort. 626 patients that had 2 scans taken at an interval of 9 months to 15 months (μ: 381 days, σ: 31 days) were selected for this work. All scans were acquired at a 1.25mm slice thickness using a low dose protocol. For each scan, the emphysema index (EI), fractal dimension (FD), mean lung density (MLD), and 15th percentile of the histogram (HIST) were computed. The absolute and relative changes for each measure were computed and the empirical 95% confidence interval was reported both in non-normalized and normalized scales. Spearman correlation coefficients are computed between the relative change in each measure and relative change in inspiration between each scan-pair, as well as between each pair-wise combination of the four measures. EI varied on a range of -10.5 to 10.5 on a non-normalized scale and -15 to 15 on a normalized scale, with FD and MLD showing slightly larger but comparable spreads, and HIST having a much larger variation. MLD was found to show the strongest correlation to inspiration change (r=0.85, p<0.001), and EI, FD, and HIST to have moderately strong correlation (r = 0.61-0.74, p<0.001). Finally, HIST showed very strong correlation to EI (r = 0.92, p<0.001), while FD showed the least strong relationship to EI (r = 0.82, p<0.001). This work shows that emphysema index and fractal dimension have the least variability overall of the commonly used measures of emphysema and that they offer the most unique quantification of emphysema relative to each other.
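
    Three of the four measures above are simple functionals of the histogram of lung-voxel attenuation values (the fractal dimension additionally requires the size distribution of low-attenuation clusters and is omitted here). A minimal sketch assuming a flat array of Hounsfield units for the segmented lung; the -950 HU threshold is the conventional choice and the data are made up.

```python
import numpy as np

def emphysema_measures(lung_hu):
    """Histogram-based emphysema measures from lung-voxel HU values."""
    ei = 100.0 * np.mean(lung_hu < -950)        # emphysema index: % voxels below -950 HU
    mld = float(np.mean(lung_hu))               # mean lung density, HU
    hist15 = float(np.percentile(lung_hu, 15))  # 15th percentile of the HU histogram
    return ei, mld, hist15

# Placeholder lung voxels (made-up distribution, not patient data).
rng = np.random.default_rng(2)
lung_hu = rng.normal(-860, 60, size=500_000)
print(emphysema_measures(lung_hu))
```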

  19. The impact of personalized probabilistic wall thickness models on peak wall stress in abdominal aortic aneurysms.

    PubMed

    Biehler, J; Wall, W A

    2018-02-01

    If computational models are ever to be used in high-stakes decision making in clinical practice, the use of personalized models and predictive simulation techniques is a must. This entails rigorous quantification of uncertainties as well as harnessing available patient-specific data to the greatest extent possible. Although researchers are beginning to realize that taking uncertainty in model input parameters into account is a necessity, the predominantly used probabilistic description for these uncertain parameters is based on elementary random variable models. In this work, we set out for a comparison of different probabilistic models for uncertain input parameters using the example of an uncertain wall thickness in finite element models of abdominal aortic aneurysms. We provide the first comparison between a random variable and a random field model for the aortic wall and investigate the impact on the probability distribution of the computed peak wall stress. Moreover, we show that the uncertainty about the prevailing peak wall stress can be reduced if noninvasively available, patient-specific data are harnessed for the construction of the probabilistic wall thickness model. Copyright © 2017 John Wiley & Sons, Ltd.

  20. High resolution simulations of a variable HH jet

    NASA Astrophysics Data System (ADS)

    Raga, A. C.; de Colle, F.; Kajdič, P.; Esquivel, A.; Cantó, J.

    2007-04-01

    Context: In many papers, the flows in Herbig-Haro (HH) jets have been modeled as collimated outflows with a time-dependent ejection. In particular, a supersonic variability of the ejection velocity leads to the production of "internal working surfaces" which (for appropriate forms of the time-variability) can produce emitting knots that resemble the chains of knots observed along HH jets. Aims: In this paper, we present axisymmetric simulations of an "internal working surface" in a radiative jet (produced by an ejection velocity variability). We concentrate on a given parameter set (i.e., on a jet with a constant ejection density, and a sinusoidal velocity variability with a 20 yr period and a 40 km s⁻¹ half-amplitude), and carry out a study of the behaviour of the solution for increasing numerical resolutions. Methods: In our simulations, we solve the gasdynamic equations together with a 17-species atomic/ionic network, and we are therefore able to compute emission coefficients for different emission lines. Results: We compute 3 adaptive grid simulations, with 20, 163 and 1310 grid points (at the highest grid resolution) across the initial jet radius. From these simulations we see that successively more complex structures are obtained for increasing numerical resolutions. Such an effect is seen in the stratifications of the flow variables as well as in the predicted emission line intensity maps. Conclusions: We find that while the detailed structure of an internal working surface depends on resolution, the predicted emission line luminosities (integrated over the volume of the working surface) are surprisingly stable. This is definitely good news for the future computation of predictions from radiative jet models for carrying out comparisons with observations of HH objects.

  1. Implementing the Data Center Energy Productivity Metric in a High Performance Computing Data Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sego, Landon H.; Marquez, Andres; Rawson, Andrew

    2013-06-30

    As data centers proliferate in size and number, the improvement of their energy efficiency and productivity has become an economic and environmental imperative. Making these improvements requires metrics that are robust, interpretable, and practical. We discuss the properties of a number of the proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high-performance computing data center. We found that DCeP was successful in clearly distinguishing different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and between data centers.
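
    DCeP, as defined above, is the useful work delivered divided by the total energy consumed over the assessment window, where "useful work" must be defined per site. A toy tally from hypothetical job records and an energy-meter reading (all values invented):

```python
# DCeP = useful work produced / total energy consumed during the assessment window.
# "Useful work" must be defined per site, e.g. a weighted count of completed jobs.
# All values below are invented for illustration.

jobs = [
    {"name": "climate_run", "completed": 1, "weight": 3.0},
    {"name": "genome_blast", "completed": 40, "weight": 0.5},
    {"name": "failed_job", "completed": 0, "weight": 1.0},
]
facility_energy_kwh = 1250.0      # IT plus cooling energy over the window

useful_work = sum(j["completed"] * j["weight"] for j in jobs)
dcep = useful_work / facility_energy_kwh
print(f"DCeP = {dcep:.4f} useful-work units per kWh")
```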

  2. Image formation simulation for computer-aided inspection planning of machine vision systems

    NASA Astrophysics Data System (ADS)

    Irgenfried, Stephan; Bergmann, Stephan; Mohammadikaji, Mahsa; Beyerer, Jürgen; Dachsbacher, Carsten; Wörn, Heinz

    2017-06-01

    In this work, a simulation toolset for Computer Aided Inspection Planning (CAIP) of systems for automated optical inspection (AOI) is presented along with a versatile two-robot-setup for verification of simulation and system planning results. The toolset helps to narrow down the large design space of optical inspection systems in interaction with a system expert. The image formation taking place in optical inspection systems is simulated using GPU-based real time graphics and high quality off-line-rendering. The simulation pipeline allows a stepwise optimization of the system, from fast evaluation of surface patch visibility based on real time graphics up to evaluation of image processing results based on off-line global illumination calculation. A focus of this work is on the dependency of simulation quality on measuring, modeling and parameterizing the optical surface properties of the object to be inspected. The applicability to real world problems is demonstrated by taking the example of planning a 3D laser scanner application. Qualitative and quantitative comparison results of synthetic and real images are presented.

  3. Alignment-free genetic sequence comparisons: a review of recent approaches by word analysis

    PubMed Central

    Steele, Joe; Bastola, Dhundy

    2014-01-01

    Modern sequencing and genome assembly technologies have provided a wealth of data, which will soon require an analysis by comparison for discovery. Sequence alignment, a fundamental task in bioinformatics research, may be used but with some caveats. Seminal techniques and methods from dynamic programming are proving ineffective for this work owing to their inherent computational expense when processing large amounts of sequence data. These methods are prone to giving misleading information because of genetic recombination, genetic shuffling and other inherent biological events. New approaches from information theory, frequency analysis and data compression are available and provide powerful alternatives to dynamic programming. These new methods are often preferred, as their algorithms are simpler and are not affected by synteny-related problems. In this review, we provide a detailed discussion of computational tools, which stem from alignment-free methods based on statistical analysis from word frequencies. We provide several clear examples to demonstrate applications and the interpretations over several different areas of alignment-free analysis such as base–base correlations, feature frequency profiles, compositional vectors, an improved string composition and the D2 statistic metric. Additionally, we provide detailed discussion and an example of analysis by Lempel–Ziv techniques from data compression. PMID:23904502
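
    Of the word-frequency methods discussed, the D2 statistic is the simplest: the inner product of the k-mer count vectors of the two sequences. A small self-contained sketch (k = 3 is an arbitrary choice):

```python
from collections import Counter

def kmer_counts(seq, k):
    """Count all overlapping k-mers (words of length k) in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def d2(seq_a, seq_b, k=3):
    """D2 statistic: inner product of the two k-mer count vectors."""
    ca, cb = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
    return sum(ca[w] * cb[w] for w in ca.keys() & cb.keys())

print(d2("ACGTACGTGACG", "ACGTTTACGAAC"))
```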

  4. Experimental, Numerical and Analytical Characterization of Slosh Dynamics Applied to In-Space Propellant Storage, Management and Transfer

    NASA Technical Reports Server (NTRS)

    Storey, Jedediah M.; Kirk, Daniel; Gutierrez, Hector; Marsell, Brandon; Schallhorn, Paul; Lapilli, Gabriel D.

    2015-01-01

    Experimental and numerical results are presented from a new cryogenic fluid slosh program at the Florida Institute of Technology (FIT). Water and cryogenic liquid nitrogen are used in various ground-based tests with an approximately 30 cm diameter spherical tank to characterize damping, slosh mode frequencies, and slosh forces. The experimental results are compared to a computational fluid dynamics (CFD) model for validation. An analytical model is constructed from prior work for comparison. Good agreement is seen between experimental, numerical, and analytical results.

  5. Comparison of artificial intelligence classifiers for SIP attack data

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Slachta, Jiri

    2016-05-01

    A honeypot application is a source of valuable data about attacks on the network. We run several SIP honeypots in various computer networks, which are separated geographically and logically. Each honeypot runs on a public IP address and uses standard SIP PBX ports. All information gathered via the honeypots is periodically sent to a centralized server. This server classifies all attack data with a neural network algorithm. The paper describes optimizations of a neural network classifier which lower the classification error. The article contains a comparison of two neural network algorithms used for the classification of validation data. The first is the original implementation of the neural network described in recent work; the second neural network uses further optimizations such as input normalization and a cross-entropy cost function. We also use other implementations of neural networks and machine learning classification algorithms. The comparison tests their capabilities on validation data to find the optimal classifier. The results show promise for further development of an accurate SIP attack classification engine.
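
    For illustration only (this is not the honeypot classifier itself; the synthetic features, network size, and learning rate are assumptions), the following Python/NumPy sketch shows the two optimizations named above, input normalization and a cross-entropy cost function, in a small softmax classifier:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for attack feature vectors: 3 classes, 8 features.
    X = rng.normal(size=(300, 8)) + np.repeat(np.arange(3), 100)[:, None]
    y = np.repeat(np.arange(3), 100)

    # Input normalization (z-score), one of the optimizations named above.
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    T = np.eye(3)[y]                                   # one-hot targets

    # Small two-layer network trained with plain gradient descent.
    W1, b1 = rng.normal(scale=0.1, size=(8, 16)), np.zeros(16)
    W2, b2 = rng.normal(scale=0.1, size=(16, 3)), np.zeros(3)
    lr = 0.1

    for epoch in range(500):
        H = np.tanh(X @ W1 + b1)                       # hidden layer
        logits = H @ W2 + b2
        P = np.exp(logits - logits.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)              # softmax probabilities
        loss = -np.mean(np.sum(T * np.log(P + 1e-12), axis=1))  # cross-entropy

        # Backpropagation of the cross-entropy gradient.
        dlogits = (P - T) / len(X)
        dW2, db2 = H.T @ dlogits, dlogits.sum(axis=0)
        dH = dlogits @ W2.T * (1 - H ** 2)
        dW1, db1 = X.T @ dH, dH.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    accuracy = np.mean(P.argmax(axis=1) == y)
    print(f"final cross-entropy {loss:.3f}, training accuracy {accuracy:.2f}")
    ```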

  6. Numerical solution methods for viscoelastic orthotropic materials

    NASA Technical Reports Server (NTRS)

    Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.

    1988-01-01

    Numerical solution methods for viscoelastic orthotropic materials, specifically fiber-reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM), which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time, and computer memory storage. The Volterra integral allowed the implementation of higher-order solution techniques but had difficulties solving singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.
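
    As a hedged illustration of the Prony-series representation these methods share (the coefficients, strain history, and direct quadrature below are arbitrary assumptions, and this is neither the Zienkiewicz nor the NDEM algorithm), a short Python sketch evaluates E(t) = E_inf + sum_i E_i exp(-t/tau_i) and the hereditary (Volterra-type) stress integral by direct numerical quadrature:

    ```python
    import numpy as np

    # Illustrative Prony series for a relaxation modulus E(t) = E_inf + sum_i E_i*exp(-t/tau_i).
    E_inf = 1.0                      # long-term modulus (arbitrary units)
    E_i   = np.array([2.0, 1.0])     # Prony coefficients (assumed values)
    tau_i = np.array([0.5, 5.0])     # relaxation times (assumed values)

    def relaxation_modulus(t):
        return E_inf + np.sum(E_i * np.exp(-t[:, None] / tau_i), axis=1)

    # Direct numerical evaluation of the hereditary integral
    # sigma(t) = integral_0^t E(t - s) d(eps)/ds ds  for a prescribed strain history.
    t = np.linspace(0.0, 10.0, 2001)
    eps = 0.01 * np.minimum(t, 1.0)          # ramp-and-hold strain history
    deps = np.gradient(eps, t)

    sigma = np.array([
        np.trapz(relaxation_modulus(t[k] - t[:k + 1]) * deps[:k + 1], t[:k + 1])
        for k in range(len(t))
    ])

    print("stress at end of ramp :", round(sigma[np.searchsorted(t, 1.0)], 4))
    print("relaxed stress at t=10:", round(sigma[-1], 4))
    ```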

  7. Comparisons of Computed Mobile Phone Induced SAR in the SAM Phantom to That in Anatomically Correct Models of the Human Head.

    PubMed

    Beard, Brian B; Kainz, Wolfgang; Onishi, Teruo; Iyama, Takahiro; Watanabe, Soichi; Fujiwara, Osamu; Wang, Jianqing; Bit-Babik, Giorgi; Faraone, Antonio; Wiart, Joe; Christ, Andreas; Kuster, Niels; Lee, Ae-Kyoung; Kroeze, Hugo; Siegbahn, Martin; Keshvari, Jafar; Abrishamkar, Houman; Simon, Winfried; Manteuffel, Dirk; Nikoloski, Neviana

    2006-06-05

    The specific absorption rates (SAR) determined computationally in the specific anthropomorphic mannequin (SAM) and anatomically correct models of the human head when exposed to a mobile phone model are compared as part of a study organized by IEEE Standards Coordinating Committee 34, SubCommittee 2, and Working Group 2, and carried out by an international task force comprising 14 government, academic, and industrial research institutions. The detailed study protocol defined the computational head and mobile phone models. The participants used different finite-difference time-domain software and independently positioned the mobile phone and head models in accordance with the protocol. The results show that when the pinna SAR is calculated separately from the head SAR, SAM produced a higher SAR in the head than the anatomically correct head models. Also, the larger (adult) head produced a statistically significantly higher peak SAR for both the 1- and 10-g averages than did the smaller (child) head for all conditions of frequency and position.

  8. Metabolic assessments during extra-vehicular activity.

    PubMed

    Osipov YuYu; Spichkov, A N; Filipenkov, S N

    1998-01-01

    Extra-vehicular activity (EVA) has a significant role during extended space flights. It demonstrates that humans can survive and perform useful work outside the Orbital Space Stations (OSS) while wearing protective space suits (SS). When the International Space Station 'Alpha' (ISSA) is fully operational, EVA assembly, installation, maintenance and repair operations will become an everyday repetitive work activity in space. This calls for a new ergonomic evaluation of the work/rest schedule to increase the amount of labor per EVA hour. Metabolism assessment is a helpful method to monitor the productivity of the EVA astronaut and to optimize the work/rest regime. The following three methods were used in Russia to estimate real-time metabolic rates during EVA: 1. Oxygen consumption, computed from the pressure drop in a high-pressure bottle per unit time (with actual thermodynamic oxygen properties under high pressure and oxygen leakage taken into account). 2. Carbon dioxide production, computed from the CO2 concentration at the contaminant control cartridge and the gas flow rate in the life support subsystem closed loop (nominal mode) or the gas leakage in the SS open loop (emergency mode). 3. Heat removal, computed from the difference between the temperatures of coolant water or gas and its flow rate per unit of time (with assumed humidity and wet oxygen state taken into account). Comparison of heat removal values with metabolic rates enables us to determine the thermal balance during operational medical monitoring of EVA at the "Salyut-6", "Salyut-7" and "Mir" OSS. Complex analysis of metabolism, body temperature and heart rate supports a differential diagnosis between emotional and thermal components of stress during EVA. It gives a prognosis of human homeostasis during EVA. Available information has been collected into an EVA database, which is an effective tool for ergonomic optimization.
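
    Purely as a back-of-the-envelope illustration of the first method (oxygen consumption from the bottle pressure drop), the Python sketch below uses the ideal-gas law and a nominal 20.1 kJ of metabolic energy per litre of O2; all numbers are assumed, and the flight procedure additionally corrects for real-gas properties and suit leakage:

    ```python
    R = 8.314          # J/(mol K), universal gas constant
    T = 293.0          # K, assumed bottle temperature
    V_bottle = 0.002   # m^3 (2 L high-pressure bottle, assumed)
    dP = 1.5e6         # Pa, pressure drop over the interval (assumed)
    dt_min = 30.0      # length of the interval in minutes

    # Ideal-gas estimate of oxygen used (real-gas behaviour and leakage ignored here).
    mol_O2 = dP * V_bottle / (R * T)
    litres_O2_STP = mol_O2 * 22.4            # litres at standard conditions
    vo2_l_per_min = litres_O2_STP / dt_min

    # Roughly 20.1 kJ of metabolic energy per litre of O2 consumed.
    metabolic_rate_W = vo2_l_per_min * 20.1e3 / 60.0

    print(f"VO2  ~ {vo2_l_per_min:.2f} L/min")
    print(f"rate ~ {metabolic_rate_W:.0f} W")
    ```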

  9. Metabolic assessments during extra-vehicular activity

    NASA Astrophysics Data System (ADS)

    Osipov, Yu. Yu.; Spichkov, A. N.; Filipenkov, S. N.

    Extra-vehicular activity (EVA) has a significant role during extended space flights. It demonstrates that humans can survive and perform useful work outside the Orbital Space Stations (OSS) while wearing protective space suits (SS). When the International Space Station 'Alpha' (ISSA) is fully operational, EVA assembly, installation, maintenance and repair operations will become an everyday repetitive work activity in space. This calls for a new ergonomic evaluation of the work/rest schedule to increase the amount of labor per EVA hour. Metabolism assessment is a helpful method to monitor the productivity of the EVA astronaut and to optimize the work/rest regime. The following three methods were used in Russia to estimate real-time metabolic rates during EVA: 1. Oxygen consumption, computed from the pressure drop in a high-pressure bottle per unit time (with actual thermodynamic oxygen properties under high pressure and oxygen leakage taken into account). 2. Carbon dioxide production, computed from the CO2 concentration at the contaminant control cartridge and the gas flow rate in the life support subsystem closed loop (nominal mode) or the gas leakage in the SS open loop (emergency mode). 3. Heat removal, computed from the difference between the temperatures of coolant water or gas and its flow rate per unit of time (with assumed humidity and wet oxygen state taken into account). Comparison of heat removal values with metabolic rates enables us to determine the thermal balance during operational medical monitoring of EVA at the "Salyut-6", "Salyut-7" and "Mir" OSS. Complex analysis of metabolism, body temperature and heart rate supports a differential diagnosis between emotional and thermal components of stress during EVA. It gives a prognosis of human homeostasis during EVA. Available information has been collected into an EVA database, which is an effective tool for ergonomic optimization.

  10. Interferograms, schlieren, and shadowgraphs constructed from real- and ideal-gas, two- and three-dimensional computed flowfields

    NASA Technical Reports Server (NTRS)

    Yates, Leslie A.

    1993-01-01

    The construction of interferograms, schlieren, and shadowgraphs from computed flowfield solutions permits one-to-one comparisons of computed and experimental results. A method of constructing these images from both ideal- and real-gas, two- and three-dimensional computed flowfields is described. The computational grids can be structured or unstructured, and multiple grids are an option. Constructed images are shown for several types of computed flows including nozzle, wake, and reacting flows; comparisons to experimental images are also shown. In addition, the sensitivity of these images to errors in the flowfield solution is demonstrated, and the constructed images can be used to identify problem areas in the computations.
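
    As an illustrative toy example only (a 2-D density field with a smoothed jump standing in for a shock; not the method of the report, which also handles real gases, three dimensions, and unstructured grids), the following Python sketch forms the usual numerical analogues: schlieren from the density-gradient magnitude, shadowgraph from the density Laplacian, and infinite-fringe-style interference fringes from the density field itself:

    ```python
    import numpy as np

    # Toy 2D density field: a smoothed density jump standing in for a shock.
    nx, ny = 200, 100
    x = np.linspace(0.0, 2.0, nx)
    y = np.linspace(0.0, 1.0, ny)
    X, Y = np.meshgrid(x, y, indexing="ij")
    rho = 1.0 + 0.8 / (1.0 + np.exp(-(X - 1.0) / 0.02))   # sigmoid "shock" at x = 1

    # Numerical schlieren: magnitude of the density gradient.
    drho_dx, drho_dy = np.gradient(rho, x, y, edge_order=2)
    schlieren = np.hypot(drho_dx, drho_dy)

    # Shadowgraph analogue: Laplacian of density (second derivatives).
    d2x = np.gradient(drho_dx, x, axis=0, edge_order=2)
    d2y = np.gradient(drho_dy, y, axis=1, edge_order=2)
    shadowgraph = d2x + d2y

    # Infinite-fringe interferogram analogue: fringes proportional to density
    # (for a 2-D slice the path integration reduces to the field itself).
    fringes = np.cos(2.0 * np.pi * 5.0 * (rho - rho.min()) / (rho.max() - rho.min()))

    print("peak |grad rho| :", round(schlieren.max(), 2))
    print("peak |lap rho|  :", round(np.abs(shadowgraph).max(), 2))
    print("fringe field range:", round(fringes.min(), 2), "to", round(fringes.max(), 2))
    ```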

  11. Interferograms, Schlieren, and Shadowgraphs Constructed from Real- and Ideal-Gas, Two- and Three-Dimensional Computed Flowfields

    NASA Technical Reports Server (NTRS)

    Yates, Leslie A.

    1992-01-01

    The construction of interferograms, schlieren, and shadowgraphs from computed flowfield solutions permits one-to-one comparisons of computed and experimental results. A method for constructing these images from both ideal- and real-gas, two- and three-dimensional computed flowfields is described. The computational grids can be structured or unstructured, and multiple grids are an option. Constructed images are shown for several types of computed flows including nozzle, wake, and reacting flows; comparisons to experimental images are also shown. In addition, the sensitivity of these images to errors in the flowfield solution is demonstrated, and the constructed images can be used to identify problem areas in the computations.

  12. Open source system OpenVPN in a function of Virtual Private Network

    NASA Astrophysics Data System (ADS)

    Skendzic, A.; Kovacic, B.

    2017-05-01

    The use of Virtual Private Networks (VPNs) can establish a high level of security in network communication. VPN technology enables high-security networking over distributed or public network infrastructure. A VPN applies its own security and management rules inside networks. It can be set up over different communication channels such as the Internet or separate ISP communication infrastructure. A VPN creates a secure communication channel over a public network between two endpoints (computers). OpenVPN is an open-source software product under the GNU General Public License (GPL) that can be used to establish VPN communication between two computers inside a business local network over public communication infrastructure. It uses dedicated security protocols and 256-bit encryption, and it is capable of traversing network address translators (NATs) and firewalls. It allows computers to authenticate each other using a pre-shared secret key, certificates, or a username and password. This work gives a review of VPN technology with a special focus on OpenVPN. The paper also presents a comparison and the financial benefits of using open-source VPN software in a business environment.

  13. Atmospheric numerical modeling resource enhancement and model convective parameterization/scale interaction studies

    NASA Technical Reports Server (NTRS)

    Cushman, Paula P.

    1993-01-01

    Research will be undertaken in this contract in the area of Modeling Resource and Facilities Enhancement to include computer, technical and educational support to NASA investigators to facilitate model implementation, execution and analysis of output; to provide facilities linking USRA and the NASA/EADS Computer System as well as resident work stations in ESAD; and to provide a centralized location for documentation, archiving and dissemination of modeling information pertaining to NASA's program. Additional research will be undertaken in the area of Numerical Model Scale Interaction/Convective Parameterization Studies, to include comparison of cloud and rain systems and convective-scale processes between the model simulations and observations, and to incorporate these and related research findings in at least two refereed journal articles.

  14. Sequence Tolerance of a Highly Stable Single Domain Antibody: Comparison of Computational and Experimental Profiles

    DTIC Science & Technology

    2016-09-09

    …evaluating 18 mutants using either the A or B conformer is only r ≈ 0.2. Given the poor performance of approximating the observed experimental … [authors: Mark A. Olson, Patricia …] …unusually high thermal stability is explored by a combined computational and experimental study. Starting with the crystallographic structure…

  15. ABrox-A user-friendly Python module for approximate Bayesian computation with a focus on model comparison.

    PubMed

    Mertens, Ulf Kai; Voss, Andreas; Radev, Stefan

    2018-01-01

    We give an overview of the basic principles of approximate Bayesian computation (ABC), a class of stochastic methods that enable flexible and likelihood-free model comparison and parameter estimation. Our new open-source software called ABrox is used to illustrate ABC for model comparison on two prominent statistical tests, the two-sample t-test and the Levene test. We further highlight the flexibility of ABC compared to classical Bayesian hypothesis testing by computing an approximate Bayes factor for two multinomial processing tree models. Last but not least, throughout the paper, we introduce ABrox using the accompanying graphical user interface.
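
    To make the ABC idea concrete, here is a minimal rejection-sampling sketch in Python (it does not use ABrox; the priors, the tolerance epsilon, and the summary statistic are arbitrary assumptions) that approximates a Bayes factor for "equal means" versus "different means", in the spirit of the t-test example mentioned above:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Observed data: two small samples whose means are to be compared.
    x_obs = rng.normal(0.0, 1.0, 30)
    y_obs = rng.normal(0.5, 1.0, 30)
    obs_stat = x_obs.mean() - y_obs.mean()            # summary statistic

    def simulate(model):
        """Draw parameters from the prior and simulate the summary statistic.
        Model 0: both groups share one mean. Model 1: each group has its own mean."""
        mu_x = rng.normal(0.0, 2.0)
        mu_y = mu_x if model == 0 else rng.normal(0.0, 2.0)
        return rng.normal(mu_x, 1.0, 30).mean() - rng.normal(mu_y, 1.0, 30).mean()

    # ABC rejection sampling over models: pick a model at random, simulate, and
    # accept the draw if the simulated statistic lands within epsilon of the observed one.
    epsilon, accepted = 0.1, []
    for _ in range(50_000):
        m = rng.integers(0, 2)
        if abs(simulate(m) - obs_stat) < epsilon:
            accepted.append(m)

    accepted = np.asarray(accepted)
    p1 = accepted.mean()
    # With equal prior model probabilities, the posterior odds equal the Bayes factor.
    print(f"accepted draws: {accepted.size}")
    print(f"posterior P(model 1) ~ {p1:.2f}; approx. Bayes factor 1 vs 0 ~ {p1 / (1 - p1):.2f}")
    ```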

  16. The Next Generation of Lab and Classroom Computing - The Silver Lining

    DTIC Science & Technology

    2016-12-01

    …virtual desktop infrastructure (VDI) solution, as well as the computing solutions at three universities, was selected as the basis for comparison. The research … [index terms: virtual desktop infrastructure, VDI, hardware cost, software cost, manpower, availability, cloud computing, private cloud, bring your own device, BYOD, thin client] …

  17. Many-integrated core (MIC) technology for accelerating Monte Carlo simulation of radiation transport: A study based on the code DPM

    NASA Astrophysics Data System (ADS)

    Rodriguez, M.; Brualla, L.

    2018-04-01

    Monte Carlo simulation of radiation transport is computationally demanding when reasonably low statistical uncertainties of the estimated quantities are required. Therefore, it can benefit to a large extent from high-performance computing. This work is aimed at assessing the performance of the first generation of the many-integrated core (MIC) architecture Xeon Phi coprocessor with respect to that of a CPU consisting of a double 12-core Xeon processor in Monte Carlo simulation of coupled electron-photon showers. The comparison was made twofold: first, through a suite of basic tests including parallel versions of the random number generators Mersenne Twister and a modified implementation of RANECU, addressed to establish a baseline comparison between both devices; second, through the pDPM code developed in this work. pDPM is a parallel version of the Dose Planning Method (DPM) program for fast Monte Carlo simulation of radiation transport in voxelized geometries. A variety of techniques aimed at obtaining large scalability on the Xeon Phi were implemented in pDPM. Maximum scalabilities of 84.2× and 107.5× were obtained on the Xeon Phi for simulations of electron and photon beams, respectively. Nevertheless, in none of the tests involving radiation transport did the Xeon Phi perform better than the CPU. The disadvantage of the Xeon Phi with respect to the CPU is due to the low performance of a single core of the former. A single core of the Xeon Phi was more than 10 times less efficient than a single core of the CPU for all radiation transport simulations.

  18. Computers for real time flight simulation: A market survey

    NASA Technical Reports Server (NTRS)

    Bekey, G. A.; Karplus, W. J.

    1977-01-01

    An extensive computer market survey was made to determine those available systems suitable for current and future flight simulation studies at Ames Research Center. The primary requirement is for the computation of relatively high frequency content (5 Hz) math models representing powered lift flight vehicles. The Rotor Systems Research Aircraft (RSRA) was used as a benchmark vehicle for computation comparison studies. The general nature of helicopter simulations and a description of the benchmark model are presented, and some of the sources of simulation difficulties are examined. A description of various applicable computer architectures is presented, along with detailed discussions of leading candidate systems and comparisons between them.

  19. Comparison of numerical techniques for integration of stiff ordinary differential equations arising in combustion chemistry

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1984-01-01

    The efficiency and accuracy of several algorithms recently developed for the numerical integration of stiff ordinary differential equations are compared. The methods examined include two general-purpose codes, EPISODE and LSODE, and three codes (CHEMEQ, CREK1D, and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes are applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code currently available for the integration of combustion kinetic rate equations. An important finding is that an iterative solution of the algebraic energy conservation equation to compute the temperature does not result in significant errors. In addition, this method is more efficient than evaluating the temperature by integrating its time derivative. Significant reductions in computational work are realized by updating the rate constants (k = A T^N exp(-E/RT)) only when the temperature change exceeds an amount delta T that is problem dependent. An approximate expression for the automatic evaluation of delta T is derived and is shown to result in increased efficiency.
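
    As a small illustration of that rate-constant updating strategy (the parameter values, temperature history, and delta T below are invented, not taken from the report), the following Python sketch re-evaluates the modified Arrhenius expression k = A T^N exp(-E/RT) only when the temperature has drifted by more than delta T since the last evaluation:

    ```python
    import numpy as np

    R = 8.314  # J/(mol K)

    # Illustrative modified-Arrhenius parameters k = A * T**N * exp(-E/(R*T)) for 3 reactions.
    A = np.array([1.0e12, 5.0e9, 2.0e13])
    N = np.array([0.0, 0.7, -0.5])
    E = np.array([1.5e5, 8.0e4, 2.1e5])    # activation energies, J/mol

    def rate_constants(T):
        return A * T**N * np.exp(-E / (R * T))

    # Update k only when the temperature has drifted more than delta_T since the
    # last evaluation, instead of re-evaluating the exponentials every time step.
    delta_T = 5.0                      # problem-dependent threshold (assumed value)
    T_last, k, evaluations = None, None, 0

    for T in np.linspace(1000.0, 1400.0, 2000):   # temperature history from the ODE solver
        if T_last is None or abs(T - T_last) > delta_T:
            k = rate_constants(T)
            T_last = T
            evaluations += 1
        # ... k would be used here to assemble the species production rates ...

    print(f"rate-constant evaluations: {evaluations} out of 2000 steps")
    ```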

  20. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen

    2016-01-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  1. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    NASA Astrophysics Data System (ADS)

    Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.

    2016-12-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  2. Vibrational spectroscopy and theoretical studies on 2,4-dinitrophenylhydrazine

    NASA Astrophysics Data System (ADS)

    Chiş, V.; Filip, S.; Miclăuş, V.; Pîrnău, A.; Tănăselia, C.; Almăşan, V.; Vasilescu, M.

    2005-06-01

    In this work, we report a combined experimental and theoretical study on the molecular and vibrational structure of 2,4-dinitrophenylhydrazine. FT-IR, FT-IR/ATR and Raman spectra of normal and deuterated DNPH have been recorded and analyzed in order to gain new insights into the molecular structure and properties of this molecule, with particular emphasis on its intra- and intermolecular hydrogen bonds (HBs). For computational purposes we used density functional theory (DFT) methods, with the B3LYP and BLYP exchange-correlation functionals, in conjunction with the 6-31G(d) basis set. All experimental vibrational bands have been discussed and assigned to normal modes on the basis of DFT calculations and isotopic shifts and by comparison to other dinitro-substituted compounds [V. Chiş, Chem. Phys., 300 (2004) 1]. To aid in mode assignments, we relied on the direct comparison between experimental and calculated spectra, considering both the frequency sequence and the intensity pattern of the experimental and computed vibrational bands. It is also shown that the semiempirical AM1 method predicts geometrical parameters and vibrational frequencies related to the HBs in good agreement with experiment, being surprisingly accurate in this respect.

  3. Trialability, observability and risk reduction accelerating individual innovation adoption decisions.

    PubMed

    Hayes, Kathryn J; Eljiz, Kathy; Dadich, Ann; Fitzgerald, Janna-Anneke; Sloan, Terry

    2015-01-01

    The purpose of this paper is to provide a retrospective analysis of computer simulation's role in accelerating individual innovation adoption decisions. The process innovation examined is Lean Systems Thinking, and the organizational context is the imaging department of an Australian public hospital. Intrinsic case study methods including observation, interviews with radiology and emergency personnel about scheduling procedures, mapping patient appointment processes and document analysis were used over three years and then complemented with retrospective interviews with key hospital staff. The multiple data sources and methods were combined in a pragmatic and reflexive manner to explore an extreme case that provides potential to act as an instructive template for effective change. Computer simulation of process change ideas offered by staff to improve patient flow accelerated the adoption of the process changes, largely because animated computer simulation permitted experimentation (trialability), provided observable predictions of change results (observability) and minimized perceived risk. The difficulty of making accurate comparisons between time periods in a health care setting is acknowledged. This work has implications for policy, practice and theory, particularly for inducing the rapid diffusion of process innovations to address challenges facing health service organizations and national health systems. The research demonstrates the value of animated computer simulation in presenting the need for change, identifying options, and predicting change outcomes, and is the first work to indicate the importance of trialability, observability and risk reduction in individual adoption decisions in health services.

  4. Testing and Analysis of Sensor Ports

    NASA Technical Reports Server (NTRS)

    Zhang, M.; Frendi, A.; Thompson, W.; Casiano, M. J.

    2016-01-01

    This Technical Publication summarizes the work focused on the testing and analysis of sensor ports. The tasks under this contract were divided into three areas: (1) Development of an Analytical Model, (2) Conducting a Set of Experiments, and (3) Obtaining Computational Solutions. Results from the experiment using both short and long sensor ports were obtained using harmonic, random, and frequency sweep plane acoustic waves. An amplification factor of the pressure signal between the port inlet and the back of the port is obtained and compared to models. Comparisons of model and experimental results showed very good agreement.

  5. Viscous flow calculations for the AGARD standard configuration airfoils with experimental comparisons

    NASA Technical Reports Server (NTRS)

    Howlett, James T.

    1989-01-01

    Recent experience in calculating unsteady transonic flow by means of viscous-inviscid interactions with the XTRAN2L computer code is examined. The boundary layer method for attached flows is based upon the work of Rizzetta. The nonisentropic corrections of Fuglsang and Williams are also incorporated along with the viscous interaction for some cases and initial results are presented. For unsteady flows, the inverse boundary layer equations developed by Vatsa and Carter are used in a quasi-steady manner and preliminary results are presented.

  6. Faster Heavy Ion Transport for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.

    2013-01-01

    The deterministic particle transport code HZETRN was developed to enable fast and accurate space radiation transport through materials. As more complex transport solutions are implemented for neutrons, light ions (Z ≤ 2), mesons, and leptons, it is important to maintain overall computational efficiency. In this work, the heavy ion (Z > 2) transport algorithm in HZETRN is reviewed, and a simple modification is shown to provide an approximate 5x decrease in execution time for galactic cosmic ray transport. Convergence tests and other comparisons are carried out to verify that numerical accuracy is maintained in the new algorithm.

  7. An Efficient Mutual Authentication Framework for Healthcare System in Cloud Computing.

    PubMed

    Kumar, Vinod; Jangirala, Srinivas; Ahmad, Musheer

    2018-06-28

    The increasing role of Telecare Medicine Information Systems (TMIS) makes it possible for patients to explore medical treatment and to accumulate and access medical data through internet connectivity. Security and privacy preservation are necessary for the patient's medical data in TMIS because of their very sensitive nature. Recently, Mohit et al. proposed a mutual authentication protocol for TMIS in the cloud computing environment. In this work, we reviewed their protocol and found that it is not secure against stolen-verifier, many-logged-in-patients, and impersonation attacks, does not preserve patient anonymity, and fails to protect the session key. To enhance the security level, we propose a new mutual authentication protocol for the same environment. The presented framework is also more efficient in terms of computation cost. In addition, the security evaluation shows that the protocol is resilient against all the considered attacks, and we also provide a formal security evaluation based on the random oracle model. The performance of the proposed protocol is much better in comparison to the existing protocol.
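
    The following Python sketch is not the protocol proposed in the abstract; it is only a generic, minimal illustration (HMAC-SHA256 and the nonce exchange are arbitrary choices) of how nonce-based mutual authentication and session-key agreement over a pre-shared key can look, to make the notion of mutual authentication concrete:

    ```python
    import hmac, hashlib, secrets

    def mac(key: bytes, *parts: bytes) -> bytes:
        """HMAC-SHA256 over the concatenated message parts."""
        return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

    # Long-term secret shared by patient device and TMIS server (provisioned out of band).
    shared_key = secrets.token_bytes(32)

    # 1. Patient -> server: identity and a fresh nonce.
    patient_nonce = secrets.token_bytes(16)

    # 2. Server -> patient: its own nonce plus proof it knows the shared key.
    server_nonce = secrets.token_bytes(16)
    server_proof = mac(shared_key, b"server", patient_nonce, server_nonce)

    # 3. Patient verifies the server, then returns its own proof.
    assert hmac.compare_digest(server_proof,
                               mac(shared_key, b"server", patient_nonce, server_nonce))
    patient_proof = mac(shared_key, b"patient", server_nonce, patient_nonce)

    # 4. Server verifies the patient; both sides derive the same session key.
    assert hmac.compare_digest(patient_proof,
                               mac(shared_key, b"patient", server_nonce, patient_nonce))
    session_key = mac(shared_key, b"session", patient_nonce, server_nonce)
    print("mutual authentication succeeded, session key:", session_key.hex()[:16], "...")
    ```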

  8. Validity of computational hemodynamics in human arteries based on 3D time-of-flight MR angiography and 2D electrocardiogram gated phase contrast images

    NASA Astrophysics Data System (ADS)

    Yu, Huidan (Whitney); Chen, Xi; Chen, Rou; Wang, Zhiqiang; Lin, Chen; Kralik, Stephen; Zhao, Ye

    2015-11-01

    In this work, we demonstrate the validity of 4-D patient-specific computational hemodynamics (PSCH) based on 3-D time-of-flight (TOF) MR angiography (MRA) and 2-D electrocardiogram (ECG) gated phase contrast (PC) images. The mesoscale lattice Boltzmann method (LBM) is employed to segment morphological arterial geometry from TOF MRA, to extract velocity profiles from ECG PC images, and to simulate fluid dynamics on a unified GPU accelerated computational platform. Two healthy volunteers are recruited to participate in the study. For each volunteer, a 3-D high resolution TOF MRA image and 10 2-D ECG gated PC images are acquired to provide the morphological geometry and the time-varying flow velocity profiles for necessary inputs of the PSCH. Validation results will be presented through comparisons of LBM vs. 4D Flow Software for flow rates and LBM simulation vs. MRA measurement for blood flow velocity maps. Indiana University Health (IUH) Values Fund.

  9. Implementation and evaluation of an efficient secure computation system using 'R' for healthcare statistics.

    PubMed

    Chida, Koji; Morohashi, Gembu; Fuji, Hitoshi; Magata, Fumihiko; Fujimura, Akiko; Hamada, Koki; Ikarashi, Dai; Yamamoto, Ryuichi

    2014-10-01

    While the secondary use of medical data has gained attention, its adoption has been constrained due to protection of patient privacy. Making medical data secure by de-identification can be problematic, especially when the data concerns rare diseases. We require rigorous security management measures. Using secure computation, an approach from cryptography, our system can compute various statistics over encrypted medical records without decrypting them. An issue of secure computation is that the amount of processing time required is immense. We implemented a system that securely computes healthcare statistics from the statistical computing software 'R' by effectively combining secret-sharing-based secure computation with original computation. Testing confirmed that our system could correctly complete computation of average and unbiased variance of approximately 50,000 records of dummy insurance claim data in a little over a second. Computation including conditional expressions and/or comparison of values, for example, t test and median, could also be correctly completed in several tens of seconds to a few minutes. If medical records are simply encrypted, the risk of leaks exists because decryption is usually required during statistical analysis. Our system possesses high-level security because medical records remain in encrypted state even during statistical analysis. Also, our system can securely compute some basic statistics with conditional expressions using 'R' that works interactively while secure computation protocols generally require a significant amount of processing time. We propose a secure statistical analysis system using 'R' for medical data that effectively integrates secret-sharing-based secure computation and original computation.
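
    As a toy illustration of the secret-sharing idea (three parties, additive shares modulo a large prime; this is not the system described above and omits the 'R' integration entirely), the following Python sketch computes a mean over values that no single server ever sees in plaintext:

    ```python
    import random

    PRIME = 2_147_483_647          # arithmetic is done modulo a large prime
    N_PARTIES = 3

    def share(value: int):
        """Split an integer into N additive shares that sum to it modulo PRIME."""
        shares = [random.randrange(PRIME) for _ in range(N_PARTIES - 1)]
        shares.append((value - sum(shares)) % PRIME)
        return shares

    # Dummy "insurance claim" amounts; each record is split across three servers,
    # so no single server ever sees a plaintext value.
    records = [1200, 875, 430, 990, 1510]
    server_shares = [[] for _ in range(N_PARTIES)]
    for value in records:
        for server, s in zip(server_shares, share(value)):
            server.append(s)

    # Each server sums its own shares locally; only the aggregate shares are combined.
    partial_sums = [sum(s) % PRIME for s in server_shares]
    total = sum(partial_sums) % PRIME
    print("secure mean:", total / len(records), "plaintext mean:", sum(records) / len(records))
    ```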

  10. The effects of a computer skill training programme adopting social comparison and self-efficacy enhancement strategies on self-concept and skill outcome in trainees with physical disabilities.

    PubMed

    Tam, S F

    2000-10-15

    The aim of this controlled, quasi-experimental study was to evaluate the effects of both self-efficacy enhancement and social comparison training strategy on computer skills learning and self-concept outcome of trainees with physical disabilities. The self-efficacy enhancement group comprised 16 trainees, the tutorial training group comprised 15 trainees, and there were 25 subjects in the control group. Both the self-efficacy enhancement group and the tutorial training group received a 15 week computer skills training course, including generic Chinese computer operation, Chinese word processing and Chinese desktop publishing skills. The self-efficacy enhancement group received training with tutorial instructions that incorporated self-efficacy enhancement strategies and experienced self-enhancing social comparisons. The tutorial training group received behavioural learning-based tutorials only, and the control group did not receive any training. The following measurements were employed to evaluate the outcomes: the Self-Concept Questionnaire for the Physically Disabled Hong Kong Chinese (SCQPD), the computer self-efficacy rating scale and the computer performance rating scale. The self-efficacy enhancement group showed significantly better computer skills learning outcome, total self-concept, and social self-concept than the tutorial training group. The self-efficacy enhancement group did not show significant changes in their computer self-efficacy: however, the tutorial training group showed a significant lowering of their computer self-efficacy. The training strategy that incorporated self-efficacy enhancement and positive social comparison experiences maintained the computer self-efficacy of trainees with physical disabilities. This strategy was more effective in improving the learning outcome (p = 0.01) and self-concept (p = 0.05) of the trainees than the conventional tutorial-based training strategy.

  11. Estimation of Noise Properties for TV-regularized Image Reconstruction in Computed Tomography

    PubMed Central

    Sánchez, Adrian A.

    2016-01-01

    A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR. PMID:26308968

  12. On the numerical computation of nonlinear force-free magnetic fields. [from solar photosphere

    NASA Technical Reports Server (NTRS)

    Wu, S. T.; Sun, M. T.; Chang, H. M.; Hagyard, M. J.; Gary, G. A.

    1990-01-01

    An algorithm has been developed to extrapolate nonlinear force-free magnetic fields from the photosphere, given the proper boundary conditions. This paper presents the results of this work, describing the mathematical formalism that was developed, the numerical techniques employed, and comments on the stability criteria and accuracy developed for these numerical schemes. An analytical solution is used for a benchmark test; the results show that the computational accuracy for the case of a nonlinear force-free magnetic field was on the order of a few percent (less than 5 percent). This newly developed scheme was applied to analyze a solar vector magnetogram, and the results were compared with the results deduced from the classical potential field method. The comparison shows that additional physical features of the vector magnetogram were revealed in the nonlinear force-free case.

  13. Discrete Adjoint-Based Design Optimization of Unsteady Turbulent Flows on Dynamic Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Diskin, Boris; Yamaleev, Nail K.

    2009-01-01

    An adjoint-based methodology for design optimization of unsteady turbulent flows on dynamic unstructured grids is described. The implementation relies on an existing unsteady three-dimensional unstructured grid solver capable of dynamic mesh simulations and discrete adjoint capabilities previously developed for steady flows. The discrete equations for the primal and adjoint systems are presented for the backward-difference family of time-integration schemes on both static and dynamic grids. The consistency of sensitivity derivatives is established via comparisons with complex-variable computations. The current work is believed to be the first verified implementation of an adjoint-based optimization methodology for the true time-dependent formulation of the Navier-Stokes equations in a practical computational code. Large-scale shape optimizations are demonstrated for turbulent flows over a tiltrotor geometry and a simulated aeroelastic motion of a fighter jet.
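
    The consistency check via complex variables mentioned above is commonly carried out with the complex-step derivative approximation; the Python sketch below demonstrates the idea on a standard scalar test function (an assumption standing in for a flow-solver output, not the solver itself), comparing it against a central difference:

    ```python
    import numpy as np

    def f(x):
        """Toy objective standing in for a flow-solver output (e.g. lift or drag)."""
        return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

    def complex_step(f, x, h=1e-30):
        """Complex-step derivative: near machine precision, no subtractive cancellation."""
        return np.imag(f(x + 1j * h)) / h

    def central_difference(f, x, h=1e-6):
        return (f(x + h) - f(x - h)) / (2.0 * h)

    x0 = 1.5
    print("complex step      :", complex_step(f, x0))
    print("central difference:", central_difference(f, x0))
    ```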

  14. Estimation of noise properties for TV-regularized image reconstruction in computed tomography.

    PubMed

    Sánchez, Adrian A

    2015-09-21

    A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR.

  15. Estimation of noise properties for TV-regularized image reconstruction in computed tomography

    NASA Astrophysics Data System (ADS)

    Sánchez, Adrian A.

    2015-09-01

    A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR.

  16. Evaluation of out-of-core computer programs for the solution of symmetric banded linear equations. [simultaneous equations

    NASA Technical Reports Server (NTRS)

    Dunham, R. S.

    1976-01-01

    FORTRAN-coded out-of-core equation solvers that use direct methods to solve symmetric banded systems of simultaneous algebraic equations are evaluated. Banded, frontal and column (skyline) solvers were studied, as well as solvers that can partition the working area and thus fit into any available core. Comparison timings are presented for several typical two-dimensional and three-dimensional continuum-type grids of elements with and without midside nodes. Extensive conclusions are also given.

  17. Simple algorithms for digital pulse-shape discrimination with liquid scintillation detectors

    NASA Astrophysics Data System (ADS)

    Alharbi, T.

    2015-01-01

    The development of compact, battery-powered digital liquid scintillation neutron detection systems for field applications requires digital pulse processing (DPP) algorithms with minimum computational overhead. To meet this demand, two DPP algorithms for the discrimination of neutron and γ-rays with liquid scintillation detectors were developed and examined by using an NE213 liquid scintillation detector in a mixed radiation field. The first algorithm is based on the relation between the amplitude of a current pulse at the output of a photomultiplier tube and the amount of charge contained in the pulse. A figure-of-merit (FOM) value of 0.98 with a 450 keVee (electron equivalent energy) energy threshold was achieved with this method when pulses were sampled at 250 MSample/s and with 8-bit resolution. Compared to the similar method of charge comparison, this method requires only a single integration window, thereby reducing the amount of computations by approximately 40%. The second approach is a digital version of the trailing-edge constant-fraction discrimination method. A FOM value of 0.84 with an energy threshold of 450 keVee was achieved with this method. In comparison with the similar method of rise-time discrimination, this method requires a single time pick-off, thereby reducing the amount of computations by approximately 50%. The algorithms described in this work are useful for developing portable detection systems for applications such as homeland security, radiation dosimetry and environmental monitoring.
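
    In the spirit of the first algorithm (a ratio of pulse amplitude to integrated charge), the following Python sketch builds synthetic fast/slow scintillation pulses and computes a figure of merit; the pulse shapes, decay times, noise level, and threshold are invented for illustration and do not reproduce the reported FOM values:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    dt = 4.0  # ns per sample (250 MSample/s, as in the abstract)

    def pulse(slow_fraction, n=64):
        """Synthetic PMT pulse: fast + slow exponential decay components plus noise."""
        t = np.arange(n) * dt
        fast = np.exp(-t / 10.0)
        slow = np.exp(-t / 120.0)
        p = (1.0 - slow_fraction) * fast + slow_fraction * slow
        return p + rng.normal(0.0, 0.01, n)

    def amplitude_over_charge(p):
        """Discrimination parameter: peak amplitude divided by total integrated charge."""
        return p.max() / np.trapz(p, dx=dt)

    # Gamma-like events carry little slow light; neutron-like events carry more.
    gammas   = [amplitude_over_charge(pulse(0.10)) for _ in range(2000)]
    neutrons = [amplitude_over_charge(pulse(0.35)) for _ in range(2000)]

    # Figure of merit: peak separation divided by the sum of the two FWHMs.
    fwhm = lambda s: 2.355 * np.std(s)
    fom = abs(np.mean(gammas) - np.mean(neutrons)) / (fwhm(gammas) + fwhm(neutrons))
    print(f"FOM = {fom:.2f}")
    ```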

  18. A comparison study of visually stimulated brain-computer and eye-tracking interfaces

    NASA Astrophysics Data System (ADS)

    Suefusa, Kaori; Tanaka, Toshihisa

    2017-06-01

    Objective. Brain-computer interfacing (BCI) based on visual stimuli detects the target on a screen on which a user is focusing. The detection of the gazing target can be achieved by tracking gaze positions with a video camera, which is called eye-tracking or eye-tracking interfaces (ETIs). The two types of interface have been developed in different communities. Thus, little work on a comprehensive comparison between these two types of interface has been reported. This paper quantitatively compares the performance of these two interfaces on the same experimental platform. Specifically, our study is focused on two major paradigms of BCI and ETI: steady-state visual evoked potential-based BCIs and dwelling-based ETIs. Approach. Recognition accuracy and the information transfer rate were measured by giving subjects the task of selecting one of four targets by gazing at it. The targets were displayed in three different sizes (with sides 20, 40 and 60 mm long) to evaluate performance with respect to the target size. Main results. The experimental results showed that the BCI was comparable to the ETI in terms of accuracy and the information transfer rate. In particular, when the size of a target was relatively small, the BCI had significantly better performance than the ETI. Significance. The results on which of the two interfaces works better in different situations would not only enable us to improve the design of the interfaces but would also allow for the appropriate choice of interface based on the situation. Specifically, one can choose an interface based on the size of the screen that displays the targets.

  19. Comparing DNS and Experiments of Subcritical Flow Past an Isolated Surface Roughness Element

    NASA Astrophysics Data System (ADS)

    Doolittle, Charles; Goldstein, David

    2009-11-01

    Results are presented from computational and experimental studies of subcritical roughness within a Blasius boundary layer. This work stems from discrepancies presented by Stephani and Goldstein (AIAA Paper 2009-585) where DNS results did not agree with hot-wire measurements. The near wake regions of cylindrical surface roughness elements corresponding to roughness-based Reynolds numbers Rek of about 202 are of specific concern. Laser-Doppler anemometry and flow visualization in water, as well as the same spectral DNS code used by Stephani and Goldstein are used to obtain both quantitative and qualitative comparisons with previous results. Conclusions regarding previous studies will be presented alongside discussion of current work including grid resolution studies and an examination of vorticity dynamics.

  20. Implementation and evaluation of an efficient secure computation system using ‘R’ for healthcare statistics

    PubMed Central

    Chida, Koji; Morohashi, Gembu; Fuji, Hitoshi; Magata, Fumihiko; Fujimura, Akiko; Hamada, Koki; Ikarashi, Dai; Yamamoto, Ryuichi

    2014-01-01

    Background and objective While the secondary use of medical data has gained attention, its adoption has been constrained due to protection of patient privacy. Making medical data secure by de-identification can be problematic, especially when the data concerns rare diseases. We require rigorous security management measures. Materials and methods Using secure computation, an approach from cryptography, our system can compute various statistics over encrypted medical records without decrypting them. An issue of secure computation is that the amount of processing time required is immense. We implemented a system that securely computes healthcare statistics from the statistical computing software ‘R’ by effectively combining secret-sharing-based secure computation with original computation. Results Testing confirmed that our system could correctly complete computation of average and unbiased variance of approximately 50 000 records of dummy insurance claim data in a little over a second. Computation including conditional expressions and/or comparison of values, for example, t test and median, could also be correctly completed in several tens of seconds to a few minutes. Discussion If medical records are simply encrypted, the risk of leaks exists because decryption is usually required during statistical analysis. Our system possesses high-level security because medical records remain in encrypted state even during statistical analysis. Also, our system can securely compute some basic statistics with conditional expressions using ‘R’ that works interactively while secure computation protocols generally require a significant amount of processing time. Conclusions We propose a secure statistical analysis system using ‘R’ for medical data that effectively integrates secret-sharing-based secure computation and original computation. PMID:24763677

  1. A Temporal Credential-Based Mutual Authentication with Multiple-Password Scheme for Wireless Sensor Networks

    PubMed Central

    Zhang, Ruisheng; Liu, Qidong

    2017-01-01

    Wireless sensor networks (WSNs), which consist of a large number of sensor nodes, have become among the most important technologies in numerous fields, such as environmental monitoring, military surveillance, control systems in nuclear reactors, vehicle safety systems, and medical monitoring. The most serious drawback for the widespread application of WSNs is the lack of security. Given the resource limitation of WSNs, traditional security schemes are unsuitable. Approaches toward withstanding related attacks with small overhead have thus recently been studied by many researchers. Numerous studies have focused on the authentication scheme for WSNs, but most of these works cannot achieve both strong security and low overhead. Nam et al. proposed a two-factor authentication scheme with lightweight sensor computation for WSNs. In this paper, we review this scheme, emphasize its drawbacks, and propose a temporal credential-based mutual authentication with a multiple-password scheme for WSNs. Our scheme uses multiple passwords to achieve three-factor security performance and generate a session key between user and sensor nodes. The security analysis phase shows that our scheme can withstand related attacks, including a lost password threat, and the comparison phase shows that our scheme involves a relatively small overhead. In the overhead comparison, the results indicate that more than 95% of the overhead is communication rather than computation overhead. This result motivates us to pay more attention to communication overhead than to computation overhead in future research. PMID:28135288

  2. A Temporal Credential-Based Mutual Authentication with Multiple-Password Scheme for Wireless Sensor Networks.

    PubMed

    Liu, Xin; Zhang, Ruisheng; Liu, Qidong

    2017-01-01

    Wireless sensor networks (WSNs), which consist of a large number of sensor nodes, have become among the most important technologies in numerous fields, such as environmental monitoring, military surveillance, control systems in nuclear reactors, vehicle safety systems, and medical monitoring. The most serious drawback for the widespread application of WSNs is the lack of security. Given the resource limitation of WSNs, traditional security schemes are unsuitable. Approaches toward withstanding related attacks with small overhead have thus recently been studied by many researchers. Numerous studies have focused on the authentication scheme for WSNs, but most of these works cannot achieve both strong security and low overhead. Nam et al. proposed a two-factor authentication scheme with lightweight sensor computation for WSNs. In this paper, we review this scheme, emphasize its drawbacks, and propose a temporal credential-based mutual authentication with a multiple-password scheme for WSNs. Our scheme uses multiple passwords to achieve three-factor security performance and generate a session key between user and sensor nodes. The security analysis phase shows that our scheme can withstand related attacks, including a lost password threat, and the comparison phase shows that our scheme involves a relatively small overhead. In the overhead comparison, the results indicate that more than 95% of the overhead is communication rather than computation overhead. This result motivates us to pay more attention to communication overhead than to computation overhead in future research.

  3. Comparison of two computer codes for crack growth analysis: NASCRAC Versus NASA/FLAGRO

    NASA Technical Reports Server (NTRS)

    Stallworth, R.; Meyers, C. A.; Stinson, H. C.

    1989-01-01

    Results are presented from a comparison study of two computer codes for crack growth analysis, NASCRAC and NASA/FLAGRO. The two computer codes gave compatible, conservative results when the part-through-crack analysis solutions were compared against experimental test data. Results showed good correlation between the codes for the through-crack-at-a-lug solution; for that case, NASA/FLAGRO gave the most conservative results.

  4. D-region blunt probe data analysis using hybrid computer techniques

    NASA Technical Reports Server (NTRS)

    Burkhard, W. J.

    1973-01-01

    The feasibility of performing data reduction techniques with a hybrid computer was studied. The data was obtained from the flight of a parachute-borne probe through the D-region of the ionosphere. A presentation of the theory of blunt probe operation is included with emphasis on the equations necessary to perform the analysis. This is followed by a discussion of computer program development. Included in this discussion is a comparison of computer and hand reduction results for the blunt probe launched on 31 January 1972. The comparison showed that it was both feasible and desirable to use the computer for data reduction. The results of computer data reduction performed on flight data acquired from five blunt probes are also presented.

  5. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

    PubMed

    Durstewitz, Daniel

    2017-06-01

    The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however; rather, they were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable recovery of relevant aspects of the nonlinear dynamics underlying observed neuronal time series and directly link these to computational properties.
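
    For orientation, a piecewise-linear RNN latent state model of the general form z_t = A z_{t-1} + W φ(z_{t-1}) + h + ε_t with φ = max(0, ·) and linear-Gaussian observations can be simulated in a few lines of Python; the sketch below only generates data from such a model with arbitrary parameters and does not implement the EM/Laplace estimation scheme of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    dim_z, dim_x, T = 4, 10, 500     # latent dimension, observed units, time steps

    # Latent PLRNN parameters (randomly chosen, purely illustrative).
    A = np.diag(rng.uniform(0.6, 0.9, dim_z))        # linear (diagonal) dynamics
    W = 0.3 * rng.normal(size=(dim_z, dim_z))        # coupling through the nonlinearity
    np.fill_diagonal(W, 0.0)
    h = 0.1 * rng.normal(size=dim_z)                 # bias
    Q = 0.01 * np.eye(dim_z)                         # process noise covariance

    # Linear-Gaussian observation model (e.g. kernel-smoothed spike counts).
    B = rng.normal(size=(dim_x, dim_z))
    R_obs = 0.05 * np.eye(dim_x)

    relu = lambda v: np.maximum(v, 0.0)

    z = np.zeros((T, dim_z))
    x = np.zeros((T, dim_x))
    for t in range(1, T):
        z[t] = A @ z[t - 1] + W @ relu(z[t - 1]) + h \
               + rng.multivariate_normal(np.zeros(dim_z), Q)
        x[t] = B @ z[t] + rng.multivariate_normal(np.zeros(dim_x), R_obs)

    print("latent trajectory shape:", z.shape, "observations shape:", x.shape)
    ```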

  6. Computer modelling of BaY2F8: defect structure, rare earth doping and optical behaviour

    NASA Astrophysics Data System (ADS)

    Amaral, J. B.; Couto Dos Santos, M. A.; Valerio, M. E. G.; Jackson, R. A.

    2005-10-01

    BaY2F8, when doped with rare earth elements, is a material of interest in the development of solid-state laser systems, especially for use in the infrared region. This paper presents the application of a computational technique, which combines atomistic modelling and crystal field calculations, in a study of rare earth doping of the material. Atomistic modelling is used to calculate the intrinsic defect structure and the symmetry and detailed geometry of the dopant ion-host lattice system, and this information is then used to calculate the crystal field parameters, which are an important indicator in assessing the optical behaviour of the dopant-crystal system. Energy levels are then calculated for the Dy3+-substituted material, and comparisons with the results of recent experimental work are made.

  7. Comparison of digital controllers used in magnetic suspension and balance systems

    NASA Technical Reports Server (NTRS)

    Kilgore, William A.

    1990-01-01

    Dynamic systems that were once controlled by analog circuits are now controlled by digital computers. Presented is a comparison of the digital controllers presently used with magnetic suspension and balance systems. The overall responses of the systems are compared using a computer simulation of the magnetic suspension and balance system and the digital controllers. The comparisons include responses to both simulated force and position inputs. A preferred digital controller is determined from the simulated responses.

  8. Cloud Computing for Protein-Ligand Binding Site Comparison

    PubMed Central

    2013-01-01

    The proteome-wide analysis of protein-ligand binding sites and their interactions with ligands is important in structure-based drug design and in understanding ligand cross reactivity and toxicity. The well-known and commonly used software, SMAP, has been designed for 3D ligand binding site comparison and similarity searching of a structural proteome. SMAP can also predict drug side effects and reassign existing drugs to new indications. However, the computing scale of SMAP is limited. We have developed a high availability, high performance system that expands the comparison scale of SMAP. This cloud computing service, called Cloud-PLBS, combines the SMAP and Hadoop frameworks and is deployed on a virtual cloud computing platform. To handle the vast amount of experimental data on protein-ligand binding site pairs, Cloud-PLBS exploits the MapReduce paradigm as a management and parallelizing tool. Cloud-PLBS provides a web portal and scalability through which biologists can address a wide range of computer-intensive questions in biology and drug discovery. PMID:23762824

  9. Cloud computing for protein-ligand binding site comparison.

    PubMed

    Hung, Che-Lun; Hua, Guan-Jie

    2013-01-01

    The proteome-wide analysis of protein-ligand binding sites and their interactions with ligands is important in structure-based drug design and in understanding ligand cross reactivity and toxicity. The well-known and commonly used software, SMAP, has been designed for 3D ligand binding site comparison and similarity searching of a structural proteome. SMAP can also predict drug side effects and reassign existing drugs to new indications. However, the computing scale of SMAP is limited. We have developed a high availability, high performance system that expands the comparison scale of SMAP. This cloud computing service, called Cloud-PLBS, combines the SMAP and Hadoop frameworks and is deployed on a virtual cloud computing platform. To handle the vast amount of experimental data on protein-ligand binding site pairs, Cloud-PLBS exploits the MapReduce paradigm as a management and parallelizing tool. Cloud-PLBS provides a web portal and scalability through which biologists can address a wide range of computer-intensive questions in biology and drug discovery.
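
    To make the MapReduce idea concrete, the toy sketch below distributes pairwise binding-site comparisons across "mapper" chunks and merges the scores in a reducer, without any Hadoop dependency. The similarity function and site names are placeholders; SMAP's actual 3D alignment scoring is not reproduced here.

```python
from itertools import combinations
from collections import defaultdict

# Placeholder similarity; SMAP's real 3D alignment score is not reproduced here.
def site_similarity(site_a, site_b):
    return len(set(site_a) & set(site_b)) / len(set(site_a) | set(site_b))

def map_phase(pair_chunk, sites):
    # Each mapper scores one chunk of binding-site pairs independently.
    return [((a, b), site_similarity(sites[a], sites[b])) for a, b in pair_chunk]

def reduce_phase(mapped_outputs):
    # The reducer merges per-chunk results into one similarity table.
    table = defaultdict(float)
    for chunk in mapped_outputs:
        for pair, score in chunk:
            table[pair] = score
    return dict(table)

sites = {"siteA": "HIS ASP SER GLY".split(),
         "siteB": "HIS ASP CYS GLY".split(),
         "siteC": "LYS ARG GLU".split()}
pairs = list(combinations(sites, 2))
chunks = [pairs[i::2] for i in range(2)]          # pretend 2 worker nodes
result = reduce_phase(map_phase(c, sites) for c in chunks)
print(result)
```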

  10. Disability, employment and work performance among people with ICD-10 anxiety disorders.

    PubMed

    Waghorn, Geoff; Chant, David; White, Paul; Whiteford, Harvey

    2005-01-01

    To ascertain at a population level, patterns of disability, labour force participation, employment and work performance among people with ICD-10 anxiety disorders in comparison to people without disability or long-term health conditions. A secondary analysis was conducted of a probability sample of 42 664 individuals collected in an Australian Bureau of Statistics (ABS) national survey in 1998. Trained lay interviewers using ICD-10 computer-assisted interviews identified household residents with anxiety disorders. Anxiety disorders were associated with: reduced labour force participation, degraded employment trajectories and impaired work performance compared to people without disabilities or long-term health conditions. People with anxiety disorders may need more effective treatments and assistance with completing education and training, joining and rejoining the workforce, developing career pathways, remaining in the workforce and sustaining work performance. A whole-of-government approach appears needed to reduce the burden of disease and increase community labour resources. Implications for clinicians, vocational professionals and policy makers are discussed.

  11. [The assessment of the general functional status and of the psychosomatic complaints of workers at hydroelectric power plants].

    PubMed

    Danev, S; Dapov, E; Pavlov, E; Nikolova, R

    1992-01-01

    The general functional status and psychosomatic complaints of 61 workers from hydroelectric power stations (HPS) were evaluated. The following methods were used: (1) assessment of the general functional state by computer analysis of heart rate variability, analysing changes in the following indices: the mean cardiac interval (X), its standard deviation (SD), the coefficient of variation (CV), the amplitude of the mode (AMO), the stress index (IS), the index of vegetative balance (IVB) and the homeostatic index (HI), the last three of which serve to derive a complex evaluation of chronic fatigue and work adaptation (ChFWA); (2) evaluation of psychosomatic complaints, using a questionnaire on subjective psychosomatic complaints; (3) measurement of systolic and diastolic blood pressure. The average values obtained in the HPS workers were compared with national population averages and with the values of a group of operators performing similar work at a thermal power station. In conclusion, with respect to ChFWA, the values obtained in the HPS workers were no more unfavourable than those measured in workers engaged in similar types of work in other industrial branches of the country; however, they were more unfavourable in comparison with the thermal power station operators. The operators' subjective evaluation of their mental and physical health was moderately worse, both in comparison with the national reference values and in comparison with those of the thermal power station operators.

  12. A modified elastic foundation contact model for application in 3D models of the prosthetic knee.

    PubMed

    Pérez-González, Antonio; Fenollosa-Esteve, Carlos; Sancho-Bru, Joaquín L; Sánchez-Marín, Francisco T; Vergara, Margarita; Rodríguez-Cervantes, Pablo J

    2008-04-01

    Different models have been used in the literature for the simulation of surface contact in biomechanical knee models. However, there is a lack of systematic comparisons of these models applied to the simulation of a common case, which will provide relevant information about their accuracy and suitability for application in models of the implanted knee. In this work a comparison of the Hertz model (HM), the elastic foundation model (EFM) and the finite element model (FEM) for the simulation of the elastic contact in a 3D model of the prosthetic knee is presented. From the results of this comparison it is found that although the nature of the EFM offers advantages when compared with that of the HM for its application to realistic prosthetic surfaces, and when compared with the FEM in CPU time, its predictions can differ from FEM in some circumstances. These differences are considerable if the comparison is performed for prescribed displacements, although they are less important for prescribed loads. To solve these problems a new modified elastic foundation model (mEFM) is proposed that maintains basically the simplicity of the original model while producing much more accurate results. In this paper it is shown that this new mEFM calculates pressure distribution and contact area with accuracy and short computation times for toroidal contacting surfaces. Although further work is needed to confirm its validity for more complex geometries the mEFM is envisaged as a good option for application in 3D knee models to predict prosthetic knee performance.
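
    For orientation, the sketch below illustrates the basic (unmodified) elastic foundation idea: the contact layer is treated as a bed of independent springs, so pressure at each point scales with local penetration. The sphere-on-plane geometry, material values, and layer-stiffness formula are illustrative assumptions, not the paper's toroidal prosthetic surfaces or the proposed mEFM correction.

```python
import numpy as np

# Elastic foundation ("bed of springs") contact: p = K * penetration,
# with K = E(1-nu) / ((1+nu)(1-2nu)h) a common choice of layer stiffness.
# Toy geometry: rigid sphere pressed into a flat elastic layer by depth d.
E, nu, h = 500e6, 0.46, 8e-3     # UHMWPE-like values (illustrative)
K = E * (1 - nu) / ((1 + nu) * (1 - 2 * nu) * h)

R, d = 0.03, 0.2e-3              # sphere radius [m], prescribed displacement [m]
xs = np.linspace(-0.01, 0.01, 401)
X, Y = np.meshgrid(xs, xs)
r2 = X**2 + Y**2

# Penetration of the sphere into the undeformed layer surface
penetration = np.clip(d - r2 / (2 * R), 0.0, None)   # parabolic approximation
pressure = K * penetration                            # EFM: local, uncoupled

cell = (xs[1] - xs[0])**2
print("contact area  [mm^2]:", 1e6 * cell * np.count_nonzero(pressure))
print("total load    [N]   :", cell * pressure.sum())
print("peak pressure [MPa] :", 1e-6 * pressure.max())
```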

  13. pK(A) in proteins solving the Poisson-Boltzmann equation with finite elements.

    PubMed

    Sakalli, Ilkay; Knapp, Ernst-Walter

    2015-11-05

    Knowledge of pK(A) values is an essential factor in understanding the function of proteins in living systems. We present a novel approach demonstrating that the finite element (FE) method of solving the linearized Poisson-Boltzmann equation (lPBE) can successfully be used to compute pK(A) values in proteins with high accuracy, as a possible replacement for the finite difference (FD) method. For this purpose, we implemented the software molecular Finite Element Solver (mFES) in the framework of the Karlsberg+ program to compute pK(A) values. This work focuses on a comparison between pK(A) computations obtained with the well-established FD method and with the newly developed FE method mFES, solving the lPBE using protein crystal structures without conformational changes. Accurate and coarse model systems are set up with mFES using a similar number of unknowns compared with the FD method. Our FE method delivers results for computations of pK(A) values and interaction energies of titratable groups, which are comparable in accuracy. We introduce different thermodynamic cycles to evaluate pK(A) values, and we show for the FE method how different parameters influence the accuracy of computed pK(A) values. © 2015 Wiley Periodicals, Inc.
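
    For reference, the linearized Poisson-Boltzmann equation and the standard thermodynamic-cycle relation for protein pK(A) values can be written as follows; the notation is generic and not necessarily that of Karlsberg+ or mFES.

```latex
% Linearized Poisson-Boltzmann equation (Gaussian units, generic notation):
\[
  -\nabla \cdot \left[ \varepsilon(\mathbf{r}) \, \nabla \phi(\mathbf{r}) \right]
  + \varepsilon(\mathbf{r}) \, \bar{\kappa}^{2}(\mathbf{r}) \, \phi(\mathbf{r})
  = 4 \pi \rho(\mathbf{r})
\]
% pK(A) of a titratable group from a thermodynamic cycle against a model compound
% of known pK(A), with the shift given by the transfer free-energy difference:
\[
  \mathrm{p}K_{\mathrm{A}}^{\mathrm{protein}}
  = \mathrm{p}K_{\mathrm{A}}^{\mathrm{model}}
  + \frac{\Delta \Delta G_{\mathrm{model} \rightarrow \mathrm{protein}}}{k_{\mathrm{B}} T \ln 10}
\]
```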

  14. Extension of a Kinetic Approach to Chemical Reactions to Electronic Energy Levels and Reactions Involving Charged Species With Application to DSMC Simulations

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2013-01-01

    The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. Recently introduced molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties are extended in the current work to include electronic energy level transitions and reactions involving charged particles. These extensions are shown to agree favorably with reported transition and reaction rates from the literature for near-equilibrium conditions. Also, the extensions are applied to the second flight of the Project FIRE flight experiment at 1634 seconds with a Knudsen number of 0.001 at an altitude of 76.4 km. In order to accomplish this, NASA's direct simulation Monte Carlo code DAC was rewritten to include the ability to simulate charge-neutral ionized flows, to take advantage of the recently introduced chemistry model, and to include the extensions presented in this work. The 1634-second data point was chosen for comparisons to be made in order to include a CFD solution. The Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid because, although near-transitional, the flow is still considered to be continuum. It is shown that the inclusion of electronic energy levels in the DSMC simulation is necessary for flows of this nature and is required for comparison to the CFD solution. The flow field solutions are also post-processed by the nonequilibrium radiation code HARA to compute the radiative portion of the heating, which is then compared to the total heating measured in flight.

  15. Distributed Fast Self-Organized Maps for Massive Spectrophotometric Data Analysis †.

    PubMed

    Dafonte, Carlos; Garabato, Daniel; Álvarez, Marco A; Manteiga, Minia

    2018-05-03

    Analyzing huge amounts of data becomes essential in the era of Big Data, where databases are populated with hundreds of Gigabytes that must be processed to extract knowledge. Hence, classical algorithms must be adapted towards distributed computing methodologies that leverage the underlying computational power of these platforms. Here, a parallel, scalable, and optimized design for self-organized maps (SOM) is proposed in order to analyze massive data gathered by the spectrophotometric sensor of the European Space Agency (ESA) Gaia spacecraft, although it could be extrapolated to other domains. The performance comparison between the sequential implementation and the distributed ones based on Apache Hadoop and Apache Spark is an important part of the work, as well as the detailed analysis of the proposed optimizations. Finally, a domain-specific visualization tool to explore astronomical SOMs is presented.
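
    As a point of reference for what is being parallelized, the sketch below is a minimal single-process SOM training loop: find the best-matching unit for each sample and pull nearby map units toward it with a shrinking Gaussian neighbourhood. The grid size, learning-rate schedule, and toy data are illustrative, not the settings used for the Gaia pipeline, and no Hadoop/Spark distribution is shown.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Classic sequential SOM: for each sample, find the best-matching unit
    (BMU) and pull nearby map units toward the sample with a Gaussian
    neighbourhood that shrinks over time."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))
    # Precompute each unit's grid coordinates for neighbourhood distances
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * np.exp(-epoch / epochs)
        sigma = sigma0 * np.exp(-epoch / epochs)
        for x in data:
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            influence = np.exp(-grid_d2 / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
    return weights

# Toy "spectra": 500 samples of 8 flux bins
data = np.random.default_rng(1).random((500, 8))
som = train_som(data)
print(som.shape)  # (10, 10, 8)
```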

  16. galario: Gpu Accelerated Library for Analyzing Radio Interferometer Observations

    NASA Astrophysics Data System (ADS)

    Tazzari, Marco; Beaujean, Frederik; Testi, Leonardo

    2017-10-01

    The galario library exploits the computing power of modern graphic cards (GPUs) to accelerate the comparison of model predictions to radio interferometer observations. It speeds up the computation of the synthetic visibilities given a model image (or an axisymmetric brightness profile) and their comparison to the observations.

  17. Multifractal analysis of information processing in hippocampal neural ensembles during working memory under Δ9-tetrahydrocannabinol administration

    PubMed Central

    Fetterhoff, Dustin; Opris, Ioan; Simpson, Sean L.; Deadwyler, Sam A.; Hampson, Robert E.; Kraft, Robert A.

    2014-01-01

    Background Multifractal analysis quantifies the time-scale-invariant properties in data by describing the structure of variability over time. By applying this analysis to hippocampal interspike interval sequences recorded during performance of a working memory task, a measure of long-range temporal correlations and multifractal dynamics can reveal single neuron correlates of information processing. New method Wavelet leaders-based multifractal analysis (WLMA) was applied to hippocampal interspike intervals recorded during a working memory task. WLMA can be used to identify neurons likely to exhibit information processing relevant to operation of brain–computer interfaces and nonlinear neuronal models. Results Neurons involved in memory processing (“Functional Cell Types” or FCTs) showed a greater degree of multifractal firing properties than neurons without task-relevant firing characteristics. In addition, previously unidentified FCTs were revealed because multifractal analysis suggested further functional classification. The cannabinoid-type 1 receptor partial agonist, tetrahydrocannabinol (THC), selectively reduced multifractal dynamics in FCT neurons compared to non-FCT neurons. Comparison with existing methods WLMA is an objective tool for quantifying the memory-correlated complexity represented by FCTs that reveals additional information compared to classification of FCTs using traditional z-scores to identify neuronal correlates of behavioral events. Conclusion z-Score-based FCT classification provides limited information about the dynamical range of neuronal activity characterized by WLMA. Increased complexity, as measured with multifractal analysis, may be a marker of functional involvement in memory processing. The level of multifractal attributes can be used to differentially emphasize neural signals to improve computational models and algorithms underlying brain–computer interfaces. PMID:25086297

  18. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    PubMed

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computation models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that were previously used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent, and the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that their final formulas could all be converted to a linear form; based on this finding, we propose a new algorithm, the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
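
    To illustrate why the Jaccard coefficient is useful here, the sketch below scores candidate drug-ADR associations as a Jaccard-weighted average of the ADR profiles of similar drugs, a simple neighbour-profile scheme. It is not the paper's general weighted profile method; the data and thresholds are synthetic placeholders.

```python
import numpy as np

def jaccard(a, b):
    """Jaccard coefficient between two binary profile vectors."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def score_adrs_for_drug(target, known_profiles):
    """Score each ADR for a target drug as a Jaccard-weighted average of
    the ADR profiles of similar drugs (a simple neighbour-profile scheme)."""
    weights = np.array([jaccard(target, p) for p in known_profiles])
    if weights.sum() == 0:
        return np.zeros(known_profiles.shape[1])
    return weights @ known_profiles / weights.sum()

rng = np.random.default_rng(0)
known = (rng.random((50, 30)) < 0.15).astype(int)   # 50 drugs x 30 ADRs
query = known[0].copy(); query[5:10] = 0            # partially known profile
scores = score_adrs_for_drug(query, known)
print(scores.round(2)[:10])
```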

  19. Controlling user access to electronic resources without password

    DOEpatents

    Smith, Fred Hewitt

    2015-06-16

    Described herein are devices and techniques for remotely controlling user access to a restricted computer resource. The process includes pre-determining an association of the restricted computer resource and computer-resource-proximal environmental information. Indicia of user-proximal environmental information are received from a user requesting access to the restricted computer resource. Received indicia of user-proximal environmental information are compared to associated computer-resource-proximal environmental information. User access to the restricted computer resource is selectively granted responsive to a favorable comparison in which the user-proximal environmental information is sufficiently similar to the computer-resource-proximal environmental information. In at least some embodiments, the process further includes receiving a user-supplied biometric measure and comparing it with a predetermined association of at least one biometric measure of an authorized user. Access to the restricted computer resource is granted in response to a favorable comparison.

  20. Metrics for comparing dynamic earthquake rupture simulations

    USGS Publications Warehouse

    Barall, Michael; Harris, Ruth A.

    2014-01-01

    Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools that is available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near‐fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (Ⓔ see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation makes it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes’ results can be directly compared. One approach for checking if dynamic rupture computer codes are working satisfactorily is to compare each code’s results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.
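
    As a toy illustration of a quantitative comparison metric (not the specific metrics proposed in the paper), the sketch below computes a normalized RMS difference between two codes' rupture-time fields sampled on a common fault grid.

```python
import numpy as np

def rupture_time_misfit(t_a, t_b):
    """Normalized RMS difference between two rupture-time fields sampled on
    the same fault grid (illustrative metric only)."""
    diff = t_a - t_b
    rms = np.sqrt(np.mean(diff ** 2))
    scale = np.sqrt(np.mean(((t_a + t_b) / 2) ** 2))
    return rms / scale if scale else 0.0

# Two synthetic "codes": circular rupture fronts with slightly different speeds
x = np.linspace(-15e3, 15e3, 151)        # along-strike [m]
z = np.linspace(0, 15e3, 76)             # down-dip [m]
X, Z = np.meshgrid(x, z)
r = np.hypot(X, Z - 7.5e3)               # distance from hypocenter
code_a = r / 3000.0                      # rupture speed 3.0 km/s
code_b = r / 3100.0                      # rupture speed 3.1 km/s
print(f"normalized RMS misfit: {rupture_time_misfit(code_a, code_b):.3f}")
```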

  1. A comparison of film and computer workstation measurements of degenerative spondylolisthesis: intraobserver and interobserver reliability.

    PubMed

    Bolesta, Michael J; Winslow, Lauren; Gill, Kevin

    2010-06-01

    A comparison of measurements of degenerative spondylolisthesis made on film and on computer workstations. To determine whether the 2 methodologies are comparable in some of the parameters used to assess lumbar degenerative spondylolisthesis. Digital radiology has been replacing analog radiographs. In scoliosis, several studies have shown that measurements made on digital and analog films are similar and that they are also similar to those made on computer workstations. Such work has not been done in spondylolisthesis. Twenty-four cases of lumbar degenerative spondylolisthesis were identified from our clinic practice. Three observers measured anterior displacement, sagittal rotation, and lumbar lordosis on digital films using the same protractor and pencil. The same parameters were measured on the same studies at clinical workstations. All measurements were repeated 2 weeks later. A statistician determined the intra- and interobserver reliability of the 2 measurement methods and the degree of agreement between the 2 methods. The differences between the first and second readings did reach statistical significance in some cases, but none of them were large enough to be clinically meaningful. The intraclass correlation coefficients (ICCs) were ≥0.80 except for one (0.67). The difference among the 3 observers was similarly statistically significant in a few instances but not enough to influence clinical decisions and with good ICCs (0.67 and better). Similarly, the differences in the 2 methods were small, and ICCs ranged from 0.69 to 0.98. This study supports the use of computer workstation measurements in lumbar degenerative spondylolisthesis. The parameters used in this study were comparable, whether measured on film or at clinical workstations.

  2. Hybrid computers and simulation languages in the study of dynamics of continuous systems

    NASA Technical Reports Server (NTRS)

    Acaccia, G. M.; Lucifredi, A. L.

    1970-01-01

    A comparison is presented of the use of hybrid computers and simulation languages as a means of studying the behavior of dynamic systems. Both procedures are defined and their advantages and disadvantages at the present state of the art are discussed. Some comparison and evaluation criteria are presented.

  3. Density functional theory calculations of 95Mo NMR parameters in solid-state compounds.

    PubMed

    Cuny, Jérôme; Furet, Eric; Gautier, Régis; Le Pollès, Laurent; Pickard, Chris J; d'Espinose de Lacaillerie, Jean-Baptiste

    2009-12-21

    The application of periodic density functional theory-based methods to the calculation of (95)Mo electric field gradient (EFG) and chemical shift (CS) tensors in solid-state molybdenum compounds is presented. Calculations of EFG tensors are performed using the projector augmented-wave (PAW) method. Comparison of the results with those obtained using the augmented plane wave + local orbitals (APW+lo) method and with available experimental values shows the reliability of the approach for (95)Mo EFG tensor calculation. CS tensors are calculated using the recently developed gauge-including projector augmented-wave (GIPAW) method. This work is the first application of the GIPAW method to a 4d transition-metal nucleus. The effects of ultra-soft pseudo-potential parameters, exchange-correlation functionals and structural parameters are precisely examined. Comparison with experimental results allows the validation of this computational formalism.

  4. Comparison of Experimental and Computational Aerothermodynamics of a 70-deg Sphere-Cone

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Perkins, John N.

    1996-01-01

    Numerical solutions for hypersonic flows of carbon-dioxide and air around a 70-deg sphere-cone have been computed using an axisymmetric non-equilibrium Navier-Stokes solver. Freestream flow conditions for these computations were equivalent to those obtained in an experimental blunt-body heat-transfer study conducted in a high-enthalpy, hypervelocity expansion tube. Comparisons have been made between the computed and measured surface heat-transfer rates on the forebody and afterbody of the sphere-cone and on the sting which supported the test model. Computed heating rates were within the estimated experimental uncertainties of 10% on the forebody and 15% in the wake, except within the recirculating flow region of the wake.

  5. Two-Dimensional Thermal Boundary Layer Corrections for Convective Heat Flux Gauges

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Haddad, George

    2007-01-01

    This work presents a CFD (Computational Fluid Dynamics) study of two-dimensional thermal boundary layer correction factors for convective heat flux gauges mounted in a flat plate subjected to a surface temperature discontinuity, with variable properties taken into account. A two-equation k-omega turbulence model is considered. Results are obtained for a wide range of Mach numbers (1 to 5), gauge radius ratio, and wall temperature discontinuity. Comparisons are made for correction factors with constant properties and variable properties. It is shown that the variable-property effects on the heat flux correction factors become significant.

  6. Dopamine modulation of learning and memory in the prefrontal cortex: insights from studies in primates, rodents, and birds

    PubMed Central

    Puig, M. Victoria; Rose, Jonas; Schmidt, Robert; Freund, Nadja

    2014-01-01

    In this review, we provide a brief overview over the current knowledge about the role of dopamine transmission in the prefrontal cortex during learning and memory. We discuss work in humans, monkeys, rats, and birds in order to provide a basis for comparison across species that might help identify crucial features and constraints of the dopaminergic system in executive function. Computational models of dopamine function are introduced to provide a framework for such a comparison. We also provide a brief evolutionary perspective showing that the dopaminergic system is highly preserved across mammals. Even birds, following a largely independent evolution of higher cognitive abilities, have evolved a comparable dopaminergic system. Finally, we discuss the unique advantages and challenges of using different animal models for advancing our understanding of dopamine function in the healthy and diseased brain. PMID:25140130

  7. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a great presence of 'digital cameras' aimed more at the home markets. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter,' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog to digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  8. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids by Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.

  9. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.
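
    The key ingredient of the complex-variable formulation is the complex-step derivative, which avoids the subtractive cancellation of finite differences. The sketch below applies it to a toy function; it is only an illustration of the technique and does not reflect the actual solver implementation.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step approximation f'(x) ~ Im[f(x + i*h)] / h.
    Unlike finite differences there is no subtractive cancellation,
    so h can be tiny and the result is accurate to machine precision."""
    return np.imag(f(x + 1j * h)) / h

def f(x):
    # Toy real-valued residual-like function; must be written so it also
    # accepts complex arguments (the key implementation requirement).
    return np.exp(x) * np.sin(x) / (1.0 + x ** 2)

x0 = 0.7
exact = (np.exp(x0) * (np.sin(x0) + np.cos(x0)) * (1 + x0 ** 2)
         - np.exp(x0) * np.sin(x0) * 2 * x0) / (1 + x0 ** 2) ** 2
print("complex step :", complex_step_derivative(f, x0))
print("exact        :", exact)
```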

  10. Automated bond order assignment as an optimization problem.

    PubMed

    Dehof, Anna Katharina; Rurainski, Alexander; Bui, Quang Bao Anh; Böcker, Sebastian; Lenhof, Hans-Peter; Hildebrandt, Andreas

    2011-03-01

    Numerous applications in Computational Biology process molecular structures and hence strongly rely not only on correct atomic coordinates but also on correct bond order information. For proteins and nucleic acids, bond orders can be easily deduced but this does not hold for other types of molecules like ligands. For ligands, bond order information is not always provided in molecular databases and thus a variety of approaches tackling this problem have been developed. In this work, we extend an ansatz proposed by Wang et al. that assigns connectivity-based penalty scores and tries to heuristically approximate its optimum. Here, we present three efficient and exact solvers for the problem, replacing the heuristic approximation scheme of the original approach: an A* approach, an ILP approach and a fixed-parameter tractable (FPT) approach. We implemented and evaluated the original implementation and our A*, ILP and FPT formulations on the MMFF94 validation suite and the KEGG Drug database. We show the benefit of computing exact solutions of the penalty minimization problem and the additional gain when computing all optimal (or even suboptimal) solutions. We close with a detailed comparison of our methods. The A* and ILP solvers are integrated into the open-source C++ LGPL library BALL and the molecular visualization and modelling tool BALLView and can be downloaded from our homepage www.ball-project.org. The FPT implementation can be downloaded from http://bio.informatik.uni-jena.de/software/.

  11. Effects of portable computing devices on posture, muscle activation levels and efficiency.

    PubMed

    Werth, Abigail; Babski-Reeves, Kari

    2014-11-01

    Very little research exists on ergonomic exposures when using portable computing devices. This study quantified muscle activity (forearm and neck), posture (wrist, forearm and neck), and performance (gross typing speed and error rates) differences across three portable computing devices (laptop, netbook, and slate computer) and two work settings (desk and sofa) during data entry tasks. Twelve participants completed test sessions on a single computer using a test-rest-test protocol (30 min of work at one work setting, 15 min of rest, 30 min of work at the other work setting). The slate computer resulted in significantly more non-neutral wrist, elbow and neck postures, particularly when working on the sofa. Performance on the slate computer was four times less than that of the other computers, though lower muscle activity levels were also found. Potential for injury or illness may be elevated when working on smaller, portable computers in non-traditional work settings. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  12. A practical comparison of algorithms for the measurement of multiscale entropy in neural time series data.

    PubMed

    Kuntzelman, Karl; Jack Rhodes, L; Harrington, Lillian N; Miskovic, Vladimir

    2018-06-01

    There is a broad family of statistical methods for capturing time series regularity, with increasingly widespread adoption by the neuroscientific community. A common feature of these methods is that they permit investigators to quantify the entropy of brain signals - an index of unpredictability/complexity. Despite the proliferation of algorithms for computing entropy from neural time series data there is scant evidence concerning their relative stability and efficiency. Here we evaluated several different algorithmic implementations (sample, fuzzy, dispersion and permutation) of multiscale entropy in terms of their stability across sessions, internal consistency and computational speed, accuracy and precision using a combination of electroencephalogram (EEG) and synthetic 1/ƒ noise signals. Overall, we report fair to excellent internal consistency and longitudinal stability over a one-week period for the majority of entropy estimates, with several caveats. Computational timing estimates suggest distinct advantages for dispersion and permutation entropy over other entropy estimates. Considered alongside the psychometric evidence, we suggest several ways in which researchers can maximize computational resources (without sacrificing reliability), especially when working with high-density M/EEG data or multivoxel BOLD time series signals. Copyright © 2018 Elsevier Inc. All rights reserved.
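
    For concreteness, the sketch below is a plain, unoptimized reference implementation of sample entropy at a single time scale, together with the coarse-graining step used by multiscale entropy. The m and r parameters follow common defaults and are not necessarily the settings used in the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1D series: -log of the conditional probability that
    sequences matching for m points (within tolerance r*std) also match for
    m+1 points. Plain O(N^2) reference implementation."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x)

    def count_matches(mm):
        # Same number of templates (n - m) for both lengths, per the usual definition
        templates = np.array([x[i:i + mm] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= tol)
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def coarse_grain(x, scale):
    """Non-overlapping averaging used by multiscale entropy."""
    n = len(x) // scale
    return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(0)
noise = rng.standard_normal(1000)
print([round(sample_entropy(coarse_grain(noise, s)), 2) for s in (1, 2, 4)])
```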

  13. Four Classical Methods for Determining Planetary Elliptic Elements: A Comparison

    NASA Astrophysics Data System (ADS)

    Celletti, Alessandra; Pinzari, Gabriella

    2005-09-01

    The discovery of the asteroid Ceres by Piazzi in 1801 motivated the development of a mathematical technique proposed by Gauss, (Theory of the Motion of the Heavenly Bodies Moving about the Sun in Conic Sections, 1963) which allows to recover the orbit of a celestial body starting from a minimum of three observations. Here we compare the method proposed by Gauss (Theory of the Motion of the Heavenly Bodies Moving about the Sun in Conic Sections, New York, 1963) with the techniques (based on three observations) developed by Laplace (Collected Works 10, 93 146, 1780) and by Mossotti (Memoria Postuma, 1866). We also consider another method developed by Mossotti (Nuova analisi del problema di determinare le orbite dei corpi celesti, 1816 1818), based on four observations. We provide a theoretical and numerical comparison among the different procedures. As an application, we consider the computation of the orbit of the asteroid Juno.

  14. Segmented Domain Decomposition Multigrid For 3-D Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Celestina, M. L.; Adamczyk, J. J.; Rubin, S. G.

    2001-01-01

    A Segmented Domain Decomposition Multigrid (SDDMG) procedure was developed for three-dimensional viscous flow problems as they apply to turbomachinery flows. The procedure divides the computational domain into a coarse mesh comprised of uniformly spaced cells. To resolve smaller length scales such as the viscous layer near a surface, segments of the coarse mesh are subdivided into a finer mesh. This is repeated until adequate resolution of the smallest relevant length scale is obtained. Multigrid is used to communicate information between the different grid levels. To test the procedure, simulation results will be presented for a compressor and turbine cascade. These simulations are intended to show the ability of the present method to generate grid independent solutions. Comparisons with data will also be presented. These comparisons will further demonstrate the usefulness of the present work for they allow an estimate of the accuracy of the flow modeling equations independent of error attributed to numerical discretization.

  15. Bayesian model comparison and parameter inference in systems biology using nested sampling.

    PubMed

    Pullen, Nick; Morris, Richard J

    2014-01-01

    Inferring parameters for models of biological processes is a current challenge in systems biology, as is the related problem of comparing competing models that explain the data. In this work we apply Skilling's nested sampling to address both of these problems. Nested sampling is a Bayesian method for exploring parameter space that transforms a multi-dimensional integral to a 1D integration over likelihood space. This approach focuses on the computation of the marginal likelihood or evidence. The ratio of evidences of different models leads to the Bayes factor, which can be used for model comparison. We demonstrate how nested sampling can be used to reverse-engineer a system's behaviour whilst accounting for the uncertainty in the results. The effect of missing initial conditions of the variables as well as unknown parameters is investigated. We show how the evidence and the model ranking can change as a function of the available data. Furthermore, the addition of data from extra variables of the system can deliver more information for model comparison than increasing the data from one variable, thus providing a basis for experimental design.
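
    For reference, the evidence integral that nested sampling estimates, its one-dimensional reformulation over prior mass, and the Bayes factor used for model comparison can be written (in generic notation) as:

```latex
% Evidence (marginal likelihood) of a model with parameters \theta and prior \pi:
\[
  Z = \int L(\theta)\, \pi(\theta)\, d\theta
    = \int_{0}^{1} L(X)\, dX ,
  \qquad
  X(\lambda) = \int_{L(\theta) > \lambda} \pi(\theta)\, d\theta ,
\]
% where X(\lambda) is the prior mass enclosed by the likelihood contour L(\theta)=\lambda;
% nested sampling estimates Z as a weighted sum \sum_i L_i \, \Delta X_i over shrinking
% prior-mass shells. Model comparison then uses the Bayes factor
\[
  B_{12} = \frac{Z_1}{Z_2} = \frac{p(D \mid M_1)}{p(D \mid M_2)} .
\]
```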

  16. Learning by statistical cooperation of self-interested neuron-like computing elements.

    PubMed

    Barto, A G

    1985-01-01

    Since the usual approaches to cooperative computation in networks of neuron-like computing elements do not assume that network components have any "preferences", they do not make substantive contact with game theoretic concepts, despite their use of some of the same terminology. In the approach presented here, however, each network component, or adaptive element, is a self-interested agent that prefers some inputs over others and "works" toward obtaining the most highly preferred inputs. Here we describe an adaptive element that is robust enough to learn to cooperate with other elements like itself in order to further its self-interests. It is argued that some of the longstanding problems concerning adaptation and learning by networks might be solvable by this form of cooperativity, and computer simulation experiments are described that show how networks of self-interested components that are sufficiently robust can solve rather difficult learning problems. We then place the approach in its proper historical and theoretical perspective through comparison with a number of related algorithms. A secondary aim of this article is to suggest that beyond what is explicitly illustrated here, there is a wealth of ideas from game theory and allied disciplines such as mathematical economics that can be of use in thinking about cooperative computation in both nervous systems and man-made systems.

  17. Wavelet subband coding of computer simulation output using the A++ array class library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.

    1995-07-01

    The goal of the project is to produce utility software for off-line compression of existing data and library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed in earlier work has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. The comparison of subband VQ and SQ algorithms indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. It appears that a careful study is needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.

  18. Bulk Enthalpy Calculations in the Arc Jet Facility at NASA ARC

    NASA Technical Reports Server (NTRS)

    Thompson, Corinna S.; Prabhu, Dinesh; Terrazas-Salinas, Imelda; Mach, Jeffrey J.

    2011-01-01

    The Arc Jet Facilities at NASA Ames Research Center generate test streams with enthalpies ranging from 5 MJ/kg to 25 MJ/kg. The present work describes a rigorous method, based on equilibrium thermodynamics, for calculating the bulk enthalpy of the flow produced in two of these facilities. The motivation for this work is to determine a dimensionally-correct formula for calculating the bulk enthalpy that is at least as accurate as the conventional formulas that are currently used. Unlike previous methods, the new method accounts for the amount of argon that is present in the flow. Comparisons are made with bulk enthalpies computed from an energy balance method. An analysis of primary facility operating parameters and their associated uncertainties is presented in order to further validate the enthalpy calculations reported herein.

  19. Simulation of Clinical Diagnosis: A Comparative Study

    PubMed Central

    de Dombal, F. T.; Horrocks, Jane C.; Staniland, J. R.; Gill, P. W.

    1971-01-01

    This paper presents a comparison between three different modes of simulation of the diagnostic process—a computer-based system, a verbal mode, and a further mode in which cards were selected from a large board. A total of 34 subjects worked through a series of 444 diagnostic simulations. The verbal mode was found to be most enjoyable and realistic. At the board, considerable amounts of extra irrelevant data were selected. At the computer, the users asked the same questions every time, whether or not they were relevant to the particular diagnosis. They also found the teletype distracting, noisy, and slow. The need for an acceptable simulation system remains, and at present our Minisim and verbal modes are proving useful in training junior clinical students. Future simulators should be flexible, economical, and acceptably realistic—and to us this latter criterion implies the two-way use of speech. We are currently developing and testing such a system. PMID:5579197

  20. Segmentation of 3D ultrasound computer tomography reflection images using edge detection and surface fitting

    NASA Astrophysics Data System (ADS)

    Hopp, T.; Zapf, M.; Ruiter, N. V.

    2014-03-01

    An essential processing step for comparison of Ultrasound Computer Tomography images to other modalities, as well as for the use in further image processing, is to segment the breast from the background. In this work we present a (semi-) automated 3D segmentation method which is based on the detection of the breast boundary in coronal slice images and a subsequent surface fitting. The method was evaluated using a software phantom and in-vivo data. The fully automatically processed phantom results showed that a segmentation of approx. 10% of the slices of a dataset is sufficient to recover the overall breast shape. Application to 16 in-vivo datasets was performed successfully using semi-automated processing, i.e. using a graphical user interface for manual corrections of the automated breast boundary detection. The processing time for the segmentation of an in-vivo dataset could be significantly reduced by a factor of four compared to a fully manual segmentation. Comparison to manually segmented images identified a smoother surface for the semi-automated segmentation with an average of 11% of differing voxels and an average surface deviation of 2mm. Limitations of the edge detection may be overcome by future updates of the KIT USCT system, allowing a fully-automated usage of our segmentation approach.

  1. Scalable quantum information processing with atomic ensembles and flying photons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei Feng; Yu Yafei; Feng Mang

    2009-10-15

    We present a scheme for scalable quantum information processing with atomic ensembles and flying photons. Using the Rydberg blockade, we encode the qubits in the collective atomic states, which can be manipulated quickly and easily due to the enhanced interaction in comparison to the single-atom case. We demonstrate that our proposed gating could be applied to the generation of two-dimensional cluster states for measurement-based quantum computation. Moreover, the atomic ensembles also function as quantum repeaters useful for long-distance quantum state transfer. We show that our scheme could also work in the bad-cavity or weak-coupling regime, which could greatly relax the experimental requirements. The efficient coherent operations on the ensemble qubits enable our scheme to be switchable between quantum computation and quantum communication using atomic ensembles.

  2. Comparison of alternative designs for reducing complex neurons to equivalent cables.

    PubMed

    Burke, R E

    2000-01-01

    Reduction of the morphological complexity of actual neurons into accurate, computationally efficient surrogate models is an important problem in computational neuroscience. The present work explores the use of two morphoelectrotonic transformations, somatofugal voltage attenuation (AT cables) and signal propagation delay (DL cables), as bases for construction of electrotonically equivalent cable models of neurons. In theory, the AT and DL cables should provide more accurate lumping of membrane regions that have the same transmembrane potential than the familiar equivalent cables that are based only on somatofugal electrotonic distance (LM cables). In practice, AT and DL cables indeed provided more accurate simulations of the somatic transient responses produced by fully branched neuron models than LM cables. This was the case in the presence of a somatic shunt as well as when membrane resistivity was uniform.

  3. 3D CAFE modeling of grain structures: application to primary dendritic and secondary eutectic solidification

    NASA Astrophysics Data System (ADS)

    Carozzani, T.; Digonnet, H.; Gandin, Ch-A.

    2012-01-01

    A three-dimensional model is presented for the prediction of grain structures formed in casting. It is based on direct tracking of grain boundaries using a cellular automaton (CA) method. The model is fully coupled with a solution of the heat flow computed with a finite element (FE) method. Several unique capabilities are implemented including (i) the possibility to track the development of several types of grain structures, e.g. dendritic and eutectic grains, (ii) a coupling scheme that permits iterations between the FE method and the CA method, and (iii) tabulated enthalpy curves for the solid and liquid phases that offer the possibility to work with multicomponent alloys. The present CAFE model is also fully parallelized and runs on a cluster of computers. Demonstration is provided by direct comparison between simulated and recorded cooling curves for a directionally solidified aluminum-7 wt% silicon alloy.

  4. A performance comparison of current HPC systems: Blue Gene/Q, Cray XE6 and InfiniBand systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerbyson, Darren J.; Barker, Kevin J.; Vishnu, Abhinav

    2014-01-01

    We present here a performance analysis of three current architectures that have become commonplace in the High Performance Computing world. Blue Gene/Q is the third generation of systems from IBM that use modestly performing cores but at large scale in order to achieve high performance. The XE6 is the latest in a long line of Cray systems that use a 3-D topology but the first to use its Gemini interconnection network. InfiniBand provides the flexibility of using compute nodes from many vendors that can be connected in many possible topologies. The performance characteristics of each vary vastly, and the way in which nodes are allocated in each type of system can significantly impact achieved performance. In this work we compare these three systems using a combination of micro-benchmarks and a set of production applications. In addition we also examine the differences in performance variability observed on each system and quantify the lost performance using a combination of both empirical measurements and performance models. Our results show that significant performance can be lost in normal production operation of the Cray XE6 and InfiniBand clusters in comparison to Blue Gene/Q.

  5. Exploration of factors that affect the comparative effectiveness of physical and virtual manipulatives in an undergraduate laboratory

    NASA Astrophysics Data System (ADS)

    Chini, Jacquelyn J.; Madsen, Adrian; Gire, Elizabeth; Rebello, N. Sanjay; Puntambekar, Sadhana

    2012-06-01

    Recent research results have failed to support the conventionally held belief that students learn physics best from hands-on experiences with physical equipment. Rather, studies have found that students who perform similar experiments with computer simulations perform as well or better on measures of conceptual understanding than their peers who used physical equipment. In this study, we explored how university-level nonscience majors’ understanding of the physics concepts related to pulleys was supported by experimentation with real pulleys and a computer simulation of pulleys. We report that when students use one type of manipulative (physical or virtual), the comparison is influenced both by the concept studied and the timing of the post-test. Students performed similarly on questions related to force and mechanical advantage regardless of the type of equipment used. On the other hand, students who used the computer simulation performed better on questions related to work immediately after completing the activities; however, the two groups performed similarly on the work questions on a test given one week later. Additionally, both sequences of experimentation (physical-virtual and virtual-physical) equally supported students’ understanding of all of the concepts. These results suggest that both the concept learned and the stability of learning gains should continue to be explored to improve educators’ ability to select the best learning experience for a given topic.

  6. Fast Whole-Engine Stirling Analysis

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2005-01-01

    An experimentally validated approach is described for fast axisymmetric Stirling engine simulations. These simulations include the entire displacer interior and demonstrate it is possible to model a complete engine cycle in less than an hour. The focus of this effort was to demonstrate it is possible to produce useful Stirling engine performance results in a time-frame short enough to impact design decisions. The combination of utilizing the latest 64-bit Opteron computer processors, fiber-optical Myrinet communications, dynamic meshing, and across zone partitioning has enabled solution times at least 240 times faster than previous attempts at simulating the axisymmetric Stirling engine. A comparison of the multidimensional results, calibrated one-dimensional results, and known experimental results is shown. This preliminary comparison demonstrates that axisymmetric simulations can be very accurate, but more work remains to improve the simulations through such means as modifying the thermal equilibrium regenerator models, adding fluid-structure interactions, including radiation effects, and incorporating mechanodynamics.

  7. Fast Whole-Engine Stirling Analysis

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2007-01-01

    An experimentally validated approach is described for fast axisymmetric Stirling engine simulations. These simulations include the entire displacer interior and demonstrate it is possible to model a complete engine cycle in less than an hour. The focus of this effort was to demonstrate it is possible to produce useful Stirling engine performance results in a time-frame short enough to impact design decisions. The combination of utilizing the latest 64-bit Opteron computer processors, fiber-optical Myrinet communications, dynamic meshing, and across zone partitioning has enabled solution times at least 240 times faster than previous attempts at simulating the axisymmetric Stirling engine. A comparison of the multidimensional results, calibrated one-dimensional results, and known experimental results is shown. This preliminary comparison demonstrates that axisymmetric simulations can be very accurate, but more work remains to improve the simulations through such means as modifying the thermal equilibrium regenerator models, adding fluid-structure interactions, including radiation effects, and incorporating mechanodynamics.

  8. Computer use at work is associated with self-reported depressive and anxiety disorder.

    PubMed

    Kim, Taeshik; Kang, Mo-Yeol; Yoo, Min-Sang; Lee, Dongwook; Hong, Yun-Chul

    2016-01-01

    With the development of technology, extensive use of computers in the workplace is prevalent and increases efficiency. However, computer users are facing new harmful working conditions with high workloads and longer hours. This study aimed to investigate the association between computer use at work and self-reported depressive and anxiety disorder (DAD) in a nationally representative sample of South Korean workers. This cross-sectional study was based on the third Korean Working Conditions Survey (2011), and 48,850 workers were analyzed. Information about computer use and DAD was obtained from a self-administered questionnaire. We investigated the relation between computer use at work and DAD using logistic regression. The 12-month prevalence of DAD in computer-using workers was 1.46 %. After adjustment for socio-demographic factors, the odds ratio for DAD was higher in workers using computers more than 75 % of their workday (OR 1.69, 95 % CI 1.30-2.20) than in workers using computers less than 50 % of their shift. After stratifying by working hours, computer use for over 75 % of the work time was significantly associated with increased odds of DAD in 20-39, 41-50, 51-60, and over 60 working hours per week. After stratifying by occupation, education, and job status, computer use for more than 75 % of the work time was related with higher odds of DAD in sales and service workers, those with high school and college education, and those who were self-employed and employers. A high proportion of computer use at work may be associated with depressive and anxiety disorder. This finding suggests the necessity of a work guideline to help the workers suffering from high computer use at work.

  9. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.

  10. Self-adaptive trust based ABR protocol for MANETs using Q-learning.

    PubMed

    Kumar, Anitha Vijaya; Jeyapal, Akilandeswari

    2014-01-01

    Mobile ad hoc networks (MANETs) are a collection of mobile nodes with a dynamic topology. MANETs work under scalable conditions for many applications and pose different security challenges. Due to the nomadic nature of nodes, detecting misbehaviour is a complex problem. Nodes also share routing information among the neighbours in order to find the route to the destination. This requires nodes to trust each other. Thus we can state that trust is a key concept in secure routing mechanisms. A number of cryptographic protection techniques based on trust have been proposed. Q-learning is a recently used technique, to achieve adaptive trust in MANETs. In comparison to other machine learning computational intelligence techniques, Q-learning achieves optimal results. Our work focuses on computing a score using Q-learning to weigh the trust of a particular node over associativity based routing (ABR) protocol. Thus secure and stable route is calculated as a weighted average of the trust value of the nodes in the route and associativity ticks ensure the stability of the route. Simulation results show that Q-learning based trust ABR protocol improves packet delivery ratio by 27% and reduces the route selection time by 40% over ABR protocol without trust calculation.
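
    As a rough illustration of the approach described above, the sketch below (Python) maintains a per-node trust estimate with a standard Q-learning update and combines it with associativity ticks into a route score. The action set, reward convention, learning rate, and weighting are assumptions for illustration only, not the paper's actual design.

        # Illustrative sketch only; the paper's state/action/reward design is not reproduced here.
        ACTIONS = ("forward", "drop")

        def q_update(Q, node, action, reward, next_node, alpha=0.1, gamma=0.9):
            """Standard Q-learning update, used here as a per-node trust estimate."""
            best_next = max(Q.get((next_node, a), 0.0) for a in ACTIONS)
            old = Q.get((node, action), 0.0)
            Q[(node, action)] = old + alpha * (reward + gamma * best_next - old)

        def route_score(trust_values, associativity_ticks, w=0.5):
            """Weighted average of node trust and associativity ticks along a candidate route."""
            trust = sum(trust_values) / len(trust_values)
            ticks = sum(associativity_ticks) / len(associativity_ticks)
            return w * trust + (1 - w) * ticks

    A route whose score exceeds that of the alternatives would then be preferred as both trusted and stable.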

  11. Self-Adaptive Trust Based ABR Protocol for MANETs Using Q-Learning

    PubMed Central

    Jeyapal, Akilandeswari

    2014-01-01

    Mobile ad hoc networks (MANETs) are a collection of mobile nodes with a dynamic topology. MANETs work under scalable conditions for many applications and pose different security challenges. Due to the nomadic nature of nodes, detecting misbehaviour is a complex problem. Nodes also share routing information among the neighbours in order to find the route to the destination. This requires nodes to trust each other. Thus we can state that trust is a key concept in secure routing mechanisms. A number of cryptographic protection techniques based on trust have been proposed. Q-learning is a recently used technique, to achieve adaptive trust in MANETs. In comparison to other machine learning computational intelligence techniques, Q-learning achieves optimal results. Our work focuses on computing a score using Q-learning to weigh the trust of a particular node over associativity based routing (ABR) protocol. Thus secure and stable route is calculated as a weighted average of the trust value of the nodes in the route and associativity ticks ensure the stability of the route. Simulation results show that Q-learning based trust ABR protocol improves packet delivery ratio by 27% and reduces the route selection time by 40% over ABR protocol without trust calculation. PMID:25254243

  12. In search of Leonardo: computer-based facial image analysis of Renaissance artworks for identifying Leonardo as subject

    NASA Astrophysics Data System (ADS)

    Tyler, Christopher W.; Smith, William A. P.; Stork, David G.

    2012-03-01

    One of the enduring mysteries in the history of the Renaissance is the adult appearance of the archetypical "Renaissance Man," Leonardo da Vinci. His only acknowledged self-portrait is from an advanced age, and various candidate images of younger men are difficult to assess given the absence of documentary evidence. One clue about Leonardo's appearance comes from the remark of the contemporary historian, Vasari, that the sculpture of David by Leonardo's master, Andrea del Verrocchio, was based on the appearance of Leonardo when he was an apprentice. Taking a cue from this statement, we suggest that the more mature sculpture of St. Thomas, also by Verrocchio, might also have been a portrait of Leonardo. We tested the possibility that Leonardo was the subject for Verrocchio's sculpture using a novel computational technique for the comparison of three-dimensional facial configurations. Based on quantitative measures of similarity, we also assess whether another pair of candidate two-dimensional images are plausibly attributable as portraits of Leonardo as a young adult. Our results are consistent with the claim that Leonardo is indeed the subject in these works, but comparisons with a larger corpus of candidate artworks are needed before our results achieve statistical significance.

  13. BESIII physical offline data analysis on virtualization platform

    NASA Astrophysics Data System (ADS)

    Huang, Q.; Li, H.; Kan, B.; Shi, J.; Lei, X.

    2015-12-01

    In this contribution, we present ongoing work that aims to bring the BESIII computing system higher resource utilization and more efficient job operations through cloud and virtualization technology based on OpenStack and KVM. We begin with the architecture of the BESIII offline software to understand how it works. We mainly report on KVM performance evaluation and optimization across various hardware and kernel factors. Experimental results show that the CPU performance penalty of KVM can be decreased to approximately 3%. In addition, a performance comparison between KVM and physical machines in terms of CPU, disk I/O and network I/O is also presented. Finally, we present our development work, an adaptive cloud scheduler, which allocates and reclaims VMs dynamically according to the status of the TORQUE queue and the size of the resource pool to improve resource utilization and job processing efficiency.
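
    A minimal sketch (Python) of the kind of adaptive policy such a scheduler might apply: grow the VM pool while jobs wait in the batch queue and reclaim idle VMs otherwise. The function and its inputs are hypothetical illustrations, not the actual scheduler's interface to TORQUE or OpenStack.

        def scale_decision(queued_jobs, idle_vms, pool_size, max_pool):
            """Return the number of VMs to start (>0) or reclaim (<0); illustrative policy only."""
            if queued_jobs > 0 and pool_size < max_pool:
                return min(queued_jobs, max_pool - pool_size)   # grow while jobs wait and capacity remains
            if queued_jobs == 0 and idle_vms > 0:
                return -idle_vms                                 # shrink when machines sit idle
            return 0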

  14. Fully Decentralized Semi-supervised Learning via Privacy-preserving Matrix Completion.

    PubMed

    Fierimonte, Roberto; Scardapane, Simone; Uncini, Aurelio; Panella, Massimo

    2016-08-26

    Distributed learning refers to the problem of inferring a function when the training data are distributed among different nodes. While significant work has been done in the contexts of supervised and unsupervised learning, the intermediate case of semi-supervised learning in the distributed setting has received less attention. In this paper, we propose an algorithm for this class of problems by extending the framework of manifold regularization. The main component of the proposed algorithm is a fully distributed computation of the adjacency matrix of the training patterns. To this end, we propose a novel algorithm for low-rank distributed matrix completion, based on the framework of diffusion adaptation. Overall, the distributed semi-supervised algorithm is efficient and scalable, and it can preserve privacy through the inclusion of flexible privacy-preserving mechanisms for similarity computation. Experimental results and comparisons on a wide range of standard semi-supervised benchmarks validate our proposal.
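
    The central object in the algorithm is the adjacency (similarity) matrix of the training patterns. A minimal, centralized sketch of one common choice, an RBF kernel (NumPy; the kernel width is an assumed parameter), is shown below; the paper instead computes this matrix in a fully distributed, privacy-preserving way and completes it from partial entries.

        import numpy as np

        def rbf_adjacency(X, sigma=1.0):
            """Dense pairwise similarity matrix W[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
            sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
            return np.exp(-sq_dists / (2.0 * sigma ** 2))

        W = rbf_adjacency(np.random.rand(10, 3))   # 10 patterns in 3 dimensions -> (10, 10) matrix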

  15. Comparison of pre/post-operative CT image volumes to preoperative digitization of partial hepatectomies: a feasibility study in surgical validation

    NASA Astrophysics Data System (ADS)

    Dumpuri, Prashanth; Clements, Logan W.; Li, Rui; Waite, Jonathan M.; Stefansic, James D.; Geller, David A.; Miga, Michael I.; Dawant, Benoit M.

    2009-02-01

    Preoperative planning combined with image guidance has shown promise towards increasing the accuracy of liver resection procedures. The purpose of this study was to validate one such preoperative planning tool for four patients undergoing hepatic resection. Preoperative computed tomography (CT) images acquired before surgery were used to identify tumor margins and to plan the surgical approach for resection of these tumors. Surgery was then performed, with intraoperative digitization data acquired by an FDA-approved image-guided liver surgery system (Pathfinder Therapeutics, Inc., Nashville, TN). Within 5-7 days after surgery, post-operative CT image volumes were acquired. Registration of the data within a common coordinate reference was achieved, and the preoperative plans were compared to the postoperative volumes. Semi-quantitative comparisons are presented in this work, and preliminary results indicate that significant liver regeneration/hypertrophy may be present in the postoperative CT images. This could challenge pre/post-operative CT volume change comparisons as a means of evaluating the accuracy of preoperative surgical plans.

  16. High-performance computing in image registration

    NASA Astrophysics Data System (ADS)

    Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro

    2012-10-01

    Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications the processing of such images requires high-performance computing techniques in order to deliver timely responses, e.g. for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging task of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to subsequently compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES) exploiting parallel architectures and GPUs are presented. The innovative aspects of the implementation are (i) its effectiveness on a large variety of unorganized and complex datasets, (ii) its capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented.

  17. A comparison of queueing, cluster and distributed computing systems

    NASA Technical Reports Server (NTRS)

    Kaplan, Joseph A.; Nelson, Michael L.

    1993-01-01

    Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster. Cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing Systems (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.

  18. 20 plus Years of Computational Fluid Dynamics for the Space Shuttle

    NASA Technical Reports Server (NTRS)

    Gomez, Reynaldo J., III

    2011-01-01

    This slide presentation reviews the use of computational fluid dynamics in performing analysis of the space shuttle, with particular reference to the return-to-flight analysis and other shuttle problems. Slides show a comparison of pressure coefficients on the shuttle ascent configuration between wind tunnel tests and computed values; the evolution of the grid system for the space shuttle launch vehicle (SSLV) from the early 1980s to 2004; the grid configuration of the bipod ramp redesign from the original design to the current configuration; charts comparing computed solid rocket booster surface pressures, calculated over two grid systems (the original 14-grid system and the enhanced 113-grid system), with wind tunnel data; and computed flight orbiter wing loads compared with strain gage data from STS-50. The loss of STS-107 initiated an unprecedented review of all external environments. The current SSLV grid system of 600+ grids, 1.8 million surface points and 95+ million volume points is shown. The in-flight entry analyses are shown, and the use of overset CFD as a key part of many external tank redesign and debris assessments is discussed. The work that still remains to be accomplished for future shuttle flights is discussed.

  19. Uncertainty Quantification and Certification Prediction of Low-Boom Supersonic Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Reuter, Bryan W.; Walker, Eric L.; Kleb, Bil; Park, Michael A.

    2014-01-01

    The primary objective of this work was to develop and demonstrate a process for accurate and efficient uncertainty quantification and certification prediction of low-boom, supersonic, transport aircraft. High-fidelity computational fluid dynamics models of multiple low-boom configurations were investigated, including the Lockheed Martin SEEB-ALR body of revolution, the NASA 69 Delta Wing, and the Lockheed Martin 1021-01 configuration. A nonintrusive polynomial chaos surrogate modeling approach was used to reduce the computational cost of propagating mixed inherent (aleatory) and model-form (epistemic) uncertainty from both the computational fluid dynamics model and the near-field to ground-level propagation model. A methodology has also been introduced to quantify the plausibility that a design will pass certification under uncertainty. Results of this study include the analysis of each of the three configurations of interest under inviscid and fully turbulent flow assumptions. A comparison of the uncertainty outputs and sensitivity analyses between the configurations is also given. The results of this study illustrate the flexibility and robustness of the developed framework as a tool for uncertainty quantification and certification prediction of low-boom, supersonic aircraft.
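
    At its simplest, a nonintrusive polynomial chaos surrogate is a least-squares fit of orthogonal polynomials to model outputs at sampled inputs. The one-dimensional sketch below (NumPy, probabilists' Hermite basis, synthetic data) illustrates only that basic idea, not the multi-dimensional mixed aleatory/epistemic machinery used in the study.

        import numpy as np
        from numpy.polynomial import hermite_e as herm

        def pce_fit_1d(xi, y, order=3):
            """Least-squares polynomial chaos fit: y ~ sum_k c_k He_k(xi) for standard-normal inputs xi."""
            V = np.stack([herm.hermeval(xi, [0.0] * k + [1.0]) for k in range(order + 1)], axis=1)
            coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)
            return coeffs

        xi = np.random.randn(200)              # samples of the uncertain input
        y = 1.0 + 0.5 * xi + 0.1 * xi ** 2     # stand-in for expensive CFD outputs
        print(pce_fit_1d(xi, y))               # coefficients of the cheap surrogate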

  20. Alignment-free genetic sequence comparisons: a review of recent approaches by word analysis.

    PubMed

    Bonham-Carter, Oliver; Steele, Joe; Bastola, Dhundy

    2014-11-01

    Modern sequencing and genome assembly technologies have provided a wealth of data, which will soon require an analysis by comparison for discovery. Sequence alignment, a fundamental task in bioinformatics research, may be used but with some caveats. Seminal techniques and methods from dynamic programming are proving ineffective for this work owing to their inherent computational expense when processing large amounts of sequence data. These methods are prone to giving misleading information because of genetic recombination, genetic shuffling and other inherent biological events. New approaches from information theory, frequency analysis and data compression are available and provide powerful alternatives to dynamic programming. These new methods are often preferred, as their algorithms are simpler and are not affected by synteny-related problems. In this review, we provide a detailed discussion of computational tools, which stem from alignment-free methods based on statistical analysis from word frequencies. We provide several clear examples to demonstrate applications and the interpretations over several different areas of alignment-free analysis such as base-base correlations, feature frequency profiles, compositional vectors, an improved string composition and the D2 statistic metric. Additionally, we provide detailed discussion and an example of analysis by Lempel-Ziv techniques from data compression.
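
    As a concrete illustration of the word-frequency methods discussed above, the sketch below (Python) counts overlapping k-mers and evaluates the D2 statistic as the inner product of the two word-count vectors; the word length and the toy sequences are placeholders.

        from collections import Counter

        def kmer_counts(seq, k=4):
            """Count all overlapping words (k-mers) in a sequence."""
            return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

        def d2(seq_a, seq_b, k=4):
            """D2 statistic: inner product of the two k-mer count vectors."""
            ca, cb = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
            return sum(ca[w] * cb[w] for w in ca if w in cb)

        print(d2("ACGTACGTGA", "ACGTTTACGT", k=3))   # higher values indicate more shared word content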

  1. Propeller aircraft interior noise model utilization study and validation

    NASA Technical Reports Server (NTRS)

    Pope, L. D.

    1984-01-01

    Utilization and validation of a computer program designed for aircraft interior noise prediction is considered. The program, entitled PAIN (an acronym for Propeller Aircraft Interior Noise), permits (in theory) predictions of sound levels inside propeller driven aircraft arising from sidewall transmission. The objective of the work reported was to determine the practicality of making predictions for various airplanes and the extent of the program's capabilities. The ultimate purpose was to discern the quality of predictions for tonal levels inside an aircraft occurring at the propeller blade passage frequency and its harmonics. The effort involved three tasks: (1) program validation through comparisons of predictions with scale-model test results; (2) development of utilization schemes for large (full scale) fuselages; and (3) validation through comparisons of predictions with measurements taken in flight tests on a turboprop aircraft. Findings should enable future users of the program to efficiently undertake and correctly interpret predictions.

  2. The internal gravity wave spectrum in two high-resolution global ocean models

    NASA Astrophysics Data System (ADS)

    Arbic, B. K.; Ansong, J. K.; Buijsman, M. C.; Kunze, E. L.; Menemenlis, D.; Müller, M.; Richman, J. G.; Savage, A.; Shriver, J. F.; Wallcraft, A. J.; Zamudio, L.

    2016-02-01

    We examine the internal gravity wave (IGW) spectrum in two sets of high-resolution global ocean simulations that are forced concurrently by atmospheric fields and the astronomical tidal potential. We analyze global 1/12th and 1/25th degree HYCOM simulations, and global 1/12th, 1/24th, and 1/48th degree simulations of the MITgcm. We are motivated by the central role that IGWs play in ocean mixing, by operational considerations of the US Navy, which runs HYCOM as an ocean forecast model, and by the impact of the IGW continuum on the sea surface height (SSH) measurements that will be taken by the planned NASA/CNES SWOT wide-swath altimeter mission. We (1) compute the IGW horizontal wavenumber-frequency spectrum of kinetic energy, and interpret the results with linear dispersion relations computed from the IGW Sturm-Liouville problem, (2) compute and similarly interpret nonlinear spectral kinetic energy transfers in the IGW band, (3) compute and similarly interpret IGW contributions to SSH variance, (4) perform comparisons of modeled IGW kinetic energy frequency spectra with moored current meter observations, and (5) perform comparisons of modeled IGW kinetic energy vertical wavenumber-frequency spectra with moored observations. This presentation builds upon our work in Muller et al. (2015, GRL), who performed tasks (1), (2), and (4) in 1/12th and 1/25th degree HYCOM simulations, for one region of the North Pacific. New for this presentation are tasks (3) and (5), the inclusion of MITgcm solutions, and the analysis of additional ocean regions.

  3. Compute Server Performance Results

    NASA Technical Reports Server (NTRS)

    Stockdale, I. E.; Barton, John; Woodrow, Thomas (Technical Monitor)

    1994-01-01

    Parallel-vector supercomputers have been the workhorses of high performance computing. As expectations of future computing needs have risen faster than projected vector supercomputer performance, much work has been done investigating the feasibility of using Massively Parallel Processor systems as supercomputers. An even more recent development is the availability of high performance workstations which have the potential, when clustered together, to replace parallel-vector systems. We present a systematic comparison of floating point performance and price-performance for various compute server systems. A suite of highly vectorized programs was run on systems including traditional vector systems such as the Cray C90, and RISC workstations such as the IBM RS/6000 590 and the SGI R8000. The C90 system delivers 460 million floating point operations per second (FLOPS), the highest single-processor rate of any vendor. However, if the price-performance ratio (PPR) is considered most important, then the IBM and SGI processors are superior to the C90 processors. Even without code tuning, the IBM and SGI PPRs of 260 and 220 FLOPS per dollar exceed the C90 PPR of 160 FLOPS per dollar when running our highly vectorized suite.
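
    The price-performance ratio used above is simply the delivered floating-point rate divided by the system price; a trivial sketch (Python), with a hypothetical price chosen only to reproduce the quoted 160 FLOPS-per-dollar figure:

        def price_performance(mflops, price_dollars):
            """FLOPS per dollar: delivered floating-point rate divided by system price."""
            return mflops * 1e6 / price_dollars

        print(price_performance(460.0, 2_875_000))   # hypothetical price -> 160.0 FLOPS per dollar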

  4. SU (2) lattice gauge theory simulations on Fermi GPUs

    NASA Astrophysics Data System (ADS)

    Cardoso, Nuno; Bicudo, Pedro

    2011-05-01

    In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, the NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi) are also presented. In order to obtain high performance, the code must be optimized for the GPU architecture, i.e., an implementation that exploits the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov loop at finite T and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2000 configurations with APE smearing. With two Fermi GPUs we have achieved excellent performance of roughly 200× the speed of one CPU in single precision, around 110 Gflops. We also find that, using the Fermi architecture, double precision computations for the static quark-antiquark potential are not much slower (less than 2× slower) than single precision computations.

  5. Magneto Caloric Effect in Ni-Mn-Ga alloys: First Principles and Experimental studies

    NASA Astrophysics Data System (ADS)

    Odbadrakh, Khorgolkhuu; Nicholson, Don; Brown, Gregory; Rusanu, Aurelian; Rios, Orlando; Hodges, Jason; Safa-Sefat, Athena; Ludtka, Gerard; Eisenbach, Markus; Evans, Boyd

    2012-02-01

    Understanding the Magneto-Caloric Effect (MCE) in alloys with real technological potential is important to the development of viable MCE-based products. We report results of a computational and experimental investigation of candidate MCE materials, Ni-Mn-Ga alloys. The Wang-Landau statistical method is used in tandem with the Locally Self-consistent Multiple Scattering (LSMS) method to explore magnetic states of the system. A classical Heisenberg Hamiltonian is parametrized based on these states and used to obtain the density of magnetic states. The Curie temperature, isothermal entropy change, and adiabatic temperature change are then calculated from the density of states. Experiments to observe the structural and magnetic phase transformations were performed at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) on alloys of Ni-Mn-Ga and Fe-Ni-Mn-Ga-Cu. Data from the observations are discussed in comparison with the computational studies. This work was sponsored by the Laboratory Directed Research and Development Program (ORNL), by the Mathematical, Information, and Computational Sciences Division, Office of Advanced Scientific Computing Research (US DOE), and by the Materials Sciences and Engineering Division, Office of Basic Energy Sciences (US DOE).

  6. Blade loss transient dynamics analysis, volume 2. Task 2: Theoretical and analytical development. Task 3: Experimental verification

    NASA Technical Reports Server (NTRS)

    Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.

    1981-01-01

    The component element method was used to develop a transient dynamic analysis computer program which is essentially based on modal synthesis combined with a central, finite difference, numerical integration scheme. The methodology leads to a modular or building-block technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) program was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of the working equations, their discretization, the numerical solution scheme, the modular concept of engine modelling, the program's logical structure, and some illustrative results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are described.

  7. Theoretical Model Images and Spectra for Comparison with HESSI and Microwave Observations of Solar Flares

    NASA Technical Reports Server (NTRS)

    Fisher, Richard R. (Technical Monitor); Holman, G. D.; Sui, L.; McTiernan, J. M.; Petrosian, V.

    2003-01-01

    We have computed bremsstrahlung and gyrosynchrotron images and spectra from a model flare loop. Electrons with a power-law energy distribution are continuously injected at the top of a semi-circular magnetic loop. The Fokker-Planck equation is integrated to obtain the steady-state electron distribution throughout the loop. Coulomb scattering, energy losses, and magnetic mirroring are included in the model. The resulting electron distributions are used to compute the radiative emissions. Sample images and spectra are presented. We are developing these models for the interpretation of the High Energy Solar Spectroscopic Imager (HESSI) x-ray/gamma-ray data and coordinated microwave observations. The Fokker-Planck and radiation codes are available on the Web at http://hesperia.gsfc.nasa.gov/hessi/modelware.htm. This work is supported in part by the NASA Sun-Earth Connection Program.

  8. Luneburg lens and optical matrix algebra research

    NASA Technical Reports Server (NTRS)

    Wood, V. E.; Busch, J. R.; Verber, C. M.; Caulfield, H. J.

    1984-01-01

    Planar, as opposed to channelized, integrated optical circuits (IOCs) were stressed as the basis for computational devices. Both fully-parallel and systolic architectures are considered and the tradeoffs between the two device types are discussed. The Kalman filter approach is a most important computational method for many NASA problems. This approach to deriving a best-fit estimate for the state vector describing a large system leads to matrix sizes which are beyond the predicted capacities of planar IOCs. This problem is overcome by matrix partitioning, and several architectures for accomplishing this are described. The Luneburg lens work has involved development of lens design techniques, design of mask arrangements for producing lenses of desired shape, investigation of optical and chemical properties of arsenic trisulfide films, deposition of lenses both by thermal evaporation and by RF sputtering, optical testing of these lenses, modification of lens properties through ultraviolet irradiation, and comparison of measured lens properties with those expected from ray trace analyses.

  9. Simulations of Effects of Nanophase Iron Space Weather Products on Lunar Regolith Reflectance Spectra

    NASA Astrophysics Data System (ADS)

    Escobar-Cerezo, J.; Penttilä, A.; Kohout, T.; Muñoz, O.; Moreno, F.; Muinonen, K.

    2018-01-01

    Lunar soil spectra differ from the spectra of pulverized lunar rocks by reddening and darkening effects, and shallower absorption bands. These effects have been described in the past as a consequence of space weathering. In this work, we focus on the effects of nanophase iron (npFe0) inclusions on the experimental reflectance spectra of lunar regolith particles. The reflectance spectra are computed using SIRIS3, a code that combines ray optics with radiative-transfer modeling to simulate light scattering by different types of scatterers. The imaginary part of the refractive index as a function of wavelength of immature lunar soil is derived by comparison with the measured spectra of the corresponding material. Furthermore, the effect of adding nanophase iron inclusions on the reflectance spectra is studied. The computed spectra qualitatively reproduce the observed effects of space-weathered lunar regolith.

  10. Web-based encyclopedia on physical effects

    NASA Astrophysics Data System (ADS)

    Papliatseyeu, Andrey; Repich, Maryna; Ilyushonak, Boris; Hurbo, Aliaksandr; Makarava, Katerina; Lutkovski, Vladimir M.

    2004-07-01

    Web-based learning applications open new horizons for educators. In this work we present a computer encyclopedia designed to overcome the drawbacks of traditional paper information sources, such as awkward search, low update rate, limited copy count and high cost. Moreover, we intended to improve access and search functions in comparison with some Internet sources in order to make it more convenient. The system is developed using modern Java technologies (Java Servlets, Java Server Pages) and contains systemized information about the most important and well-studied physical effects. It may also be used in other fields of science. The system is accessible via Intranet/Internet networks by means of any up-to-date Internet browser. It may be used for general learning purposes and as a study guide or tutorial for performing laboratory work.

  11. Dynamic Grover search: applications in recommendation systems and optimization problems

    NASA Astrophysics Data System (ADS)

    Chakrabarty, Indranil; Khan, Shahzor; Singh, Vanshdeep

    2017-06-01

    In recent years, we have seen that the Grover search algorithm (Proceedings, 28th annual ACM symposium on the theory of computing, pp. 212-219, 1996), by using quantum parallelism, has revolutionized the solving of a huge class of NP problems in comparison to classical systems. In this work, we explore the idea of extending the Grover search algorithm to approximate algorithms. Here we analyze the applicability of Grover search to process an unstructured database with a dynamic selection function, in contrast to the static selection function used in the original work (Grover in Proceedings, 28th annual ACM symposium on the theory of computing, pp. 212-219, 1996). We show that this alteration allows us to extend the application of Grover search to the field of randomized search algorithms. Further, we use the dynamic Grover search algorithm to define the goals for a recommendation system, based on which we propose a recommendation algorithm that uses a binomial similarity distribution space, giving us a quadratic speedup over traditional classical unstructured recommendation systems. Finally, we see how dynamic Grover search can be used to tackle a wide range of optimization problems where we improve complexity over existing optimization algorithms.
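
    For reference, static Grover search over N items with M marked solutions uses about floor((pi/4) * sqrt(N/M)) oracle iterations, which is the source of the quadratic speedup mentioned above; a textbook sketch (Python), not the paper's dynamic variant:

        import math

        def grover_iterations(n_items, n_marked=1):
            """Textbook optimal oracle-call count for static Grover search."""
            return math.floor((math.pi / 4) * math.sqrt(n_items / n_marked))

        print(grover_iterations(1_000_000))   # ~785 iterations vs ~500,000 classical probes on average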

  12. A combined three-dimensional in vitro–in silico approach to modelling bubble dynamics in decompression sickness

    PubMed Central

    Stride, E.; Cheema, U.

    2017-01-01

    The growth of bubbles within the body is widely believed to be the cause of decompression sickness (DCS). Dive computer algorithms that aim to prevent DCS by mathematically modelling bubble dynamics and tissue gas kinetics are challenging to validate. This is due to lack of understanding regarding the mechanism(s) leading from bubble formation to DCS. In this work, a biomimetic in vitro tissue phantom and a three-dimensional computational model, comprising a hyperelastic strain-energy density function to model tissue elasticity, were combined to investigate key areas of bubble dynamics. A sensitivity analysis indicated that the diffusion coefficient was the most influential material parameter. Comparison of computational and experimental data revealed the bubble surface's diffusion coefficient to be 30 times smaller than that in the bulk tissue and dependent on the bubble's surface area. The initial size, size distribution and proximity of bubbles within the tissue phantom were also shown to influence their subsequent dynamics highlighting the importance of modelling bubble nucleation and bubble–bubble interactions in order to develop more accurate dive algorithms. PMID:29263127

  13. Formal Solutions for Polarized Radiative Transfer. I. The DELO Family

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janett, Gioele; Carlin, Edgar S.; Steiner, Oskar

    The discussion regarding the numerical integration of the polarized radiative transfer equation is still open and the comparison between the different numerical schemes proposed by different authors in the past is not fully clear. Aiming at facilitating the comprehension of the advantages and drawbacks of the different formal solvers, this work presents a reference paradigm for their characterization based on the concepts of order of accuracy, stability, and computational cost. Special attention is paid to understanding the numerical methods belonging to the Diagonal Element Lambda Operator family, in an attempt to highlight their specificities.

  14. Wrist Hypothermia Related to Continuous Work with a Computer Mouse: A Digital Infrared Imaging Pilot Study

    PubMed Central

    Reste, Jelena; Zvagule, Tija; Kurjane, Natalja; Martinsone, Zanna; Martinsone, Inese; Seile, Anita; Vanadzins, Ivars

    2015-01-01

    Computer work is characterized by sedentary static workload with low-intensity energy metabolism. The aim of our study was to evaluate the dynamics of skin surface temperature in the hand during prolonged computer mouse work under different ergonomic setups. Digital infrared imaging of the right forearm and wrist was performed during three hours of continuous computer work (measured at the start and every 15 minutes thereafter) in a laboratory with controlled ambient conditions. Four people participated in the study. Three different ergonomic computer mouse setups were tested on three different days (horizontal computer mouse without mouse pad; horizontal computer mouse with mouse pad and padded wrist support; vertical computer mouse without mouse pad). The study revealed a significantly strong negative correlation between the temperature of the dorsal surface of the wrist and time spent working with a computer mouse. Hand skin temperature decreased markedly after one hour of continuous computer mouse work. Vertical computer mouse work preserved more stable and higher temperatures of the wrist (>30 °C), while continuous use of a horizontal mouse for more than two hours caused an extremely low temperature (<28 °C) in distal parts of the hand. The preliminary observational findings indicate the significant effect of the duration and ergonomics of computer mouse work on the development of hand hypothermia. PMID:26262633

  15. Wrist Hypothermia Related to Continuous Work with a Computer Mouse: A Digital Infrared Imaging Pilot Study.

    PubMed

    Reste, Jelena; Zvagule, Tija; Kurjane, Natalja; Martinsone, Zanna; Martinsone, Inese; Seile, Anita; Vanadzins, Ivars

    2015-08-07

    Computer work is characterized by sedentary static workload with low-intensity energy metabolism. The aim of our study was to evaluate the dynamics of skin surface temperature in the hand during prolonged computer mouse work under different ergonomic setups. Digital infrared imaging of the right forearm and wrist was performed during three hours of continuous computer work (measured at the start and every 15 minutes thereafter) in a laboratory with controlled ambient conditions. Four people participated in the study. Three different ergonomic computer mouse setups were tested on three different days (horizontal computer mouse without mouse pad; horizontal computer mouse with mouse pad and padded wrist support; vertical computer mouse without mouse pad). The study revealed a significantly strong negative correlation between the temperature of the dorsal surface of the wrist and time spent working with a computer mouse. Hand skin temperature decreased markedly after one hour of continuous computer mouse work. Vertical computer mouse work preserved more stable and higher temperatures of the wrist (>30 °C), while continuous use of a horizontal mouse for more than two hours caused an extremely low temperature (<28 °C) in distal parts of the hand. The preliminary observational findings indicate the significant effect of the duration and ergonomics of computer mouse work on the development of hand hypothermia.

  16. Assessment of Radiative Heating Uncertainty for Hyperbolic Earth Entry

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Mazaheri, Alireza; Gnoffo, Peter A.; Kleb, W. L.; Sutton, Kenneth; Prabhu, Dinesh K.; Brandis, Aaron M.; Bose, Deepak

    2011-01-01

    This paper investigates the shock-layer radiative heating uncertainty for hyperbolic Earth entry, with the main focus being a Mars return. In Part I of this work, a baseline simulation approach involving the LAURA Navier-Stokes code with coupled ablation and radiation is presented, with the HARA radiation code being used for the radiation predictions. Flight cases representative of peak-heating Mars or asteroid return are defined, and the strong influence of coupled ablation and radiation on their aerothermodynamic environments is shown. Structural uncertainties inherent in the baseline simulations are identified, with turbulence modeling, precursor absorption, grid convergence, and radiation transport uncertainties combining for a +34% and -24% structural uncertainty on the radiative heating. A parametric uncertainty analysis, which assumes interval uncertainties, is presented. This analysis accounts for uncertainties in the radiation models as well as heat of formation uncertainties in the flow field model. Discussions and references are provided to support the uncertainty range chosen for each parameter. A parametric uncertainty of +47.3% and -28.3% is computed for the stagnation-point radiative heating for the 15 km/s Mars-return case. A breakdown of the largest individual uncertainty contributors is presented, which includes the C3 Swings cross-section, photoionization edge shift, and Opacity Project atomic lines. Combining the structural and parametric uncertainty components results in a total uncertainty of +81.3% and -52.3% for the Mars-return case. In Part II, the computational technique and uncertainty analysis presented in Part I are applied to 1960s era shock-tube and constricted-arc experimental cases. It is shown that the experiments contain shock layer temperatures and radiative flux values relevant to the Mars-return cases of present interest. Comparisons between the predictions and measurements, accounting for the uncertainty in both, are made for a range of experiments. A measure of comparison quality is defined, which consists of the percent overlap of the predicted uncertainty bar with the corresponding measurement uncertainty bar. For nearly all cases, this percent overlap is greater than zero, and for most of the higher temperature cases (T > 13,000 K) it is greater than 50%. These favorable comparisons provide evidence that the baseline computational technique and uncertainty analysis presented in Part I are adequate for Mars-return simulations. In Part III, the computational technique and uncertainty analysis presented in Part I are applied to EAST shock-tube cases. These experimental cases contain wavelength-dependent intensity measurements in a wavelength range that covers 60% of the radiative intensity for the 11 km/s, 5 m radius flight case studied in Part I. Comparisons between the predictions and EAST measurements are made for a range of experiments. The uncertainty analysis presented in Part I is applied to each prediction, and comparisons are made using the metrics defined in Part II. The agreement between predictions and measurements is excellent for velocities greater than 10.5 km/s. Both the wavelength-dependent and wavelength-integrated intensities agree within 30% for nearly all cases considered. This agreement provides confidence in the computational technique and uncertainty analysis presented in Part I, and provides further evidence that this approach is adequate for Mars-return simulations. Part IV of this paper reviews existing experimental data that include the influence of massive ablation on radiative heating. It is concluded that the existing data are not sufficient for the present uncertainty analysis. Experiments to capture the influence of massive ablation on radiation are suggested as future work, along with further studies of the radiative precursor and improvements in the radiation properties of ablation products.
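
    The quoted totals are consistent with a simple linear addition of the structural and parametric components (e.g. +34% + 47.3% = +81.3% and -24% - 28.3% = -52.3%); a trivial check (Python), assuming that is indeed the combination rule:

        def total_uncertainty(structural, parametric):
            """Linear combination of structural and parametric bounds (assumed combination rule)."""
            return structural + parametric

        print(round(total_uncertainty(34.0, 47.3), 1), round(total_uncertainty(-24.0, -28.3), 1))   # 81.3 -52.3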

  17. An in silico method to identify computer-based protocols worthy of clinical study: An insulin infusion protocol use case

    PubMed Central

    Wong, Anthony F; Pielmeier, Ulrike; Haug, Peter J; Andreassen, Steen

    2016-01-01

    Objective: Develop an efficient non-clinical method for identifying promising computer-based protocols for clinical study. An in silico comparison can provide information that informs the decision to proceed to a clinical trial. The authors compared two existing computer-based insulin infusion protocols: eProtocol-insulin from Utah, USA, and Glucosafe from Denmark. Materials and Methods: The authors used eProtocol-insulin to manage intensive care unit (ICU) hyperglycemia with intravenous (IV) insulin from 2004 to 2010. Recommendations accepted by the bedside clinicians directly link the subsequent blood glucose values to eProtocol-insulin recommendations and provide a unique clinical database. The authors retrospectively compared in silico 18 984 eProtocol-insulin continuous IV insulin infusion rate recommendations from 408 ICU patients with those of Glucosafe, the candidate computer-based protocol. The subsequent blood glucose measurement value (low, on target, high) was used to identify whether the insulin recommendation was too high, on target, or too low. Results: Glucosafe consistently provided more favorable continuous IV insulin infusion rate recommendations than eProtocol-insulin for on target (64% of comparisons), low (80% of comparisons), or high (70% of comparisons) blood glucose. Aggregated eProtocol-insulin and Glucosafe continuous IV insulin infusion rates were clinically similar though statistically significantly different (Wilcoxon signed rank test, P = .01). In contrast, when stratified by low, on target, or high subsequent blood glucose measurement, insulin infusion rates from eProtocol-insulin and Glucosafe were statistically significantly different (Wilcoxon signed rank test, P < .001), and clinically different. Discussion: This in silico comparison appears to be an efficient nonclinical method for identifying promising computer-based protocols. Conclusion: A preclinical in silico comparison analytical framework allows rapid and inexpensive identification of computer-based protocol care strategies that justify expensive and burdensome clinical trials. PMID:26228765
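
    The core of the in silico comparison is labeling each recommendation by the blood glucose value that followed it; a minimal sketch (Python) with hypothetical target-range thresholds, since the protocols' actual glucose band is not given here:

        def classify_recommendation(next_bg_mg_dl, target_low=80, target_high=110):
            """Label an insulin-rate recommendation by the subsequent blood glucose (thresholds are placeholders)."""
            if next_bg_mg_dl < target_low:
                return "insulin too high"   # glucose fell below the target band
            if next_bg_mg_dl > target_high:
                return "insulin too low"    # glucose remained above the target band
            return "on target"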

  18. Study of a scanning HIFU therapy protocol, Part II: Experiment and results

    NASA Astrophysics Data System (ADS)

    Andrew, Marilee A.; Kaczkowski, Peter; Cunitz, Bryan W.; Brayman, Andrew A.; Kargl, Steven G.

    2003-04-01

    Instrumentation and protocols for creating scanned HIFU lesions in freshly excised bovine liver were developed in order to study the in vitro HIFU dose response and validate models. Computer control of the HIFU transducer and 3-axis positioning system provided precise spatial placement of the thermal lesions. Scan speeds were selected in the range of 1 to 8 mm/s, and the applied electrical power was varied from 20 to 60 W. These parameters were chosen to hold the thermal dose constant. A total of six valid scans of 15 mm length were created in each sample; a 3.5 MHz single-element, spherically focused transducer was used. Treated samples were frozen, then sliced in 1.27 mm increments. Digital photographs of the slices were downloaded to a computer for image processing and analysis. Lesion characteristics, including the depth within the tissue, axial length, and radial width, were computed. Results were compared with those generated from modified KZK and BHTE models, and include a comparison of the statistical variation in the across-scan lesion radial width. [Work supported by USAMRMC.]

  19. Comparisons of Computed Mobile Phone Induced SAR in the SAM Phantom to That in Anatomically Correct Models of the Human Head

    PubMed Central

    Beard, Brian B.; Kainz, Wolfgang; Onishi, Teruo; Iyama, Takahiro; Watanabe, Soichi; Fujiwara, Osamu; Wang, Jianqing; Bit-Babik, Giorgi; Faraone, Antonio; Wiart, Joe; Christ, Andreas; Kuster, Niels; Lee, Ae-Kyoung; Kroeze, Hugo; Siegbahn, Martin; Keshvari, Jafar; Abrishamkar, Houman; Simon, Winfried; Manteuffel, Dirk; Nikoloski, Neviana

    2018-01-01

    The specific absorption rates (SAR) determined computationally in the specific anthropomorphic mannequin (SAM) and anatomically correct models of the human head when exposed to a mobile phone model are compared as part of a study organized by IEEE Standards Coordinating Committee 34, SubCommittee 2, and Working Group 2, and carried out by an international task force comprising 14 government, academic, and industrial research institutions. The detailed study protocol defined the computational head and mobile phone models. The participants used different finite-difference time-domain software and independently positioned the mobile phone and head models in accordance with the protocol. The results show that when the pinna SAR is calculated separately from the head SAR, SAM produced a higher SAR in the head than the anatomically correct head models. Also the larger (adult) head produced a statistically significant higher peak SAR for both the 1- and 10-g averages than did the smaller (child) head for all conditions of frequency and position. PMID:29515260

  20. Photoionization microscopy: Hydrogenic theory in semiparabolic coordinates and comparison with experimental results

    NASA Astrophysics Data System (ADS)

    Kalaitzis, P.; Danakas, S.; Lépine, F.; Bordas, C.; Cohen, S.

    2018-05-01

    Photoionization microscopy (PM) is an experimental method allowing for high-resolution measurements of the electron current probability density in the case of photoionization of an atom in an external uniform static electric field. PM is based on high-resolution velocity-map imaging and offers the unique opportunity to observe the quantum oscillatory spatial structure of the outgoing electron flux. We present the basic elements of the quantum-mechanical theoretical framework of PM for hydrogenic systems near threshold. Our development is based on the computationally more convenient semiparabolic coordinate system. Theoretical results are first subjected to a quantitative comparison with hydrogenic images corresponding to quasibound states and a qualitative comparison with nonresonant images of multielectron atoms. Subsequently, particular attention is paid to the structure of the electron's momentum distribution transverse to the static field (i.e., of the angularly integrated differential cross-section as a function of electron energy and radius of impact on the detector). Such 2D maps provide at a glance a complete picture of the peculiarities of the differential cross-section over the entire near-threshold energy range. Hydrogenic transverse momentum distributions are computed for the cases of the ground and excited initial states and single- and two-photon ionization schemes. Their general characteristics are identified by comparing the hydrogenic distributions among themselves, as well as with a presently recorded experimental distribution for the magnesium atom. Finally, specificities attributed to different target atoms, initial states, and excitation scenarios are also discussed, along with directions for further work.

  1. Numerical comparisons of ground motion predictions with kinematic rupture modeling

    NASA Astrophysics Data System (ADS)

    Yuan, Y. O.; Zurek, B.; Liu, F.; deMartin, B.; Lacasse, M. D.

    2017-12-01

    Recent advances in large-scale wave simulators allow for the computation of seismograms at unprecedented levels of detail and for areas sufficiently large to be relevant to small regional studies. In some instances, detailed information of the mechanical properties of the subsurface has been obtained from seismic exploration surveys, well data, and core analysis. Using kinematic rupture modeling, this information can be used with a wave propagation simulator to predict the ground motion that would result from an assumed fault rupture. The purpose of this work is to explore the limits of wave propagation simulators for modeling ground motion in different settings, and in particular, to explore the numerical accuracy of different methods in the presence of features that are challenging to simulate such as topography, low-velocity surface layers, and shallow sources. In the main part of this work, we use a variety of synthetic three-dimensional models and compare the relative costs and benefits of different numerical discretization methods in computing the seismograms of realistic-size models. The finite-difference method, the discontinuous-Galerkin method, and the spectral-element method are compared for a range of synthetic models having different levels of complexity such as topography, large subsurface features, low-velocity surface layers, and the location and characteristics of fault ruptures represented as an array of seismic sources. While some previous studies have already demonstrated that unstructured-mesh methods can sometimes tackle complex problems (Moczo et al.), we investigate the trade-off between unstructured-mesh methods and regular-grid methods for a broad range of models and source configurations. Finally, for comparison, our direct simulation results are briefly contrasted with those predicted by a few phenomenological ground-motion prediction equations, and a workflow for accurately predicting ground motion is proposed.
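
    As a minimal illustration of the simplest discretization compared above, the sketch below (NumPy) advances the 1D scalar wave equation one explicit second-order finite-difference step; it is illustrative only and far from the 3D elastic solvers with topography discussed in the abstract.

        import numpy as np

        def fd_step(u_prev, u_curr, c, dx, dt):
            """One explicit step of u_tt = c^2 u_xx with fixed (zero) boundaries.
            Stability requires the Courant number c*dt/dx <= 1."""
            lap = np.zeros_like(u_curr)
            lap[1:-1] = u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]
            u_next = 2.0 * u_curr - u_prev + (c * dt / dx) ** 2 * lap
            u_next[0] = u_next[-1] = 0.0
            return u_next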

  2. Load Balancing Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pearce, Olga Tkachyshyn

    2014-12-01

    The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate behavior of physical systems that evolve over time, and their workloads change over time. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model for comparison of load balance algorithms for a specific state of the simulation that enables the selection of a balancing algorithm that will minimize overall runtime.
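
    A toy version of the kind of decision such a model supports: estimate the time wasted per step by the slowest processor and rebalance only when that waste, accumulated over the remaining steps, exceeds the cost of redistributing the work. The metric and threshold are illustrative assumptions, not the dissertation's actual model.

        def imbalance(loads):
            """Ratio of the most loaded processor to the mean load (1.0 = perfectly balanced)."""
            mean = sum(loads) / len(loads)
            return max(loads) / mean

        def should_rebalance(loads, rebalance_cost, steps_remaining):
            """Rebalance when the projected waiting time outweighs the one-off redistribution cost."""
            mean = sum(loads) / len(loads)
            wasted_per_step = max(loads) - mean
            return wasted_per_step * steps_remaining > rebalance_cost

        print(imbalance([1.0, 1.0, 1.4, 0.6]), should_rebalance([1.0, 1.0, 1.4, 0.6], 5.0, 100))   # ~1.4 True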

  3. Computational estimation of rainbow trout estrogen receptor binding affinities for environmental estrogens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shyu, Conrad; Cavileer, Timothy D.; Nagler, James J.

    2011-02-01

    Environmental estrogens have been the subject of intense research due to their documented detrimental effects on the health of fish and wildlife and their potential to negatively impact humans. A complete understanding of how these compounds affect health is complicated because environmental estrogens are a structurally heterogeneous group of compounds. In this work, computational molecular dynamics simulations were utilized to predict the binding affinity of different compounds using rainbow trout (Oncorhynchus mykiss) estrogen receptors (ERs) as a model. Specifically, this study presents a comparison of the binding affinity of the natural ligand estradiol-17β to the four rainbow trout ER isoforms with that of three known environmental estrogens 17α-ethinylestradiol, bisphenol A, and raloxifene. Two additional compounds, atrazine and testosterone, that are known to be very weak or non-binders to ERs were tested. The binding affinity of these compounds to the human ERα subtype is also included for comparison. The results of this study suggest that, when compared to estradiol-17β, bisphenol A binds less strongly to all four receptors, 17α-ethinylestradiol binds more strongly, and raloxifene has a high affinity for the α subtype only. The results also show that atrazine and testosterone are weak or non-binders to the ERs. All of the results are in excellent qualitative agreement with the known in vivo estrogenicity of these compounds in the rainbow trout and other fishes. Computational estimation of binding affinities could be a valuable tool for predicting the impact of environmental estrogens in fish and other animals.

  4. Effects of head geometry simplifications on acoustic radiation of vowel sounds based on time-domain finite-element simulations.

    PubMed

    Arnela, Marc; Guasch, Oriol; Alías, Francesc

    2013-10-01

    One of the key effects to model in voice production is the acoustic radiation of sound waves emanating from the mouth. The use of three-dimensional numerical simulations makes it possible to account for this effect naturally, as well as to consider all geometrical head details, by extending the computational domain out of the vocal tract. Despite this advantage, many approximations to the head geometry are often made for simplicity, and impedance load models are still used to reduce the computational cost. In this work, the impact of some of these simplifications on radiation effects is examined for vowel production in the frequency range 0-10 kHz by means of comparison with radiation from a realistic head. As a result, recommendations are given on their validity depending on whether high-frequency energy (above 5 kHz) should be taken into account or not.

  5. On the competition between weak O-H···F and C-H···F hydrogen bonds, in cooperation with C-H···O contacts, in the difluoromethane – tert-butyl alcohol cluster

    PubMed Central

    Spada, Lorenzo; Tasinato, Nicola; Bosi, Giulio; Vazart, Fanny; Barone, Vincenzo; Puzzarini, Cristina

    2017-01-01

    The 1:1 complex of tert-butyl alcohol with difluoromethane has been characterized by means of a joint experimental-computational investigation. Its rotational spectrum has been recorded by using a pulsed-jet Fourier-Transform microwave spectrometer. The experimental work has been guided and supported by accurate quantum-chemical calculations. In particular, the computed potential energy landscape pointed out the formation of three stable isomers. However, the very low interconversion barriers explain why only one isomer, showing one O-H···F and two C-H···O weak hydrogen bonds, has been experimentally characterized. The effect of the H → tert-butyl- group substitution has been analyzed from the comparison to the difluoromethane-water adduct. PMID:28919646

  6. The Effect of Interchanging the Polarity of the Dense Plasma Focus on Neutron Yield

    NASA Astrophysics Data System (ADS)

    Jiang, Sheng; Higginson, Drew; Link, Anthony; Schmidt, Andrea

    2017-10-01

    Dense plasma focus (DPF) Z-pinch devices can serve as portable neutron sources when deuterium is used as the filling gas. DPF devices are normally operated with the inner electrode as the anode. It has been found that interchanging the polarity of the electrodes can cause an orders-of-magnitude decrease in the neutron yield. Here we use the particle-in-cell (PIC) code LSP to model a DPF with both polarities. We have found differences in the sheath shape, the voltage and current traces, and the electric and magnetic fields in the pinch region between the two polarities. A detailed comparison will be presented. Prepared by LLNL under Contract DE-AC52-07NA27344 and supported by the Laboratory Directed Research and Development Program (15-ERD-034) at LLNL. Computing support for this work came from the LLNL Institutional Computing Grand Challenge program.

  7. Improving limited-projection-angle fluorescence molecular tomography using a co-registered x-ray computed tomography scan.

    PubMed

    Radrich, Karin; Ale, Angelique; Ermolayev, Vladimir; Ntziachristos, Vasilis

    2012-12-01

    We examine the improvement in imaging performance, such as axial resolution and signal localization, when employing limited-projection-angle fluorescence molecular tomography (FMT) together with x-ray computed tomography (XCT) measurements versus stand-alone FMT. For this purpose, we employed living mice, bearing a spontaneous lung tumor model, and imaged them with FMT and XCT under identical geometrical conditions using fluorescent probes for cancer targeting. The XCT data were employed here as structural prior information to guide the FMT reconstruction. Gold-standard images were obtained from fluorescence images of mouse cryoslices, which provided the ground truth fluorescence bio-distribution. Upon comparison of FMT images versus images reconstructed using hybrid FMT and XCT data, we demonstrate marked improvements in image accuracy. This work relates to currently disseminated FMT systems, using limited projection scans, and can be employed to enhance their performance.

  8. Parametric performance of circumferentially grooved heat pipes with homogeneous and graded-porosity slab wicks at cryogenic temperatures. [methane and ethane working fluids

    NASA Technical Reports Server (NTRS)

    Groll, M.; Pittman, R. B.; Eninger, J. E.

    1976-01-01

    A recently developed, potentially high-performance nonarterial wick was extensively tested. This slab wick has an axially varying porosity which can be tailored to match the local stress imposed on the wick. The purpose of the tests was to establish the usefulness of the graded-porosity slab wick at cryogenic temperatures between 110 and 260 K, with methane and ethane as working fluids. For comparison, a homogeneous (i.e., uniform porosity) slab wick was also tested. The tests included: maximum heat pipe performance as a function of fluid inventory, maximum performance as a function of operating temperature, maximum performance as a function of evaporator elevation, and influence of slab wick orientation on performance. The experimental data were compared with theoretical predictions obtained with the GRADE computer program.

  9. Quantifying the Number of Discriminable Coincident Dendritic Input Patterns through Dendritic Tree Morphology

    PubMed Central

    Zippo, Antonio G.; Biella, Gabriele E. M.

    2015-01-01

    Current developments in neuronal physiology are unveiling novel roles for dendrites. Experiments have shown mechanisms of non-linear, NMDA-dependent synaptic activation that are able to discriminate input patterns through the waveforms of the excitatory postsynaptic potentials. Contextually, synaptic clustering of inputs is the principal cellular strategy for separating groups of common correlated inputs. Dendritic branches appear to work as independent discriminating units of inputs, potentially reflecting an extraordinary repertoire of pattern memories. However, it is unclear how these observations could impact our comprehension of the structural correlates of memory at the cellular level. This work investigates the discrimination capabilities of neurons through computational biophysical models to extract a predictive law for the dendritic input discrimination capability (M). By this rule we compared neurons from a neuron reconstruction repository (neuromorpho.org). Comparisons showed that primate neurons did not stand out with correspondingly high M values and that M is not uniformly distributed among neuron types. Remarkably, neocortical neurons had substantially less memory capacity in comparison to those from non-cortical regions. In conclusion, the proposed rule predicts the inherent neuronal spatial memory, gathering potentially relevant anatomical and evolutionary considerations about the brain cytoarchitecture. PMID:26100354

  10. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The changing of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Some work has already been done to make this transition technically possible, but the interaction of the third dimension with human viewers is not yet well understood. It has previously been found that any increased load on the visual system can create visual fatigue, as in prolonged TV watching, computer work, or video gaming. Watching S3D content, however, can cause visual fatigue of a different nature, since all S3D technologies create the illusion of the third dimension by exploiting the characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue induced by watching 2D and S3D content. This work shows the difference in the accumulation of visual fatigue, and in its assessment, for the two types of content. In order to perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and provided subjective evaluations. It was found that watching stereoscopic 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained by eye-tracking were investigated with regard to their relation to visual fatigue.

  11. Comparison of Computational Results with a Low-g, Nitrogen Slosh and Boiling Experiment

    NASA Technical Reports Server (NTRS)

    Stewart, Mark; Moder, Jeff

    2015-01-01

    The proposed paper will compare a fluid/thermal simulation, in FLUENT, with a low-g, nitrogen slosh experiment. The French Space Agency, CNES, performed cryogenic nitrogen experiments in several zero gravity aircraft campaigns. The computational results have been compared with high-speed photographic data, pressure data, and temperature data from sensors on the axis of the cylindrically shaped tank. The comparison between these experimental and computational results is generally favorable: the initial temperature stratification is in good agreement, and the two-phase fluid motion is qualitatively captured.

  12. Differences in muscle load between computer and non-computer work among office workers.

    PubMed

    Richter, J M; Mathiassen, S E; Slijper, H P; Over, E A B; Frens, M A

    2009-12-01

    Introduction of more non-computer tasks has been suggested to increase exposure variation and thus reduce musculoskeletal complaints (MSC) in computer-intensive office work. This study investigated whether muscle activity did, indeed, differ between computer and non-computer activities. Whole-day logs of input device use in 30 office workers were used to identify computer and non-computer work, using a range of classification thresholds (non-computer thresholds (NCTs)). Exposure during these activities was assessed by bilateral electromyography recordings from the upper trapezius and lower arm. Contrasts in muscle activity between computer and non-computer work were distinct but small, even at the individualised, optimal NCT. Using an average group-based NCT resulted in less contrast, even in smaller subgroups defined by job function or MSC. Thus, computer activity logs should be used cautiously as proxies of biomechanical exposure. Conventional non-computer tasks may have a limited potential to increase variation in muscle activity during computer-intensive office work.
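
    The thresholding idea behind the NCT can be illustrated in a few lines of code. The sketch below is not the study's software; the threshold value and the event log are assumptions for illustration. It splits a day of input-device timestamps into computer and non-computer time by comparing each gap between consecutive events with a chosen NCT.

        # Illustrative sketch (not the study's software): split a day's input-device
        # log into computer and non-computer time using a non-computer threshold (NCT).
        def classify_periods(event_times, nct_seconds=30.0):
            """event_times: sorted timestamps (s) of keyboard/mouse events for one day."""
            computer, non_computer = 0.0, 0.0
            for prev, curr in zip(event_times, event_times[1:]):
                gap = curr - prev
                if gap <= nct_seconds:
                    computer += gap          # short gap: still counted as computer work
                else:
                    non_computer += gap      # long gap: counted as non-computer work
            return computer, non_computer

        # Example: three bursts of activity separated by long pauses (times in seconds)
        times = [0, 5, 10, 200, 205, 500, 505]
        print(classify_periods(times, nct_seconds=30.0))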

  13. One Task, Divergent Solutions: High- versus Low-Status Sources and Social Comparison Guide Adaptation in a Computer-Supported Socio-Cognitive Conflict Task

    ERIC Educational Resources Information Center

    Baumeister, Antonia E.; Engelmann, Tanja; Hesse, Friedrich W.

    2017-01-01

    This experimental study extends conflict elaboration theory (1) by revealing social influence dynamics for a knowledge-rich computer-supported socio-cognitive conflict task not investigated in the context of this theory before and (2) by showing the impact of individual differences in social comparison orientation. Students in two conditions…

  14. Comparison of measured and computed radial trajectories of plasma focus devices UMDPF1 and UMDPF0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, L. H.; Yap, S. L., E-mail: yapsl@um.edu.my; Lim, L. K.

    In published literature, there has been scant data on radial trajectory of the plasma focus and no comparison of computed with measured radial trajectory. This paper provides the first such comparative study. We compute the trajectories of the inward-moving radial shock and magnetic piston of UMDPF1 plasma focus and compare these with measured data taken from a streak photograph. The comparison shows agreement with the measured radial trajectory in terms of average speeds and general shape of trajectory. This paper also presents the measured trajectory of the radially compressing piston in another machine, the UMDPF0 plasma focus, confirming that the computed radial trajectory also shows similar general agreement. Features of divergence between the computed and measured trajectories, towards the end of the radial compression, are discussed. From the measured radial trajectories, an inference is made that the neutron yield mechanism could not be thermonuclear. A second inference is made regarding the speeds of axial post-pinch shocks, which are recently considered as a useful tool for damage testing of fusion-related wall materials.

  15. GPM Pre-Launch Algorithm Development for Physically-Based Falling Snow Retrievals

    NASA Technical Reports Server (NTRS)

    Jackson, Gail Skofronick; Tokay, Ali; Kramer, Anne W.; Hudak, David

    2008-01-01

    In this work we compare and correlate the long time series (Nov.-March) measurements of precipitation rate from the Parsivels and 2DVD to the passive (89, 150, 183+/-1, +/-3, +/-7 GHz) observations of NOAA's AMSU-B radiometer. There are approximately 5-8 AMSU-B overpass views of the CARE site a day. We separate the comparisons into categories of no precipitation, liquid rain, and falling snow precipitation. Scatterplots between the Parsivel snowfall rates and AMSU-B brightness temperatures (TBs) did not show an exploitable relationship for retrievals. We further compared and contrasted brightness temperatures to other surface measurements, such as temperature and relative humidity, with equally unsatisfying results. We found that there are similar TBs (especially at 89 and 150 GHz) for cases with falling snow and for non-precipitating cases. The comparisons indicate that surface emissivity contributions to the satellite-observed TB over land can add uncertainty in detecting and estimating falling snow. The newest results show that the cloud ice scattering signal in the AMSU-B data can be detected by computing clear-air TBs based on CARE radiosonde data and a rough estimate of surface emissivity. That is, the differences between computed TB and AMSU-B TB for precipitating and non-precipitating cases are distinct enough that precipitating versus non-precipitating cases can be identified. These results require that the radiosonde releases are within an hour of the AMSU-B data and allow for three surface types: no snow on the ground, less than 5 cm of snow on the ground, and greater than 5 cm on the ground (as given by ground station data). Forest fraction and measured emissivities were combined to calculate the surface emissivities. The above work and future work to incorporate knowledge about falling snow retrievals into the framework of the expected GPM Bayesian retrievals will be described during this presentation.

  16. A Low-Signal-to-Noise-Ratio Sensor Framework Incorporating Improved Nighttime Capabilities in DIRSIG

    NASA Astrophysics Data System (ADS)

    Rizzuto, Anthony P.

    When designing new remote sensing systems, it is difficult to make apples-to-apples comparisons between designs because of the number of sensor parameters that can affect the final image. Using synthetic imagery and a computer sensor model allows for comparisons to be made between widely different sensor designs or between competing design parameters. Little work has been done in fully modeling low-SNR systems end-to-end for these types of comparisons. Currently DIRSIG has limited capability to accurately model nighttime scenes under new moon conditions or near large cities. An improved DIRSIG scene modeling capability is presented that incorporates all significant sources of nighttime radiance, including new models for urban glow and airglow, both taken from the astronomy community. A low-SNR sensor modeling tool is also presented that accounts for sensor components and noise sources to generate synthetic imagery from a DIRSIG scene. The various sensor parameters that affect SNR are discussed, and example imagery is shown with the new sensor modeling tool. New low-SNR detectors have recently been designed and marketed for remote sensing applications. A comparison of system parameters for a state-of-the-art low-SNR sensor is discussed, and a sample design trade study is presented for a hypothetical scene and sensor.

  17. Computer work and musculoskeletal disorders of the neck and upper extremity: A systematic review

    PubMed Central

    2010-01-01

    Background This review examines the evidence for an association between computer work and neck and upper extremity disorders (except carpal tunnel syndrome). Methods A systematic critical review of studies of computer work and musculoskeletal disorders verified by a physical examination was performed. Results A total of 22 studies (26 articles) fulfilled the inclusion criteria. Results show limited evidence for a causal relationship between computer work per se, computer mouse and keyboard time related to a diagnosis of wrist tendonitis, and for an association between computer mouse time and forearm disorders. Limited evidence was also found for a causal relationship between computer work per se and computer mouse time related to tension neck syndrome, but the evidence for keyboard time was insufficient. Insufficient evidence was found for an association between other musculoskeletal diagnoses of the neck and upper extremities, including shoulder tendonitis and epicondylitis, and any aspect of computer work. Conclusions There is limited epidemiological evidence for an association between aspects of computer work and some of the clinical diagnoses studied. None of the evidence was considered as moderate or strong and there is a need for more and better documentation. PMID:20429925

  18. Comparison Study of Regularizations in Spectral Computed Tomography Reconstruction

    NASA Astrophysics Data System (ADS)

    Salehjahromi, Morteza; Zhang, Yanbo; Yu, Hengyong

    2018-12-01

    The energy-resolving photon-counting detectors in spectral computed tomography (CT) can acquire projections of an object in different energy channels. In other words, they are able to reliably distinguish the received photon energies. These detectors lead to the emerging spectral CT, which is also called multi-energy CT, energy-selective CT, color CT, etc. Spectral CT can provide additional information in comparison with conventional CT, in which energy-integrating detectors are used to acquire polychromatic projections of the object being investigated. The measurements obtained by X-ray CT detectors are noisy in reality, especially in spectral CT, where the photon number is low in each energy channel. Therefore, some regularization should be applied to obtain better image quality for this ill-posed problem in spectral CT image reconstruction. Quadratic regularizations are often not satisfactory, as they blur the edges in the reconstructed images. As a result, different edge-preserving regularization methods have been adopted for reconstructing high-quality images in the last decade. In this work, we numerically evaluate the performance of different regularizers in spectral CT, including total variation, non-local means, and anisotropic diffusion. The goal is to provide some practical guidance to accurately reconstruct the attenuation distribution in each energy channel of the spectral CT data.
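
    To make the role of an edge-preserving regularizer concrete, the following sketch applies a few explicit gradient steps of total-variation-penalized smoothing to a single noisy energy-channel image. It is a generic illustration rather than the reconstruction pipeline evaluated in the paper; the weight, step size, smoothing parameter, and test image are arbitrary choices.

        import numpy as np

        def tv_denoise(f, lam=0.1, step=0.2, iters=100, eps=1e-6):
            """Crude gradient descent on 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps).
            Border handling is approximate; this is only meant to show the mechanism."""
            u = f.copy()
            for _ in range(iters):
                gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences in x
                gy = np.diff(u, axis=0, append=u[-1:, :])   # forward differences in y
                mag = np.sqrt(gx**2 + gy**2 + eps)
                px, py = gx / mag, gy / mag
                # divergence of (grad u / |grad u|), backward differences via roll
                div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
                u -= step * ((u - f) - lam * div)
            return u

        # Noisy "energy channel" containing a square object with sharp edges
        noisy = np.random.rand(64, 64) + np.pad(np.ones((32, 32)), 16)
        print(tv_denoise(noisy).shape)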

  19. Evaluating tablet computers as a survey tool in rural communities.

    PubMed

    Newell, Steve M; Logan, Henrietta L; Guo, Yi; Marks, John G; Shepperd, James A

    2015-01-01

    Although tablet computers offer advantages in data collection over traditional paper-and-pencil methods, little research has examined whether the 2 formats yield similar responses, especially with underserved populations. We compared the 2 survey formats and tested whether participants' responses to common health questionnaires or perceptions of usability differed by survey format. We also tested whether we could replicate established paper-and-pencil findings via tablet computer. We recruited a sample of low-income community members living in the rural southern United States. Participants were 170 residents (black = 49%; white = 36%; other races and missing data = 15%) drawn from 2 counties meeting Florida's state statutory definition of rural with 100 persons or fewer per square mile. We randomly assigned participants to complete scales (Center for Epidemiologic Studies Depression Inventory and Regulatory Focus Questionnaire) along with survey format usability ratings via paper-and-pencil or tablet computer. All participants rated a series of previously validated posters using a tablet computer. Finally, participants completed comparisons of the survey formats and reported survey format preferences. Participants preferred using the tablet computer and showed no significant differences between formats in mean responses, scale reliabilities, or in participants' usability ratings. Overall, participants reported similar scale responses and usability ratings between formats. However, participants reported both preferring and enjoying responding via tablet computer more. Collectively, these findings are among the first data to show that tablet computers represent a suitable substitute for paper-and-pencil methodology in survey research with an underrepresented rural sample. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.

  20. A Comparison of Computed and Experimental Flowfields of the RAH-66 Helicopter

    NASA Technical Reports Server (NTRS)

    vanDam, C. P.; Budge, A. M.; Duque, E. P. N.

    1996-01-01

    This paper compares and evaluates numerical and experimental flowfields of the RAH-66 Comanche helicopter. The numerical predictions were obtained by solving the Thin-Layer Navier-Stokes equations. The computations use actuator disks to investigate the main and tail rotor effects upon the fuselage flowfield. The wind tunnel experiment was performed in the 14 x 22 foot facility located at NASA Langley. A suite of flow conditions, rotor thrusts, and fuselage-rotor-tail configurations were tested. In addition, the tunnel model and the computational geometry were based upon the same CAD definition. Computations were performed for an isolated fuselage configuration and for a rotor-on configuration. Comparisons between the measured and computed surface pressures show areas of correlation and some discrepancies. Local areas of poor computational grid quality and local areas of geometry differences account for the discrepancies. These calculations demonstrate the application of advanced computational fluid dynamics methodologies to a flight vehicle currently under development and serve as an important verification for future computed results.

  1. On the inversion-indel distance

    PubMed Central

    2013-01-01

    Background The inversion distance, that is the distance between two unichromosomal genomes with the same content allowing only inversions of DNA segments, can be computed thanks to a pioneering approach of Hannenhalli and Pevzner in 1995. In 2000, El-Mabrouk extended the inversion model to allow the comparison of unichromosomal genomes with unequal contents, thus insertions and deletions of DNA segments besides inversions. However, an exact algorithm was presented only for the case in which we have insertions alone and no deletion (or vice versa), while a heuristic was provided for the symmetric case, that allows both insertions and deletions and is called the inversion-indel distance. In 2005, Yancopoulos, Attie and Friedberg started a new branch of research by introducing the generic double cut and join (DCJ) operation, that can represent several genome rearrangements (including inversions). Among others, the DCJ model gave rise to two important results. First, it has been shown that the inversion distance can be computed in a simpler way with the help of the DCJ operation. Second, the DCJ operation originated the DCJ-indel distance, that allows the comparison of genomes with unequal contents, considering DCJ, insertions and deletions, and can be computed in linear time. Results In the present work we put these two results together to solve an open problem, showing that, when the graph that represents the relation between the two compared genomes has no bad components, the inversion-indel distance is equal to the DCJ-indel distance. We also give a lower and an upper bound for the inversion-indel distance in the presence of bad components. PMID:24564182

  2. Software for Real-Time Analysis of Subsonic Test Shot Accuracy

    DTIC Science & Technology

    2014-03-01

    used the C++ programming language, the Open Source Computer Vision (OpenCV®) software library, and Microsoft Windows® Application Programming ... video for comparison through OpenCV image analysis tools. Based on the comparison, the software then computed the coordinates of each shot relative to ... DWB researchers wanted to use the Open Source Computer Vision (OpenCV) software library for capturing and analyzing frames of video. OpenCV contains
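
    The record above is truncated in the source, so the details of the shot-scoring software are not recoverable. As a purely illustrative sketch of the general approach the fragments describe, comparing video frames with OpenCV to locate each shot, the code below differences a baseline frame against a current frame and returns the centroid of the largest changed region; the function choices and the threshold are the editor's assumptions, not the software's.

        import cv2
        import numpy as np

        def locate_shot(baseline_gray, current_gray, thresh=40):
            """Return (x, y) of the largest changed region between two grayscale frames, or None."""
            diff = cv2.absdiff(baseline_gray, current_gray)
            _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
            # OpenCV 4.x return signature: (contours, hierarchy)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            largest = max(contours, key=cv2.contourArea)
            m = cv2.moments(largest)
            if m["m00"] == 0:
                return None
            return (m["m10"] / m["m00"], m["m01"] / m["m00"])

        # Synthetic example: a bright spot (new hole) appears in the current frame
        base = np.zeros((480, 640), dtype=np.uint8)
        curr = base.copy()
        cv2.circle(curr, (320, 240), 5, 255, -1)
        print(locate_shot(base, curr))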

  3. Genetic Algorithm for Solving Fuzzy Shortest Path Problem in a Network with mixed fuzzy arc lengths

    NASA Astrophysics Data System (ADS)

    Mahdavi, Iraj; Tajdin, Ali; Hassanzadeh, Reza; Mahdavi-Amiri, Nezam; Shafieian, Hosna

    2011-06-01

    We are concerned with the design of a model and an algorithm for computing a shortest path in a network having various types of fuzzy arc lengths. First, we develop a new technique for the addition of various fuzzy numbers in a path using α-cuts, proposing a linear least squares model to obtain membership functions for the considered additions. Then, using a recently proposed distance function for the comparison of fuzzy numbers, we propose a new approach to solve the fuzzy APSPP using a genetic algorithm. Examples are worked out to illustrate the applicability of the proposed model.
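
    The α-cut construction mentioned above can be illustrated with standard interval arithmetic: at each α level, the cut of the sum of two fuzzy arc lengths is the sum of the two cut intervals. The sketch below does this for triangular fuzzy numbers; the least-squares fit of a membership function to the resulting cuts and the genetic-algorithm search itself are not reproduced, and the example numbers are invented.

        def alpha_cut_triangular(a, b, c, alpha):
            """Alpha-cut [lo, hi] of a triangular fuzzy number (a, b, c), for 0 <= alpha <= 1."""
            return (a + alpha * (b - a), c - alpha * (c - b))

        def add_fuzzy_path(arcs, levels=(0.0, 0.25, 0.5, 0.75, 1.0)):
            """Alpha-cuts of the total fuzzy length of a path given triangular arc lengths."""
            cuts = {}
            for alpha in levels:
                lo = sum(alpha_cut_triangular(a, b, c, alpha)[0] for a, b, c in arcs)
                hi = sum(alpha_cut_triangular(a, b, c, alpha)[1] for a, b, c in arcs)
                cuts[alpha] = (lo, hi)
            return cuts

        # Example path with two triangular fuzzy arc lengths
        print(add_fuzzy_path([(1, 2, 4), (3, 5, 6)]))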

  4. A Systematic Investigation of Computation Models for Predicting Adverse Drug Reactions (ADRs)

    PubMed Central

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Background Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. Principal Findings In the current study, the main work is to compare and analyze the performance of existing computational methods for predicting ADRs, by implementing and evaluating additional algorithms that were previously used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, the final formulas of these algorithms could all be converted to a linear form; based on this finding, we propose a new algorithm called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Conclusion Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms. PMID:25180585
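
    Because the Jaccard coefficient is singled out as having a broad effect, a minimal sketch of how it can score a candidate drug-ADR association from known association profiles may help. The profile data and the neighbour-weighting formula below are illustrative assumptions, not the paper's exact method.

        def jaccard(set_a, set_b):
            """Jaccard coefficient: intersection size over union size (0 if both sets are empty)."""
            union = set_a | set_b
            return len(set_a & set_b) / len(union) if union else 0.0

        def score_drug_adr(drug, adr, drug_to_adrs):
            """Neighbour-profile style score: weight known carriers of `adr` by their
            Jaccard similarity to `drug` (a simple illustration, not the paper's formula)."""
            neighbours = [d for d in drug_to_adrs if d != drug]
            if not neighbours:
                return 0.0
            sims = [(jaccard(drug_to_adrs[drug], drug_to_adrs[d]), adr in drug_to_adrs[d])
                    for d in neighbours]
            total = sum(s for s, _ in sims)
            return sum(s for s, has in sims if has) / total if total else 0.0

        # Hypothetical ADR profiles
        profiles = {"drugA": {"nausea", "rash"}, "drugB": {"nausea", "headache"}, "drugC": {"rash"}}
        print(score_drug_adr("drugA", "headache", profiles))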

  5. Boosting antibody developability through rational sequence optimization.

    PubMed

    Seeliger, Daniel; Schulz, Patrick; Litzenburger, Tobias; Spitz, Julia; Hoerer, Stefan; Blech, Michaela; Enenkel, Barbara; Studts, Joey M; Garidel, Patrick; Karow, Anne R

    2015-01-01

    The application of monoclonal antibodies as commercial therapeutics poses substantial demands on the stability and properties of an antibody. Therapeutic molecules that exhibit favorable properties increase the success rate in development. However, it is not yet fully understood how the protein sequence of an antibody translates into favorable in vitro molecule properties. In this work, computational design strategies based on heuristic sequence analysis were used to systematically modify an antibody that exhibited a tendency to precipitate in vitro. The resulting series of closely related antibodies showed improved stability as assessed by biophysical methods and long-term stability experiments. As a notable observation, expression levels also improved in comparison with the wild-type candidate. The methods employed to optimize the protein sequences, as well as the biophysical data used to determine the effect on stability under conditions commonly used in the formulation of therapeutic proteins, are described. Together, the experimental and computational data led to consistent conclusions regarding the effect of the introduced mutations. Our approach exemplifies how computational methods can be used to guide antibody optimization for increased stability.

  6. Quantum ring-polymer contraction method: Including nuclear quantum effects at no additional computational cost in comparison to ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.

    2016-04-01

    We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.

  7. Computations of the Magnus effect for slender bodies in supersonic flow

    NASA Technical Reports Server (NTRS)

    Sturek, W. B.; Schiff, L. B.

    1980-01-01

    A recently reported Parabolized Navier-Stokes code has been employed to compute the supersonic flow field about spinning cone, ogive-cylinder, and boattailed bodies of revolution at moderate incidence. The computations were performed for flow conditions where extensive measurements for wall pressure, boundary layer velocity profiles and Magnus force had been obtained. Comparisons between the computational results and experiment indicate excellent agreement for angles of attack up to six degrees. The comparisons for Magnus effects show that the code accurately predicts the effects of body shape and Mach number for the selected models for Mach numbers in the range of 2-4.

  8. Passive damping enhancement of a two-degree-of-freedom system through a strongly nonlinear two-degree-of-freedom attachment

    NASA Astrophysics Data System (ADS)

    Wierschem, Nicholas E.; Quinn, D. Dane; Hubbard, Sean A.; Al-Shudeifat, Mohammad A.; McFarland, D. Michael; Luo, Jie; Fahnestock, Larry A.; Spencer, Billie F.; Vakakis, Alexander F.; Bergman, Lawrence A.

    2012-12-01

    This work reports on the first experimental study of the broadband targeted energy transfer properties of a two-degree-of-freedom (two-DOF) essentially nonlinear energy absorber. In particular, proper design of the absorber allows for an extended range of energy over which it serves to significantly enhance the damping observed in the structural system to which it is attached. Comparisons of computational and experimental results validate the proposed design as a means of drastically enhancing the damping properties of a structure by passive broadband targeted energy transfers to a strongly nonlinear, multidegree-of-freedom attachment.

  9. Physical validation of a patient-specific contact finite element model of the ankle.

    PubMed

    Anderson, Donald D; Goldsworthy, Jane K; Li, Wendy; James Rudert, M; Tochigi, Yuki; Brown, Thomas D

    2007-01-01

    A validation study was conducted to determine the extent to which computational ankle contact finite element (FE) results agreed with experimentally measured tibio-talar contact stress. Two cadaver ankles were loaded in separate test sessions, during which ankle contact stresses were measured with a high-resolution (Tekscan) pressure sensor. Corresponding contact FE analyses were subsequently performed for comparison. The agreement was good between FE-computed and experimentally measured mean (3.2% discrepancy for one ankle, 19.3% for the other) and maximum (1.5% and 6.2%) contact stress, as well as for contact area (1.7% and 14.9%). There was also excellent agreement between histograms of fractional areas of cartilage experiencing specific ranges of contact stress. Finally, point-by-point comparisons between the computed and measured contact stress distributions over the articular surface showed substantial agreement, with correlation coefficients of 90% for one ankle and 86% for the other. In the past, general qualitative, but little direct quantitative agreement has been demonstrated with articular joint contact FE models. The methods used for this validation enable formal comparison of computational and experimental results, and open the way for objective statistical measures of regional correlation between FE-computed contact stress distributions from comparison articular joint surfaces (e.g., those from an intact versus those with residual intra-articular fracture incongruity).

  10. Application of a single-fluid model for the steam condensing flow prediction

    NASA Astrophysics Data System (ADS)

    Smołka, K.; Dykas, S.; Majkut, M.; Strozik, M.

    2016-10-01

    One of the results of many years of research conducted at the Institute of Power Engineering and Turbomachinery of the Silesian University of Technology is a set of computational algorithms for modelling steam flows with a non-equilibrium condensation process. In parallel with the theoretical and numerical research, work was also started on experimental testing of the steam condensing flow. This paper presents a comparison of calculations of a flow field modelled by means of a single-fluid model using both an in-house CFD code and the commercial Ansys CFX v16.2 software package. The calculation results are compared with in-house experimental data.

  11. Los Alamos radiation transport code system on desktop computing platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As personal computer (PC) performance approaches that of the SWS, the hardware options for desk-top radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss hardware systems on which the codes run and present code performance comparisons for various machines.

  12. Efficient boundary hunting via vector quantization

    NASA Astrophysics Data System (ADS)

    Diamantini, Claudia; Panti, Maurizio

    2001-03-01

    A great amount of information about a classification problem is contained in those instances falling near the decision boundary. This intuition dates back to the earliest studies in pattern recognition and to the more recent adaptive approaches to so-called boundary hunting, such as the work of Aha et al. on Instance Based Learning and the work of Vapnik et al. on Support Vector Machines. The latter work is of particular interest, since theoretical and experimental results ensure the accuracy of boundary reconstruction. However, its optimization approach has heavy computational and memory requirements, which limit its application to huge amounts of data. In this paper we describe an alternative approach to boundary hunting based on adaptive labeled quantization architectures. The adaptation is performed by a stochastic gradient algorithm for the minimization of the error probability. Error probability minimization guarantees an accurate approximation of the optimal decision boundary, while the use of a stochastic gradient algorithm provides an efficient method to reach such an approximation. In the paper, comparisons to Support Vector Machines are considered.
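
    A minimal sketch of the general idea, labeled prototype vectors adapted by a stochastic gradient-style rule, is given below. It uses a plain LVQ1-like update for illustration; the specific labeled quantization architecture and the error-probability gradient derived by the authors are not reproduced, and the two-class data are synthetic.

        import numpy as np

        def lvq_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20, seed=0):
            """LVQ1-style update: pull the nearest prototype toward same-class samples and
            push it away from other-class samples (illustrative, not the paper's rule)."""
            rng = np.random.default_rng(seed)
            P = prototypes.copy()
            for _ in range(epochs):
                for i in rng.permutation(len(X)):
                    j = np.argmin(np.linalg.norm(P - X[i], axis=1))   # nearest prototype
                    sign = 1.0 if proto_labels[j] == y[i] else -1.0
                    P[j] += sign * lr * (X[i] - P[j])
            return P

        # Two Gaussian classes and two labeled prototypes adapted from their initial positions
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
        y = np.array([0] * 50 + [1] * 50)
        P0 = np.array([[0.0, 0.0], [3.0, 3.0]])
        print(lvq_train(X, y, P0, np.array([0, 1])))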

  13. Description of the F-16XL Geometry and Computational Grids Used in CAWAPI

    NASA Technical Reports Server (NTRS)

    Boelens, O. J.; Badcock, K. J.; Gortz, S.; Morton, S.; Fritz, W.; Karman, S. L., Jr.; Michal, T.; Lamar, J. E.

    2009-01-01

    The objective of the Cranked-Arrow Wing Aerodynamics Project International (CAWAPI) was to allow a comprehensive validation of Computational Fluid Dynamics methods against the CAWAP flight database. A major part of this work involved the generation of high-quality computational grids. Prior to the grid generation, an IGES file containing the air-tight geometry of the F-16XL aircraft was generated through a cooperation of the CAWAPI partners. Based on this geometry description, both structured and unstructured grids have been generated. The baseline structured (multi-block) grid (and a family of derived grids) was generated by the National Aerospace Laboratory NLR. Although the algorithms used by NLR had become available just before CAWAPI, and thus only limited experience with their application to such a complex configuration had been gained, a grid of good quality was generated well within four weeks. This time compared favourably with that required to produce the unstructured grids in CAWAPI. The baseline all-tetrahedral and hybrid unstructured grids were generated at NASA Langley Research Center and the USAFA, respectively. To provide more geometrical resolution, trimmed unstructured grids were generated at EADS-MAS, the UTSimCenter, Boeing Phantom Works and KTH/FOI. All grids generated within the framework of CAWAPI are discussed in the article. Results obtained on both the structured and the unstructured grids showed a significant improvement in agreement with flight test data in comparison with those obtained on the structured multi-block grid used during CAWAP.

  14. The Computer as a Teaching Aid for Eleventh Grade Mathematics: A Comparison Study.

    ERIC Educational Resources Information Center

    Kieren, Thomas Ervin

    To determine the effect of learning computer programming and the use of a computer on mathematical achievement of eleventh grade students, for each of two years, average and above average students were randomly assigned to an experimental and control group. The experimental group wrote computer programs and used the output from the computer in…

  15. Numerical comparison of Riemann solvers for astrophysical hydrodynamics

    NASA Astrophysics Data System (ADS)

    Klingenberg, Christian; Schmidt, Wolfram; Waagan, Knut

    2007-11-01

    The idea of this work is to compare a new positive and entropy-stable approximate Riemann solver by Francois Bouchut with a state-of-the-art algorithm for astrophysical fluid dynamics. We implemented the new Riemann solver into an astrophysical PPM code, the Prometheus code, and also made a version with a different, more theoretically grounded higher-order algorithm than PPM. We present shock tube tests, two-dimensional instability tests, and forced turbulence simulations in three dimensions. We find subtle differences between the codes in the shock tube tests and in the statistics of the turbulence simulations. The new Riemann solver increases the computational speed without significant loss of accuracy.

  16. Numerical study of a scramjet engine flow field

    NASA Technical Reports Server (NTRS)

    Drummond, J. P.; Weidner, E. H.

    1981-01-01

    A computer program has been developed to analyze the turbulent reacting flow field in a two-dimensional scramjet engine configuration. The program numerically solves the full two-dimensional Navier-Stokes and species equations in the engine inlet and combustor, allowing consideration of flow separation and possible inlet-combustor interactions. The current work represents an intermediate step towards development of a three-dimensional program to analyze actual scramjet engine flow fields. Results from the current program are presented that predict the flow field for two inlet-combustor configurations, and comparisons of the program with experiment are given to allow assessment of the modeling that is employed.

  17. Prediction of sound radiated from different practical jet engine inlets

    NASA Technical Reports Server (NTRS)

    Zinn, B. T.; Meyer, W. L.

    1980-01-01

    Existing computer codes for calculating the far field radiation patterns surrounding various practical jet engine inlet configurations under different excitation conditions were upgraded. The computer codes were refined and expanded so that they are now more efficient computationally by a factor of about three and they are now capable of producing accurate results up to nondimensional wave numbers of twenty. Computer programs were also developed to help generate accurate geometrical representations of the inlets to be investigated. This data is required as input for the computer programs which calculate the sound fields. This new geometry generating computer program considerably reduces the time required to generate the input data which was one of the most time consuming steps in the process. The results of sample runs using the NASA-Lewis QCSEE inlet are presented and comparison of run times and accuracy are made between the old and upgraded computer codes. The overall accuracy of the computations is determined by comparison of the results of the computations with simple source solutions.

  18. A Study of Flow Separation in Transonic Flow Using Inviscid and Viscous Computational Fluid Dynamics (CFD) Schemes

    NASA Technical Reports Server (NTRS)

    Rhodes, J. A.; Tiwari, S. N.; Vonlavante, E.

    1988-01-01

    A comparison of flow separation in transonic flows is made using various computational schemes which solve the Euler and the Navier-Stokes equations of fluid mechanics. The flows examined are computed using several simple two-dimensional configurations including a backward facing step and a bump in a channel. Comparison of the results obtained using shock fitting and flux vector splitting methods are presented and the results obtained using the Euler codes are compared to results on the same configurations using a code which solves the Navier-Stokes equations.

  19. Efficient Fourier-based algorithms for time-periodic unsteady problems

    NASA Astrophysics Data System (ADS)

    Gopinath, Arathi Kamath

    2007-12-01

    This dissertation work proposes two algorithms for the simulation of time-periodic unsteady problems via the solution of Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations. These algorithms use a Fourier representation in time and hence solve for the periodic state directly without resolving transients (which consume most of the resources in a time-accurate scheme). In contrast to conventional Fourier-based techniques which solve the governing equations in frequency space, the new algorithms perform all the calculations in the time domain, and hence require minimal modifications to an existing solver. The complete space-time solution is obtained by iterating in a fifth pseudo-time dimension. Various time-periodic problems such as helicopter rotors, wind turbines, turbomachinery and flapping-wings can be simulated using the Time Spectral method. The algorithm is first validated using pitching airfoil/wing test cases. The method is further extended to turbomachinery problems, and computational results verified by comparison with a time-accurate calculation. The technique can be very memory intensive for large problems, since the solution is computed (and hence stored) simultaneously at all time levels. Often, the blade counts of a turbomachine are rescaled such that a periodic fraction of the annulus can be solved. This approximation enables the solution to be obtained at a fraction of the cost of a full-scale time-accurate solution. For a viscous computation over a three-dimensional single-stage rescaled compressor, an order of magnitude savings is achieved. The second algorithm, the reduced-order Harmonic Balance method is applicable only to turbomachinery flows, and offers even larger computational savings than the Time Spectral method. It simulates the true geometry of the turbomachine using only one blade passage per blade row as the computational domain. In each blade row of the turbomachine, only the dominant frequencies are resolved, namely, combinations of neighbor's blade passing. An appropriate set of frequencies can be chosen by the analyst/designer based on a trade-off between accuracy and computational resources available. A cost comparison with a time-accurate computation for an Euler calculation on a two-dimensional multi-stage compressor obtained an order of magnitude savings, and a RANS calculation on a three-dimensional single-stage compressor achieved two orders of magnitude savings, with comparable accuracy.
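
    The central ingredient of the Time Spectral method described above is a spectral time-derivative operator that couples all stored time instances of the periodic solution, so that d/dt becomes a matrix-vector product. The sketch below builds the standard operator for an odd number of equally spaced time instances and checks it against an analytic derivative; it illustrates the method's building block only and is not taken from the dissertation's solver.

        import numpy as np

        def time_spectral_derivative_matrix(N, period=1.0):
            """Spectral time-derivative operator D for N (odd) equally spaced time instances
            of a periodic signal: (du/dt)_i ~= sum_j D[i, j] * u_j."""
            assert N % 2 == 1, "this classical form assumes an odd number of time instances"
            D = np.zeros((N, N))
            for i in range(N):
                for j in range(N):
                    if i != j:
                        D[i, j] = (np.pi / period) * (-1.0) ** (i - j) / np.sin(np.pi * (i - j) / N)
            return D

        # Quick check against the analytic derivative of sin(2*pi*t/T)
        N, T = 9, 2.0
        t = np.arange(N) * T / N
        u = np.sin(2 * np.pi * t / T)
        error = time_spectral_derivative_matrix(N, T) @ u - (2 * np.pi / T) * np.cos(2 * np.pi * t / T)
        print(np.max(np.abs(error)))   # should be near machine precision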

  20. Two collateral problems in the framework of ground-penetrating radar data inversion: influence of the emitted waveform outline and radargram comparison.

    NASA Astrophysics Data System (ADS)

    Oliveira, Rui Jorge; Caldeira, Bento; Borges, José Fernando

    2017-04-01

    Obtaining three-dimensional models of the physical properties of buried structures in the subsurface by inversion of GPR data is appealing for archaeology and a challenge for geophysics. In the search for solutions to this issue, two major problems stand out: 1) establishing the basis of the computation that allows the physical conditions under which the GPR waves were generated to be assigned numerically in the synthetic radargrams; and 2) automatically comparing the computed synthetic radargrams with the corresponding observed ones. The influence of the pulse shape on GPR data processing was one of the topics studied. The pulse shape emitted by GPR antennas was experimentally acquired, and this information was used in the deconvolution operation, carried out by an iterative process similar to the approach used in seismology to obtain receiver functions. In order to establish the comparison between real and synthetic radargrams, automatic image adjustment algorithms were tested, which search for the best fit between two radargrams and quantify their differences through the calculation of the Normalized Root Mean Square Deviation (NRMSD). After the implementation of the last tests, the NRMSD between the synthetic and real data is about 19% (initially it was 29%). These procedures are essential for performing an inversion of GPR data obtained in the field. Acknowledgment: This work is co-funded by the European Union through the European Regional Development Fund, included in the COMPETE 2020 (Operational Program Competitiveness and Internationalization) through the ICT project (UID/GEO/04683/2013) with the reference POCI-01-0145-FEDER-007690.
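
    Since the radargram comparison is quantified through the NRMSD, a minimal sketch of one common definition (root-mean-square deviation normalized by the range of the observed data) applied to two radargrams stored as 2-D arrays is given below. The abstract does not state the exact normalization or the image-adjustment step, so this is only one plausible form, and the example arrays are synthetic.

        import numpy as np

        def nrmsd(observed, synthetic):
            """RMS deviation between two same-shaped arrays, normalized by the observed range
            and expressed in percent (one common convention; others normalize by the mean)."""
            observed = np.asarray(observed, dtype=float)
            synthetic = np.asarray(synthetic, dtype=float)
            rmsd = np.sqrt(np.mean((observed - synthetic) ** 2))
            span = observed.max() - observed.min()
            return 100.0 * rmsd / span if span else float("nan")

        rng = np.random.default_rng(0)
        real = rng.normal(size=(256, 512))             # stand-in for a measured radargram
        synth = real + rng.normal(scale=0.3, size=real.shape)
        print(f"NRMSD = {nrmsd(real, synth):.1f}%")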

  1. Validation of a Node-Centered Wall Function Model for the Unstructured Flow Code FUN3D

    NASA Technical Reports Server (NTRS)

    Carlson, Jan-Renee; Vasta, Veer N.; White, Jeffery

    2015-01-01

    In this paper, the implementation of two wall function models in the Reynolds-averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) code FUN3D is described. FUN3D is a node-centered method for solving the three-dimensional Navier-Stokes equations on unstructured computational grids. The first wall function model, based on the work of Knopp et al., is used in conjunction with the one-equation turbulence model of Spalart-Allmaras. The second wall function model, also based on the work of Knopp, is used in conjunction with the two-equation k-ω turbulence model of Menter. The wall function models compute the wall momentum and energy flux, which are used to weakly enforce the wall velocity and pressure flux boundary conditions in the mean flow momentum and energy equations. These wall conditions are implemented in an implicit form where the contribution of the wall function model to the Jacobian is also included. The boundary conditions of the turbulence transport equations are enforced explicitly (strongly) on all solid boundaries. The use of the wall function models is demonstrated on four test cases: a flat plate boundary layer, a subsonic diffuser, a 2D airfoil, and a 3D semi-span wing. Where possible, different near-wall viscous spacing tactics are examined. Iterative residual convergence was obtained in most cases. Solution results are compared with theoretical and experimental data for several variations of grid spacing. In general, very good comparisons with data were achieved.
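
    As background on what a wall function supplies, the sketch below solves the classical log law for the friction velocity by Newton iteration and returns the corresponding wall shear stress. This is the generic textbook ingredient only, not the specific Knopp et al. formulation implemented in FUN3D, and the flow values in the example are arbitrary.

        import math

        def wall_shear_from_log_law(u_parallel, y, nu, rho, kappa=0.41, B=5.0, iters=30):
            """Solve u/u_tau = (1/kappa) * ln(y * u_tau / nu) + B for u_tau by Newton iteration,
            then return the wall shear stress tau_w = rho * u_tau**2."""
            u_tau = max(math.sqrt(nu * u_parallel / y), 1e-10)   # viscous-sublayer initial guess
            for _ in range(iters):
                y_plus = y * u_tau / nu
                f = u_tau * (math.log(y_plus) / kappa + B) - u_parallel
                dfdu = (math.log(y_plus) / kappa + B) + 1.0 / kappa
                u_tau = max(u_tau - f / dfdu, 1e-10)
            return rho * u_tau ** 2

        # Air-like example: first grid point at y = 1 mm, tangential speed 20 m/s
        print(wall_shear_from_log_law(u_parallel=20.0, y=1e-3, nu=1.5e-5, rho=1.2))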

  2. International Space Station Centrifuge Rotor Models A Comparison of the Euler-Lagrange and the Bond Graph Modeling Approach

    NASA Technical Reports Server (NTRS)

    Nguyen, Louis H.; Ramakrishnan, Jayant; Granda, Jose J.

    2006-01-01

    The assembly and operation of the International Space Station (ISS) require extensive testing and engineering analysis to verify that the Space Station system of systems would work together without any adverse interactions. Since the dynamic behavior of an entire Space Station cannot be tested on earth, math models of the Space Station structures and mechanical systems have to be built and integrated in computer simulations and analysis tools to analyze and predict what will happen in space. The ISS Centrifuge Rotor (CR) is one of many mechanical systems that need to be modeled and analyzed to verify the ISS integrated system performance on-orbit. This study investigates using Bond Graph modeling techniques as quick and simplified ways to generate models of the ISS Centrifuge Rotor. This paper outlines the steps used to generate simple and more complex models of the CR using Bond Graph Computer Aided Modeling Program with Graphical Input (CAMP-G). Comparisons of the Bond Graph CR models with those derived from Euler-Lagrange equations in MATLAB and those developed using multibody dynamic simulation at the National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC) are presented to demonstrate the usefulness of the Bond Graph modeling approach for aeronautics and space applications.

  3. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.

    PubMed

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-12-07

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
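
    The feature extraction step described above can be stated in a few lines: find the peak sample of the spike, split the waveform there, and take the absolute area of each portion as the two features. The sketch below is a software rendering of that idea for illustration; the hardware computes the areas differently (segment-wise local sums merged by a small circuit), and the toy waveform is invented.

        import numpy as np

        def peak_split_area_features(spike):
            """Two features for spike sorting: absolute areas of the waveform before and
            after (and including) the peak sample."""
            spike = np.asarray(spike, dtype=float)
            k = int(np.argmax(np.abs(spike)))            # peak search
            return np.abs(spike[:k + 1]).sum(), np.abs(spike[k + 1:]).sum()

        # Toy spike: sharp depolarization followed by a slower after-potential
        t = np.linspace(0, 1, 64)
        spike = np.exp(-((t - 0.3) / 0.05) ** 2) - 0.4 * np.exp(-((t - 0.6) / 0.15) ** 2)
        print(peak_split_area_features(spike))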

  4. A Computational and Experimental Study of Resonators in Three Dimensions

    NASA Technical Reports Server (NTRS)

    Tam, C. K. W.; Ju, H.; Jones, Michael G.; Watson, Willie R.; Parrott, Tony L.

    2009-01-01

    In a previous work by the present authors, a computational and experimental investigation of the acoustic properties of two-dimensional slit resonators was carried out. The present paper reports the results of a study extending the previous work to three dimensions. This investigation has two basic objectives. The first is to validate the computed results from direct numerical simulations of the flow and acoustic fields of slit resonators in three dimensions by comparing with experimental measurements in a normal incidence impedance tube. The second objective is to study the flow physics of resonant liners responsible for sound wave dissipation. Extensive comparisons are provided between computed and measured acoustic liner properties with both discrete frequency and broadband sound sources. Good agreements are found over a wide range of frequencies and sound pressure levels. Direct numerical simulation confirms the previous finding in two dimensions that vortex shedding is the dominant dissipation mechanism at high sound pressure intensity. However, it is observed that the behavior of the shed vortices in three dimensions is quite different from those of two dimensions. In three dimensions, the shed vortices tend to evolve into ring (circular in plan form) vortices, even though the slit resonator opening from which the vortices are shed has an aspect ratio of 2.5. Under the excitation of discrete frequency sound, the shed vortices align themselves into two regularly spaced vortex trains moving away from the resonator opening in opposite directions. This is different from the chaotic shedding of vortices found in two-dimensional simulations. The effect of slit aspect ratio at a fixed porosity is briefly studied. For the range of liners considered in this investigation, it is found that the absorption coefficient of a liner increases when the open area of the single slit is subdivided into multiple, smaller slits.

  5. Differences in ergonomic and workstation factors between computer office workers with and without reported musculoskeletal pain.

    PubMed

    Rodrigues, Mirela Sant'Ana; Leite, Raquel Descie Veraldi; Lelis, Cheila Maira; Chaves, Thaís Cristina

    2017-01-01

    Some studies have suggested a causal relationship between computer work and the development of musculoskeletal disorders. However, studies considering the use of specific tools to assess workplace ergonomics and psychosocial factors in computer office workers with and without reported musculoskeletal pain are scarce. The aim of this study was to compare the ergonomic, physical, and psychosocial factors in computer office workers with and without reported musculoskeletal pain (MSP). Thirty-five computer office workers (aged 18-55 years) participated in the study. The following evaluations were completed: Rapid Upper Limb Assessment (RULA), Rapid Office Strain Assessment (ROSA), and the Maastricht Upper Extremity Questionnaire revised Brazilian Portuguese version (MUEQ-Br revised). Student t-tests were used to make comparisons between groups. The computer office workers were divided into two groups: workers with reported MSP (WMSP, n = 17) and workers without reported MSP (WOMSP, n = 18). Those in the WMSP group showed significantly greater mean values in the total ROSA score (WMSP: 6.71 [CI95%: 6.20-7.21] and WOMSP: 5.88 [CI95%: 5.37-6.39], p = 0.01). The WMSP group also showed higher scores in the chair section of the ROSA, the workstation section of the MUEQ-Br revised, and the upper limb RULA score. The chair height and armrest sections of the ROSA showed higher mean values in the WMSP group than in the WOMSP group. A positive moderate correlation was observed between ROSA and RULA total scores (R = 0.63, p < 0.001). Our results demonstrated that computer office workers who reported MSP had worse ergonomic indexes for the chair workstation and greater physical risk related to the upper limb (RULA upper limb section) than workers without pain. However, no differences were observed between workers with and without MSP regarding work-related psychosocial factors. The results suggest that inadequate workstation conditions, specifically the chair height, armrest, and backrest, are linked to improper upper limb postures and that these factors contribute to MSP in computer office workers.

  6. Design issues for grid-connected photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Ropp, Michael Eugene

    1998-08-01

    Photovoltaics (PV) is the direct conversion of sunlight to electrical energy. In areas without centralized utility grids, the benefits of PV easily overshadow the present shortcomings of the technology. However, in locations with centralized utility systems, significant technical challenges remain before utility-interactive PV (UIPV) systems can be integrated into the mix of electricity sources. One challenge is that the computer design tools needed for optimal design of PV systems with curved PV arrays are not available, and even those that are available do not facilitate monitoring of the system once it is built. Another arises from the issue of islanding. Islanding occurs when a UIPV system continues to energize a section of a utility system after that section has been isolated from the utility voltage source. Islanding, which is potentially dangerous to both personnel and equipment, is difficult to prevent completely. The work contained within this thesis targets both of these technical challenges. In Task 1, a method for modeling a PV system with a curved PV array using only existing computer software is developed. This methodology also facilitates comparison of measured and modeled data for use in system monitoring. The procedure is applied to the Georgia Tech Aquatic Center (GTAC) PV system. In the work contained under Task 2, islanding prevention is considered. The existing state-of-the-art is thoroughly reviewed. In Subtask 2.1, an analysis is performed which suggests that standard protective relays are in fact insufficient to guarantee protection against islanding. In Subtask 2.2, several existing islanding prevention methods are compared in a novel way. The superiority of this new comparison over those used previously is demonstrated. A new islanding prevention method is the subject of Subtask 2.3; it is shown that it does not compare favorably with other existing techniques. However, in Subtask 2.4, a novel method for dramatically improving this new islanding prevention method is described. It is shown, both by computer modeling and by experiment, that this new method is one of the most effective available today. Finally, under Subtask 2.5, the effects of certain types of loads on the effectiveness of islanding prevention methods are discussed.

  7. A Review of Hemolysis Prediction Models for Computational Fluid Dynamics.

    PubMed

    Yu, Hai; Engel, Sebastian; Janiga, Gábor; Thévenin, Dominique

    2017-07-01

    Flow-induced hemolysis is a crucial issue for many biomedical applications; in particular, it is an essential issue for the development of blood-transporting devices such as left ventricular assist devices and other types of blood pumps. In order to estimate red blood cell (RBC) damage in blood flows, many models have been proposed in the past. Most models have been validated by their respective authors. However, the accuracy and the validity range of these models remain unclear. In this work, the most established hemolysis models compatible with computational fluid dynamics of full-scale devices are described and assessed by comparison against two selected reference experiments: a simple rheometric flow and a more complex hemodialytic flow through a needle. The quantitative comparisons show very large deviations concerning hemolysis predictions, depending on the model and model parameters. In light of the current results, two simple power-law models deliver the best compromise between computational efficiency and obtained accuracy. Finally, hemolysis has been computed in an axial blood pump. The reconstructed geometry of a HeartMate II shows that hemolysis occurs mainly at the tip and leading edge of the rotor blades, as well as at the leading edge of the diffusor vanes. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
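    For context, the "power-law models" referred to above generally have a stress- and exposure-time form of the type HI = C·τ^α·t^β. The sketch below is only an illustration of how such an index might be evaluated along a discretized path line; the constants shown are the widely cited Giersiepen-type values (an assumption, not taken from this paper), and the linear damage accumulation is a simplification.

```python
import numpy as np

def powerlaw_hemolysis_index(tau, t, C=3.62e-5, alpha=2.416, beta=0.785):
    """Stress-based power-law hemolysis index.

    tau : scalar shear stress [Pa]
    t   : exposure time [s]
    C, alpha, beta : empirical constants; the defaults are the commonly
                     quoted Giersiepen-type fit and are illustrative only.
    """
    return C * tau**alpha * t**beta

def accumulate_damage(tau_samples, dt):
    """Sum incremental damage along a discretized path line.

    Linear accumulation is a simplification; several of the models compared
    in the paper use different accumulation schemes.
    """
    tau_samples = np.asarray(tau_samples, dtype=float)
    return float(np.sum(powerlaw_hemolysis_index(tau_samples, dt)))
```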

  8. Numerical Investigation of Different Radial Inlet Forms for Centrifugal Compressor and Influence of the Deflectors Number by Means of Computational Fluid Dynamics Methods with Computational Model Validation

    NASA Astrophysics Data System (ADS)

    Kozhukhov, Y. V.; Yun, V. K.; Reshetnikova, L. V.; Prokopovich, M. V.

    2015-08-01

    The goal of this work is to perform numerical experiments on five different types of centrifugal compressor inlet chambers using CFD methods and to compare the computational results with those of a physical experiment carried out at the Nevskiy Lenin Plant in Saint Petersburg. For one of the chambers, the influence of deflectors on its characteristics was also investigated. The objects of investigation are five inlet chambers of different types, which differ from each other in the presence and number of deflectors. The comparative analysis of the numerical and physical experiments was carried out by comparing the distributions of relative velocity and static pressure coefficient in the hub and shroud regions, and by comparing the loss coefficient values for all five chambers. The numerical calculations revealed quantitative and qualitative discrepancies between the CFD results and the physical experiment. The influence of the number of deflectors on the flow parameters was also investigated. The results of the study show that the presence of deflectors in the flow path significantly increases the probability of flow separation and reversed flow on them. At the same time, the complete absence of deflectors in the chamber significantly increases the circumferential distortion of the flow, although the loss coefficient, whose high values are caused by shock flow, still decreases. Thus, special attention should be given to the profiling of the deflectors of the inlet chamber.

  9. A comparison of muscular activity during single and double mouse clicks.

    PubMed

    Thorn, Stefan; Forsman, Mikael; Hallbeck, Susan

    2005-05-01

    Work-related musculoskeletal disorders (WMSDs) in the neck/shoulder region and the upper extremities are a common problem among computer workers. Occurrences of motor unit (MU) double discharges with very short inter-firing intervals (doublets) have been hypothesised as a potential additional risk for overuse of already exhausted fibres during long-term stereotyped activity. Doublets are reported to be present during double-click mouse work tasks. A few comparative studies have been carried out on overall muscle activities for short-term tasks with single types of actions, but none on occurrences of doublets during double versus single clicks. The main purpose of this study was to compare muscle activity levels of single and double mouse clicks during a long-term combined mouse/keyboard work task. Four muscles were studied: left and right upper trapezius, right extensor digitorum communis (EDC) and right flexor carpi ulnaris. Additionally, MU activity was analysed through intramuscular electromyography in the EDC muscle for a selection of subjects. The results indicate that double clicking produces neither higher median nor higher 90th percentile levels in the trapezius and EDC muscles, nor a higher disposition for MU doublets, than does single clicking. Especially for the 90th percentile levels, the indications are rather the opposite (in the EDC significantly higher during single clicks in 8 of 11 subjects, P < 0.05). Although it cannot be concluded from the present study that double clicks are harmless, there were no signs that double clicks during computer work generally constitute a larger risk factor for WMSDs than do single clicks.

  10. What works with worked examples: Extending self-explanation and analogical comparison to synthesis problems

    NASA Astrophysics Data System (ADS)

    Badeau, Ryan; White, Daniel R.; Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.

    2017-12-01

    The ability to solve physics problems that require multiple concepts from across the physics curriculum—"synthesis" problems—is often a goal of physics instruction. Three experiments were designed to evaluate the effectiveness of two instructional methods employing worked examples on student performance with synthesis problems; these instructional techniques, analogical comparison and self-explanation, have previously been studied primarily in the context of single-concept problems. Across three experiments with students from introductory calculus-based physics courses, both self-explanation and certain kinds of analogical comparison of worked examples significantly improved student performance on a target synthesis problem, with distinct improvements in recognition of the relevant concepts. More specifically, analogical comparison significantly improved student performance when the comparisons were invoked between worked synthesis examples. In contrast, similar comparisons between corresponding pairs of worked single-concept examples did not significantly improve performance. On a more complicated synthesis problem, self-explanation was significantly more effective than analogical comparison, potentially due to differences in how successfully students encoded the full structure of the worked examples. Finally, we find that the two techniques can be combined for additional benefit, with the trade-off of slightly more time on task.

  11. Spring assisted cranioplasty: A patient specific computational model.

    PubMed

    Borghi, Alessandro; Rodriguez-Florez, Naiara; Rodgers, Will; James, Gregory; Hayward, Richard; Dunaway, David; Jeelani, Owase; Schievano, Silvia

    2018-03-01

    Implantation of spring-like distractors in the treatment of sagittal craniosynostosis is a novel technique that has proven functionally and aesthetically effective in correcting skull deformities; however, final shape outcomes remain moderately unpredictable due to an incomplete understanding of the skull-distractor interaction. The aim of this study was to create a patient specific computational model of spring assisted cranioplasty (SAC) that can help predict the individual overall final head shape. Pre-operative computed tomography images of a SAC patient were processed to extract a 3D model of the infant skull anatomy and simulate spring implantation. The distractors were modeled based on mechanical experimental data. Viscoelastic bone properties from the literature were tuned using the specific patient procedural information recorded during surgery and from x-ray measurements at follow-up. The model accurately captured spring expansion on-table (within 9% of the measured values), as well as at first and second follow-ups (within 8% of the measured values). Comparison between immediate post-operative 3D head scanning and numerical results for this patient proved that the model could successfully predict the final overall head shape. This preliminary work showed the potential application of computational modeling to study SAC, to support pre-operative planning and guide novel distractor design. Copyright © 2018 IPEM. Published by Elsevier Ltd. All rights reserved.

  12. Deaf individuals who work with computers present a high level of visual attention.

    PubMed

    Ribeiro, Paula Vieira; Ribas, Valdenilson Ribeiro; Ribas, Renata de Melo Guerra; de Melo, Teresinha de Jesus Oliveira Guimarães; Marinho, Carlos Antonio de Sá; Silva, Kátia Karina do Monte; de Albuquerque, Elizabete Elias; Ribas, Valéria Ribeiro; de Lima, Renata Mirelly Silva; Santos, Tuthcha Sandrelle Botelho Tavares

    2011-01-01

    Some studies in the literature indicate that deaf individuals seem to develop a higher level of attention and concentration in the process of constructing different ways of communicating. The aim of this study was to evaluate the level of attention in individuals deaf from birth who worked with computers. A total of 161 individuals in the 18-25 age group were assessed. Of these, 40 were congenitally deaf individuals who worked with computers, 42 were deaf individuals who did not work and did not know how to use or did not use computers (Control 1), 39 were individuals with normal hearing who did not work and did not know how to use or did not use computers (Control 2), and 40 were individuals with normal hearing who worked with computers (Control 3). The group of subjects deaf from birth who worked with computers (IDWC) presented a higher level of focused attention, sustained attention, mental manipulation capacity, and resistance to interference compared with the control groups. This study highlights the relevance of sensory processing to cognitive processing.

  13. A character network study of two Sci-Fi TV series

    NASA Astrophysics Data System (ADS)

    Tan, M. S. A.; Ujum, E. A.; Ratnavelu, K.

    2014-03-01

    This work is an analysis of the character networks in two science fiction television series: Stargate and Star Trek. These networks are constructed on the basis of scene co-occurrence between characters to indicate the presence of a connection. Global network structure measures such as the average path length, graph density, network diameter, average degree, median degree, maximum degree, and average clustering coefficient are computed, as well as individual node centrality scores. The two fictional networks constructed are found to be quite similar in structure, which is astonishing given that Stargate only ran for 18 years in comparison to the 48 years for Star Trek.
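    As an illustration of the global measures listed above, the following minimal sketch computes them from a scene co-occurrence edge list. The use of networkx is an assumption for illustration; the paper does not state its tooling.

```python
import statistics
import networkx as nx

def global_measures(edges):
    """Compute the global structure measures named in the abstract from a
    scene co-occurrence edge list [(char_a, char_b), ...]."""
    G = nx.Graph()
    G.add_edges_from(edges)
    degrees = [d for _, d in G.degree()]
    measures = {
        "density": nx.density(G),
        "average_degree": sum(degrees) / len(degrees),
        "median_degree": statistics.median(degrees),
        "maximum_degree": max(degrees),
        "average_clustering": nx.average_clustering(G),
    }
    # Path-based measures are only defined on a connected graph, so use the
    # largest connected component if the network is fragmented.
    H = G.subgraph(max(nx.connected_components(G), key=len))
    measures["average_path_length"] = nx.average_shortest_path_length(H)
    measures["diameter"] = nx.diameter(H)
    return measures

# Toy example with four characters
print(global_measures([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]))
```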

  14. An Evolutionary Comparison of the Handicap Principle and Hybrid Equilibrium Theories of Signaling.

    PubMed

    Kane, Patrick; Zollman, Kevin J S

    2015-01-01

    The handicap principle has come under significant challenge both from empirical studies and from theoretical work. As a result, a number of alternative explanations for honest signaling have been proposed. This paper compares the evolutionary plausibility of one such alternative, the "hybrid equilibrium," to the handicap principle. We utilize computer simulations to compare these two theories as they are instantiated in Maynard Smith's Sir Philip Sidney game. We conclude that, when both types of communication are possible, evolution is unlikely to lead to handicap signaling and is far more likely to result in the partially honest signaling predicted by hybrid equilibrium theory.

  15. The relationship between psychosocial work factors, work stress and computer-related musculoskeletal discomforts among computer users in Malaysia.

    PubMed

    Zakerian, Seyed Abolfazl; Subramaniam, Indra Devi

    2009-01-01

    Increasing numbers of workers use computers for work, so, especially among office workers, there is a high risk of musculoskeletal discomfort. This study examined the associations among three factors: psychosocial work factors, work stress, and musculoskeletal discomfort. These associations were examined via a questionnaire survey of 30 office workers (at a university in Malaysia) whose jobs required extensive use of computers. The questionnaire was distributed and collected daily for 20 days. While the results indicated a significant relationship among psychosocial work factors, work stress, and musculoskeletal discomfort, three psychosocial work factors were found to be more important than the others for both work stress and musculoskeletal discomfort: job demands, negative social interaction, and computer-related problems. To further develop the study design, it is necessary to investigate industrial and other workers who have experienced musculoskeletal discomfort and work stress.

  16. A Clonal Selection Algorithm for Minimizing Distance Travel and Back Tracking of Automatic Guided Vehicles in Flexible Manufacturing System

    NASA Astrophysics Data System (ADS)

    Chawla, Viveak Kumar; Chanda, Arindam Kumar; Angra, Surjit

    2018-03-01

    The flexible manufacturing system (FMS) consists of several programmable production work centers, material handling systems (MHSs), assembly stations, and automatic storage and retrieval systems. In an FMS, automatic guided vehicles (AGVs) play a vital role in material handling operations and enhance the overall performance of the FMS. To achieve a low makespan and a high throughput yield in FMS operations, it is highly important to integrate the production work center schedules with the AGV schedules. The production schedule for the work centers is generated by applying the Giffler and Thompson algorithm under four kinds of hybrid priority dispatching rules. The clonal selection algorithm (CSA) is then applied for simultaneous scheduling to reduce backtracking as well as the travel distance of the AGVs within the FMS facility. The proposed procedure is computationally tested on a benchmark FMS configuration from the literature, and the findings clearly indicate that the CSA yields the best results in comparison with the other applied methods from the literature.
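    For readers unfamiliar with clonal selection, the following minimal CLONALG-style loop sketches the general idea for a permutation problem such as an AGV task sequence. It is a generic illustration under assumed hyperparameters and a hypothetical cost function, not the authors' exact procedure.

```python
import random

def clonal_selection(cost, init_pop, generations=200, clones_per_ab=5, seed=0):
    """Minimal clonal-selection loop for a permutation problem.

    cost     : function mapping a permutation (list) to a scalar to minimize
    init_pop : list of initial permutations ("antibodies")
    """
    rng = random.Random(seed)

    def hypermutate(perm, rate):
        # Swap mutation; worse antibodies receive more swaps.
        p = list(perm)
        for _ in range(max(1, int(rate * len(p)))):
            i, j = rng.sample(range(len(p)), 2)
            p[i], p[j] = p[j], p[i]
        return p

    pop = [list(p) for p in init_pop]
    for _ in range(generations):
        ranked = sorted(pop, key=cost)                # affinity ranking
        clones = []
        for rank, ab in enumerate(ranked):
            rate = (rank + 1) / len(ranked)           # mutation inversely tied to affinity
            clones += [hypermutate(ab, rate) for _ in range(clones_per_ab)]
        pop = sorted(ranked + clones, key=cost)[: len(ranked)]   # select the best
    return min(pop, key=cost)
```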

  17. Visual ergonomics and computer work--is it all about computer glasses?

    PubMed

    Jonsson, Christina

    2012-01-01

    The Swedish Provisions on Work with Display Screen Equipment and the EU Directive on the minimum safety and health requirements for work with display screen equipment cover several important visual ergonomics aspects. But a review of cases and questions put to the Swedish Work Environment Authority clearly shows that most attention is given to the demands for eyesight tests and special computer glasses. Other important visual ergonomics factors are at risk of being neglected. Today computers are used everywhere, both at work and at home. Computers can be laptops, PDAs, tablet computers, smart phones, etc. The demands on eyesight tests and computer glasses still apply, but the visual demands and the visual ergonomics conditions are quite different compared to the use of a stationary computer. Based on this review, we raise the question of whether the demand on the employer to provide employees with computer glasses is outdated.

  18. Kruskal-Wallis test: BASIC computer program to perform nonparametric one-way analysis of variance and multiple comparisons on ranks of several independent samples.

    PubMed

    Theodorsson-Norheim, E

    1986-08-01

    Multiple t tests at a fixed p level are frequently used to analyse biomedical data where analysis of variance followed by multiple comparisons, or adjustment of the p values according to Bonferroni, would be more appropriate. The Kruskal-Wallis test is a nonparametric 'analysis of variance' which may be used to compare several independent samples. The present program is written in an elementary subset of BASIC and will perform the Kruskal-Wallis test followed by multiple comparisons between the groups on practically any computer programmable in BASIC.
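    The original program is written in BASIC; as a rough modern equivalent, the following Python sketch performs the same kind of analysis with SciPy. The Bonferroni-corrected pairwise Mann-Whitney step is one common post-hoc choice and may differ from the scheme used in the original program.

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

def kruskal_with_posthoc(groups, alpha=0.05):
    """Kruskal-Wallis test across several independent samples, followed by
    Bonferroni-corrected pairwise Mann-Whitney comparisons when significant."""
    h, p = kruskal(*groups)
    result = {"H": h, "p": p, "pairwise": {}}
    if p < alpha:
        pairs = list(combinations(range(len(groups)), 2))
        for i, j in pairs:
            _, p_ij = mannwhitneyu(groups[i], groups[j], alternative="two-sided")
            result["pairwise"][(i, j)] = min(1.0, p_ij * len(pairs))  # Bonferroni
    return result

# Example with three small independent samples
print(kruskal_with_posthoc([[2.1, 2.4, 2.0], [3.1, 3.5, 2.9], [2.2, 2.6, 2.5]]))
```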

  19. A Three Dimensional Kinematic and Kinetic Study of the Golf Swing

    PubMed Central

    Nesbit, Steven M.

    2005-01-01

    This paper discusses the three-dimensional kinematics and kinetics of a golf swing as performed by 84 male and one female amateur subjects of various skill levels. The analysis was performed using a variable full-body computer model of a human coupled with a flexible model of a golf club. Data to drive the model was obtained from subject swings recorded using a multi-camera motion analysis system. Model output included club trajectories, golfer/club interaction forces and torques, work and power, and club deflections. These data formed the basis for a statistical analysis of all subjects, and a detailed analysis and comparison of the swing characteristics of four of the subjects. The analysis generated much new data concerning the mechanics of the golf swing. It revealed that a golf swing is a highly coordinated and individual motion and subject-to-subject variations were significant. The study highlighted the importance of the wrists in generating club head velocity and orienting the club face. The trajectory of the hands and the ability to do work were the factors most closely related to skill level. Key Points: Full-body model of the golf swing. Mechanical description of the golf swing. Statistical analysis of golf swing mechanics. Comparisons of subject swing mechanics. PMID:24627665

  20. A three dimensional kinematic and kinetic study of the golf swing.

    PubMed

    Nesbit, Steven M

    2005-12-01

    This paper discusses the three-dimensional kinematics and kinetics of a golf swing as performed by 84 male and one female amateur subjects of various skill levels. The analysis was performed using a variable full-body computer model of a human coupled with a flexible model of a golf club. Data to drive the model was obtained from subject swings recorded using a multi-camera motion analysis system. Model output included club trajectories, golfer/club interaction forces and torques, work and power, and club deflections. These data formed the basis for a statistical analysis of all subjects, and a detailed analysis and comparison of the swing characteristics of four of the subjects. The analysis generated much new data concerning the mechanics of the golf swing. It revealed that a golf swing is a highly coordinated and individual motion and subject-to-subject variations were significant. The study highlighted the importance of the wrists in generating club head velocity and orienting the club face. The trajectory of the hands and the ability to do work were the factors most closely related to skill level. Key Points: Full-body model of the golf swing. Mechanical description of the golf swing. Statistical analysis of golf swing mechanics. Comparisons of subject swing mechanics.

  1. Computer Assistance in Information Work. Part I: Conceptual Framework for Improving the Computer/User Interface in Information Work. Part II: Catalog of Acceleration, Augmentation, and Delegation Functions in Information Work.

    ERIC Educational Resources Information Center

    Paisley, William; Butler, Matilda

    This study of the computer/user interface investigated the role of the computer in performing information tasks that users now perform without computer assistance. Users' perceptual/cognitive processes are to be accelerated or augmented by the computer; a long term goal is to delegate information tasks entirely to the computer. Cybernetic and…

  2. Questioned document workflow for handwriting with automated tools

    NASA Astrophysics Data System (ADS)

    Das, Krishnanand; Srihari, Sargur N.; Srinivasan, Harish

    2012-01-01

    During the last few years many document recognition methods have been developed to determine whether a handwriting specimen can be attributed to a known writer. However, in practice, the workflow of the document examiner continues to be largely manual. Before a systematic, computational approach can be developed, an articulation of the steps involved in handwriting comparison is needed. We describe the workflow of handwritten questioned document examination, as described in a standards manual, and the steps where existing automation tools can be used. A well-known ransom note case is considered as an example, where one encounters testing for multiple writers of the same document, determining whether the writing is disguised, known writing that is formal while the questioned writing is informal, etc. The findings for this particular ransom note case using the tools are given. Observations are also made toward developing a more fully automated approach to handwriting examination.

  3. Supervisory control based on minimal cuts and Petri net sub-controllers coordination

    NASA Astrophysics Data System (ADS)

    Rezig, Sadok; Achour, Zied; Rezg, Nidhal; Kammoun, Mohamed-Ali

    2016-10-01

    This paper addresses the synthesis of a Petri net (PN) controller for the forbidden state transition problem with a new utilisation of the theory of regions. Moreover, as with any method of control synthesis based on a reachability graph, the theory of regions suffers from the combinatorial explosion problem. The proposed work minimises the number of equations in the linear system of the theory of regions and therefore reduces the computation time. In this paper, two different approaches are proposed to select minimal cuts in the reachability graph in order to synthesise a PN controller. Thanks to a switch from one cut to another, one can activate and deactivate the corresponding PN controller. An application is implemented in a flexible manufacturing system to illustrate the present method. Finally, a comparison with previous works on obtaining a maximally permissive controller, supported by experimental results, is presented.

  4. An efficient coordinate transformation technique for unsteady, transonic aerodynamic analysis of low aspect-ratio wings

    NASA Technical Reports Server (NTRS)

    Guruswamy, G. P.; Goorjian, P. M.

    1984-01-01

    An efficient coordinate transformation technique is presented for constructing grids for unsteady, transonic aerodynamic computations for delta-type wings. The original shearing transformation yielded computations that were numerically unstable, and this paper discusses the sources of those instabilities. The new shearing transformation yields computations that are stable, fast, and accurate. Comparisons of the two methods for the flow over the F5 wing demonstrate the improved stability. Comparisons are also made with experimental data that demonstrate the accuracy of the new method. The computations were made by using a time-accurate, finite-difference, alternating-direction-implicit (ADI) algorithm for the transonic small-disturbance potential equation.

  5. Inverse boundary-layer theory and comparison with experiment

    NASA Technical Reports Server (NTRS)

    Carter, J. E.

    1978-01-01

    Inverse boundary layer computational procedures, which permit nonsingular solutions at separation and reattachment, are presented. In the first technique, which is for incompressible flow, the displacement thickness is prescribed; in the second technique, for compressible flow, a perturbation mass flow is the prescribed condition. The pressure is deduced implicitly along with the solution in each of these techniques. Laminar and turbulent computations, which are typical of separated flow, are presented and comparisons are made with experimental data. In both inverse procedures, finite difference techniques are used along with Newton iteration. The resulting procedure is no more complicated than conventional boundary layer computations. These separated boundary layer techniques appear to be well suited for complete viscous-inviscid interaction computations.

  6. Comparisons Between NO PLIF Imaging and CFD Simulations of Mixing Flowfields for High-Speed Fuel Injectors

    NASA Technical Reports Server (NTRS)

    Drozda, Tomasz, G.; Cabell, Karen F.; Ziltz, Austin R.; Hass, Neil E.; Inman, Jennifer A.; Burns, Ross A.; Bathel, Brett F.; Danehy, Paul M.; Abul-Huda, Yasin M.; Gamba, Mirko

    2017-01-01

    The current work compares experimentally and computationally obtained nitric oxide (NO) planar laser-induced fluorescence (PLIF) images of the mixing flowfields for three types of high-speed fuel injectors: a strut, a ramp, and a rectangular flush-wall. These injection devices, which exhibited promising mixing performance at lower flight Mach numbers, are currently being studied as a part of the Enhanced Injection and Mixing Project (EIMP) at the NASA Langley Research Center. The EIMP aims to investigate scramjet fuel injection and mixing physics, and improve the understanding of underlying physical processes relevant to flight Mach numbers greater than eight. In the experiments, conducted in the NASA Langley Arc-Heated Scramjet Test Facility (AHSTF), the injectors are placed downstream of a Mach 6 facility nozzle, which simulates the high Mach number air flow at the entrance of a scramjet combustor. Helium is used as an inert substitute for hydrogen fuel. The PLIF is obtained by using a tunable laser to excite the NO, which is present in the AHSTF air as a direct result of arc-heating. Consequently, the absence of signal is an indication of pure helium (fuel). The PLIF images computed from the computational fluid dynamics (CFD) simulations are obtained by combining a fluorescence model for NO with the Reynolds-Averaged Simulation results carried out using the VULCAN-CFD solver to obtain a computational equivalent of the experimentally measured PLIF signal. The measured NO PLIF signal is mainly a function of NO concentration allowing for semi-quantitative comparisons between the CFD and the experiments. The PLIF signal intensity is also sensitive to pressure and temperature variations in the flow, allowing additional flow features to be identified and compared with the CFD. Good agreement between the PLIF and the CFD results provides increased confidence in the CFD simulations for investigations of injector performance.

  7. Automating slope monitoring in mines with terrestrial lidar scanners

    NASA Astrophysics Data System (ADS)

    Conforti, Dario

    2014-05-01

    Static terrestrial laser scanners (TLS) have been an important component of slope monitoring for some time, and many solutions for monitoring the progress of a slide have been devised over the years. However, all of these solutions have required users to operate the lidar equipment in the field, creating a high cost in time and resources, especially if the surveys must be performed very frequently. This paper presents a new solution for monitoring slides, developed using a TLS and an automated data acquisition, processing and analysis system. In this solution, a TLS is permanently mounted within sight of the target surface and connected to a control computer. The control software on the computer automatically triggers surveys according to a user-defined schedule, parses data into point clouds, and compares data against a baseline. The software can base the comparison against either the original survey of the site or the most recent survey, depending on whether the operator needs to measure the total or recent movement of the slide. If the displacement exceeds a user-defined safety threshold, the control computer transmits alerts via SMS text messaging and/or email, including graphs and tables describing the nature and size of the displacement. The solution can also be configured to trigger the external visual/audio alarm systems. If the survey areas contain high-traffic areas such as roads, the operator can mark them for exclusion in the comparison to prevent false alarms. To improve usability and safety, the control computer can connect to a local intranet and allow remote access through the software's web portal. This enables operators to perform most tasks with the TLS from their office, including reviewing displacement reports, downloading survey data, and adjusting the scan schedule. This solution has proved invaluable in automatically detecting and alerting users to potential danger within the monitored areas while lowering the cost and work required for monitoring. An explanation of the entire system and a post-acquisition data demonstration will be presented.
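    The displacement check at the core of such a monitoring system can be sketched as follows. This is a simplified, hypothetical illustration (nearest-neighbour cloud-to-cloud distance using SciPy) and not the vendor's algorithm; the threshold, exclusion mask, and array layout are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def check_displacement(baseline_xyz, current_xyz, exclude_mask=None,
                       threshold_m=0.05):
    """Compare a new scan against a baseline point cloud and flag motion.

    baseline_xyz, current_xyz : (N, 3) arrays of scan points [m]
    exclude_mask : optional boolean mask over current points (e.g. roads)
    threshold_m  : user-defined safety threshold on displacement [m]

    Returns (alert, max_displacement). Nearest-neighbour distance is a crude
    stand-in for a full cloud-to-cloud comparison.
    """
    tree = cKDTree(baseline_xyz)
    dists, _ = tree.query(current_xyz, k=1)
    if exclude_mask is not None:
        dists = dists[~exclude_mask]       # drop marked high-traffic areas
    max_disp = float(dists.max())
    return max_disp > threshold_m, max_disp
```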

  8. Academic consortium for the evaluation of computer-aided diagnosis (CADx) in mammography

    NASA Astrophysics Data System (ADS)

    Mun, Seong K.; Freedman, Matthew T.; Wu, Chris Y.; Lo, Shih-Chung B.; Floyd, Carey E., Jr.; Lo, Joseph Y.; Chan, Heang-Ping; Helvie, Mark A.; Petrick, Nicholas; Sahiner, Berkman; Wei, Datong; Chakraborty, Dev P.; Clarke, Laurence P.; Kallergi, Maria; Clark, Bob; Kim, Yongmin

    1995-04-01

    Computer aided diagnosis (CADx) is a promising technology for the detection of breast cancer in screening mammography. A number of different approaches have been developed for CADx research that have achieved significant levels of performance. Research teams now recognize the need for a careful and detailed evaluation study of approaches to accelerate the development of CADx, to make CADx more clinically relevant and to optimize the CADx algorithms based on unbiased evaluations. The results of such a comparative study may provide each of the participating teams with new insights into the optimization of their individual CADx algorithms. This consortium of experienced CADx researchers is working as a group to compare results of the algorithms and to optimize the performance of CADx algorithms by learning from each other. Each institution will be contributing an equal number of cases that will be collected under a standard protocol for case selection, truth determination, and data acquisition to establish a common and unbiased database for the evaluation study. An evaluation procedure for the comparison studies is being developed to analyze the results of individual algorithms for each of the test cases in the common database. Optimization of individual CADx algorithms can be made based on the comparison studies. The consortium effort is expected to accelerate the eventual clinical implementation of CADx algorithms at participating institutions.

  9. A Microworld Approach to the Formalization of Musical Knowledge.

    ERIC Educational Resources Information Center

    Honing, Henkjan

    1993-01-01

    Discusses the importance of applying computational modeling and artificial intelligence techniques to music cognition and computer music research. Recommends three uses of microworlds to trim computational theories to their bare minimum, allowing for better and easier comparison. (CFR)

  10. Blade Displacement Predictions for the Full-Scale UH-60A Airloads Rotor

    NASA Technical Reports Server (NTRS)

    Bledron, Robert T.; Lee-Rausch, Elizabeth M.

    2014-01-01

    An unsteady Reynolds-Averaged Navier-Stokes solver for unstructured grids is loosely coupled to a rotorcraft comprehensive code and used to simulate two different test conditions from a wind-tunnel test of a full-scale UH-60A rotor. Performance data and sectional airloads from the simulation are compared with corresponding tunnel data to assess the level of fidelity of the aerodynamic aspects of the simulation. The focus then turns to a comparison of the blade displacements, both rigid (blade root) and elastic. Comparisons of computed root motions are made with data from three independent measurement systems. Finally, comparisons are made between computed elastic bending and elastic twist, and the corresponding measurements obtained from a photogrammetry system. Overall the correlation between computed and measured displacements was good, especially for the root pitch and lag motions and the elastic bending deformation. The correlation of root lead-lag motion and elastic twist deformation was less favorable.

  11. Parsing partial molar volumes of small molecules: a molecular dynamics study.

    PubMed

    Patel, Nisha; Dubins, David N; Pomès, Régis; Chalikian, Tigran V

    2011-04-28

    We used molecular dynamics (MD) simulations in conjunction with the Kirkwood-Buff theory to compute the partial molar volumes for a number of small solutes of various chemical natures. We repeated our computations using modified pair potentials, first, in the absence of the Coulombic term and, second, in the absence of the Coulombic and the attractive Lennard-Jones terms. Comparison of our results with experimental data and the volumetric results of Monte Carlo simulation with hard sphere potentials and scaled particle theory-based computations led us to conclude that, for small solutes, the partial molar volume computed with the Lennard-Jones potential in the absence of the Coulombic term nearly coincides with the cavity volume. On the other hand, MD simulations carried out with the pair interaction potentials containing only the repulsive Lennard-Jones term produce unrealistically large partial molar volumes of solutes that are close to their excluded volumes. Our simulation results are in good agreement with the reported schemes for parsing partial molar volume data on small solutes. In particular, our determined interaction volumes and the thickness of the thermal volume for individual compounds are in good agreement with empirical estimates. This work is the first computational study that supports and lends credence to the practical algorithms of parsing partial molar volume data that are currently in use for molecular interpretations of volumetric data.
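    For reference, the standard Kirkwood-Buff route to the partial molar volume of a solute s at infinite dilution in a solvent w can be written as shown below; this is the textbook relation, stated here only for context, and may differ in detail from the exact expressions used in the paper.

```latex
\bar{V}_s^{\infty} = k_B T \,\kappa_T^{0} - G_{sw},
\qquad
G_{sw} = \int_0^{\infty} \left[ g_{sw}(r) - 1 \right] 4\pi r^2 \, dr
```

    Here $\kappa_T^{0}$ is the isothermal compressibility of the pure solvent and $g_{sw}(r)$ is the solute-solvent radial distribution function obtained from the simulations.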

  12. Computer use and carpal tunnel syndrome: A meta-analysis.

    PubMed

    Shiri, Rahman; Falah-Hassani, Kobra

    2015-02-15

    Studies have reported contradictory results on the role of keyboard or mouse use in carpal tunnel syndrome (CTS). This meta-analysis aimed to assess whether computer use causes CTS. Literature searches were conducted in several databases until May 2014. Twelve studies qualified for a random-effects meta-analysis. Heterogeneity and publication bias were assessed. In a meta-analysis of six studies (N=4964) that compared computer workers with the general population or other occupational populations, computer/typewriter use (pooled odds ratio (OR)=0.72, 95% confidence interval (CI) 0.58-0.90), computer/typewriter use ≥1 vs. <1h/day (OR=0.63, 95% CI 0.38-1.04) and computer/typewriter use ≥4 vs. <4h/day (OR=0.68, 95% CI 0.54-0.87) were inversely associated with CTS. Conversely, in a meta-analysis of six studies (N=5202) conducted among office workers, CTS was positively associated with computer/typewriter use (pooled OR=1.34, 95% CI 1.08-1.65), mouse use (OR=1.93, 95% CI 1.43-2.61), frequent computer use (OR=1.89, 95% CI 1.15-3.09), frequent mouse use (OR=1.84, 95% CI 1.18-2.87) and with years of computer work (OR=1.92, 95% CI 1.17-3.17 for long vs. short). There was no evidence of publication bias for either type of study. Studies that compared computer workers with the general population or several occupational groups did not control their estimates for occupational risk factors. Thus, office workers with no or little computer use are a more appropriate comparison group than the general population or several occupational groups. This meta-analysis suggests that excessive computer use, particularly mouse usage, might be a minor occupational risk factor for CTS. Further prospective studies among office workers with objectively assessed keyboard and mouse use, and CTS symptoms or signs confirmed by a nerve conduction study are needed. Copyright © 2014 Elsevier B.V. All rights reserved.
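    The pooled odds ratios quoted above come from a random-effects meta-analysis. The following sketch shows the kind of computation involved, using generic DerSimonian-Laird pooling of log odds ratios; it is illustrative only, uses made-up inputs, and is not the authors' code or data.

```python
import numpy as np

def pool_odds_ratios(or_values, ci_lower, ci_upper):
    """DerSimonian-Laird random-effects pooling of odds ratios.

    Inputs are per-study ORs with their 95% confidence limits; returns the
    pooled OR and its 95% CI.
    """
    y = np.log(np.asarray(or_values, dtype=float))              # log odds ratios
    se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)     # SE from 95% CI
    w = 1.0 / se**2                                             # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)                             # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                     # between-study variance
    w_re = 1.0 / (se**2 + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return np.exp(y_re), np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)

# Hypothetical example with three studies
print(pool_odds_ratios([1.3, 1.9, 1.8], [1.1, 1.4, 1.2], [1.7, 2.6, 2.9]))
```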

  13. Experimental and simulation flow rate analysis of the 3/2 directional pneumatic valve

    NASA Astrophysics Data System (ADS)

    Blasiak, Slawomir; Takosoglu, Jakub E.; Laski, Pawel A.; Pietrala, Dawid S.; Zwierzchowski, Jaroslaw; Bracha, Gabriel; Nowakowski, Lukasz; Blasiak, Malgorzata

    The work comprises a comparative analysis of two test methods. The first, numerical, method consists in determining the flow characteristics with the use of ANSYS CFX. A 3/2 poppet directional valve was modeled for this purpose in the 3D CAD software SolidWorks. Based on the solid model that was developed, simulation studies of the air flow through the directional valve were conducted in the computational fluid dynamics software ANSYS CFX. The second, experimental, method entailed conducting tests on a specially constructed test stand. Comparison of the results obtained with both methods made it possible to determine their cross-correlation. The high agreement of the results confirms the usefulness of the numerical procedures; thus, they might serve to determine the flow characteristics of directional valves as an alternative to costly and time-consuming test stand measurements.

  14. Determination of forces in a magnetic bearing actuator - Numerical computation with comparison to experiment

    NASA Technical Reports Server (NTRS)

    Knight, J. D.; Xia, Z.; Mccaul, E.; Hacker, H., Jr.

    1992-01-01

    Calculations of the forces exerted on a journal by a magnetic bearing actuator are presented, along with comparisons to experimentally measured forces. The calculations are based on two-dimensional solutions for the flux distribution in the metal parts and free space, using finite but constant permeability in the metals. Above a relative permeability of 10,000 the effects of changes in permeability are negligible, but below 10,000 decreases in permeability cause significant decreases in the force. The calculated forces are shown to depend on the metal permeability more strongly when the journal is displaced from its centered position. The predicted forces in the principal attractive direction are in good agreement with experiment when a relatively low value of permeability is chosen. The forces measured normal to the axis of symmetry when the journal is displaced from that axis, however, are significantly higher than predicted by theory, even with a value of relative permeability larger than 5000. These results indicate a need for further work including nonlinear permeability distributions.

  15. Computed atmospheric corrections for satellite data. [in visible and near IR spectra

    NASA Technical Reports Server (NTRS)

    Fraser, R. S.

    1975-01-01

    The corrections are presented for the visible and near infrared spectrum. The specifications of earth-atmosphere models are given. Herman's and Dave's methods of computing the four Stokes parameters are presented. The relative differences between the two sets of values are one percent. The absolute accuracy of the computations can be established only by comparisons with measured data. Suitable observations do not yet exist. Nevertheless, comparisons are made between computed and aircraft and satellite measured radiances. Particulates are the principal atmospheric variable in the window bands. They have a large effect on the radiances when the surface reflectivity is low. When the surface reflectivity exceeds 0.1, only absorbing particulates have a large effect on the reflectivity, unless the atmospheric turbidity is high. The ranges of the Multispectral Scanner responses to atmospheric effects are computed.

  16. Patch forest: a hybrid framework of random forest and patch-based segmentation

    NASA Astrophysics Data System (ADS)

    Xie, Zhongliu; Gillies, Duncan

    2016-03-01

    The development of an accurate, robust and fast segmentation algorithm has long been a research focus in medical computer vision. State-of-the-art practices often involve non-rigidly registering a target image with a set of training atlases for label propagation over the target space to perform segmentation, a.k.a. multi-atlas label propagation (MALP). In recent years, the patch-based segmentation (PBS) framework has gained wide attention due to its advantage of relaxing the strict voxel-to-voxel correspondence to a series of pair-wise patch comparisons for contextual pattern matching. Despite a high accuracy reported in many scenarios, computational efficiency has consistently been a major obstacle for both approaches. Inspired by recent work on random forest, in this paper we propose a patch forest approach, which by equipping the conventional PBS with a fast patch search engine, is able to boost segmentation speed significantly while retaining an equal level of accuracy. In addition, a fast forest training mechanism is also proposed, with the use of a dynamic grid framework to efficiently approximate data compactness computation and a 3D integral image technique for fast box feature retrieval.

  17. Ultrafast Comparison of Personal Genomes via Precomputed Genome Fingerprints.

    PubMed

    Glusman, Gustavo; Mauldin, Denise E; Hood, Leroy E; Robinson, Max

    2017-01-01

    We present an ultrafast method for comparing personal genomes. We transform the standard genome representation (lists of variants relative to a reference) into "genome fingerprints" via locality sensitive hashing. The resulting genome fingerprints can be meaningfully compared even when the input data were obtained using different sequencing technologies, processed using different pipelines, represented in different data formats and relative to different reference versions. Furthermore, genome fingerprints are robust to up to 30% missing data. Because of their reduced size, computation on the genome fingerprints is fast and requires little memory. For example, we could compute all-against-all pairwise comparisons among the 2504 genomes in the 1000 Genomes data set in 67 s at high quality (21 μs per comparison, on a single processor), and achieved a lower quality approximation in just 11 s. Efficient computation enables scaling up a variety of important genome analyses, including quantifying relatedness, recognizing duplicative sequenced genomes in a set, population reconstruction, and many others. The original genome representation cannot be reconstructed from its fingerprint, effectively decoupling genome comparison from genome interpretation; the method thus has significant implications for privacy-preserving genome analytics.
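    The published fingerprinting scheme is not reproduced here, but the general idea of reducing a variant list to a small, quickly comparable signature via locality sensitive hashing can be illustrated with a generic MinHash sketch. The variant-string format and signature size below are hypothetical.

```python
import hashlib

def minhash_fingerprint(variants, num_hashes=128):
    """Reduce a set of variant strings (e.g. 'chr1:12345:A>G') to a small
    MinHash signature. This is a generic LSH illustration, not the published
    genome-fingerprint scheme."""
    sig = []
    for i in range(num_hashes):
        # Salt the hash with the index i to emulate num_hashes hash functions.
        sig.append(min(
            int(hashlib.sha1(f"{i}:{v}".encode()).hexdigest(), 16)
            for v in variants
        ))
    return sig

def similarity(sig_a, sig_b):
    """Estimated Jaccard similarity of the underlying variant sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```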

  18. Comparison of Low-Energy Lunar Transfer Trajectories to Invariant Manifolds

    NASA Technical Reports Server (NTRS)

    Anderson, Rodney L.; Parker, Jeffrey S.

    2011-01-01

    In this study, transfer trajectories from the Earth to the Moon that encounter the Moon at various flight path angles are examined, and lunar approach trajectories are compared to the invariant manifolds of selected unstable orbits in the circular restricted three-body problem. Previous work focused on lunar impact and landing trajectories encountering the Moon normal to the surface, and this research extends the problem with different flight path angles in three dimensions. The lunar landing geometry for a range of Jacobi constants is computed, and approaches to the Moon via invariant manifolds from unstable orbits are analyzed for different energy levels.

  19. OEDIPE: a new graphical user interface for fast construction of numerical phantoms and MCNP calculations.

    PubMed

    Franck, D; de Carlan, L; Pierrat, N; Broggio, D; Lamart, S

    2007-01-01

    Although great efforts have been made to improve the physical phantoms used to calibrate in vivo measurement systems, these phantoms represent a single average counting geometry and usually contain a uniform distribution of the radionuclide over the tissue substitute. As a matter of fact, significant corrections must be made to phantom-based calibration factors in order to obtain absolute calibration efficiencies applicable to a given individual. The importance of these corrections is particularly crucial when considering in vivo measurements of low energy photons emitted by radionuclides deposited in the lung such as actinides. Thus, it was desirable to develop a method for calibrating in vivo measurement systems that is more sensitive to these types of variability. Previous works have demonstrated the possibility of such a calibration using the Monte Carlo technique. Our research programme extended such investigations to the reconstruction of numerical anthropomorphic phantoms based on personal physiological data obtained by computed tomography. New procedures based on a new graphical user interface (GUI) for development of computational phantoms for Monte Carlo calculations and data analysis are being developed to take advantage of recent progress in image-processing codes. This paper presents the principal features of this new GUI. Results of calculations and comparison with experimental data are also presented and discussed in this work.

  20. Experimental comparison of two quantum computing architectures.

    PubMed

    Linke, Norbert M; Maslov, Dmitri; Roetteler, Martin; Debnath, Shantanu; Figgatt, Caroline; Landsman, Kevin A; Wright, Kenneth; Monroe, Christopher

    2017-03-28

    We run a selection of algorithms on two state-of-the-art 5-qubit quantum computers that are based on different technology platforms. One is a publicly accessible superconducting transmon device (www.ibm.com/ibm-q) with limited connectivity, and the other is a fully connected trapped-ion system. Even though the two systems have different native quantum interactions, both can be programmed in a way that is blind to the underlying hardware, thus allowing a comparison of identical quantum algorithms between different physical systems. We show that quantum algorithms and circuits that use more connectivity clearly benefit from a better-connected system of qubits. Although the quantum systems here are not yet large enough to eclipse classical computers, this experiment exposes critical factors of scaling quantum computers, such as qubit connectivity and gate expressivity. In addition, the results suggest that codesigning particular quantum applications with the hardware itself will be paramount in successfully using quantum computers in the future.

  1. Finite element modelling of crash response of composite aerospace sub-floor structures

    NASA Astrophysics Data System (ADS)

    McCarthy, M. A.; Harte, C. G.; Wiggenraad, J. F. M.; Michielsen, A. L. P. J.; Kohlgrüber, D.; Kamoulakos, A.

    Composite energy-absorbing structures for use in aircraft are being studied within a European Commission research programme (CRASURV - Design for Crash Survivability). One of the aims of the project is to evaluate the current capabilities of crashworthiness simulation codes for composites modelling. This paper focuses on the computational analysis using explicit finite element analysis, of a number of quasi-static and dynamic tests carried out within the programme. It describes the design of the structures, the analysis techniques used, and the results of the analyses in comparison to the experimental test results. It has been found that current multi-ply shell models are capable of modelling the main energy-absorbing processes at work in such structures. However some deficiencies exist, particularly in modelling fabric composites. Developments within the finite element code are taking place as a result of this work which will enable better representation of composite fabrics.

  2. DOE unveils climate model in advance of global test

    NASA Astrophysics Data System (ADS)

    Popkin, Gabriel

    2018-05-01

    The world's growing collection of climate models has a high-profile new entry. Last week, after nearly 4 years of work, the U.S. Department of Energy (DOE) released computer code and initial results from an ambitious effort to simulate the Earth system. The new model is tailored to run on future supercomputers and designed to forecast not just how climate will change, but also how those changes might stress energy infrastructure. Results from an upcoming comparison of global models may show how well the new entrant works. But so far it is getting a mixed reception, with some questioning the need for another model and others saying the $80 million effort has yet to improve predictions of the future climate. Even the project's chief scientist, Ruby Leung of the Pacific Northwest National Laboratory in Richland, Washington, acknowledges that the model is not yet a leader.

  3. Navier-Stokes simulation of the crossflow instability in swept-wing flows

    NASA Technical Reports Server (NTRS)

    Reed, Helen L.

    1989-01-01

    The computational modeling of the transition process characteristic of flows over swept wings are described. Specifically, the crossflow instability and crossflow/T-S wave interactions are analyzed through the numerical solution of the full three-dimensional Navier-Stokes equations including unsteadiness, curvature, and sweep. This approach is chosen because of the complexity of the problem and because it appears that linear stability theory is insufficient to explain the discrepancies between different experiments and between theory and experiments. The leading edge region of a swept wing is considered in a three-dimensional spatial simulation with random disturbances as the initial conditions. The work has been closely coordinated with the experimental program of Professor William Saric, examining the same problem. Comparisons with NASA flight test data and the experiments at Arizona State University were a necessary and an important integral part of this work.

  4. SU (2) lattice gauge theory simulations on Fermi GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardoso, Nuno, E-mail: nunocardoso@cftp.ist.utl.p; Bicudo, Pedro, E-mail: bicudo@ist.utl.p

    2011-05-10

    In this work we explore the performance of CUDA in quenched lattice SU (2) simulations. CUDA, NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi architectures) are also presented. In order to obtain a high performance, the code must be optimized for the GPU architecture, i.e., an implementation that exploits the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU (2) lattice gauge configurations, for the mean plaquette, for the Polyakov Loop at finite T and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2000 configurations with APE smearing. With two Fermi GPUs we have achieved an excellent performance of 200x the speed over one CPU, in single precision, around 110 Gflops/s. We also find that, using the Fermi architecture, double precision computations for the static quark-antiquark potential are not much slower (less than 2x slower) than single precision computations.

  5. Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition

    PubMed Central

    Cadieu, Charles F.; Hong, Ha; Yamins, Daniel L. K.; Pinto, Nicolas; Ardila, Diego; Solomon, Ethan A.; Majaj, Najib J.; DiCarlo, James J.

    2014-01-01

    The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of “kernel analysis” that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds. PMID:25521294

  6. Comparisons Between NO PLIF Imaging and CFD Simulations of Mixing Flowfields for High-Speed Fuel Injectors

    NASA Technical Reports Server (NTRS)

    Drozda, Tomasz G.; Cabell, Karen F.; Ziltz, Austin R.; Hass, Neal E.; Inman, Jennifer A.; Burns, Ross A.; Bathel, Brett F.; Danehy, Paul M.

    2017-01-01

    The current work compares experimentally and computationally obtained nitric oxide (NO) planar laser induced fluorescence (PLIF) images of the mixing flowfields for three types of high-speed fuel injectors: a strut, a ramp, and a rectangular flushwall. These injection devices, which exhibited promising mixing performance at lower flight Mach numbers, are currently being studied as a part of the Enhanced Injection and Mixing Project (EIMP) at the NASA Langley Research Center. The EIMP aims to investigate scramjet fuel injection and mixing physics, and improve the understanding of underlying physical processes relevant to flight Mach numbers greater than eight. In the experiments, conducted in the NASA Langley Arc-Heated Scramjet Test Facility (AHSTF), the injectors are placed downstream of a Mach 6 facility nozzle, which simulates the high Mach number air flow at the entrance of a scramjet combustor. Helium is used as an inert substitute for hydrogen fuel. Both schlieren and PLIF techniques are applied to obtain mixing flowfield flow visualizations. The experimental PLIF is obtained by using a UV laser sheet to interrogate a plane of the flow by exciting fluorescence from the NO molecules, which are present in the AHSTF air. Consequently, the absence of signal in the resulting PLIF images is an indication of pure helium (fuel). The computational PLIF is obtained by applying a fluorescence model for NO to the results of the Reynolds-averaged simulations (RAS) of the mixing flow field carried out using the VULCAN-CFD solver. This approach is required because the PLIF signal is a nonlinear function of not only NO concentration, but also pressure, temperature, and the flow velocity. This complexity allows additional flow features to be identified and compared with those obtained from the computational fluid dynamics (CFD) simulations, however, such comparisons are only semiquantitative. Three-dimensional image reconstruction, similar to that used in magnetic resonance imaging, is also used to obtain images in the streamwise and spanwise planes from select cross-stream PLIF plane data. Synthetic schlieren is also computed from the RAS data. Good agreement between the experimental and computational results provides increased confidence in the CFD simulations for investigations of injector performance.

  7. A quantitative assessment of the Hadoop framework for analyzing massively parallel DNA sequencing data.

    PubMed

    Siretskiy, Alexey; Sundqvist, Tore; Voznesenskiy, Mikhail; Spjuth, Ola

    2015-01-01

    New high-throughput technologies, such as massively parallel sequencing, have transformed the life sciences into a data-intensive field. The most common e-infrastructure for analyzing this data consists of batch systems that are based on high-performance computing resources; however, the bioinformatics software that is built on this platform does not scale well in the general case. Recently, the Hadoop platform has emerged as an interesting option to address the challenges of increasingly large datasets with distributed storage, distributed processing, built-in data locality, fault tolerance, and an appealing programming methodology. In this work we introduce metrics and report on a quantitative comparison between Hadoop and a single node of conventional high-performance computing resources for the tasks of short read mapping and variant calling. We calculate efficiency as a function of data size and observe that the Hadoop platform is more efficient for biologically relevant data sizes in terms of computing hours for both split and un-split data files. We also quantify the advantages of the data locality provided by Hadoop for NGS problems, and show that a classical architecture with network-attached storage will not scale when computing resources increase in numbers. Measurements were performed using ten datasets of different sizes, up to 100 gigabases, using the pipeline implemented in Crossbow. To make a fair comparison, we implemented an improved preprocessor for Hadoop with better performance for splittable data files. For improved usability, we implemented a graphical user interface for Crossbow in a private cloud environment using the CloudGene platform. All of the code and data in this study are freely available as open source in public repositories. From our experiments we can conclude that the improved Hadoop pipeline scales better than the same pipeline on high-performance computing resources; we also conclude that Hadoop is an economically viable option for the common data sizes that are currently used in massively parallel sequencing. Given that datasets are expected to increase over time, Hadoop is a framework that we envision will have an increasingly important role in future biological data analysis.

  8. Analysis of a Multiprocessor Guidance Computer. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Maltach, E. G.

    1969-01-01

    The design of the next generation of spaceborne digital computers is described, and a possible multiprocessor computer configuration is analyzed. For the analysis, a set of representative space computing tasks was abstracted from the Apollo program's Lunar Module Guidance Computer programs as executed during the lunar landing. This computer performs about 24 concurrent functions, with iteration rates from 10 times per second to once every two seconds. These jobs were tabulated in a machine-independent form, and statistics of the overall job set were obtained. It was concluded, based on a comparison of simulation and Markov results, that the Markov process analysis is accurate in predicting overall trends and in configuration comparisons, but does not provide useful detailed information in specific situations. Using both types of analysis, it was determined that the job scheduling function is a critical one for the efficiency of the multiprocessor. It is recommended that research into the area of automatic job scheduling be performed.

  9. Understanding refraction contrast using a comparison of absorption and refraction computed tomographic techniques

    NASA Astrophysics Data System (ADS)

    Wiebe, S.; Rhoades, G.; Wei, Z.; Rosenberg, A.; Belev, G.; Chapman, D.

    2013-05-01

    Refraction x-ray contrast is an imaging modality used primarily in a research setting at synchrotron facilities, which have a biomedical imaging research program. The most common method for exploiting refraction contrast is a technique called Diffraction Enhanced Imaging (DEI). The DEI apparatus allows the detection of refraction between two materials and produces a unique 'edge enhanced' contrast appearance, very different from the traditional absorption x-ray imaging used in clinical radiology. In this paper we aim to explain the features of x-ray refraction contrast in terms a typical clinical radiologist would understand. We then discuss what needs to be considered in the interpretation of the refraction image. Finally, we discuss the limitations of planar refraction imaging and the potential of DEI Computed Tomography. This is an original work that has not been submitted to any other source for publication. The authors have no commercial interests or conflicts of interest to disclose.

  10. Classical problems in computational aero-acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    Given the problems expected in the development of computational aeroacoustics (CAA), the preliminary applications were to classical problems for which known analytical solutions could be used to validate the numerical results. Such comparisons were used to overcome the numerical problems inherent in these calculations. Comparisons were made between the various numerical approaches to the problems, such as direct simulations, acoustic analogies and acoustic/viscous splitting techniques. The aim was to demonstrate the applicability of CAA as a tool in the same class as computational fluid dynamics. The scattering problems that occur are considered and simple sources are discussed.

  11. Breaking the computational barriers of pairwise genome comparison.

    PubMed

    Torreno, Oscar; Trelles, Oswaldo

    2015-08-11

    Conventional pairwise sequence comparison software algorithms are being used to process much larger datasets than they were originally designed for. This can result in processing bottlenecks that limit software capabilities or prevent full use of the available hardware resources. Overcoming the barriers that limit the efficient computational analysis of large biological sequence datasets by retrofitting existing algorithms or by creating new applications represents a major challenge for the bioinformatics community. We have developed C libraries for pairwise sequence comparison within diverse architectures, ranging from commodity systems to high performance and cloud computing environments. Exhaustive tests were performed using different datasets of closely- and distantly-related sequences that span from small viral genomes to large mammalian chromosomes. The tests demonstrated that our solution is capable of generating high quality results with a linear-time response and controlled memory consumption, being comparable or faster than the current state-of-the-art methods. We have addressed the problem of pairwise and all-versus-all comparison of large sequences in general, greatly increasing the limits on input data size. The approach described here is based on a modular out-of-core strategy that uses secondary storage to avoid reaching memory limits during the identification of High-scoring Segment Pairs (HSPs) between the sequences under comparison. Software engineering concepts were applied to avoid intermediate result re-calculation, to minimise the performance impact of input/output (I/O) operations and to modularise the process, thus enhancing application flexibility and extendibility. Our computationally-efficient approach allows tasks such as the massive comparison of complete genomes, evolutionary event detection, the identification of conserved synteny blocks and inter-genome distance calculations to be performed more effectively.
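
    The authors' C libraries are not reproduced here; the sketch below only illustrates the out-of-core idea under stated assumptions (plain-text sequence files, a fixed seed word length, and seed hits written straight to secondary storage rather than held in memory). File names and parameters are hypothetical.

      # Minimal out-of-core sketch (not the authors' code): stream fixed-size chunks of two
      # large sequences from disk, index only the current chunk in memory, and append
      # candidate match seeds to a file instead of accumulating them in RAM.

      CHUNK = 1_000_000   # bases per chunk; chosen arbitrarily for illustration
      WORD = 12           # seed word length, also illustrative

      def chunks(path, size=CHUNK):
          with open(path) as fh:                   # file assumed to contain raw sequence text
              while True:
                  block = fh.read(size)
                  if not block:
                      break
                  yield block

      def index_words(chunk, word=WORD):
          idx = {}
          for i in range(len(chunk) - word + 1):
              idx.setdefault(chunk[i:i + word], []).append(i)
          return idx

      def compare(seq_x="genomeX.txt", seq_y="genomeY.txt", out="hits.tsv"):
          with open(out, "w") as hits:
              for xi, cx in enumerate(chunks(seq_x)):
                  idx = index_words(cx)            # in-memory index of one chunk only
                  for yi, cy in enumerate(chunks(seq_y)):
                      for j in range(len(cy) - WORD + 1):
                          for i in idx.get(cy[j:j + WORD], ()):
                              # write the seed to disk; HSP extension would be a later pass
                              hits.write(f"{xi * CHUNK + i}\t{yi * CHUNK + j}\n")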

  12. Deaf individuals who work with computers present a high level of visual attention

    PubMed Central

    Ribeiro, Paula Vieira; Ribas, Valdenilson Ribeiro; Ribas, Renata de Melo Guerra; de Melo, Teresinha de Jesus Oliveira Guimarães; Marinho, Carlos Antonio de Sá; Silva, Kátia Karina do Monte; de Albuquerque, Elizabete Elias; Ribas, Valéria Ribeiro; de Lima, Renata Mirelly Silva; Santos, Tuthcha Sandrelle Botelho Tavares

    2011-01-01

    Some studies in the literature indicate that deaf individuals seem to develop a higher level of attention and concentration during the process of constructing different ways of communicating. Objective The aim of this study was to evaluate the level of attention in individuals deaf from birth that worked with computers. Methods A total of 161 individuals in the 18-25 age group were assessed. Of these, 40 were congenitally deaf individuals who worked with computers; 42 were deaf individuals who did not work with, did not know how to use, and did not use computers (Control 1); 39 were individuals with normal hearing who did not work with, did not know how to use, and did not use computers (Control 2); and 40 were individuals with normal hearing who worked with computers (Control 3). Results The group of subjects deaf from birth that worked with computers (IDWC) presented a higher level of focused attention, sustained attention, mental manipulation capacity and resistance to interference compared to the control groups. Conclusion This study highlights the relevance of sensory experience to cognitive processing. PMID:29213734

  13. Stirling Analysis Comparison of Commercial vs. High-Order Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2007-01-01

    Recently, three-dimensional Stirling engine simulations have been accomplished utilizing commercial Computational Fluid Dynamics software. The validations reported can be somewhat inconclusive due to the lack of precise time accurate experimental results from engines, export control/proprietary concerns, and the lack of variation in the methods utilized. The last issue may be addressed by solving the same flow problem with alternate methods. In this work, a comprehensive examination of the methods utilized in the commercial codes is compared with more recently developed high-order methods. Specifically, Lele's compact scheme and Dyson's Ultra Hi-Fi method will be compared with the SIMPLE and PISO methods currently employed in CFD-ACE, FLUENT, CFX, and STAR-CD (all commercial codes which can in theory solve a three-dimensional Stirling model although sliding interfaces and their moving grids limit the effective time accuracy). We will initially look at one-dimensional flows since the current standard practice is to design and optimize Stirling engines with empirically corrected friction and heat transfer coefficients in an overall one-dimensional model. This comparison provides an idea of the range in which commercial CFD software for modeling Stirling engines may be expected to provide accurate results. In addition, this work provides a framework for improving current one-dimensional analysis codes.
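
    For readers unfamiliar with compact schemes, a standard member of the Padé family analysed by Lele is the fourth-order tridiagonal approximation to the first derivative shown below; this is a textbook example, not necessarily the exact variant used in this work.

      \frac{1}{4}\, f'_{i-1} + f'_i + \frac{1}{4}\, f'_{i+1} \;=\; \frac{3}{2}\,\frac{f_{i+1} - f_{i-1}}{2h}

    Each derivative value is coupled to its neighbours, so a tridiagonal solve replaces an explicit stencil of the same nominal order and gives markedly better resolution of high-wavenumber content.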

  14. Stirling Analysis Comparison of Commercial Versus High-Order Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2005-01-01

    Recently, three-dimensional Stirling engine simulations have been accomplished utilizing commercial Computational Fluid Dynamics software. The validations reported can be somewhat inconclusive due to the lack of precise time accurate experimental results from engines, export control/proprietary concerns, and the lack of variation in the methods utilized. The last issue may be addressed by solving the same flow problem with alternate methods. In this work, a comprehensive examination of the methods utilized in the commercial codes is compared with more recently developed high-order methods. Specifically, Lele's compact scheme and Dyson's Ultra Hi-Fi method will be compared with the SIMPLE and PISO methods currently employed in CFD-ACE, FLUENT, CFX, and STAR-CD (all commercial codes which can in theory solve a three-dimensional Stirling model although sliding interfaces and their moving grids limit the effective time accuracy). We will initially look at one-dimensional flows since the current standard practice is to design and optimize Stirling engines with empirically corrected friction and heat transfer coefficients in an overall one-dimensional model. This comparison provides an idea of the range in which commercial CFD software for modeling Stirling engines may be expected to provide accurate results. In addition, this work provides a framework for improving current one-dimensional analysis codes.

  15. Characterizing a four-qubit planar lattice for arbitrary error detection

    NASA Astrophysics Data System (ADS)

    Chow, Jerry M.; Srinivasan, Srikanth J.; Magesan, Easwar; Córcoles, A. D.; Abraham, David W.; Gambetta, Jay M.; Steffen, Matthias

    2015-05-01

    Quantum error correction will be a necessary component towards realizing scalable quantum computers with physical qubits. Theoretically, it is possible to perform arbitrarily long computations if the error rate is below a threshold value. The two-dimensional surface code permits relatively high fault-tolerant thresholds at the ~1% level, and only requires a latticed network of qubits with nearest-neighbor interactions. Superconducting qubits have continued to steadily improve in coherence, gate, and readout fidelities, to become a leading candidate for implementation into larger quantum networks. Here we describe characterization experiments and calibration of a system of four superconducting qubits arranged in a planar lattice, amenable to the surface code. Insights into the particular qubit design and comparison between simulated parameters and experimentally determined parameters are given. Single- and two-qubit gate tune-up procedures are described and results for simultaneously benchmarking pairs of two-qubit gates are given. All controls are eventually used for an arbitrary error detection protocol described in separate work [Corcoles et al., Nature Communications, 6, 2015].

  16. A biorobotic model of the human larynx.

    PubMed

    Manti, M; Cianchetti, M; Nacci, A; Ursino, F; Laschi, C

    2015-08-01

    This work focuses on a physical model of the human larynx that replicates its main components and functions. The prototype reproduces the multilayer vocal folds and the ab/adduction movements. In particular, the vocal fold prototype is made with soft materials whose mechanical properties were tuned to be similar to those of the natural tissue in terms of viscoelasticity. A computational model was used to study the fluid-structure interaction between the vocal folds and the airflow. This tool allowed us to make a comparison between theoretical and experimental results. Measurements were performed with this prototype in an experimental platform comprising a controlled air flow, pressure sensors and a high-speed camera for measuring vocal fold vibrations. Data included the oscillation frequency at the onset pressure and the glottal width. Results show that the combination of vocal fold geometry, mechanical properties and dimensions exhibits an oscillation frequency close to that of the human vocal fold. Moreover, the computational results show a high correlation with the experimental ones.

  17. Composition of web services using Markov decision processes and dynamic programming.

    PubMed

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows that a WSC problem involving a set of 100,000 individual Web services, in which a valid composition requires the selection of 1,000 services from the available set, can be solved in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
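
    As a point of reference for the dynamic-programming methods named above, the following is a generic textbook value-iteration sketch on a toy MDP; the states, actions, transitions, rewards and discount factor are illustrative assumptions, not the authors' WSC formulation.

      # Generic value iteration on a toy MDP (illustrative, not the paper's model).
      def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
          """P[s][a] is a list of (prob, next_state); R[s][a] is the expected reward."""
          V = {s: 0.0 for s in states}
          while True:
              delta = 0.0
              for s in states:
                  best = max(
                      R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                      for a in actions[s]
                  )
                  delta = max(delta, abs(best - V[s]))
                  V[s] = best
              if delta < tol:
                  return V

      # Tiny hypothetical example with two states and two actions.
      states = ["s0", "s1"]
      actions = {"s0": ["a", "b"], "s1": ["a"]}
      P = {"s0": {"a": [(1.0, "s1")], "b": [(1.0, "s0")]},
           "s1": {"a": [(1.0, "s1")]}}
      R = {"s0": {"a": 5.0, "b": 1.0}, "s1": {"a": 0.0}}
      print(value_iteration(states, actions, P, R))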

  18. A maximum entropy reconstruction technique for tomographic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Bilsky, A. V.; Lozhkin, V. A.; Markovich, D. M.; Tokarev, M. P.

    2013-04-01

    This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents a theoretical computational performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and the validation with synthetic images demonstrate a significant reduction in computational time. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART.
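
    MENT itself is not reproduced here; as context for the comparison, the sketch below illustrates the multiplicative update used by the MART/SMART family of algebraic reconstruction techniques, with an assumed weight-matrix layout and relaxation factor.

      # Illustrative MART-style multiplicative update (the family MENT is compared against).
      # W is a (pixels x voxels) weighting matrix, I the recorded pixel intensities,
      # mu a relaxation factor; none of these values come from the paper.
      import numpy as np

      def mart(W, I, n_iter=20, mu=1.0):
          E = np.ones(W.shape[1])                 # initial voxel intensities
          for _ in range(n_iter):
              for i in range(W.shape[0]):         # loop over camera pixels
                  proj = W[i] @ E                 # current projection of voxels onto pixel i
                  if proj <= 0:
                      continue
                  if I[i] > 0:
                      E *= (I[i] / proj) ** (mu * W[i])   # multiplicative correction
                  else:
                      E[W[i] > 0] = 0.0           # dark pixel: no particle along its line of sight
          return E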

  19. Accelerating Multiple Compound Comparison Using LINGO-Based Load-Balancing Strategies on Multi-GPUs

    PubMed Central

    Lin, Chun-Yuan; Wang, Chung-Hung; Hung, Che-Lun; Lin, Yu-Shiang

    2015-01-01

    Compound comparison is an important task in computational chemistry. From the comparison results, potential inhibitors can be found and then used in pharmacological experiments. The time complexity of a pairwise compound comparison is O(n²), where n is the maximal length of the compounds. In general, the length of compounds is tens to hundreds, and the computation time is small. However, more and more compounds have been synthesized and extracted, now numbering in the tens of millions. Therefore, comparing a large number of compounds (known as the multiple compound comparison problem, abbreviated MCC) is still time-consuming. The intrinsic time complexity of the MCC problem is O(k²n²) for k compounds of maximal length n. In this paper, we propose a GPU-based algorithm for the MCC problem, called CUDA-MCC, on single- and multi-GPUs. Four LINGO-based load-balancing strategies are considered in CUDA-MCC in order to accelerate the computation speed among thread blocks on GPUs. CUDA-MCC was implemented in C+OpenMP+CUDA. In the experimental results, CUDA-MCC ran 45 times and 391 times faster than its CPU version on a single NVIDIA Tesla K20m GPU card and a dual-NVIDIA Tesla K20m GPU card, respectively. PMID:26491652
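
    The CUDA kernels are not given in the abstract; the sketch below is only a CPU reference of LINGO-style compound comparison, in which each SMILES string is decomposed into overlapping q-character substrings and each pair is scored with a Tanimoto-like coefficient over those substring multisets. The q value and example compounds are illustrative.

      # CPU reference sketch of LINGO-style compound comparison (not the CUDA-MCC kernel).
      from collections import Counter
      from itertools import combinations

      def lingos(smiles, q=4):
          # multiset of overlapping q-character substrings of the SMILES string
          return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

      def tanimoto(a, b):
          shared = sum((a & b).values())
          return shared / (sum(a.values()) + sum(b.values()) - shared)

      def all_pairs(compounds):
          # O(k^2) pairs, each costing roughly O(n^2) -- the MCC cost the paper parallelises
          profiles = {name: lingos(s) for name, s in compounds.items()}
          return {(x, y): round(tanimoto(profiles[x], profiles[y]), 3)
                  for x, y in combinations(profiles, 2)}

      print(all_pairs({
          "aspirin":   "CC(=O)OC1=CC=CC=C1C(=O)O",
          "caffeine":  "CN1C=NC2=C1C(=O)N(C(=O)N2C)C",
          "ibuprofen": "CC(C)CC1=CC=C(C=C1)C(C)C(=O)O",
      }))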

  20. Accelerating Multiple Compound Comparison Using LINGO-Based Load-Balancing Strategies on Multi-GPUs.

    PubMed

    Lin, Chun-Yuan; Wang, Chung-Hung; Hung, Che-Lun; Lin, Yu-Shiang

    2015-01-01

    Compound comparison is an important task in computational chemistry. From the comparison results, potential inhibitors can be found and then used in pharmacological experiments. The time complexity of a pairwise compound comparison is O(n²), where n is the maximal length of the compounds. In general, the length of compounds is tens to hundreds, and the computation time is small. However, more and more compounds have been synthesized and extracted, now numbering in the tens of millions. Therefore, comparing a large number of compounds (known as the multiple compound comparison problem, abbreviated MCC) is still time-consuming. The intrinsic time complexity of the MCC problem is O(k²n²) for k compounds of maximal length n. In this paper, we propose a GPU-based algorithm for the MCC problem, called CUDA-MCC, on single- and multi-GPUs. Four LINGO-based load-balancing strategies are considered in CUDA-MCC in order to accelerate the computation speed among thread blocks on GPUs. CUDA-MCC was implemented in C+OpenMP+CUDA. In the experimental results, CUDA-MCC ran 45 times and 391 times faster than its CPU version on a single NVIDIA Tesla K20m GPU card and a dual-NVIDIA Tesla K20m GPU card, respectively.

  1. Modeling of spectral signatures of littoral waters

    NASA Astrophysics Data System (ADS)

    Haltrin, Vladimir I.

    1997-12-01

    The spectral values of remotely obtained radiance reflectance coefficient (RRC) are compared with the values of RRC computed from inherent optical properties measured during the shipborne experiment near the West Florida coast. The model calculations are based on the algorithm developed at the Naval Research Laboratory at Stennis Space Center and presented here. The algorithm is based on the radiation transfer theory and uses regression relationships derived from experimental data. Overall comparison of derived and measured RRCs shows that this algorithm is suitable for processing ground truth data for the purposes of remote data calibration. The second part of this work consists of the evaluation of the predictive visibility model (PVM). The simulated three-dimensional values of optical properties are compared with the measured ones. Preliminary results of comparison are encouraging and show that the PVM can qualitatively predict the evolution of inherent optical properties in littoral waters.

  2. Aligning the unalignable: bacteriophage whole genome alignments.

    PubMed

    Bérard, Sèverine; Chateau, Annie; Pompidor, Nicolas; Guertin, Paul; Bergeron, Anne; Swenson, Krister M

    2016-01-13

    In recent years, many studies focused on the description and comparison of large sets of related bacteriophage genomes. Due to the peculiar mosaic structure of these genomes, few informative approaches for comparing whole genomes exist: dot plots diagrams give a mostly qualitative assessment of the similarity/dissimilarity between two or more genomes, and clustering techniques are used to classify genomes. Multiple alignments are conspicuously absent from this scene. Indeed, whole genome aligners interpret lack of similarity between sequences as an indication of rearrangements, insertions, or losses. This behavior makes them ill-prepared to align bacteriophage genomes, where even closely related strains can accomplish the same biological function with highly dissimilar sequences. In this paper, we propose a multiple alignment strategy that exploits functional collinearity shared by related strains of bacteriophages, and uses partial orders to capture mosaicism of sets of genomes. As classical alignments do, the computed alignments can be used to predict that genes have the same biological function, even in the absence of detectable similarity. The Alpha aligner implements these ideas in visual interactive displays, and is used to compute several examples of alignments of Staphylococcus aureus and Mycobacterium bacteriophages, involving up to 29 genomes. Using these datasets, we prove that Alpha alignments are at least as good as those computed by standard aligners. Comparison with the progressive Mauve aligner - which implements a partial order strategy, but whose alignments are linearized - shows a greatly improved interactive graphic display, while avoiding misalignments. Multiple alignments of whole bacteriophage genomes work, and will become an important conceptual and visual tool in comparative genomics of sets of related strains. A python implementation of Alpha, along with installation instructions for Ubuntu and OSX, is available on bitbucket (https://bitbucket.org/thekswenson/alpha).

  3. Fluid flow and heat transfer characteristics of an enclosure with fin as a top cover of a solar collector

    NASA Astrophysics Data System (ADS)

    Ambarita, H.; Ronowikarto, A. D.; Siregar, R. E. T.; Setyawan, E. Y.

    2018-03-01

    To reduce heat losses in a flat plate solar collector, a double-glass cover is employed. Several studies show that the heat loss from the glass cover is still very significant in comparison with other losses. Here, a double-glass cover with attached fins is proposed. In the present work, the fluid flow and heat transfer characteristics of the enclosure between the double glass covers are investigated numerically. The objective is to examine the effect of the fin on the heat transfer rate of the cover. Two-dimensional governing equations are developed. The governing equations and the boundary conditions are solved using a commercial Computational Fluid Dynamics code. The fluid flow and heat transfer characteristics are plotted, and the numerical results are compared with an empirical correlation. The results show that the presence of the fin strongly affects the fluid flow and heat transfer characteristics. The fin can reduce the heat transfer rate by up to 22.42% in comparison with a double-glass cover without fins.

  4. Secured remote health monitoring system

    PubMed Central

    Ganesh Kumar, Pugalendhi

    2017-01-01

    Wireless medical sensor networks are used in healthcare applications in which collections of biosensors connected to a human body or an emergency care unit monitor the patient's physiological vital status. The real-time medical data collected using wearable medical sensors are transmitted to a diagnostic centre. The data generated from the sensors are aggregated at this centre and transmitted further to the doctor's personal digital assistant for diagnosis. Unauthorised access to health data may lead to misuse and legal complications, while unreliable data transmission or storage may pose a life-threatening risk to patients. This Letter therefore combines a symmetric algorithm with attribute-based encryption to secure data transmission and access control for the medical sensor network. In this work, existing systems and their algorithms are compared to identify the best performance. The work also presents a graphical comparison of the encryption time, decryption time and total computation time of the existing and proposed systems. PMID:29383257
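
    The Letter's specific construction is not reproduced here; the sketch below only illustrates the symmetric half of such a hybrid scheme using the Python cryptography package, with the attribute-based encryption layer abstracted behind a placeholder function and a hypothetical access policy string.

      # Hybrid-scheme sketch: symmetric encryption of sensor readings, with the ABE
      # layer (which would protect the symmetric key) left as a placeholder.
      from cryptography.fernet import Fernet

      def protect_key_with_abe(sym_key: bytes, policy: str) -> bytes:
          # Placeholder: a real system would ABE-encrypt sym_key under `policy`
          # so that only users whose attributes satisfy the policy can recover it.
          return sym_key

      sym_key = Fernet.generate_key()
      reading = b'{"patient": "anon-017", "spo2": 97, "pulse": 72}'   # hypothetical vitals

      ciphertext = Fernet(sym_key).encrypt(reading)                   # fast symmetric encryption
      wrapped_key = protect_key_with_abe(sym_key, "role:doctor AND dept:cardiology")

      # At the doctor's PDA: recover the key via ABE, then decrypt the reading.
      plaintext = Fernet(wrapped_key).decrypt(ciphertext)
      assert plaintext == reading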

  5. Real-Time Model and Simulation Architecture for Half- and Full-Bridge Modular Multilevel Converters

    NASA Astrophysics Data System (ADS)

    Ashourloo, Mojtaba

    This work presents an equivalent model and simulation architecture for real-time electromagnetic transient analysis of either half-bridge or full-bridge modular multilevel converters (MMCs) with 400 sub-modules (SMs) per arm. The proposed CPU/FPGA-based architecture is optimized for the parallel implementation of the presented MMC model on the FPGA and benefits from a high-throughput floating-point computational engine. The developed real-time simulation architecture is capable of simulating MMCs with 400 SMs per arm at 825 nanoseconds. To address the difficulties of implementing the sorting process, a modified odd-even bubble sort is presented in this work. The comparison of results under various test scenarios reveals that the proposed real-time simulator reproduces the system responses of its corresponding off-line counterpart obtained from the PSCAD/EMTDC program.
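
    The FPGA-specific modifications are not described in the abstract; the sketch below shows the plain odd-even (brick) transposition sort that such a modified version would build on. Within each phase the compare-exchange pairs are disjoint, which is what makes the algorithm attractive for parallel hardware. The capacitor-voltage example values are invented.

      # Plain odd-even transposition sort (baseline only, not the work's modified version).
      def odd_even_sort(values):
          a = list(values)
          n = len(a)
          for phase in range(n):
              start = phase % 2        # even phase: pairs (0,1),(2,3),...; odd phase: (1,2),(3,4),...
              for i in range(start, n - 1, 2):
                  if a[i] > a[i + 1]:
                      a[i], a[i + 1] = a[i + 1], a[i]
          return a

      # e.g. ordering hypothetical sub-module capacitor voltages before selecting SMs to insert
      print(odd_even_sort([2.31, 2.27, 2.35, 2.29, 2.33]))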

  6. Modeling of anomalous electron mobility in Hall thrusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koo, Justin W.; Boyd, Iain D.

    Accurate modeling of the anomalous electron mobility is absolutely critical for successful simulation of Hall thrusters. In this work, existing computational models for the anomalous electron mobility are used to simulate the UM/AFRL P5 Hall thruster (a 5 kW laboratory model) in a two-dimensional axisymmetric hybrid particle-in-cell Monte Carlo collision code. Comparison to experimental results indicates that, while these computational models can be tuned to reproduce the correct thrust or discharge current, it is very difficult to match all integrated performance parameters (thrust, power, discharge current, etc.) simultaneously. Furthermore, multiple configurations of these computational models can produce reasonable integrated performance parameters. A semiempirical electron mobility profile is constructed from a combination of internal experimental data and modeling assumptions. This semiempirical electron mobility profile is used in the code and results in more accurate simulation of both the integrated performance parameters and the mean potential profile of the thruster. Results indicate that the anomalous electron mobility, while absolutely necessary in the near-field region, provides a substantially smaller contribution to the total electron mobility in the high Hall current region near the thruster exit plane.

  7. Affordable Emerging Computer Hardware for Neuromorphic Computing Applications

    DTIC Science & Technology

    2011-09-01

    The report examines affordable, emerging computer hardware for neuromorphic computing applications, including IBM Cell-BE technology (Sony PlayStation), and tabulates a comparison of the computing performance, communication performance, and power consumption of candidate platforms. Hardware implementations are reported to offer speedup over software [3, 4], and a processing rate of about 5 frames per second, corresponding to 5 saccades, is suggested as a practical target.

  8. Status and future of the 3D MAFIA group of codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebeling, F.; Klatt, R.; Krawzcyk, F.

    1988-12-01

    The group of fully three dimensional computer codes for solving Maxwell's equations for a wide range of applications, MAFIA, is already well established. Extensive comparisons with measurements have demonstrated the accuracy of the computations. A large number of components have been designed for accelerators, such as kicker magnets, non-cylindrical cavities, ferrite loaded cavities, vacuum chambers with slots and transitions, etc. The latest additions to the system include a new static solver that can calculate 3D magneto- and electrostatic fields, and a self consistent version of the 2D-BCI that solves the field equations and the equations of motion in parallel. Work on new eddy current modules has started, which will allow treatment of laminated and/or solid iron cores excited by low frequency currents. Based on our experience with the present releases 1 and 2, we have started a complete revision of the whole user interface and data structure, which will make the codes even more user-friendly and flexible.

  9. A comparison of non-local electron transport models relevant to inertial confinement fusion

    NASA Astrophysics Data System (ADS)

    Sherlock, Mark; Brodrick, Jonathan; Ridgers, Christopher

    2017-10-01

    We compare the reduced non-local electron transport model developed by Schurtz et al. to Vlasov-Fokker-Planck simulations. Two new test cases are considered: the propagation of a heat wave through a high density region into a lower density gas, and a 1-dimensional hohlraum ablation problem. We find the reduced model reproduces the peak heat flux well in the ablation region but significantly over-predicts the coronal preheat. The suitability of the reduced model for computing non-local transport effects other than thermal conductivity is considered by comparing the computed distribution function to the Vlasov-Fokker-Planck distribution function. It is shown that even when the reduced model reproduces the correct heat flux, the distribution function is significantly different to the Vlasov-Fokker-Planck prediction. Two simple modifications are considered which improve agreement between models in the coronal region. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  10. Comparison of ultrasonography, radiography and a single computed tomography slice for the identification of fluid within the tympanic bulla of rabbit cadavers.

    PubMed

    King, A M; Posthumus, J; Hammond, G; Sullivan, M

    2012-08-01

    Evaluation of the tympanic bulla (TB) in cases of otitis media in the rabbit can be a diagnostic challenge, although a feature often associated with the condition is the accumulation of fluid or material within the TB. Randomly selected TB from 40 rabbit cadavers were filled with a water-based, water-soluble jelly lubricant. A dorsoventral radiograph and single computed tomography (CT) slice were taken followed by an ultrasound (US) examination. Image interpretation was performed by blinded operators. The content of each TB was determined (fluid or gas) using each technique and the cadavers were frozen and sectioned for confirmation. CT was the most accurate diagnostic method, but US produced better results than radiography. Given the advantages of US over the other imaging techniques, the results suggest that further work is warranted to determine US applications in the evaluation of the rabbit TB and clinical cases of otitis media in this species. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. What Was Learned in Predicting Slender Airframe Aerodynamics with the F-16XL Aircraft

    NASA Technical Reports Server (NTRS)

    Rizzi, Arthur; Luckring, James M.

    2016-01-01

    The second Cranked-Arrow Wing Aerodynamics Project, International, coordinated project has been underway to improve high-fidelity computational-fluid-dynamics predictions of slender airframe aerodynamics. The work is focused on two flow conditions and leverages a unique flight data set obtained with the F-16XL aircraft for comparison and validation. These conditions, a low-speed high-angle-of-attack case and a transonic low-angle-of-attack case, were selected from a prior prediction campaign wherein the computational fluid dynamics failed to provide acceptable results. In revisiting these two cases, approaches for improved results include better, denser grids using more grid adaptation to local flow features as well as unsteady higher-fidelity physical modeling like hybrid Reynolds-averaged Navier-Stokes/unsteady Reynolds-averaged Navier-Stokes/large-eddy simulation methods. The work embodies predictions from multiple numerical formulations that are contributed from multiple organizations where some authors investigate other possible factors that could explain the discrepancies in agreement (e.g., effects due to deflected control surfaces during the flight tests as well as static aeroelastic deflection of the outer wing). This paper presents the synthesis of all the results and findings and draws some conclusions that lead to an improved understanding of the underlying flow physics, finally making the connections between the physics and aircraft features.

  12. Novel physical constraints on implementation of computational processes

    NASA Astrophysics Data System (ADS)

    Wolpert, David; Kolchinsky, Artemy

    Non-equilibrium statistical physics permits us to analyze computational processes, i.e., ways to drive a physical system such that its coarse-grained dynamics implements some desired map. It is now known how to implement any such desired computation without dissipating work, and what the minimal (dissipationless) work is that such a computation will require (the so-called 'generalized Landauer bound'). We consider how these analyses change if we impose realistic constraints on the computational process. First, we analyze how many degrees of freedom of the system must be controlled, in addition to the ones specifying the information-bearing degrees of freedom, in order to avoid dissipating work during a given computation, when local detailed balance holds. We analyze this issue for deterministic computations, deriving a state-space vs. speed trade-off, and use our results to motivate a measure of the complexity of a computation. Second, we consider computations that are implemented with logic circuits, in which only a small number of degrees of freedom are coupled at a time. We show that the way a computation is implemented using circuits affects its minimal work requirements, and relate these minimal work requirements to information-theoretic measures of complexity.
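
    One common statement of the generalized Landauer bound, added here only as background for the abstract's terminology, relates the minimal expected work to the drop in Shannon entropy of the computation's variable (H in bits, T the reservoir temperature):

      \langle W \rangle_{\min} \;=\; k_B T \ln 2 \,\bigl[ H(X_{\mathrm{in}}) - H(X_{\mathrm{out}}) \bigr]

    Erasing one bit (an entropy drop of 1) recovers the familiar k_B T ln 2 cost.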

  13. Summary of Computer Usage and Inventory of Computer Utilization in Curriculum. FY 1987-88.

    ERIC Educational Resources Information Center

    Tennessee Univ., Chattanooga. Center of Excellence for Computer Applications.

    This report presents the results of a computer usage survey/inventory, the ninth in a series conducted at the University of Tennessee at Chattanooga to obtain information on the changing status of computer usage in the curricula. Data analyses are reported in 11 tables, which include comparisons between annual inventories and demonstrate growth…

  14. A Program To Develop through LOGO the Computer Self-Confidence of Seventh Grade Low-Achieving Girls.

    ERIC Educational Resources Information Center

    Angell, Marion D.

    This practicum report describes the development of a program designed to improve the computer self-confidence of low-achieving seventh grade girls. The questionnaire "My Feelings Towards Computers" was used for pre- and post-comparisons. Students were introduced to the computer program LOGO, were taught to compose programs using the…

  15. Analysis of reaction cross-section production in neutron induced fission reactions on uranium isotope using computer code COMPLET.

    PubMed

    Asres, Yihunie Hibstie; Mathuthu, Manny; Birhane, Marelgn Derso

    2018-04-22

    This study provides current evidence about cross-section production processes from theoretical and experimental results for neutron-induced reactions on a uranium isotope over the projectile energy range of 1-100 MeV, in order to improve the reliability of nuclear simulation. The fission of 235U within nuclear reactors releases a large amount of energy that could help satisfy worldwide energy needs with less pollution than other sources. The main objective of this work is to describe, analyze and interpret the theoretical cross sections for neutron-induced reactions on 235U obtained from the computer code COMPLET and to compare them with experimental data obtained from EXFOR. The cross-section values of the 235U(n,2n)234U, 235U(n,3n)233U, 235U(n,γ)236U and 235U(n,f) reactions are obtained using the computer code COMPLET, and the corresponding experimental values were retrieved from EXFOR (IAEA). The theoretical results are compared with the experimental data taken from the EXFOR Data Bank. The computer code COMPLET has been used for the analysis with the same set of input parameters, and the graphs were plotted with the help of spreadsheet and Origin-8 software. The quantification of uncertainties stemming from both the experimental data and the computer code calculation plays a significant role in the final evaluated results. The calculated total cross sections were compared with the experimental data taken from EXFOR in the literature, and good agreement was found between the experimental and theoretical data. This comparison was analyzed and interpreted with tabulations and graphical descriptions, and the results are briefly discussed within the text of this research work. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Forces between permanent magnets: experiments and model

    NASA Astrophysics Data System (ADS)

    González, Manuel I.

    2017-03-01

    This work describes a very simple, low-cost experimental setup designed for measuring the force between permanent magnets. The experiment consists of placing one of the magnets on a balance, attaching the other magnet to a vertical height gauge, carefully aligning both magnets and measuring the load on the balance as a function of the gauge reading. A theoretical model is proposed to compute the force, assuming uniform magnetisation and based on laws and techniques accessible to undergraduate students. A comparison between the model and the experimental results is made, and good agreement is found at all distances investigated. In particular, it is also found that the force behaves as r⁻⁴ at large distances, as expected.
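
    The r⁻⁴ behaviour is consistent with the point-dipole limit; for two coaxial magnetic dipoles with moments m₁ and m₂ separated by a distance r, the standard textbook result (given here as background, not the paper's full uniform-magnetisation model) is

      F(r) \;=\; \frac{3\mu_0\, m_1 m_2}{2\pi\, r^4}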

  17. Structured water in polyelectrolyte dendrimers: Understanding small angle neutron scattering results through atomistic simulation

    NASA Astrophysics Data System (ADS)

    Wu, Bin; Kerkeni, Boutheïna; Egami, Takeshi; Do, Changwoo; Liu, Yun; Wang, Yongmei; Porcar, Lionel; Hong, Kunlun; Smith, Sean C.; Liu, Emily L.; Smith, Gregory S.; Chen, Wei-Ren

    2012-04-01

    Based on atomistic molecular dynamics (MD) simulations, the small angle neutron scattering (SANS) intensity behavior of a single generation-4 polyelectrolyte polyamidoamine starburst dendrimer is investigated at different levels of molecular protonation. The SANS form factor, P(Q), and Debye autocorrelation function, γ(r), are calculated from the equilibrium MD trajectory based on a mathematical approach proposed in this work. The consistency found in comparison against previously published experimental findings (W.-R. Chen, L. Porcar, Y. Liu, P. D. Butler, and L. J. Magid, Macromolecules 40, 5887 (2007)) leads to a link between the neutron scattering experiment and MD computation, and fresh perspectives. The simulations enable scattering calculations of not only the hydrocarbons but also the contribution from the scattering length density fluctuations caused by structured, confined water within the dendrimer. Based on our computational results, we explore the validity of using radius of gyration RG for microstructure characterization of a polyelectrolyte dendrimer from the scattering perspective.
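
    For reference, the form factor and the Debye autocorrelation function named above are linked, for an isotropic system, by the standard relation below (written up to normalisation; the paper's own expression may differ in detail):

      P(Q) \;\propto\; 4\pi \int_0^{\infty} \gamma(r)\,\frac{\sin (Qr)}{Qr}\, r^2 \, \mathrm{d}r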

  18. A computational framework to characterize and compare the geometry of coronary networks.

    PubMed

    Bulant, C A; Blanco, P J; Lima, T P; Assunção, A N; Liberato, G; Parga, J R; Ávila, L F R; Pereira, A C; Feijóo, R A; Lemos, P A

    2017-03-01

    This work presents a computational framework to perform a systematic and comprehensive assessment of the morphometry of coronary arteries from in vivo medical images. The methodology embraces image segmentation, arterial vessel representation, characterization and comparison, data storage, and finally analysis. Validation is performed using a sample of 48 patients. Data mining of morphometric information of several coronary arteries is presented. Results agree with medical reports in terms of basic geometric and anatomical variables. Concerning geometric descriptors, inter-artery and intra-artery correlations are studied. Data reported here can be useful for the construction and setup of blood flow models of the coronary circulation. Finally, as an application example, a similarity criterion to assess vasculature likelihood based on geometric features is presented and used to test geometric similarity among sibling patients. Results indicate that likelihood, measured through geometric descriptors, is stronger between siblings compared with non-relative patients. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Computational Fluid Dynamics simulation of hydrothermal liquefaction of microalgae in a continuous plug-flow reactor.

    PubMed

    Ranganathan, Panneerselvam; Savithri, Sivaraman

    2018-06-01

    A Computational Fluid Dynamics (CFD) technique is used in this work to simulate the hydrothermal liquefaction of Nannochloropsis sp. microalgae in a lab-scale continuous plug-flow reactor to understand the fluid dynamics, heat transfer, and reaction kinetics in an HTL reactor under hydrothermal conditions. The temperature profile in the reactor and the yield of HTL products from the present simulation are obtained and they are validated with the experimental data available in the literature. Furthermore, a parametric study is carried out to study the effect of slurry flow rate, reactor temperature, and external heat transfer coefficient on the yield of products. Though the model predictions are satisfactory in comparison with the experimental results, the model still needs to be improved for better prediction of the product yields. This improved model will serve as a baseline for the design and scale-up of large-scale HTL reactors. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Fan Noise Prediction with Applications to Aircraft System Noise Assessment

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Envia, Edmane; Burley, Casey L.

    2009-01-01

    This paper describes an assessment of current fan noise prediction tools by comparing measured and predicted sideline acoustic levels from a benchmark fan noise wind tunnel test. Specifically, an empirical method and newly developed coupled computational approach are utilized to predict aft fan noise for a benchmark test configuration. Comparisons with sideline noise measurements are performed to assess the relative merits of the two approaches. The study identifies issues entailed in coupling the source and propagation codes, as well as provides insight into the capabilities of the tools in predicting the fan noise source and subsequent propagation and radiation. In contrast to the empirical method, the new coupled computational approach provides the ability to investigate acoustic near-field effects. The potential benefits/costs of these new methods are also compared with the existing capabilities in a current aircraft noise system prediction tool. The knowledge gained in this work provides a basis for improved fan source specification in overall aircraft system noise studies.
