Sample records for computational test cases

  1. Computational Test Cases for a Rectangular Supercritical Wing Undergoing Pitching Oscillations

    NASA Technical Reports Server (NTRS)

    Bennett, Robert M.; Walker, Charlotte E.

    1999-01-01

    Proposed computational test cases have been selected from the data set for a rectangular wing of panel aspect ratio two with a twelve-percent-thick supercritical airfoil section that was tested in the NASA Langley Transonic Dynamics Tunnel. The test cases include parametric variation of static angle of attack, pitching oscillation frequency, and Mach numbers from subsonic to transonic with strong shocks. Tables and plots of the measured pressures are presented for each case. This report provides an early release of test cases that have been proposed for a document that supplements the cases presented in AGARD Report 702.

  2. Computational Test Cases for a Clipped Delta Wing with Pitching and Trailing-Edge Control Surface Oscillations

    NASA Technical Reports Server (NTRS)

    Bennett, Robert M.; Walker, Charlotte E.

    1999-01-01

    Computational test cases have been selected from the data set for a clipped delta wing with a six-percent-thick circular-arc airfoil section that was tested in the NASA Langley Transonic Dynamics Tunnel. The test cases include parametric variation of static angle of attack, pitching oscillation frequency, trailing-edge control surface oscillation frequency, and Mach numbers from subsonic to low supersonic values. Tables and plots of the measured pressures are presented for each case. This report provides an early release of test cases that have been proposed for a document that supplements the cases presented in AGARD Report 702.

  3. Model-Invariant Hybrid Computations of Separated Flows for RCA Standard Test Cases

    NASA Technical Reports Server (NTRS)

    Woodruff, Stephen

    2016-01-01

    NASA's Revolutionary Computational Aerosciences (RCA) subproject has identified several smooth-body separated flows as standard test cases to emphasize the challenge these flows present for computational methods and their importance to the aerospace community. Results of computations of two of these test cases, the NASA hump and the FAITH experiment, are presented. The computations were performed with the model-invariant hybrid LES-RANS formulation, implemented in the NASA code VULCAN-CFD. The model-invariant formulation employs gradual LES-RANS transitions and compensation for model variation to provide more accurate and efficient hybrid computations. Comparisons revealed that the LES-RANS transitions employed in these computations were sufficiently gradual that the compensating terms were unnecessary. Agreement with experiment was achieved only after reducing the turbulent viscosity to mitigate the effect of numerical dissipation. The streamwise evolution of peak Reynolds shear stress was employed as a measure of turbulence dynamics in separated flows useful for evaluating computations.

  4. The preparedness level of final year medical students for an adequate medical approach to emergency cases: computer-based medical education in emergency medicine

    PubMed Central

    2014-01-01

    Background We aimed to observe the preparedness level of final year medical students in approaching emergencies by computer-based simulation training and to evaluate the efficacy of the program. Methods A computer-based prototype simulation program (Lsim), designed by researchers from the medical education and computer science departments, was used to present virtual cases for medical learning. Fifty-four final year medical students from Ondokuz Mayis University School of Medicine attended an education program on June 20, 2012 and were trained with Lsim. Volunteer attendants completed a pre-test and post-test exam at the beginning and end of the course, respectively, on the same day. Results Twenty-nine of the 54 students who attended the course agreed to take the pre-test and post-test exams; 58.6% (n = 17) were female. In 10 emergency medical cases, an average of 3.9 correct medical approaches was performed in the pre-test and an average of 9.6 correct medical approaches was performed in the post-test (t = 17.18, P = 0.006). Conclusions This study's results showed that the readiness level of students for an adequate medical approach to emergency cases was very low. Computer-based training could help students approach various emergency cases adequately. PMID:24386919
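
    A minimal sketch of the paired pre/post comparison underlying the reported t-statistic; the per-student scores below are hypothetical stand-ins chosen to match the reported means, not the study's raw data.

    ```python
    # Paired pre/post comparison of correct-approach counts, as in the study's
    # design. Score arrays are hypothetical stand-ins, not the study data.
    import numpy as np
    from scipy import stats

    pre = np.array([3, 4, 5, 3, 4, 4, 5, 3, 4, 4])          # correct approaches out of 10 (pre-test)
    post = np.array([9, 10, 10, 9, 9, 10, 10, 9, 10, 10])   # same students, post-test

    t_stat, p_value = stats.ttest_rel(post, pre)  # paired t-test on per-student differences
    print(f"mean pre = {pre.mean():.1f}, mean post = {post.mean():.1f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
    ```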

  5. Investigation of different modeling approaches for computational fluid dynamics simulation of high-pressure rocket combustors

    NASA Astrophysics Data System (ADS)

    Ivancic, B.; Riedmann, H.; Frey, M.; Knab, O.; Karl, S.; Hannemann, K.

    2016-07-01

    The paper summarizes technical results and first highlights of the cooperation between DLR and Airbus Defence and Space (DS) within the work package "CFD Modeling of Combustion Chamber Processes," conducted in the frame of the Propulsion 2020 Project. Within the addressed work package, DLR Göttingen and Airbus DS Ottobrunn have identified several test cases for which adequate test data are available and which can be used for proper validation of computational fluid dynamics (CFD) tools. In this paper, the first test case, the Penn State chamber (RCM1), is discussed. Simulation results from three different tools show that the test case can be computed properly with steady-state Reynolds-averaged Navier-Stokes (RANS) approaches. The achieved simulation results reproduce the measured wall heat flux, an important validation parameter, very well but also reveal some inconsistencies in the test data, which are addressed in this paper.

  6. Cart3D Simulations for the Second AIAA Sonic Boom Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Anderson, George R.; Aftosmis, Michael J.; Nemec, Marian

    2017-01-01

    Simulation results are presented for all test cases prescribed in the Second AIAA Sonic Boom Prediction Workshop. For each of the four nearfield test cases, we compute pressure signatures at specified distances and off-track angles, using an inviscid, embedded-boundary Cartesian-mesh flow solver with output-based mesh adaptation. The cases range in complexity from an axisymmetric body to a full low-boom aircraft configuration with a powered nacelle. For efficiency, boom carpets are decomposed into sets of independent meshes and computed in parallel. This also facilitates the use of more effective meshing strategies - each off-track angle is computed on a mesh with good azimuthal alignment, higher aspect ratio cells, and more tailored adaptation. The nearfield signatures generally exhibit good convergence with mesh refinement. We introduce a local error estimation procedure to highlight regions of the signatures most sensitive to mesh refinement. Results are also presented for the two propagation test cases, which investigate the effects of atmospheric profiles on ground noise. Propagation is handled with an augmented Burgers' equation method (NASA's sBOOM), and ground noise metrics are computed with LCASB.

  7. Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Brown, Douglas L.

    1994-01-01

    In order to decrease the overall computational time requirements of a spatially marching parabolized Navier-Stokes finite-difference computer code when applied to turbulent fluid flow, a wall-function methodology, originally proposed by R. Barnwell, was implemented. This numerical effort increases computational speed and calculates reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is analytically determined from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar-viscous sublayer. Consequently, a substantially increased computational integration step size is achieved, resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat plate test cases from Mach 2 to Mach 8. These test cases are analytically verified employing: (1) Eckert reference method solutions, (2) experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar-viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.
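
    The wall-function idea can be illustrated with the incompressible log law: given the velocity at the first grid point, the friction velocity (and hence the wall shear stress) is found by Newton iteration instead of resolving the sublayer. This is a simplified sketch of the general approach, not Barnwell's compressible formulation; all flow values are assumed.

    ```python
    # Minimal incompressible log-law wall function: given velocity U at distance
    # y from the wall, solve the log law for friction velocity u_tau by Newton
    # iteration, then recover wall shear stress. Simplified sketch only; not the
    # compressible formulation used in the thesis.
    import math

    def friction_velocity(U, y, nu, kappa=0.41, B=5.0, tol=1e-10):
        u_tau = 0.05 * U  # initial guess
        for _ in range(50):
            y_plus = y * u_tau / nu
            f = U / u_tau - (math.log(y_plus) / kappa + B)   # log-law residual
            df = -U / u_tau**2 - 1.0 / (kappa * u_tau)       # d f / d u_tau
            step = f / df
            u_tau -= step
            if abs(step) < tol:
                break
        return u_tau

    rho, nu = 1.2, 1.5e-5   # air density [kg/m^3], kinematic viscosity [m^2/s]
    U, y = 20.0, 2e-3       # velocity [m/s] sampled at first grid point y [m]
    u_tau = friction_velocity(U, y, nu)
    tau_wall = rho * u_tau**2
    print(f"u_tau = {u_tau:.4f} m/s, tau_wall = {tau_wall:.4f} Pa, y+ = {y*u_tau/nu:.1f}")
    ```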

  8. Evolving binary classifiers through parallel computation of multiple fitness cases.

    PubMed

    Cagnoni, Stefano; Bergenti, Federico; Mordonini, Monica; Adorni, Giovanni

    2005-06-01

    This paper describes two versions of a novel approach to developing binary classifiers, based on two evolutionary computation paradigms: cellular programming and genetic programming. Such an approach achieves high computation efficiency both during evolution and at runtime. Evolution speed is optimized by allowing multiple solutions to be computed in parallel. Runtime performance is optimized explicitly using parallel computation in the case of cellular programming or implicitly taking advantage of the intrinsic parallelism of bitwise operators on standard sequential architectures in the case of genetic programming. The approach was tested on a digit recognition problem and compared with a reference classifier.

  9. A comparison of two central difference schemes for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Maksymiuk, C. M.; Swanson, R. C.; Pulliam, T. H.

    1990-01-01

    Five viscous transonic airfoil cases were computed by two significantly different computational fluid dynamics codes: an explicit finite-volume algorithm with multigrid, and an implicit finite-difference approximate-factorization method with eigenvector diagonalization. Both methods are described in detail, and their performance on the test cases is compared. The codes utilized the same grids, turbulence model, and computer to provide the truest test of the algorithms. The two approaches produce very similar results, which, for attached flows, also agree well with experimental results; however, the explicit code is considerably faster.

  10. Automated procedure for developing hybrid computer simulations of turbofan engines. Part 1: General description

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.; Krosel, S. M.; Bruton, W. M.

    1982-01-01

    A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all the calculations and data manipulations that are needed to transform user-supplied engine design information to a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design-point information. Part I contains a general discussion of the methodology, describes a test case, and presents comparisons between hybrid simulation and specified engine performance data. Part II, a companion document, contains documentation, in the form of computer printouts, for the test case.

  11. Advanced Subsonic Technology (AST) Area of Interest (AOI) 6: Develop and Validate Aeroelastic Codes for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Gardner, Kevin D.; Liu, Jong-Shang; Murthy, Durbha V.; Kruse, Marlin J.; James, Darrell

    1999-01-01

    AlliedSignal Engines, in cooperation with NASA GRC (National Aeronautics and Space Administration Glenn Research Center), completed an evaluation of recently-developed aeroelastic computer codes using test cases from the AlliedSignal Engines fan blisk and turbine databases. Test data included strain gage, performance, and steady-state pressure information obtained for conditions where synchronous or flutter vibratory conditions were found to occur. Aeroelastic codes evaluated included quasi 3-D UNSFLO (MIT Developed/AE Modified, Quasi 3-D Aeroelastic Computer Code), 2-D FREPS (NASA-Developed Forced Response Prediction System Aeroelastic Computer Code), and 3-D TURBO-AE (NASA/Mississippi State University Developed 3-D Aeroelastic Computer Code). Unsteady pressure predictions for the turbine test case were used to evaluate the forced response prediction capabilities of each of the three aeroelastic codes. Additionally, one of the fan flutter cases was evaluated using TURBO-AE. The UNSFLO and FREPS evaluation predictions showed good agreement with the experimental test data trends, but quantitative improvements are needed. UNSFLO over-predicted turbine blade response reductions, while FREPS under-predicted them. The inviscid TURBO-AE turbine analysis predicted no discernible blade response reduction, indicating the necessity of including viscous effects for this test case. For the TURBO-AE fan blisk test case, significant effort was expended getting the viscous version of the code to give converged steady flow solutions for the transonic flow conditions. Once converged, the steady solutions provided an excellent match with test data and the calibrated DAWES (AlliedSignal 3-D Viscous Steady Flow CFD Solver). However, efforts expended establishing quality steady-state solutions prevented exercising the unsteady portion of the TURBO-AE code during the present program. AlliedSignal recommends that unsteady pressure measurement data be obtained for both test cases examined for use in aeroelastic code validation.

  12. A Worst-Case Approach for On-Line Flutter Prediction

    NASA Technical Reports Server (NTRS)

    Lind, Rick C.; Brenner, Martin J.

    1998-01-01

    Worst-case flutter margins may be computed for a linear model with respect to a set of uncertainty operators using the structured singular value. This paper considers an on-line implementation to compute these robust margins in a flight test program. Uncertainty descriptions are updated at test points to account for unmodeled time-varying dynamics of the airplane by ensuring the robust model is not invalidated by measured flight data. Robust margins computed with respect to this uncertainty remain conservative to the changing dynamics throughout the flight. A simulation clearly demonstrates this method can improve the efficiency of flight testing by accurately predicting the flutter margin to improve safety while reducing the necessary flight time.
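
    A full structured-singular-value (mu) computation needs specialized robust-control tools, but the flavor of a worst-case margin can be sketched with the unstructured small-gain bound: the margin is the inverse of the peak maximum singular value of the uncertain interconnection over frequency. The 2x2 interconnection M(s) below is hypothetical.

    ```python
    # Simplified robustness-margin sketch in the spirit of mu analysis: for
    # unstructured uncertainty, the worst-case margin is 1 / sup_w sigma_max(M(jw))
    # (the small-gain bound). A true structured singular value computation needs
    # specialized tools; the interconnection M(s) here is hypothetical.
    import numpy as np

    def M(w):
        """Frequency response of a hypothetical 2x2 interconnection matrix M(jw)."""
        s = 1j * w
        return np.array([[1.0 / (s + 1.0), 2.0 / (s**2 + 0.2 * s + 4.0)],
                         [0.5 / (s + 3.0), 1.0 / (s**2 + 0.1 * s + 1.0)]])

    freqs = np.logspace(-2, 2, 2000)  # rad/s sweep
    peak = max(np.linalg.svd(M(w), compute_uv=False)[0] for w in freqs)
    print(f"peak sigma_max = {peak:.3f}, small-gain robust margin = {1.0 / peak:.3f}")
    ```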

  13. Radiant Energy Measurements from a Scaled Jet Engine Axisymmetric Exhaust Nozzle for a Baseline Code Validation Case

    NASA Technical Reports Server (NTRS)

    Baumeister, Joseph F.

    1994-01-01

    A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.

  14. Can computer-aided diagnosis (CAD) help radiologists find mammographically missed screening cancers?

    NASA Astrophysics Data System (ADS)

    Nishikawa, Robert M.; Giger, Maryellen L.; Schmidt, Robert A.; Papaioannou, John

    2001-06-01

    We present data from a pilot observer study whose goal is to design a study to test the hypothesis that computer-aided diagnosis (CAD) can improve radiologists' performance in reading screening mammograms. In a prospective evaluation of our computer detection schemes, we have analyzed over 12,000 clinical exams. Retrospective review of the negative screening mammograms for all cancer cases found an indication of the cancer in 23 of these negative cases. The computer found 54% of these in our prospective testing. We added normal exams to these cases to create a dataset of 75 cases. Four radiologists experienced in mammography read the cases and gave their BI-RADS assessment and their confidence that the patient should be called back for diagnostic mammography. They did so once reading the films only and a second time reading with the computer aid. Three radiologists had no change in area under the ROC curve (mean Az of 0.73) and one improved from 0.73 to 0.78, but this difference failed to reach statistical significance (P = 0.23). These data are being used to plan a larger, more powerful study.
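
    The reader-performance figure here is the area under the ROC curve (Az). A minimal sketch of the empirical version of that computation, with hypothetical labels and reader confidence ratings (the study itself would typically fit a binormal Az):

    ```python
    # Empirical area under the ROC curve from reader confidence ratings.
    # Labels and ratings below are hypothetical, not the study data.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])             # 1 = cancer case
    ratings = np.array([80, 65, 40, 90, 30, 55, 20, 35, 45, 25])  # call-back confidence, 0-100

    az = roc_auc_score(y_true, ratings)
    print(f"empirical Az = {az:.2f}")
    ```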

  15. Computer Adaptive Testing, Big Data and Algorithmic Approaches to Education

    ERIC Educational Resources Information Center

    Thompson, Greg

    2017-01-01

    This article critically considers the promise of computer adaptive testing (CAT) and digital data to provide better and quicker data that will improve the quality, efficiency and effectiveness of schooling. In particular, it uses the case of the Australian NAPLAN test that will become an online, adaptive test from 2016. The article argues that…

  16. Enhanced cephalomedullary nail lag screw placement and intraoperative tip-apex distance measurement with a novel computer assisted surgery system.

    PubMed

    Kuhl, Mitchell; Beimel, Claudia

    2016-10-01

    The goal of this study was to evaluate the ability of a novel computer assisted surgery system to guide ideal placement of a lag screw during cephalomedullary nailing and then to accurately measure the tip-apex distance (TAD) intraoperatively. Retrospective case review. Level II trauma hospital. The initial 98 consecutive clinical cases treated with a cephalomedullary nail in conjunction with a novel computer assisted surgery system were retrospectively reviewed. A novel computer assisted surgery system was utilized to enhance lag screw placement during cephalomedullary nailing procedures. The computer assisted surgery system calculates the TAD intraoperatively after final lag screw placement. The ideal TAD was considered to be within a range of 5 mm to 20 mm. The ability of the computer assisted surgery system (CASS) to assist in placement of a lag screw within the ideal TAD was evaluated. Intraoperative TAD measurements provided by the computer assisted surgery system were then compared to standard postoperative TAD measurements on PACS (picture archiving and communication system) images to determine whether these measurements are equivalent. 79 cases (80.6%) were available with complete information for a retrospective review. All cases had CASS TAD and PACS TAD measurements >5 mm and <20 mm. In addition, no significant difference could be detected between the intraoperative CASS TAD and the postoperative PACS TAD (p = 0.374, Wilcoxon test; p = 0.174, paired t-test). A cut-out rate of 0% was observed in all patients who were treated with CASS in this case series (95% CI: 0-3.01%). The novel computer assisted surgery system tested here is an effective and reliable adjunct that can be utilized for optimal lag screw placement in cephalomedullary nailing procedures. The computer assisted surgery system provides an accurate intraoperative TAD measurement that is equivalent to the standard postoperative measurement utilizing PACS images. Therapeutic Level IV. Copyright © 2016 Elsevier Ltd. All rights reserved.
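
    The TAD itself is a simple magnification-corrected sum (Baumgaertner's definition): the tip-to-apex offsets on the AP and lateral radiographs are each scaled by the ratio of the true to the measured lag-screw diameter. A sketch with hypothetical measurements:

    ```python
    # Tip-apex distance (TAD): sum of the tip-to-apex distances on the AP and
    # lateral images, each scaled by true/measured lag-screw diameter to correct
    # for radiographic magnification. Measurements below are hypothetical.
    def tip_apex_distance(x_ap_mm, x_lat_mm, true_screw_diam_mm,
                          meas_diam_ap_mm, meas_diam_lat_mm):
        ap = x_ap_mm * (true_screw_diam_mm / meas_diam_ap_mm)     # corrected AP term
        lat = x_lat_mm * (true_screw_diam_mm / meas_diam_lat_mm)  # corrected lateral term
        return ap + lat

    tad = tip_apex_distance(x_ap_mm=12.0, x_lat_mm=10.5,
                            true_screw_diam_mm=10.5,
                            meas_diam_ap_mm=12.6, meas_diam_lat_mm=12.0)
    print(f"TAD = {tad:.1f} mm (ideal range in the study: 5-20 mm)")
    ```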

  17. Computer program for Stirling engine performance calculations

    NASA Technical Reports Server (NTRS)

    Tew, R. C., Jr.

    1983-01-01

    The thermodynamic characteristics of the Stirling engine were analyzed and modeled on a computer to support its development as a possible alternative to the automobile spark ignition engine. The computer model is documented. The documentation includes a user's manual, symbols list, a test case, comparison of model predictions with test results, and a description of the analytical equations used in the model.

  18. 43 CFR 11.64 - Injury determination phase-testing and sampling methods.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...

  19. 43 CFR 11.64 - Injury determination phase-testing and sampling methods.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...

  20. 43 CFR 11.64 - Injury determination phase-testing and sampling methods.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...

  1. Effectiveness of a Case-Based Computer Program on Students' Ethical Decision Making.

    PubMed

    Park, Eun-Jun; Park, Mihyun

    2015-11-01

    The aim of this study was to test the effectiveness of a case-based computer program, using an integrative ethical decision-making model, on the ethical decision-making competency of nursing students in South Korea. This study used a pre- and posttest comparison design. Students in the intervention group used a computer program for case analysis assignments, whereas students in the standard group used a traditional paper assignment for case analysis. The findings showed that using the case-based computer program as a complementary tool for the ethics courses offered at the university enhanced students' ethical preparedness and satisfaction with the course. On the basis of the findings, it is recommended that nurse educators use a case-based computer program as a complementary self-study tool in ethics courses to supplement student learning without an increase in course hours, particularly in terms of analyzing ethics cases with dilemma scenarios and exercising ethical decision making. Copyright 2015, SLACK Incorporated.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roper, J; Bradshaw, B; Godette, K

    Purpose: To create a knowledge-based algorithm for prostate LDR brachytherapy treatment planning that standardizes plan quality using seed arrangements tailored to individual physician preferences while being fast enough for real-time planning. Methods: A dataset of 130 prior cases was compiled for a physician with an active prostate seed implant practice. Ten cases were randomly selected to test the algorithm. Contours from the 120 library cases were registered to a common reference frame. Contour variations were characterized on a point by point basis using principal component analysis (PCA). A test case was converted to PCA vectors using the same process and then compared with each library case using a Mahalanobis distance to evaluate similarity. Rank order PCA scores were used to select the best-matched library case. The seed arrangement was extracted from the best-matched case and used as a starting point for planning the test case. Computational time was recorded. Any subsequent modifications were recorded that required input from a treatment planner to achieve an acceptable plan. Results: The computational time required to register contours from a test case and evaluate PCA similarity across the library was approximately 10 s. Five of the ten test cases did not require any seed additions, deletions, or moves to obtain an acceptable plan. The remaining five test cases required on average 4.2 seed modifications. The time to complete manual plan modifications was less than 30 s in all cases. Conclusion: A knowledge-based treatment planning algorithm was developed for prostate LDR brachytherapy based on principal component analysis. Initial results suggest that this approach can be used to quickly create treatment plans that require few if any modifications by the treatment planner. In general, test case plans have seed arrangements which are very similar to prior cases, and thus are inherently tailored to physician preferences.
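
    A minimal sketch of the matching step described above, assuming contours have already been registered and flattened into fixed-length coordinate vectors: PCA over the library, projection of the test case, and Mahalanobis-distance ranking in PCA space. The contour data here are random stand-ins.

    ```python
    # Sketch of the described matching step: PCA over registered library
    # contours, project the test case, rank library cases by Mahalanobis
    # distance in PCA space, return the best match. Data are random stand-ins.
    import numpy as np

    rng = np.random.default_rng(0)
    library = rng.normal(size=(120, 300))   # 120 cases x 300 contour coordinates (registered)
    test = rng.normal(size=300)             # one test case, same registration

    # PCA via SVD of the mean-centered library.
    mean = library.mean(axis=0)
    U, s, Vt = np.linalg.svd(library - mean, full_matrices=False)
    k = 10                                  # retained principal components
    components, variances = Vt[:k], (s[:k] ** 2) / (len(library) - 1)

    lib_scores = (library - mean) @ components.T   # library PCA scores
    test_score = (test - mean) @ components.T      # test-case PCA scores

    # In PCA space the components are uncorrelated, so the covariance is
    # diagonal and the Mahalanobis distance reduces to a scaled Euclidean norm.
    d = np.sqrt((((lib_scores - test_score) ** 2) / variances).sum(axis=1))
    best = int(np.argmin(d))
    print(f"best-matched library case: {best}, distance = {d[best]:.2f}")
    ```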

  3. Application of Metamorphic Testing to Supervised Classifiers

    PubMed Central

    Xie, Xiaoyuan; Ho, Joshua; Kaiser, Gail; Xu, Baowen; Chen, Tsong Yueh

    2010-01-01

    Many applications in the field of scientific computing - such as computational biology, computational linguistics, and others - depend on Machine Learning algorithms to provide important core functionality to support solutions in the particular problem domains. However, it is difficult to test such applications because often there is no “test oracle” to indicate what the correct output should be for arbitrary input. To help address the quality of such software, in this paper we present a technique for testing the implementations of supervised machine learning classification algorithms on which such scientific computing software depends. Our technique is based on an approach called “metamorphic testing”, which has been shown to be effective in such cases. More importantly, we demonstrate that our technique not only serves the purpose of verification, but also can be applied in validation. In addition to presenting our technique, we describe a case study we performed on a real-world machine learning application framework, and discuss how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. We also discuss how our findings can be of use to other areas outside scientific computing, as well. PMID:21243103
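
    The core trick of metamorphic testing can be shown in a few lines: with no exact oracle, one checks necessary relations instead. A sketch with two illustrative relations for a k-nearest-neighbors classifier (synthetic data; these are illustrative relations, not the paper's specific set):

    ```python
    # Minimal metamorphic test for a supervised classifier: with no exact test
    # oracle, check necessary properties instead. For k-nearest-neighbors with
    # Euclidean distance, scaling every feature by the same positive constant
    # and permuting the training order must leave predictions unchanged.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    X_test = rng.normal(size=(50, 5))

    base = KNeighborsClassifier(n_neighbors=5).fit(X, y).predict(X_test)

    # MR1: uniform positive scaling of all features preserves neighbor order.
    scaled = KNeighborsClassifier(n_neighbors=5).fit(3.0 * X, y).predict(3.0 * X_test)

    # MR2: permuting the training set must not matter.
    perm = rng.permutation(len(X))
    shuffled = KNeighborsClassifier(n_neighbors=5).fit(X[perm], y[perm]).predict(X_test)

    assert (base == scaled).all(), "metamorphic relation MR1 violated"
    assert (base == shuffled).all(), "metamorphic relation MR2 violated"
    print("both metamorphic relations hold")
    ```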

  4. A comparison of fitness-case sampling methods for genetic programming

    NASA Astrophysics Data System (ADS)

    Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel

    2017-11-01

    Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
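
    One of the four compared methods, Lexicase Selection, is compact enough to sketch: fitness cases are shuffled, and the candidate pool is repeatedly filtered to the individuals with the best error on the next case. A minimal sketch with a toy error table:

    ```python
    # Sketch of lexicase selection, one of the four fitness-case sampling
    # methods compared in the paper: shuffle the fitness cases, then repeatedly
    # keep only the candidates with the best (lowest) error on the next case.
    import random

    def lexicase_select(errors, rng=random):
        """errors[i][j] = error of individual i on fitness case j."""
        pool = list(range(len(errors)))
        cases = list(range(len(errors[0])))
        rng.shuffle(cases)
        for c in cases:
            best = min(errors[i][c] for i in pool)
            pool = [i for i in pool if errors[i][c] == best]
            if len(pool) == 1:
                break
        return rng.choice(pool)  # break remaining ties at random

    # Four individuals evaluated on five fitness cases (toy error table).
    errors = [[0.0, 2.0, 1.0, 0.0, 3.0],
              [1.0, 0.0, 0.0, 2.0, 0.0],
              [0.0, 1.0, 2.0, 0.0, 1.0],
              [2.0, 0.0, 1.0, 1.0, 0.0]]
    print("selected individual:", lexicase_select(errors))
    ```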

  5. Diverticular Disease of the Colon: News From Imaging.

    PubMed

    Flor, Nicola; Soldi, Simone; Zanchetta, Edoardo; Sbaraini, Sara; Pesapane, Filippo

    2016-10-01

    Several scenarios bring computed tomography imaging together with colonic diverticula, spanning asymptomatic patients (diverticulosis) and symptomatic patients (acute diverticulitis, follow-up of acute diverticulitis, chronic diverticulitis). While the role of computed tomography in acute diverticulitis is validated and widely supported by evidence, the same is not true for patients being followed up after acute diverticulitis or with symptoms related to diverticula but without acute inflammation. In these settings, computed tomography colonography is gaining consensus as the preferred radiologic test.

  6. LAVA Simulations for the AIAA Sonic Boom Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Housman, Jeffrey A.; Sozer, Emre; Moini-Yekta , Shayan; Kiris, Cetin C.

    2014-01-01

    Computational simulations using the Launch Ascent and Vehicle Aerodynamics (LAVA) framework are presented for the First AIAA Sonic Boom Prediction Workshop test cases. The framework is utilized with both structured overset and unstructured meshing approaches. The three workshop test cases include an axisymmetric body, a Delta Wing-Body model, and a complete low-boom supersonic transport concept. Solution sensitivity to mesh type and sizing, and several numerical convective flux discretization choices are presented and discussed. Favorable comparisons between the computational simulations and experimental data of near- and mid-field pressure signatures were obtained.

  7. Twenty Years On!: Updating the IEA BESTEST Building Thermal Fabric Test Cases for ASHRAE Standard 140

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judkoff, R.; Neymark, J.

    2013-07-01

    ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs, applies the IEA BESTEST building thermal fabric test cases and example simulation results originally published in 1995. These software accuracy test cases and their example simulation results, which comprise the first test suite adapted for the initial 2001 version of Standard 140, are approaching their 20th anniversary. In response to the evolution of the state of the art in building thermal fabric modeling since the test cases and example simulation results were developed, work is commencing to update the normative test specification and the informative example results.

  8. Twenty Years On!: Updating the IEA BESTEST Building Thermal Fabric Test Cases for ASHRAE Standard 140: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judkoff, R.; Neymark, J.

    2013-07-01

    ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs, applies the IEA BESTEST building thermal fabric test cases and example simulation results originally published in 1995. These software accuracy test cases and their example simulation results, which comprise the first test suite adapted for the initial 2001 version of Standard 140, are approaching their 20th anniversary. In response to the evolution of the state of the art in building thermal fabric modeling since the test cases and example simulation results were developed, work is commencing to update the normative test specification and the informative example results.

  9. Case management for frail older adults through tablet computers and Skype.

    PubMed

    Berner, Jessica; Anderberg, Peter; Rennemark, Mikael; Berglund, Johan

    2016-12-01

    Frail older adults are high consumers of medical care due to their age and multiple chronic conditions. Regular contact with a case manager has been proven to increase well-being of frail older adults and reduce their number of health-care visits. Skype calls through tablet PCs can offer easier communication. This paper examines frail older adults' use of tablet computers and Skype with their case managers. Interviews were conducted with 15 frail older adults. A content analysis was used to structure and analyze the data. The results indicate that tablet computers were experienced in a positive way by most frail older adults. Conflicting feelings did emerge, however, as to whether the frail elderly would adopt this in the long run. Skype needs to be tested further as to whether it is a good solution for communication with case managers. Strong technical support and well-functioning technology are important elements to facilitate use. Using Skype and tablet PCs does have potential for frail older adults, but needs to be tested further.

  10. Airside HVAC BESTEST. Adaptation of ASHRAE RP 865 Airside HVAC Equipment Modeling Test Cases for ASHRAE Standard 140. Volume 1, Cases AE101-AE445

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neymark, J.; Kennedy, M.; Judkoff, R.

    This report documents a set of diagnostic analytical verification cases for testing the ability of whole building simulation software to model the air distribution side of typical heating, ventilating and air conditioning (HVAC) equipment. These cases complement the unitary equipment cases included in American National Standards Institute (ANSI)/American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs, which test the ability to model the heat-transfer fluid side of HVAC equipment.

  11. A robust, finite element model for hydrostatic surface water flows

    USGS Publications Warehouse

    Walters, R.A.; Casulli, V.

    1998-01-01

    A finite element scheme is introduced for the 2-dimensional shallow water equations using semi-implicit methods in time. A semi-Lagrangian method is used to approximate the effects of advection. A wave equation is formed at the discrete level such that the equations decouple into an equation for surface elevation and a momentum equation for the horizontal velocity. The convergence rates and relative computational efficiency are examined with the use of three test cases representing various degrees of difficulty. A test with a polar-quadrant grid investigates the response to local grid-scale forcing and the presence of spurious modes, a channel test case establishes convergence rates, and a field-scale test case examines problems with highly irregular grids.
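
    The decoupling described here can be sketched in one dimension for the linearized equations: substituting the semi-implicit velocity update into continuity yields a tridiagonal (discrete wave) equation for the elevation, after which the velocity is updated explicitly. Grid, depth, and initial condition below are toy values, not the paper's test cases.

    ```python
    # 1-D linearized shallow-water step in the spirit of the paper's scheme:
    # substituting the implicit velocity update into continuity gives a
    # tridiagonal wave equation for the surface elevation eta; velocity is then
    # updated from the new eta. Closed (zero-velocity) walls at both ends.
    import numpy as np

    g, H = 9.81, 10.0             # gravity [m/s^2], rest depth [m]
    nx, dx, dt = 100, 100.0, 5.0  # cells, spacing [m], time step [s]

    x = (np.arange(nx) + 0.5) * dx
    eta = 0.1 * np.exp(-((x - nx * dx / 2) / (5 * dx)) ** 2)  # initial free-surface hump
    u = np.zeros(nx + 1)                                      # face velocities, walls at ends

    r = g * H * (dt / dx) ** 2
    A = (np.diag((1 + 2 * r) * np.ones(nx))
         + np.diag(-r * np.ones(nx - 1), 1) + np.diag(-r * np.ones(nx - 1), -1))
    A[0, 0] -= r   # closed-wall boundaries
    A[-1, -1] -= r

    for _ in range(200):
        rhs = eta - dt * H * (u[1:] - u[:-1]) / dx       # continuity, old velocities
        eta = np.linalg.solve(A, rhs)                    # implicit elevation (wave) equation
        u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx    # velocity update from new eta

    print(f"total volume = {eta.sum() * dx:.6e} (conserved by the scheme)")
    ```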

  12. New Computational Methods for the Prediction and Analysis of Helicopter Noise

    NASA Technical Reports Server (NTRS)

    Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    This paper describes several new methods to predict and analyze rotorcraft noise. These methods are: 1) a combined computational fluid dynamics and Kirchhoff scheme for far-field noise predictions, 2) parallel computer implementation of the Kirchhoff integrations, 3) audio and visual rendering of the computed acoustic predictions over large far-field regions, and 4) acoustic tracebacks to the Kirchhoff surface to pinpoint the sources of the rotor noise. The paper describes each method and presents sample results for three test cases. The first case consists of in-plane high-speed impulsive noise and the other two cases show idealized parallel and oblique blade-vortex interactions. The computed results show good agreement with available experimental data but convey much more information about the far-field noise propagation. When taken together, these new analysis methods exploit the power of new computer technologies and offer the potential to significantly improve our prediction and understanding of rotorcraft noise.

  13. The diagnostic value and accuracy of conjunctival impression cytology, dry eye symptomatology, and routine tear function tests in computer users.

    PubMed

    Bhargava, Rahul; Kumar, Prachi; Kaur, Avinash; Kumar, Manjushri; Mishra, Anurag

    2014-07-01

    To compare the diagnostic value and accuracy of the dry eye scoring system (DESS), conjunctival impression cytology (CIC), tear film breakup time (TBUT), and Schirmer's test in computer users. A case-control study was done at two referral eye centers. Eyes of 344 computer users were compared to 371 eyes of age- and sex-matched controls. The dry eye questionnaire (DESS) was administered to both groups, and they further underwent measurement of TBUT, Schirmer's, and CIC. Correlation analysis was performed between DESS, CIC, TBUT, and Schirmer's test scores. A Pearson's coefficient of the linear expression (R²) of 0.5 or more was statistically significant. The mean age in cases (26.05 ± 4.06 years) was comparable to controls (25.67 ± 3.65 years) (P = 0.465). The mean symptom score in computer users was significantly higher as compared to controls (P < 0.001). Mean TBUT, Schirmer's test values, and goblet cell density were significantly reduced in computer users (P < 0.001). TBUT, Schirmer's, and CIC were abnormal in 48.5%, 29.1%, and 38.4% of symptomatic computer users, respectively, as compared to 8%, 6.7%, and 7.3% of symptomatic controls, respectively. On correlation analysis, there was a significant (inverse) association of dry eye symptoms (DESS) with TBUT and CIC scores (R² > 0.5), in contrast to Schirmer's scores (R² < 0.5). Duration of computer usage had a significant effect on dry eye symptom severity, TBUT, and CIC scores as compared to Schirmer's test. DESS should be used in combination with TBUT and CIC for dry eye evaluation in computer users.
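
    A minimal sketch of the correlation screen used above, flagging associations with R² of 0.5 or more; the paired scores are hypothetical stand-ins, not the study data.

    ```python
    # Correlation screen as in the study: Pearson r between symptom scores
    # (DESS) and a tear-film measure, flagging R^2 >= 0.5 as significant.
    # Values below are hypothetical, not the study data.
    import numpy as np
    from scipy import stats

    dess = np.array([22, 18, 15, 12, 10, 8, 6, 5, 4, 2])   # symptom scores
    tbut = np.array([4, 5, 6, 8, 9, 10, 12, 13, 14, 16])   # TBUT [s]

    r, p = stats.pearsonr(dess, tbut)
    verdict = "significant" if r**2 >= 0.5 else "weak"
    print(f"r = {r:.2f}, R^2 = {r**2:.2f} -> {verdict} (p = {p:.3g})")
    ```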

  14. Generalized likelihood ratios for quantitative diagnostic test scores.

    PubMed

    Tandberg, D; Deely, J J; O'Malley, A J

    1997-11-01

    The reduction of quantitative diagnostic test scores to the dichotomous case is a wasteful and unnecessary simplification in the era of high-speed computing. Physicians could make better use of the information embedded in quantitative test results if modern generalized curve estimation techniques were applied to the likelihood functions of Bayes' theorem. Hand calculations could be completely avoided and computed graphical summaries provided instead. Graphs showing posttest probability of disease as a function of pretest probability with confidence intervals (POD plots) would enhance acceptance of these techniques if they were immediately available at the computer terminal when test results were retrieved. Such constructs would also provide immediate feedback to physicians when a valueless test had been ordered.
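
    The posttest probability such POD plots display follows from Bayes' theorem in odds form: posterior odds equal prior odds times the likelihood ratio. A minimal sketch (the likelihood ratio here is simply given; in the proposed approach it would come from the estimated likelihood functions):

    ```python
    # Post-test probability from Bayes' theorem in odds form, the quantity a
    # POD plot displays: posterior odds = prior odds * likelihood ratio.
    def posttest_probability(pretest_p, lr):
        prior_odds = pretest_p / (1.0 - pretest_p)
        post_odds = prior_odds * lr
        return post_odds / (1.0 + post_odds)

    for pretest in (0.1, 0.3, 0.5):
        post = posttest_probability(pretest, 8.0)
        print(f"pretest {pretest:.0%} + LR 8.0 -> posttest {post:.0%}")
    ```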

  15. Adaptive computations of multispecies mixing between scramjet nozzle flows and hypersonic freestream

    NASA Technical Reports Server (NTRS)

    Baysa, Oktay; Engelund, Walter C.; Eleshaky, Mohamed E.; Pittman, James L.

    1989-01-01

    The objective of this paper is to compute the expansion of a supersonic flow through an internal-external nozzle and its viscous mixing with the hypersonic flow of air. The supersonic jet may be that of a multispecies gas other than air. Calculations are performed for one case where both flows are those of air, and another case where a mixture of Freon-12 and argon is discharged supersonically to mix with the hypersonic airflow. Comparisons are made between these two cases with respect to gas compositions, and fixed versus flow-adaptive grids. All the computational results are compared successfully with the wind-tunnel test results.

  16. Solving groundwater flow problems by conjugate-gradient methods and the strongly implicit procedure

    USGS Publications Warehouse

    Hill, Mary C.

    1990-01-01

    The performance of the preconditioned conjugate-gradient method with three preconditioners is compared with the strongly implicit procedure (SIP) using a scalar computer. The preconditioners considered are the incomplete Cholesky (ICCG) and the modified incomplete Cholesky (MICCG), which require the same computer storage as SIP as programmed for a problem with a symmetric matrix, and a polynomial preconditioner (POLCG), which requires less computer storage than SIP. Although POLCG is usually used on vector computers, it is included here because of its small storage requirements. In this paper, published comparisons of the solvers are evaluated, all four solvers are compared for the first time, and new test cases are presented to provide a more complete basis by which the solvers can be judged for typical groundwater flow problems. Based on nine test cases, the following conclusions are reached: (1) SIP is actually as efficient as ICCG for some of the published, linear, two-dimensional test cases that were reportedly solved much more efficiently by ICCG; (2) SIP is more efficient than other published comparisons would indicate when common convergence criteria are used; and (3) for problems that are three-dimensional, nonlinear, or both, and for which common convergence criteria are used, SIP is often more efficient than ICCG, and is sometimes more efficient than MICCG.
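
    The preconditioned conjugate-gradient idea can be sketched on a small symmetric positive-definite system of the five-point-Laplacian type that arises in groundwater flow. scipy's incomplete LU stands in here for the incomplete Cholesky preconditioners compared in the paper; the problem itself is a toy.

    ```python
    # Preconditioned conjugate gradient on a 2-D five-point Laplacian (a toy
    # stand-in for a groundwater flow matrix). spilu (incomplete LU) stands in
    # for incomplete Cholesky; for this SPD matrix the idea is the same.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 50
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = sp.kronsum(T, T).tocsc()        # 2-D Laplacian on an n x n grid
    b = np.ones(A.shape[0])

    ilu = spla.spilu(A, drop_tol=1e-3, fill_factor=5)          # incomplete factorization
    M = spla.LinearOperator(A.shape, matvec=ilu.solve)         # preconditioner operator

    iters = {"plain": 0, "precond": 0}
    x_plain, info0 = spla.cg(
        A, b, callback=lambda xk: iters.__setitem__("plain", iters["plain"] + 1))
    x_prec, info1 = spla.cg(
        A, b, M=M, callback=lambda xk: iters.__setitem__("precond", iters["precond"] + 1))
    print(f"CG iterations: {iters['plain']} unpreconditioned vs {iters['precond']} with ILU")
    ```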

  17. Proceedings of the 2004 Workshop on CFD Validation of Synthetic Jets and Turbulent Separation Control

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L. (Compiler)

    2007-01-01

    The papers presented here are from the Langley Research Center Workshop on Computational Fluid Dynamics (CFD) Validation of Synthetic Jets and Turbulent Separation Control (nicknamed "CFDVAL2004"), held March 2004 in Williamsburg, Virginia. The goal of the workshop was to bring together an international group of CFD practitioners to assess the current capabilities of different classes of turbulent flow solution methodologies to predict flow fields induced by synthetic jets and separation control geometries. The workshop consisted of three flow-control test cases of varying complexity, and participants could contribute to any number of the cases. Along with their workshop submissions, each participant included a short write-up describing their method for computing the particular case(s). These write-ups are presented as received from the authors with no editing. Descriptions of each of the test cases and experiments are also included.

  18. Training Teachers to Use Computers: A Case Study of the Summer Training Component of the IBM/ETS Secondary School Computer Education Program. Research Report.

    ERIC Educational Resources Information Center

    Stecher, Brian

    A training program in computer education, tested in 89 secondary schools, focused on the use of computers as tools in all subject areas. Each school received enough computers and software from IBM to equip a full computer laboratory. The schools were organized into local networks in eight regions and received training and continuing support in these…

  19. DSMC Simulations of Hypersonic Flows and Comparison With Experiments

    NASA Technical Reports Server (NTRS)

    Moss, James N.; Bird, Graeme A.; Markelov, Gennady N.

    2004-01-01

    This paper presents computational results obtained with the direct simulation Monte Carlo (DSMC) method for several biconic test cases in which shock interactions and flow separation-reattachment are key features of the flow. Recent ground-based experiments have been performed for several biconic configurations, and surface heating rate and pressure measurements have been proposed for code validation studies. The present focus is to expand on the current validating activities for a relatively new DSMC code called DS2V that Bird (second author) has developed. Comparisons with experiments and other computations help clarify the agreement currently being achieved between computations and experiments and to identify the range of measurement variability of the proposed validation data when benchmarked with respect to the current computations. For the test cases with significant vibrational nonequilibrium, the effect of the vibrational energy surface accommodation on heating and other quantities is demonstrated.

  20. WE-DE-201-04: Cross Validation of Knowledge-Based Treatment Planning for Prostate LDR Brachytherapy Using Principle Component Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roper, J; Ghavidel, B; Godette, K

    Purpose: To validate a knowledge-based algorithm for prostate LDR brachytherapy treatment planning. Methods: A dataset of 100 cases was compiled from an active prostate seed implant service. Cases were randomized into 10 subsets. For each subset, the 90 remaining library cases were registered to a common reference frame and then characterized on a point by point basis using principal component analysis (PCA). Each test case was converted to PCA vectors using the same process and compared with each library case using a Mahalanobis distance to evaluate similarity. Rank order PCA scores were used to select the best-matched library case. The seed arrangement was extracted from the best-matched case and used as a starting point for planning the test case. Any subsequent modifications were recorded that required input from a treatment planner to achieve V100 > 95%, V150 < 60%, V200 < 20%. To simulate operating-room planning constraints, seed activity was held constant, and the seed count could not increase. Results: The computational time required to register test-case contours and evaluate PCA similarity across the library was 10 s. Preliminary analysis of 2 subsets shows that 9 of 20 test cases did not require any seed modifications to obtain an acceptable plan. Five test cases required fewer than 10 seed modifications or a grid shift. Another 5 test cases required approximately 20 seed modifications. An acceptable plan was not achieved for 1 outlier, which was substantially larger than its best match. Modifications took between 5 s and 6 min. Conclusion: A knowledge-based treatment planning algorithm for prostate LDR brachytherapy is being cross validated using 100 prior cases. Preliminary results suggest that for this size library, acceptable plans can be achieved without planner input in about half of the cases, while varying amounts of planner input are needed in the remaining cases. Computational time and planning time are compatible with clinical practice.

  1. Proof test of the computer program BUCKY for plasticity problems

    NASA Technical Reports Server (NTRS)

    Smith, James P.

    1994-01-01

    A theoretical equation describing the elastic-plastic deformation of a cantilever beam subject to a constant pressure is developed. The theoretical result is compared numerically to the computer program BUCKY for the case of an elastic-perfectly plastic specimen. It is shown that the theoretical and numerical results compare favorably in the plastic range. Comparisons are made to another research code to further validate the BUCKY results. This paper serves as a quality test for the computer program BUCKY developed at NASA Johnson Space Center.
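
    The closed-form checks used for such proof tests can be illustrated for a rectangular cross section, where the first-yield and fully plastic root moments differ by the shape factor 1.5. The dimensions and yield stress below are illustrative, not the paper's test problem:

    ```python
    # Generic elastic-perfectly plastic check of the kind used in such proof
    # tests: for a rectangular section, first yield occurs at M_y = s_y*b*h^2/6
    # and full plasticity at M_p = s_y*b*h^2/4 (shape factor 1.5). For a
    # cantilever of length L under uniform pressure p on width b, the root
    # moment is M = p*b*L^2/2. Values are illustrative, not the paper's case.
    sigma_y = 250e6            # yield stress [Pa]
    b, h, L = 0.05, 0.01, 0.5  # width, thickness, length [m]

    M_y = sigma_y * b * h**2 / 6       # first-yield root moment [N*m]
    M_p = sigma_y * b * h**2 / 4       # fully plastic root moment [N*m]
    p_yield = 2 * M_y / (b * L**2)     # pressure at first yield [Pa]
    p_collapse = 2 * M_p / (b * L**2)  # plastic collapse pressure [Pa]
    print(f"p_yield = {p_yield/1e3:.1f} kPa, p_collapse = {p_collapse/1e3:.1f} kPa "
          f"(ratio = {p_collapse/p_yield:.2f})")
    ```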

  2. Aerothermodynamics of Blunt Body Entry Vehicles. Chapter 3

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Borrelli, Salvatore

    2011-01-01

    In this chapter, the aerothermodynamic phenomena of blunt body entry vehicles are discussed. Four topics will be considered that present challenges to current computational modeling techniques for blunt body environments: turbulent flow, non-equilibrium flow, rarefied flow, and radiation transport. Examples of comparisons between computational tools to ground and flight-test data will be presented in order to illustrate the challenges existing in the numerical modeling of each of these phenomena and to provide test cases for evaluation of Computational Fluid Dynamics (CFD) code predictions.

  3. Aerothermodynamics of blunt body entry vehicles

    NASA Astrophysics Data System (ADS)

    Hollis, Brian R.; Borrelli, Salvatore

    2012-01-01

    In this chapter, the aerothermodynamic phenomena of blunt body entry vehicles are discussed. Four topics will be considered that present challenges to current computational modeling techniques for blunt body environments: turbulent flow, non-equilibrium flow, rarefied flow, and radiation transport. Examples of comparisons between computational tools to ground and flight-test data will be presented in order to illustrate the challenges existing in the numerical modeling of each of these phenomena and to provide test cases for evaluation of computational fluid dynamics (CFD) code predictions.

  4. Study of the integration of wind tunnel and computational methods for aerodynamic configurations

    NASA Technical Reports Server (NTRS)

    Browne, Lindsey E.; Ashby, Dale L.

    1989-01-01

    A study was conducted to determine the effectiveness of using a low-order panel code to estimate wind tunnel wall corrections. The corrections were found by two computations. The first computation included the test model and the surrounding wind tunnel walls, while in the second computation the wind tunnel walls were removed. The difference between the force and moment coefficients obtained by comparing these two cases allowed the determination of the wall corrections. The technique was verified by matching the test-section, wall-pressure signature from a wind tunnel test with the signature predicted by the panel code. To prove the viability of the technique, two cases were considered. The first was a two-dimensional high-lift wing with a flap that was tested in the 7- by 10-foot wind tunnel at NASA Ames Research Center. The second was a 1/32-scale model of the F/A-18 aircraft which was tested in the low-speed wind tunnel at San Diego State University. The panel code used was PMARC (Panel Method Ames Research Center). Results of this study indicate that the proposed wind tunnel wall correction method is comparable to other methods and that it also inherently includes the corrections due to model blockage and wing lift.

  5. Neuromuscular assessment in elderly workers with and without work related shoulder/neck trouble: the NEW-study design and physiological findings.

    PubMed

    Sjøgaard, G; Søgaard, K; Hermens, H J; Sandsjö, L; Läubli, T; Thorn, S; Vollenbroek-Hutten, M M R; Sell, L; Christensen, H; Klipstein, A; Kadefors, R; Merletti, R

    2006-01-01

    Musculoskeletal disorders in the neck and shoulder area are a major occupational concern in the European countries, especially among elderly females. The aim was to assess these disorders based on quantitative EMG indicators and functional tests. 252 female computer users (45-68 years) were recruited from four European countries in two contrast groups: (1) 88 neck/shoulder (NS) cases reporting trouble in the neck and/or shoulder region for more than 30 days during the last year, and (2) 164 NS-controls reporting such trouble for no more than 7 days. Questionnaires, functional/clinical tests, and physiological recordings were performed in workplace related field studies. The results showed no differences in anthropometrics, but NS-cases reported more strained head positions and more eye problems than controls. The psychosocial working factors were similar, although NS-controls had slightly better scores on working conditions, general health, and vitality compared to cases. The NS-cases had lower maximal voluntary contraction (MVC) during shoulder elevation (mean (SD) 310 (122) N) compared to the controls (364 (122) N). During 30% MVC, electromyography (EMGrms) in the trapezius muscle was lower in NS-cases (194 (105) μV) than in controls (256 (169) μV), while no differences were found regarding endurance time. Estimated conduction velocity was not different between NS-cases and -controls. Four functional computer tests were performed equally well by NS-cases and -controls, and the corresponding EMG variables also did not differ. A major finding in this large-scale epidemiological study is the significantly lower MVC in NS-cases compared with NS-controls together with the lower EMGrms value at 30% MVC, while computer tasks were performed at similar relative muscle activation. The study was unable to reveal quantitative EMG indicators and functional tests that could objectively assess disorders in NS-cases.
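
    EMGrms, the amplitude measure compared between groups above, is the root-mean-square of the EMG signal over an analysis window. A minimal sketch with synthetic data standing in for a trapezius recording:

    ```python
    # RMS amplitude of an EMG window (EMGrms). The signal here is synthetic
    # noise standing in for a 30% MVC trapezius recording.
    import numpy as np

    rng = np.random.default_rng(2)
    fs = 1000                                  # sampling rate [Hz]
    emg_uV = 200.0 * rng.standard_normal(fs)   # 1 s of synthetic EMG [microvolts]

    window = emg_uV[: int(0.25 * fs)]          # 250 ms analysis window
    emg_rms = np.sqrt(np.mean(window ** 2))
    print(f"EMGrms = {emg_rms:.0f} microvolts")
    ```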

  6. Religious Studies as a Test-Case For Computer-Assisted Instruction In The Humanities.

    ERIC Educational Resources Information Center

    Jones, Bruce William

    Experiences with computer-assisted instructional (CAI) programs written for religious studies indicate that CAI has contributions to offer the humanities and social sciences. The usefulness of the computer for presentation, drill and review of factual material and its applicability to quantifiable data is well accepted. There now exist…

  7. Remote control missile model test

    NASA Technical Reports Server (NTRS)

    Allen, Jerry M.; Shaw, David S.; Sawyer, Wallace C.

    1989-01-01

    An extremely large, systematic, axisymmetric body/tail fin data base was gathered through tests of an innovative missile model design which is described herein. These data were originally obtained for incorporation into a missile aerodynamics code based on engineering methods (Program MISSILE3), but can also be used as diagnostic test cases for developing computational methods because of the individual-fin data included in the data base. Detailed analysis of four sample cases from these data are presented to illustrate interesting individual-fin force and moment trends. These samples quantitatively show how bow shock, fin orientation, fin deflection, and body vortices can produce strong, unusual, and computationally challenging effects on individual fin loads. Comparisons between these data and calculations from the SWINT Euler code are also presented.

  8. Computer programs for calculating two-dimensional potential flow through deflected nozzles

    NASA Technical Reports Server (NTRS)

    Hawk, J. D.; Stockman, N. O.

    1979-01-01

    Computer programs to calculate the incompressible potential flow, corrected for compressibility, in two-dimensional nozzles at arbitrary operating conditions are presented. A statement of the problem to be solved, a description of each of the computer programs, and sufficient documentation, including a test case, to enable a user to run the program are included.

  9. Reading Teachers' Beliefs and Utilization of Computer and Technology: A Case Study

    ERIC Educational Resources Information Center

    Remetio, Jessica Espinas

    2014-01-01

    Many researchers believe that computers have the ability to help improve the reading skills of students. In an effort to improve the poor reading scores of students on state tests, as well as improve students' overall academic performance, computers and other technologies have been installed in Frozen Bay School classrooms. As the success of these…

  10. Relative efficiency and accuracy of two Navier-Stokes codes for simulating attached transonic flow over wings

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Wornom, Stephen F.

    1991-01-01

    Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.

  11. Testing For EM Upsets In Aircraft Control Computers

    NASA Technical Reports Server (NTRS)

    Belcastro, Celeste M.

    1994-01-01

    Effects of transient electrical signals evaluated in laboratory tests. Method of evaluating nominally fault-tolerant, aircraft-type digital-computer-based control systems devised. Provides for evaluation of susceptibility of system to upset and evaluation of integrity of control when system subjected to transient electrical signals like those induced by an electromagnetic (EM) source, in this case lightning. Beyond aerospace applications, fault-tolerant control systems becoming more widespread in industry, such as in automobiles. Method supports practical, systematic tests for evaluation of designs of fault-tolerant control systems.

  12. Algorithm and code development for unsteady three-dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru

    1991-01-01

    A streamwise upwind algorithm for solving the unsteady 3-D Navier-Stokes equations was extended to handle a moving grid system. It is noted that the finite volume concept is essential to this extension, and the resulting algorithm is conservative for any motion of the coordinate system. Two extensions to an implicit method were considered, and the extension that makes the algorithm computationally efficient was implemented into Ames's aeroelasticity code, ENSAERO. The new flow solver has been validated through the solution of test problems, including three-dimensional cases with fixed and moving grids. The first test case is an unsteady viscous flow over an F-5 wing; the second considers the motion of the leading-edge vortex, as well as the motion of the shock wave, for a clipped delta wing. The upwind version yields higher accuracy in both steady and unsteady computations than the previously used central-difference method, with only a small increase in computational time.

  13. Evaluation of Tag Attachments on Small Cetaceans

    DTIC Science & Technology

    2012-09-30

    Computers incorporated these suggestions into an experimental design for field tests. They also recommended a silicon-based antifouling coating...Wildlife Computers (Figure 1). Half of these were treated with Propspeed antifouling coating, and the other half were left uncoated. In three cases, both

  14. Fan Noise Source Diagnostic Test Computation of Rotor Wake Turbulence Noise

    NASA Technical Reports Server (NTRS)

    Nallasamy, M.; Envia, E.; Thorp, S. A.; Shabbir, A.

    2002-01-01

    An important source mechanism of fan broadband noise is the interaction of rotor wake turbulence with the fan outlet guide vanes. A broadband noise model that utilizes computed rotor flow turbulence from a RANS code is used to predict fan broadband noise spectra. The noise model is employed to examine the broadband noise characteristics of the 22-inch Source Diagnostic Test fan rig, for which broadband noise data were obtained in wind tunnel tests at the NASA Glenn Research Center. A 9-case matrix of three outlet guide vane configurations at three representative fan tip speeds is considered. For all cases, inlet and exhaust acoustic power spectra are computed and compared with the measured spectra where possible. In general, the acoustic power levels and shape of the predicted spectra are in good agreement with the measured data. The predicted spectra show the experimentally observed trends with fan tip speed, vane count, and vane sweep. The results also demonstrate the validity of using CFD-based turbulence information for fan broadband noise calculations.

  15. Concussion

    MedlinePlus

    ... symptoms. They may test your senses, balance, reflexes, memory, and thinking. In some cases, the doctor will order tests to scan your brain. These include a computed tomography (CT) or magnetic resonance imaging (MRI) scan. They take a picture of your ...

  16. A comparison of turbulence models in computing multi-element airfoil flows

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Menter, Florian; Durbin, Paul A.; Mansour, Nagi N.

    1994-01-01

    Four different turbulence models are used to compute the flow over a three-element airfoil configuration. These models are the one-equation Baldwin-Barth model, the one-equation Spalart-Allmaras model, a two-equation k-omega model, and a new one-equation Durbin-Mansour model. The flow is computed using the INS2D two-dimensional incompressible Navier-Stokes solver. An overset Chimera grid approach is utilized. Grid resolution tests are presented, and manual solution-adaptation of the grid was performed. The performance of each of the models is evaluated for test cases involving different angles-of-attack, Reynolds numbers, and flap riggings. The resulting surface pressure coefficients, skin friction, velocity profiles, and lift, drag, and moment coefficients are compared with experimental data. The models produce very similar results in most cases. Excellent agreement between computational and experimental surface pressures was observed, but only moderately good agreement was seen in the velocity profile data. In general, the difference between the predictions of the different models was less than the difference between the computational and experimental data.

  17. Visual ergonomics and computer work--is it all about computer glasses?

    PubMed

    Jonsson, Christina

    2012-01-01

    The Swedish Provisions on Work with Display Screen Equipment and the EU Directive on the minimum safety and health requirements for work with display screen equipment cover several important visual ergonomics aspects. But a review of cases and questions submitted to the Swedish Work Environment Authority clearly shows that most attention is given to the demands for eyesight tests and special computer glasses. Other important visual ergonomics factors are at risk of being neglected. Today computers are used everywhere, both at work and at home; they can be laptops, PDAs, tablet computers, smart phones, etc. The demands for eyesight tests and computer glasses still apply, but the visual demands and the visual ergonomics conditions are quite different from those of a stationary computer. Based on this review, we raise the question of whether the requirement that the employer provide employees with computer glasses is outdated.

  18. Airfoil Vibration Dampers program

    NASA Technical Reports Server (NTRS)

    Cook, Robert M.

    1991-01-01

    The Airfoil Vibration Damper program has consisted of an analysis phase and a testing phase. During the analysis phase, a state-of-the-art computer code was developed, which can be used to guide designers in the placement and sizing of friction dampers. The use of this computer code was demonstrated by performing representative analyses on turbine blades from the High Pressure Oxidizer Turbopump (HPOTP) and High Pressure Fuel Turbopump (HPFTP) of the Space Shuttle Main Engine (SSME). The testing phase of the program consisted of performing friction damping tests on two different cantilever beams. Data from these tests provided an empirical check on the accuracy of the computer code developed in the analysis phase. Results of the analysis and testing showed that the computer code can accurately predict the performance of friction dampers. In addition, a valuable set of friction damping data was generated, which can be used to aid in the design of friction dampers, as well as provide benchmark test cases for future code developers.

  19. A ridge tracking algorithm and error estimate for efficient computation of Lagrangian coherent structures.

    PubMed

    Lipinski, Doug; Mohseni, Kamran

    2010-03-01

    A ridge tracking algorithm for the computation and extraction of Lagrangian coherent structures (LCS) is developed. This algorithm takes advantage of the spatial coherence of LCS by tracking the ridges which form LCS to avoid unnecessary computations away from the ridges. We also make use of the temporal coherence of LCS by approximating the time dependent motion of the LCS with passive tracer particles. To justify this approximation, we provide an estimate of the difference between the motion of the LCS and that of tracer particles which begin on the LCS. In addition to the speedup in computational time, the ridge tracking algorithm uses less memory and results in smaller output files than the standard LCS algorithm. Finally, we apply our ridge tracking algorithm to two test cases, an analytically defined double gyre as well as the more complicated example of the numerical simulation of a swimming jellyfish. In our test cases, we find up to a 35 times speedup when compared with the standard LCS algorithm.
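
    The analytically defined double gyre mentioned above is a standard LCS benchmark. Below is a minimal sketch, assuming the commonly used parameter values (A = 0.1, eps = 0.25, omega = 2*pi/10), of the time-dependent velocity field and the passive-tracer advection that approximates the motion of the LCS in the paper; it is not the authors' ridge-tracking implementation.

    ```python
    import numpy as np

    # Standard double-gyre parameters (assumed; common in LCS benchmarks)
    A, eps, om = 0.1, 0.25, 2 * np.pi / 10

    def velocity(p, t):
        """Time-dependent double-gyre velocity on the domain [0,2] x [0,1]."""
        x, y = p
        s = eps * np.sin(om * t)
        f = s * x**2 + (1 - 2 * s) * x
        dfdx = 2 * s * x + 1 - 2 * s
        u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
        v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
        return np.array([u, v])

    def rk4_step(p, t, dt):
        """Classical fourth-order Runge-Kutta step for a passive tracer."""
        k1 = velocity(p, t)
        k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(p + dt * k3, t + dt)
        return p + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Advect one tracer that starts near the central ridge
    p, t, dt = np.array([1.0, 0.5]), 0.0, 0.05
    for _ in range(200):
        p = rk4_step(p, t, dt)
        t += dt
    print(p)
    ```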

  20. Ischemic stroke enhancement in computed tomography scans using a computational approach

    NASA Astrophysics Data System (ADS)

    Alves, Allan F. F.; Pavan, Ana L. M.; Jennane, Rachid; Miranda, José R. A.; Freitas, Carlos C. M.; Abdala, Nitamar; Pina, Diana R.

    2018-03-01

    In this work, a novel approach was proposed to enhance the visual perception of ischemic stroke in computed tomography scans. Through different image processing techniques, we enabled less experienced physicians to reliably detect early signs of stroke. A set of 40 retrospective CT scans of patients was used, divided into two groups: 25 cases of acute ischemic stroke and 15 normal cases used as a control group. All cases were obtained within 4 hours of symptom onset. Our approach was based on the variational decomposition model and three different segmentation methods. An observer test measured performance in correctly diagnosing stroke cases. The Expectation-Maximization method provided the best results across all observers. The overall sensitivity of the observers' analysis increased from 64% to 79%, and the overall specificity increased from 67% to 78%. These results show the importance of a computational tool to assist neuroradiology decisions, especially in critical situations such as the diagnosis of ischemic stroke.

  1. Potential Flow Theory and Operation Guide for the Panel Code PMARC. Version 14

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1999-01-01

    The theoretical basis for PMARC, a low-order panel code for modeling complex three-dimensional bodies in potential flow, is outlined. PMARC can be run on a wide variety of computer platforms, including desktop machines, workstations, and supercomputers. Execution times for PMARC vary tremendously depending on the computer resources used, but typically range from several minutes for simple or moderately complex cases to several hours for very large, complex cases. Several of the advanced features currently included in the code, such as internal flow modeling, boundary layer analysis, and time-dependent flow analysis, including problems involving relative motion, are discussed in some detail. The code is written in Fortran77, using adjustable-size arrays so that it can be easily redimensioned to match problem requirements and computer hardware constraints. An overview of the program input is presented. A detailed description of the input parameters is provided in the appendices. PMARC results for several test cases are presented along with analytic or experimental data, where available. The input files for these test cases are given in the appendices. PMARC currently supports plotfile output formats for several commercially available graphics packages: Plot3D, Tecplot, and PmarcViewer.

  2. Groundwater flow and heat transport for systems undergoing freeze-thaw: Intercomparison of numerical simulators for 2D test cases

    NASA Astrophysics Data System (ADS)

    Grenier, Christophe; Anbergen, Hauke; Bense, Victor; Chanzy, Quentin; Coon, Ethan; Collier, Nathaniel; Costard, François; Ferry, Michel; Frampton, Andrew; Frederick, Jennifer; Gonçalvès, Julio; Holmén, Johann; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Mouche, Emmanuel; Orgogozo, Laurent; Pannetier, Romain; Rivière, Agnès; Roux, Nicolas; Rühaak, Wolfram; Scheidegger, Johanna; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik; Voss, Clifford

    2018-04-01

    In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. This issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.

  3. A system for aerodynamic design and analysis of supersonic aircraft. Part 4: Test cases

    NASA Technical Reports Server (NTRS)

    Middleton, W. D.; Lundry, J. L.

    1980-01-01

    An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. Representative test cases and associated program output are presented.

  4. Improving Students' Self-Efficacy in Strategic Management: The Relative Impact of Cases and Simulations.

    ERIC Educational Resources Information Center

    Tompson, George H.; Dass, Parshotam

    2000-01-01

    Investigates the relative contribution of computer simulations and case studies for improving undergraduate students' self-efficacy in strategic management courses. Results of pre- and post-test data, regression analysis, and analysis of variance show that simulations result in significantly higher improvement in self-efficacy than case studies.…

  5. Quantum-Theoretical Methods and Studies Relating to Properties of Materials

    DTIC Science & Technology

    1989-12-19

    particularly sensitive to the behavior of the electron distribution close to the nuclei, which contributes only to E(l). Although the above results were...other condensed phases. So it was a useful test case to test the behavior of the theoretical computations for the gas phase relative to that in the...increasingly complicated and time-consuming electron-correlation approximations should assure a small error in the theoretically computed enthalpy for a

  6. Pulmonary involvement of secondary syphilis.

    PubMed

    Ogawa, Yoshihiko; Imai, Yuichiro; Yoshihara, Shingo; Fujikura, Hiroyuki; Hirai, Nobuyasu; Sato, Masatoshi; Ogawa, Taku; Uno, Kenji; Kasahara, Kei; Yano, Hisakazu; Mikasa, Keiichi

    2018-01-01

    Pulmonary involvement in secondary syphilis is considered a rare occurrence; however, the number of cases has increased in the 2000s. This is likely due to the increased use of computed tomography scans and molecular diagnostic testing. We report a case of an HIV-positive man with pleural chest pain and bilateral subpleural nodules on chest computed tomography. His rapid plasma reagin and Treponema pallidum hemagglutination tests were positive, and the specimen of one of the pulmonary nodules obtained by transthoracic biopsy was positive for the polA gene of Treponema pallidum. Since clinical manifestations of syphilis are highly variable, clinicians should bear in mind that pleural chest pain with bilateral subpleural nodules can be caused by pulmonary syphilis.

  7. A Comparison of Computational Aeroacoustic Prediction Methods for Transonic Rotor Noise

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.; Lyrintzis, Anastasios; Koutsavdis, Evangelos K.

    1996-01-01

    This paper compares two methods for predicting transonic rotor noise for helicopters in hover and forward flight. Both methods rely on a computational fluid dynamics (CFD) solution as input to predict the acoustic near and far fields. For this work, the same full-potential rotor code has been used to compute the CFD solution for both acoustic methods. The first method employs the acoustic analogy as embodied in the Ffowcs Williams-Hawkings (FW-H) equation, including the quadrupole term. The second method uses a rotating Kirchhoff formulation. Computed results from both methods are compared with one another and with experimental data for both hover and advancing rotor cases. The results are quite good for all cases tested. The sensitivity of both methods to CFD grid resolution and to the choice of the integration surface/volume is investigated. The computational requirements of both methods are comparable; in both cases these requirements are much less than the requirements for the CFD solution.

  8. Two degree-of-freedom flutter solution for a personal computer

    NASA Technical Reports Server (NTRS)

    Turnock, D. L.

    1985-01-01

    A flutter-solution computer program has been written in the BASIC language for a personal computer. The program is for two-degree-of-freedom bending-torsion flutter applications and utilizes two-dimensional Theodorsen aerodynamics. The aerodynamics were modified to include approximations for Mach number (compressibility) effects and aspect ratio (finite-span) effects. Input options, user instructions, a program listing, and a test case application are included.
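
    The two-dimensional Theodorsen aerodynamics mentioned above are built around Theodorsen's function C(k), which attenuates and phase-shifts the circulatory lift with reduced frequency k. A minimal sketch of its evaluation with SciPy's Hankel functions follows; the original program is in BASIC, so this Python version is only illustrative.

    ```python
    import numpy as np
    from scipy.special import hankel2

    def theodorsen(k):
        """Theodorsen's function C(k) for reduced frequency k > 0.

        C(k) = H1(k) / (H1(k) + i*H0(k)), where Hn are Hankel functions
        of the second kind; C(k) scales the circulatory lift in 2-D
        unsteady thin-airfoil theory.
        """
        h0 = hankel2(0, k)
        h1 = hankel2(1, k)
        return h1 / (h1 + 1j * h0)

    # C(k) -> 1 as k -> 0 (quasi-steady) and -> 0.5 as k -> infinity
    for k in (0.01, 0.1, 1.0, 10.0):
        c = theodorsen(k)
        print(f"k={k:5.2f}  C(k)={c.real:+.3f}{c.imag:+.3f}j")
    ```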

  9. Modeling of unit operating considerations in generating-capacity reliability evaluation. Volume 2. Computer-program documentation. Final report. [GENESIS, OPCON and OPPLAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, A.D.; Ayoub, A.K.; Singh, C.

    1982-07-01

    This report describes the structure and operation of prototype computer programs developed for a Monte Carlo simulation model, GENESIS, and for two analytical models, OPCON and OPPLAN. It includes input data requirements and sample test cases.

  10. Anxiety in Language Testing: The APTIS Case

    ERIC Educational Resources Information Center

    Valencia Robles, Jeannette de Fátima

    2017-01-01

    The requirement of holding a diploma which certifies proficiency level in a foreign language is constantly increasing in academic and working environments. Computer-based testing has become a prevailing tendency for these and other educational purposes. Each year large numbers of students take online language tests everywhere in the world. In…

  11. Computer-Aided Diagnostic (CAD) Scheme by Use of Contralateral Subtraction Technique

    NASA Astrophysics Data System (ADS)

    Nagashima, Hiroyuki; Harakawa, Tetsumi

    We developed a computer-aided diagnostic (CAD) scheme for the detection of subtle image findings of acute cerebral infarction in brain computed tomography (CT) by using a contralateral subtraction technique. In our computerized scheme, the lateral inclination of the image was first corrected automatically by rotating and shifting. The contralateral subtraction image was then derived by subtracting the reversed image from the original image. Initial candidates for acute cerebral infarctions were identified using multiple-thresholding and image filtering techniques. In the first step for removing false-positive candidates, fourteen image features were extracted for each of the initial candidates, and intermediate candidates were selected by applying a rule-based test to these features. In the second step, five image features were extracted using the overlap of intermediate candidates between the slice of interest and the adjacent upper/lower slices, and final acute cerebral infarction candidates were detected by applying a rule-based test to these five features. The detection sensitivity for 74 training cases was 97.4%, with 3.7 false positives per image; the performance of the CAD scheme on 44 testing cases was similar to that on the training cases. Our CAD scheme using the contralateral subtraction technique can reveal suspected image findings of acute cerebral infarctions in CT images.
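
    The core of the scheme, subtracting the left-right reversed image from the original, is simple to express. Below is a minimal NumPy sketch assuming the inclination correction has already been applied; the array values and lesion location are hypothetical.

    ```python
    import numpy as np

    def contralateral_subtraction(slice_2d):
        """Subtract the left-right mirrored image from the original slice.

        Assumes the midsagittal plane is already aligned with the image's
        vertical centerline (the paper corrects inclination by rotating
        and shifting first); asymmetric hypodense regions then stand out
        in the difference image.
        """
        mirrored = np.fliplr(slice_2d)
        return slice_2d.astype(np.float32) - mirrored.astype(np.float32)

    # Hypothetical example: a symmetric slice with one asymmetric low spot
    ct = np.full((8, 8), 40.0)      # uniform "brain" attenuation
    ct[4, 1] = 25.0                 # simulated hypodense lesion on one side
    diff = contralateral_subtraction(ct)
    print(diff[4, 1], diff[4, 6])   # lesion shows as a +/- pair
    ```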

  12. Calculation of two-dimensional inlet flow fields in a supersonic free stream: Program documentation and test cases

    NASA Technical Reports Server (NTRS)

    Biringen, S. H.; Mcmillan, O. J.

    1980-01-01

    The use of a computer code for the calculation of two-dimensional inlet flow fields in a supersonic free stream and a nonorthogonal mesh-generation code are illustrated by specific examples. Input, output, and program operation and use are given and explained for the case of supercritical inlet operation at a subdesign free-stream Mach number (M∞ = 2.09) for an isentropic-compression, drooped-cowl inlet. Source listings of the computer codes are also provided.

  13. A supersonic three-dimensional code for flow over blunt bodies: Program documentation and test cases

    NASA Technical Reports Server (NTRS)

    Chaussee, D. S.; Mcmillan, O. J.

    1980-01-01

    The use of a computer code for the calculation of steady, supersonic, three-dimensional, inviscid flow over blunt bodies is illustrated. Input and output are given and explained for two cases: a pointed cone of 20 deg half-angle at 15 deg angle of attack in a free stream with M∞ = 7, and a cone-ogive-cylinder at 10 deg angle of attack with M∞ = 2.86. A source listing of the computer code is provided.

  14. Predicted thermal response of a cryogenic fuel tank exposed to simulated aerodynamic heating profiles with different cryogens and fill levels

    NASA Technical Reports Server (NTRS)

    Hanna, Gregory J.; Stephens, Craig A.

    1991-01-01

    A two-dimensional finite difference thermal model was developed to predict the effects of heating profile, fill level, and cryogen type prior to experimental testing of the Generic Research Cryogenic Tank (GRCT). These numerical predictions will assist in defining test scenarios, sensor locations, and venting requirements for the GRCT experimental tests. Boiloff rates, tank-wall and fluid temperatures, and wall heat fluxes were determined for 20 computational test cases. The test cases spanned three discrete fill levels and three heating profiles for hydrogen and nitrogen.

  15. Representing nursing guideline with unified modeling language to facilitate development of a computer system: a case study.

    PubMed

    Choi, Jeeyae; Choi, Jeungok E

    2014-01-01

    To provide the best recommendations at the point of care, guidelines have been implemented in computer systems. As a prerequisite, guidelines are translated into a computer-interpretable guideline format. Since there are no specific tools to translate nursing guidelines, only a few nursing guidelines have been translated and implemented in computer systems. The Unified Modeling Language (UML) is a standardized software modeling language known to represent end-users' perspectives well and accurately, owing to its expressive notation. In order to facilitate the development of computer systems for nurses' use, the UML was used to translate a paper-based nursing guideline, and its ease of use and usefulness were tested through a case study of a genetic counseling guideline. The UML was found to be a useful tool for nurse informaticians and sufficient for modeling a guideline in a computer program.

  16. Are Case Studies a Good Teaching Tool for CS1?

    DTIC Science & Technology

    1995-01-01

    old AP/CS tests to compare our students’ performance against the results obtained by ETS. Currently, the introductory courses at CMU are taught using...Carrasquel, J., Goldenson, D. & Miller, P. L. (1985). Competency Testing in Introductory Computer Science: The Mastery Examination at Carnegie Mellon... courses is that many places do not have enough facilities (or the necessary time) required for long programming assignments. In our opinion, using case

  17. [Acquiring skills in malignant hyperthermia crisis management: comparison of high-fidelity simulation versus computer-based case study].

    PubMed

    Mejía, Vilma; Gonzalez, Carlos; Delfino, Alejandro E; Altermatt, Fernando R; Corvetto, Marcia A

    The primary purpose of this study was to compare the effect of high-fidelity simulation versus computer-based case-solving self-study on the acquisition of malignant hyperthermia management skills by first-year anesthesiology residents. After institutional ethics committee approval, 31 first-year anesthesiology residents were enrolled in this prospective, randomized, single-blinded study. Participants were randomized to either a high-fidelity simulation scenario or a computer-based case study about malignant hyperthermia. After the intervention, each subject's performance was assessed in a high-fidelity simulation scenario using a previously validated assessment rubric. Additionally, knowledge tests and a satisfaction survey were applied. Finally, a semi-structured interview was conducted to assess self-perception of reasoning and decision-making. Twenty-eight first-year residents successfully finished the study. Residents' management skill scores were globally higher with high-fidelity simulation than with the case study, with significant differences in 4 of the 8 performance-rubric elements: recognizing signs and symptoms (p = 0.025), prioritizing initial management actions (p = 0.003), recognizing complications (p = 0.025), and communication (p = 0.025). Average scores on the pre- and post-test knowledge questionnaires improved from 74% to 85% in the high-fidelity simulation group and decreased from 78% to 75% in the case-study group (p = 0.032). In the qualitative analysis, there was no difference between the two teaching strategies in the factors influencing the students' reasoning and decision-making. Simulation-based training with a malignant hyperthermia high-fidelity scenario was superior to computer-based case study, improving knowledge and skills in malignant hyperthermia crisis management, with a very good satisfaction level among anesthesia residents.

  18. RELAP5-3D Resolution of Known Restart/Backup Issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mesina, George L.; Anderson, Nolan A.

    2014-12-01

    The state-of-the-art nuclear reactor system safety analysis computer program developed at the Idaho National Laboratory (INL), RELAP5-3D, continues to adapt to changes in computer hardware and software and to develop to meet the ever-expanding needs of the nuclear industry. To continue at the forefront, code testing must evolve with both code and industry developments, and it must work correctly. To best ensure this, the processes of Software Verification and Validation (V&V) are applied. Verification compares coding against its documented algorithms and equations and compares its calculations against analytical solutions and the method of manufactured solutions. A form of this, sequential verification, checks code specifications against coding only when originally written, then applies regression testing, which compares code calculations between consecutive updates or versions on a set of test cases to check that the performance does not change. A sequential verification testing system was specially constructed for RELAP5-3D to both detect errors with extreme accuracy and cover all nuclear-plant-relevant code features. Detection is provided through a "verification file" that records double-precision sums of key variables. Coverage is provided by a test suite of input decks that exercise code features and capabilities necessary to model a nuclear power plant. A matrix of test features and short-running cases that exercise them is presented. This testing system is used to test base cases (called null testing) as well as restart and backup cases. It can test RELAP5-3D performance in both standalone and coupled (through PVM to other codes) runs. Application of verification testing revealed numerous restart and backup issues in both standalone and coupled modes. This document reports the resolution of these issues.
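
    The "verification file" idea, recording double-precision sums of key variables and comparing them between consecutive code versions, can be sketched in a few lines. The following Python sketch assumes a hypothetical file format and variable names; RELAP5-3D's actual mechanism is internal to the code.

    ```python
    import json
    import numpy as np

    def verification_record(state):
        """Double-precision sums of key solution variables (a hypothetical
        stand-in for the code's verification file)."""
        return {name: float(np.sum(arr, dtype=np.float64))
                for name, arr in state.items()}

    # Baseline version of the code writes its record once.
    state = {"pressure": np.linspace(1.0e5, 2.0e5, 1000),
             "void_fraction": np.linspace(0.0, 0.4, 1000)}
    with open("baseline.json", "w") as f:
        json.dump(verification_record(state), f)

    # A later version recomputes the record; any drift in the sums flags
    # a change in calculated results between consecutive code versions.
    new = verification_record(state)
    with open("baseline.json") as f:
        baseline = json.load(f)
    print({k: abs(baseline[k] - new[k]) for k in baseline})
    ```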

  19. The Complementary Use of Audience Response Systems and Online Tests to Implement Repeat Testing: A Case Study

    ERIC Educational Resources Information Center

    Stratling, Rebecca

    2017-01-01

    Although learning theories suggest that repeat testing can be highly beneficial for students' retention and understanding of material, there is, so far, little guidance on how to implement repeat testing in higher education. This paper introduces one method for implementing a three-stage model of repeat testing via computer-aided formative…

  20. A Multi-center Milestone Study of Clinical Vertebral CT Segmentation

    PubMed Central

    Yao, Jianhua; Burns, Joseph E.; Forsberg, Daniel; Seitel, Alexander; Rasoulian, Abtin; Abolmaesumi, Purang; Hammernik, Kerstin; Urschler, Martin; Ibragimov, Bulat; Korez, Robert; Vrtovec, Tomaž; Castro-Mateos, Isaac; Pozo, Jose M.; Frangi, Alejandro F.; Summers, Ronald M.; Li, Shuo

    2017-01-01

    A multiple center milestone study of clinical vertebra segmentation is presented in this paper. Vertebra segmentation is a fundamental step for spinal image analysis and intervention. The first half of the study was conducted in the spine segmentation challenge of the 2014 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) Workshop on Computational Spine Imaging (CSI 2014). The objective was to evaluate the performance of several state-of-the-art vertebra segmentation algorithms on computed tomography (CT) scans using ten training and five testing datasets, all healthy cases; the second half of the study was conducted after the challenge, where an additional five abnormal cases were used for testing to evaluate performance on abnormal cases. Dice coefficients and absolute surface distances were used as evaluation metrics. Segmentation of each vertebra as a single geometric unit, as well as separate segmentation of vertebra substructures, was evaluated. Five teams participated in the comparative study. The top performers in the study achieved Dice coefficients of 0.93 in the upper thoracic, 0.95 in the lower thoracic and 0.96 in the lumbar spine for healthy cases, and 0.88 in the upper thoracic, 0.89 in the lower thoracic and 0.92 in the lumbar spine for osteoporotic and fractured cases. The strengths and weaknesses of each method, as well as future suggestions for improvement, are discussed. This is the first multi-center comparative study of vertebra segmentation methods, and it provides an up-to-date performance milestone for the fast-growing field of spinal image analysis and intervention. PMID:26878138
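
    The Dice coefficient used as an evaluation metric above measures the overlap of two binary masks. A minimal NumPy sketch, with hypothetical toy masks:

    ```python
    import numpy as np

    def dice(seg, ref):
        """Dice coefficient between two binary masks (1 = vertebra voxel)."""
        seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
        inter = np.logical_and(seg, ref).sum()
        denom = seg.sum() + ref.sum()
        return 2.0 * inter / denom if denom else 1.0

    # Hypothetical example masks
    a = np.array([[0, 1, 1], [0, 1, 0]])
    b = np.array([[0, 1, 0], [0, 1, 1]])
    print(dice(a, b))  # 2*2 / (3+3) = 0.667
    ```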

  1. Design Of Computer Based Test Using The Unified Modeling Language

    NASA Astrophysics Data System (ADS)

    Tedyyana, Agus; Danuri; Lidyawati

    2017-12-01

    The admission selection of Politeknik Negeri Bengkalis through interest and talent search (PMDK), the joint admission test for state polytechnics (SB-UMPN), and the independent route (UM-Polbeng) was conducted using paper-based tests (PBT). The paper-based test model has several weaknesses: it wastes paper, questions can leak to the public, and test results can be manipulated. This research aimed to create a computer-based test (CBT) model using the Unified Modeling Language (UML), consisting of use case, activity, and sequence diagrams. During the design of the application, attention was paid to protecting the test questions with encryption and decryption before they are displayed; the RSA cryptographic algorithm was used for this purpose. The questions drawn from the question bank were then randomized using the Fisher-Yates shuffle. The network architecture used in the computer-based test application was a client-server model over a local area network (LAN). The result of the design was a computer-based test application for the admission selection of Politeknik Negeri Bengkalis.
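
    The Fisher-Yates shuffle named above produces a uniformly random ordering of a question bank in O(n) swaps. A minimal Python sketch follows; the use of the secrets module for randomness is an assumption for illustration, not a detail from the paper.

    ```python
    import secrets

    def fisher_yates_shuffle(items):
        """In-place Fisher-Yates shuffle using a cryptographic RNG.

        Each of the n! orderings is equally likely, which is what makes
        the method attractive for randomizing question banks.
        """
        for i in range(len(items) - 1, 0, -1):
            j = secrets.randbelow(i + 1)  # uniform in 0..i
            items[i], items[j] = items[j], items[i]
        return items

    # Hypothetical question bank
    questions = [f"Q{n}" for n in range(1, 11)]
    print(fisher_yates_shuffle(questions))
    ```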

  2. An Explanatory Item Response Theory Approach for a Computer-Based Case Simulation Test

    ERIC Educational Resources Information Center

    Kahraman, Nilüfer

    2014-01-01

    Problem: Practitioners working with multiple-choice tests have long utilized Item Response Theory (IRT) models to evaluate the performance of test items for quality assurance. The use of similar applications for performance tests, however, is often encumbered due to the challenges encountered in working with complicated data sets in which local…

  3. Numerical Simulation of 3-D Supersonic Viscous Flow in an Experimental MHD Channel

    NASA Technical Reports Server (NTRS)

    Kato, Hiromasa; Tannehill, John C.; Gupta, Sumeet; Mehta, Unmeel B.

    2004-01-01

    The 3-D supersonic viscous flow in an experimental MHD channel has been numerically simulated. The experimental MHD channel is currently in operation at NASA Ames Research Center. The channel contains a nozzle section, a center section, and an accelerator section where magnetic and electric fields can be imposed on the flow. In recent tests, velocity increases of up to 40% have been achieved in the accelerator section. The flow in the channel is numerically computed using a new 3-D parabolized Navier-Stokes (PNS) algorithm that has been developed to efficiently compute MHD flows in the low magnetic Reynolds number regime. The MHD effects are modeled by introducing source terms into the PNS equations, which can then be solved in a very efficient manner. To account for upstream (elliptic) effects, the flowfield can be computed using multiple streamwise sweeps with an iterated PNS algorithm. The new algorithm has been used to compute two test cases that match the experimental conditions. In both cases, magnetic and electric fields are applied to the flow. The computed results are in good agreement with the available experimental data.

  4. Computation of the Molenaar Sijtsma Statistic

    NASA Astrophysics Data System (ADS)

    Andries van der Ark, L.

    The Molenaar Sijtsma statistic is an estimate of the reliability of a test score. In some special cases, computation of the Molenaar Sijtsma statistic requires provisional measures. These provisional measures have not been fully described in the literature, and we show that they have not been implemented in the software. We describe the required provisional measures so as to allow computation of the Molenaar Sijtsma statistic for all data sets.

  5. A real-time digital program for estimating aircraft stability and control parameters from flight test data by using the maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Mayhew, S. C.

    1973-01-01

    A computer program (Langley program C1123) has been developed for estimating aircraft stability and control parameters from flight test data. These parameters are estimated by the maximum likelihood estimation procedure implemented on a real-time digital simulation system, which uses the Control Data 6600 computer. This system allows the investigator to interact with the program in order to obtain satisfactory results. Part of this system, the control and display capabilities, is described for this program. This report also describes the computer program by presenting the program variables, subroutines, flow charts, listings, and operational features. Program usage is demonstrated with a test case using pseudo or simulated flight data.
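
    The maximum likelihood approach described above can be sketched as output-error estimation: simulate the model response for candidate parameters and minimize the negative log-likelihood of the measured response. The sketch below uses a hypothetical first-order system and Gaussian measurement noise, not the Langley program's aircraft model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def simulate(theta, u, dt):
        """Euler-integrated first-order response x' = a*x + b*u."""
        a, b = theta
        x = np.zeros(len(u))
        for k in range(len(u) - 1):
            x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
        return x

    def neg_log_likelihood(theta, u, z, dt, sigma=0.05):
        """Gaussian output-error likelihood of measurements z (up to a constant)."""
        r = z - simulate(theta, u, dt)
        return 0.5 * np.sum((r / sigma) ** 2)

    # Hypothetical flight-test record: doublet-like input, noisy response
    rng = np.random.default_rng(1)
    dt, n = 0.02, 500
    u = np.where((np.arange(n) // 50) % 2 == 0, 1.0, -1.0)
    z = simulate([-1.5, 0.8], u, dt) + rng.normal(0, 0.05, n)

    fit = minimize(neg_log_likelihood, x0=[-1.0, 0.5], args=(u, z, dt))
    print(fit.x)  # should land close to the true values [-1.5, 0.8]
    ```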

  6. Transonic Drag Prediction Using an Unstructured Multigrid Solver

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.; Levy, David W.

    2001-01-01

    This paper summarizes the results obtained with the NSU-3D unstructured multigrid solver for the AIAA Drag Prediction Workshop held in Anaheim, CA, June 2001. The test case for the workshop consists of a wing-body configuration at transonic flow conditions. Flow analyses for a complete test matrix of lift coefficient values and Mach numbers at a constant Reynolds number are performed, thus producing a set of drag polars and drag rise curves which are compared with experimental data. Results were obtained independently by both authors using an identical baseline grid and different refined grids. Most cases were run in parallel on commodity cluster-type machines, while the largest cases were run on an SGI Origin machine using 128 processors. The objective of this paper is to study the accuracy of the subject unstructured grid solver for predicting drag in the transonic cruise regime, to assess the efficiency of the method in terms of convergence, CPU time, and memory, and to determine the effects of grid resolution on this predictive ability and its computational efficiency. A good predictive ability is demonstrated over a wide range of conditions, although accuracy was found to degrade for cases at higher Mach numbers and lift values where increasing amounts of flow separation occur. The ability to rapidly compute large numbers of cases at varying flow conditions using an unstructured solver on inexpensive clusters of commodity computers is also demonstrated.

  7. Towards a precise test for malaria diagnosis in the Brazilian Amazon: comparison among field microscopy, a rapid diagnostic test, nested PCR, and a computational expert system based on artificial neural networks

    PubMed Central

    2010-01-01

    Background Accurate malaria diagnosis is mandatory for the treatment and management of severe cases. Moreover, individuals with asymptomatic malaria are not usually screened by health care facilities, which further complicates disease control efforts. The present study compared the performances of a malaria rapid diagnostic test (RDT), the thick blood smear method and nested PCR for the diagnosis of symptomatic malaria in the Brazilian Amazon. In addition, an innovative computational approach was tested for the diagnosis of asymptomatic malaria. Methods The study was divided in two parts. For the first part, passive case detection was performed in 311 individuals with malaria-related symptoms from a recently urbanized community in the Brazilian Amazon. A cross-sectional investigation compared the diagnostic performance of the RDT Optimal-IT, nested PCR and light microscopy. The second part of the study involved active case detection of asymptomatic malaria in 380 individuals from riverine communities in Rondônia, Brazil. The performances of microscopy, nested PCR and an expert computational system based on artificial neural networks (MalDANN) using epidemiological data were compared. Results Nested PCR was shown to be the gold standard for diagnosis of both symptomatic and asymptomatic malaria because it detected the greatest number of cases and presented the maximum specificity. Surprisingly, the RDT was superior to microscopy in the diagnosis of cases with low parasitaemia. Nevertheless, the RDT could not discriminate the Plasmodium species in 12 cases of mixed infections (Plasmodium vivax + Plasmodium falciparum). Moreover, microscopy performed poorly in the detection of asymptomatic cases (61.25% of correct diagnoses). The MalDANN system using only epidemiological data was worse than light microscopy (56% of correct diagnoses). However, when information regarding plasma levels of interleukin-10 and interferon-gamma was input, the MalDANN performance increased appreciably (80% of correct diagnoses). Conclusions An RDT for malaria diagnosis may find promising use in the Brazilian Amazon as part of a rational diagnostic approach. Despite the low performance of the MalDANN test using epidemiological data alone, an approach based on neural networks may be feasible in cases where simpler methods for discriminating individuals below and above threshold cytokine levels are available. PMID:20459613

  8. Towards a precise test for malaria diagnosis in the Brazilian Amazon: comparison among field microscopy, a rapid diagnostic test, nested PCR, and a computational expert system based on artificial neural networks.

    PubMed

    Andrade, Bruno B; Reis-Filho, Antonio; Barros, Austeclino M; Souza-Neto, Sebastião M; Nogueira, Lucas L; Fukutani, Kiyoshi F; Camargo, Erney P; Camargo, Luís M A; Barral, Aldina; Duarte, Angelo; Barral-Netto, Manoel

    2010-05-06

    Accurate malaria diagnosis is mandatory for the treatment and management of severe cases. Moreover, individuals with asymptomatic malaria are not usually screened by health care facilities, which further complicates disease control efforts. The present study compared the performances of a malaria rapid diagnostic test (RDT), the thick blood smear method and nested PCR for the diagnosis of symptomatic malaria in the Brazilian Amazon. In addition, an innovative computational approach was tested for the diagnosis of asymptomatic malaria. The study was divided in two parts. For the first part, passive case detection was performed in 311 individuals with malaria-related symptoms from a recently urbanized community in the Brazilian Amazon. A cross-sectional investigation compared the diagnostic performance of the RDT Optimal-IT, nested PCR and light microscopy. The second part of the study involved active case detection of asymptomatic malaria in 380 individuals from riverine communities in Rondônia, Brazil. The performances of microscopy, nested PCR and an expert computational system based on artificial neural networks (MalDANN) using epidemiological data were compared. Nested PCR was shown to be the gold standard for diagnosis of both symptomatic and asymptomatic malaria because it detected the greatest number of cases and presented the maximum specificity. Surprisingly, the RDT was superior to microscopy in the diagnosis of cases with low parasitaemia. Nevertheless, the RDT could not discriminate the Plasmodium species in 12 cases of mixed infections (Plasmodium vivax + Plasmodium falciparum). Moreover, microscopy performed poorly in the detection of asymptomatic cases (61.25% of correct diagnoses). The MalDANN system using only epidemiological data was worse than light microscopy (56% of correct diagnoses). However, when information regarding plasma levels of interleukin-10 and interferon-gamma was input, the MalDANN performance increased appreciably (80% of correct diagnoses). An RDT for malaria diagnosis may find promising use in the Brazilian Amazon as part of a rational diagnostic approach. Despite the low performance of the MalDANN test using epidemiological data alone, an approach based on neural networks may be feasible in cases where simpler methods for discriminating individuals below and above threshold cytokine levels are available.
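
    As a rough illustration of the neural-network approach behind MalDANN, the sketch below trains a small multilayer perceptron on two cytokine features. It is a minimal sketch on synthetic data with assumed distributions; the actual MalDANN architecture and training data are not reproduced here.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical data: IL-10 and IFN-gamma plasma levels per individual,
    # labeled 1 for infected (nested PCR positive) and 0 otherwise.
    rng = np.random.default_rng(42)
    neg = rng.lognormal(mean=1.0, sigma=0.4, size=(200, 2))
    pos = rng.lognormal(mean=1.6, sigma=0.4, size=(200, 2))
    X = np.vstack([neg, pos])
    y = np.array([0] * 200 + [1] * 200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
    ```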

  9. Improved computer programs for calculating potential flow in propulsion system inlets

    NASA Technical Reports Server (NTRS)

    Stockman, N. O.; Farrell, C. A., Jr.

    1977-01-01

    Computer programs to calculate the incompressible potential flow corrected for compressibility in axisymmetric inlets at arbitrary operating conditions are presented. Included are a statement of the problem to be solved, a description of each of the programs and sufficient documentation, including a test case, to enable a user to run the programs.

  10. Developing Instructional Applications at the Secondary Level. The Computer as a Tool.

    ERIC Educational Resources Information Center

    McManus, Jack; And Others

    Case studies are presented for seven Los Angeles area (California) high schools that worked with Pepperdine University in the IBM/ETS (International Business Machines/Educational Testing Service) Model Schools program, a project which provided training for selected secondary school teachers in the use of personal computers and selected software as…

  11. Simulation of 3-D Nonequilibrium Seeded Air Flow in the NASA-Ames MHD Channel

    NASA Technical Reports Server (NTRS)

    Gupta, Sumeet; Tannehill, John C.; Mehta, Unmeel B.

    2004-01-01

    The 3-D nonequilibrium seeded air flow in the NASA-Ames experimental MHD channel has been numerically simulated. The channel contains a nozzle section, a center section, and an accelerator section where magnetic and electric fields can be imposed on the flow. In recent tests, velocity increases of up to 40% have been achieved in the accelerator section. The flow in the channel is numerically computed using a 3-D parabolized Navier-Stokes (PNS) algorithm that has been developed to efficiently compute MHD flows in the low magnetic Reynolds number regime. The MHD effects are modeled by introducing source terms into the PNS equations, which can then be solved in a very efficient manner. The algorithm has been extended in the present study to account for nonequilibrium seeded air flows. The electrical conductivity of the flow is determined using the program of Park. The new algorithm has been used to compute two test cases that match the experimental conditions. In both cases, magnetic and electric fields are applied to the seeded flow. The computed results are in good agreement with the experimental data.

  12. Five Data Validation Cases

    ERIC Educational Resources Information Center

    Simkin, Mark G.

    2008-01-01

    Data-validation routines enable computer applications to test data to ensure their accuracy, completeness, and conformance to industry or proprietary standards. This paper presents five programming cases that require students to validate five different types of data: (1) simple user data entries, (2) UPC codes, (3) passwords, (4) ISBN numbers, and…
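
    As an example of the second case type, a UPC-A code is validated by a weighted check-digit sum. A minimal Python sketch:

    ```python
    def valid_upc_a(code):
        """Validate a 12-digit UPC-A string via its check digit.

        Digits in odd positions (1st, 3rd, ...) are weighted 3 and digits
        in even positions are weighted 1; a valid code sums to a multiple
        of 10.
        """
        if len(code) != 12 or not code.isdigit():
            return False
        digits = [int(c) for c in code]
        total = 3 * sum(digits[0::2]) + sum(digits[1::2])
        return total % 10 == 0

    print(valid_upc_a("036000291452"))  # True: a commonly cited example
    print(valid_upc_a("036000291453"))  # False: corrupted check digit
    ```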

  13. Applying Bayesian Item Selection Approaches to Adaptive Tests Using Polytomous Items

    ERIC Educational Resources Information Center

    Penfield, Randall D.

    2006-01-01

    This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…
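
    For the partial credit model named above, the item information at ability theta equals the variance of the item score given theta, which makes information-based selection easy to sketch. The following minimal Python illustration selects the most informative item at a point estimate of theta; MEI and MPI replace this point evaluation with expected or posterior-weighted information. The item bank and step difficulties are hypothetical.

    ```python
    import numpy as np

    def pcm_probs(theta, deltas):
        """Category probabilities for the partial credit model.

        deltas : step difficulties d_1..d_m for an item scored 0..m;
        P(X=x) is proportional to exp( sum_{j<=x} (theta - d_j) ).
        """
        steps = np.concatenate([[0.0], np.cumsum(theta - np.asarray(deltas))])
        p = np.exp(steps - steps.max())
        return p / p.sum()

    def pcm_information(theta, deltas):
        """For the PCM, item information equals Var(X | theta)."""
        p = pcm_probs(theta, deltas)
        x = np.arange(len(p))
        return (p * x**2).sum() - (p * x).sum() ** 2

    # Point-information selection at the current ability estimate.
    bank = {"item1": [-0.5, 0.5], "item2": [0.0, 1.2], "item3": [-1.5, -0.2]}
    theta_hat = 0.3
    best = max(bank, key=lambda i: pcm_information(theta_hat, bank[i]))
    print(best, {i: round(pcm_information(theta_hat, d), 3)
                 for i, d in bank.items()})
    ```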

  14. Variability in the Propagation Phase of CFD-Based Noise Prediction: Summary of Results From Category 8 of the BANC-III Workshop

    NASA Technical Reports Server (NTRS)

    Lopes, Leonard; Redonnet, Stephane; Imamura, Taro; Ikeda, Tomoaki; Zawodny, Nikolas; Cunha, Guilherme

    2015-01-01

    The usage of Computational Fluid Dynamics (CFD) in noise prediction typically has been a two-part process: accurately predicting the flow conditions in the near-field and then propagating the noise from the near-field to the observer. Due to the increase in computing power and the cost benefit when weighed against wind tunnel testing, the usage of CFD to estimate the local flow field of complex geometrical structures has become more routine. Recently, the Benchmark problems in Airframe Noise Computation (BANC) workshops have provided a community focus on accurately simulating the local flow field near the body with various CFD approaches. However, to date, little effort has been given to assessing the impact of the propagation phase of noise prediction. This paper includes results from the BANC-III workshop which explores variability in the propagation phase of CFD-based noise prediction. This includes two test cases: an analytical solution of a quadrupole source near a sphere and a computational solution around a nose landing gear. Agreement between three codes was very good for the analytic test case, but CFD-based noise predictions indicate that the propagation phase can introduce 3 dB or more of variability in noise predictions.

  15. Computer tomography of flows external to test models

    NASA Technical Reports Server (NTRS)

    Prikryl, I.; Vest, C. M.

    1982-01-01

    Computer tomographic techniques for the reconstruction of three-dimensional aerodynamic density fields from interferograms recorded from several different viewing directions were studied. Emphasis is on the case in which an opaque object, such as a test model in a wind tunnel, obscures significant regions of the interferograms (projection data). A method called the Iterative Convolution Method (ICM), existing methods in which the field is represented by series expansions, and analysis of real experimental data in the form of aerodynamic interferograms are discussed.

  16. A hybrid model for combining case-control and cohort studies in systematic reviews of diagnostic tests

    PubMed Central

    Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao

    2014-01-01

    Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy focus only on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population-averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite-likelihood-based inference does not suffer from computational problems and maintains high relative efficiency. In addition, it is more robust to model misspecification than the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179
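
    The prevalence-dependent quantities the hybrid model targets, population-averaged positive and negative predictive values, follow from Bayes' rule given sensitivity, specificity, and prevalence. A minimal sketch with hypothetical inputs:

    ```python
    def predictive_values(sens, spec, prev):
        """Population-averaged predictive values from sensitivity,
        specificity, and disease prevalence (Bayes' rule)."""
        ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
        npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
        return ppv, npv

    # Hypothetical imaging test: 90% sensitive, 85% specific, 10% prevalence
    ppv, npv = predictive_values(0.90, 0.85, 0.10)
    print(f"PPV={ppv:.3f}  NPV={npv:.3f}")
    ```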

  17. Total variation-based neutron computed tomography

    NASA Astrophysics Data System (ADS)

    Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick

    2018-05-01

    We formulate the neutron computed tomography reconstruction problem as an inverse problem with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to efficiently compute solutions for this problem, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We demonstrate the effectiveness of the algorithm in the severely angularly undersampled case using synthetic test problems as well as data obtained from a high-flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles is used.
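
    A total variation penalty of the kind used above can be demonstrated on a toy denoising problem. The sketch below uses scikit-image's Chambolle solver, which minimizes the same TV-regularized objective but is a different algorithm from the split Bregman variant implemented in the paper; the phantom and weight are hypothetical.

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    # Hypothetical reconstruction slice: piecewise-constant phantom + noise
    rng = np.random.default_rng(0)
    phantom = np.zeros((64, 64))
    phantom[16:48, 16:48] = 1.0
    noisy = phantom + rng.normal(0, 0.3, phantom.shape)

    # TV regularization rewards piecewise-constant images, so it removes
    # high-frequency artifacts while preserving edges.
    clean = denoise_tv_chambolle(noisy, weight=0.15)
    print("MAE before:", float(np.abs(noisy - phantom).mean()))
    print("MAE after: ", float(np.abs(clean - phantom).mean()))
    ```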

  18. [Clinical and magnetic resonance imaging characteristics of isolated congenital anosmia].

    PubMed

    Liu, Jian-feng; Wang, Jian; You, Hui; Ni, Dao-feng; Yang, Da-zhang

    2010-05-25

    To report a series of patients with isolated congenital anosmia and summarize their clinical and magnetic resonance imaging (MRI) characteristics. Twenty patients with isolated congenital anosmia were reviewed retrospectively. A thorough medical and chemosensory history, physical examination, nasal endoscopy, T&T olfactory testing, olfactory event-related potentials, sinonasal computed tomography scans, and magnetic resonance imaging of the olfactory pathway were performed in all patients. Neither ENT physical examination nor nasal endoscopy was remarkable. Subjective olfactory testing indicated that all patients had anosmia. No olfactory event-related potentials were obtained, even at maximal stimulation. Computed tomography scans were normal. MRI revealed the absence of olfactory bulbs and tracts in all cases, and hypoplasia or aplasia of the olfactory sulcus was found in all cases. All the patients had normal sex hormone levels. The diagnosis of isolated congenital anosmia is established from chief complaints, physical examination, olfactory testing, and olfactory imaging. MRI of the olfactory pathway is indispensable.

  19. Classification of hyperspectral imagery using MapReduce on a NVIDIA graphics processing unit (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ramirez, Andres; Rahnemoonfar, Maryam

    2017-04-01

    A hyperspectral image provides a data-rich, multidimensional representation consisting of hundreds of spectral bands. Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in long computation times. In order to overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that can help analyze a hyperspectral image through parallel hardware and a parallel programming model that is simpler to handle than other low-level parallel programming models. Additionally, Hadoop was used as an open-source implementation of the MapReduce parallel programming model. This research compared classification accuracy and timing results between the Hadoop and GPU systems and tested them against the following test cases: a CPU-and-GPU case, a CPU-only case, and a case in which no dimensionality reduction was applied.

  20. Groundwater flow and heat transport for systems undergoing freeze-thaw: Intercomparison of numerical simulators for 2D test cases

    DOE PAGES

    Grenier, Christophe; Anbergen, Hauke; Bense, Victor; ...

    2018-02-26

    In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. In this paper, this issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.

  1. Groundwater flow and heat transport for systems undergoing freeze-thaw: Intercomparison of numerical simulators for 2D test cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grenier, Christophe; Anbergen, Hauke; Bense, Victor

    In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. In this paper, this issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.

  2. Investigation of a computer virus outbreak in the pharmacy of a tertiary care teaching hospital.

    PubMed

    Bailey, T C; Reichley, R M

    1992-10-01

    A computer virus outbreak was recognized, verified, defined, investigated, and controlled using an infection control approach. The pathogenesis and epidemiology of computer virus infection are reviewed. Case-control study. Pharmacy of a tertiary care teaching institution. On October 28, 1991, 2 personal computers in the drug information center manifested symptoms consistent with the "Jerusalem" virus infection. The same day, a departmental personal computer began playing "Yankee Doodle," a sign of "Doodle" virus infection. An investigation of all departmental personal computers identified the "Stoned" virus in an additional personal computer. Controls were functioning virus-free personal computers within the department. Cases were associated with users who brought diskettes from outside the department (5/5 cases versus 5/13 controls, p = .04) and with College of Pharmacy student users (3/5 cases versus 0/13 controls, p = .012). The detection of a virus-infected diskette or personal computer was associated with the number of 5 1/4-inch diskettes in the files of personal computers, a surrogate for rate of media exchange (mean = 17.4 versus 152.5, p = .018, Wilcoxon rank sum test). After education of departmental personal computer users regarding appropriate computer hygiene and installation of virus protection software, no further spread of personal computer viruses occurred, although 2 additional Stoned-infected and 1 Jerusalem-infected diskettes were detected. We recommend that virus detection software be installed on personal computers where the interchange of diskettes among computers is necessary, that write-protect tabs be placed on all program master diskettes and data diskettes where data are being read and not written, that in the event of a computer virus outbreak, all available diskettes be quarantined and scanned by virus detection software, and to facilitate quarantine and scanning in an outbreak, that diskettes be stored in organized files.
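
    As a quick illustration of the case-control arithmetic above, the outside-diskette exposure comparison (5/5 cases versus 5/13 controls) can be fed to Fisher's exact test. The sketch below reconstructs the 2x2 table from the abstract's counts; the two-sided p-value lands near the reported .04.

```python
from scipy.stats import fisher_exact

# Rows: cases, controls; columns: exposed (outside diskettes), not exposed.
# Counts reconstructed from the abstract: 5/5 cases vs. 5/13 controls.
table = [[5, 0],
         [5, 8]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio}, p = {p_value:.3f}")  # p is roughly 0.04
```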

  3. A Computational Study of an Oscillating VR-12 Airfoil with a Gurney Flap

    NASA Technical Reports Server (NTRS)

    Rhee, Myung

    2004-01-01

    Computations of the flow over an oscillating airfoil with a Gurney flap are performed using a Reynolds-averaged Navier-Stokes code and compared with recent experimental data. The experimental results were generated for several sizes of Gurney flap; the computations focus mainly on a single configuration. The baseline airfoil without a Gurney flap is computed and compared with the experiments in both steady and unsteady cases for the purpose of initial testing of the code performance. The computations are carried out with different turbulence models. Effects of grid refinement are also examined for the steady and unsteady cases, in addition to an assessment of solver effects. The comparisons of steady lift and drag computations indicate that the code is reasonably accurate for attached flow at steady conditions but largely overpredicts the lift and underpredicts the drag in higher-angle steady flow.

  4. Performance Evaluation of Counter-Based Dynamic Load Balancing Schemes for Massive Contingency Analysis with Different Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Chavarría-Miranda, Daniel

    Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility testing of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose of safe and reliable operation of today’s power grids with less operating margin and more intermittent renewable energy sources. This paper evaluates the performance of counter-based dynamic load balancing schemes for massive contingency analysis under different computing environments. Insights from the performance evaluation can be used as guidance for users to select suitable schemes in the application of massive contingency analysis. Case studies, as well as MATLAB simulations, of massive contingency cases using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing with counter-based dynamic load balancing schemes.
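
    The counter-based scheme evaluated above can be sketched under the assumption that it behaves like a shared task counter that idle workers atomically increment to claim the next contingency case; the case count and per-case runtimes below are invented stand-ins.

```python
import multiprocessing as mp
import time

def worker(counter, lock, n_cases, results):
    """Each worker repeatedly claims the next unprocessed case by atomically
    incrementing a shared counter, so fast workers naturally pick up more
    cases than slow ones (dynamic load balancing)."""
    while True:
        with lock:
            case_id = counter.value
            if case_id >= n_cases:
                return
            counter.value += 1
        time.sleep(0.001 * (case_id % 7))  # stand-in for a contingency solve
        results.append(case_id)

if __name__ == "__main__":
    n_cases = 100
    counter = mp.Value("i", 0)
    lock = mp.Lock()
    results = mp.Manager().list()
    procs = [mp.Process(target=worker, args=(counter, lock, n_cases, results))
             for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(f"processed {len(results)} contingency cases")
```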

  5. OCCAM: a flexible, multi-purpose and extendable HPC cluster

    NASA Astrophysics Data System (ADS)

    Aldinucci, M.; Bagnasco, S.; Lusso, S.; Pasteris, P.; Rabellino, S.; Vallero, S.

    2017-10-01

    The Open Computing Cluster for Advanced data Manipulation (OCCAM) is a multipurpose flexible HPC cluster designed and operated by a collaboration between the University of Torino and the Sezione di Torino of the Istituto Nazionale di Fisica Nucleare. It is aimed at providing a flexible, reconfigurable and extendable infrastructure to cater to a wide range of different scientific computing use cases, including ones from solid-state chemistry, high-energy physics, computer science, big data analytics, computational biology, genomics and many others. Furthermore, it will serve as a platform for R&D activities on computational technologies themselves, with topics ranging from GPU acceleration to Cloud Computing technologies. A heterogeneous and reconfigurable system like this poses a number of challenges related to the frequency at which heterogeneous hardware resources might change their availability and shareability status, which in turn affect methods and means to allocate, manage, optimize, bill, monitor VMs, containers, virtual farms, jobs, interactive bare-metal sessions, etc. This work describes some of the use cases that prompted the design and construction of the HPC cluster, its architecture and resource provisioning model, along with a first characterization of its performance by some synthetic benchmark tools and a few realistic use-case tests.

  6. Computer generated maps from digital satellite data - A case study in Florida

    NASA Technical Reports Server (NTRS)

    Arvanitis, L. G.; Reich, R. M.; Newburne, R.

    1981-01-01

    Ground cover maps are important tools to a wide array of users. Over the past three decades, much progress has been made in supplementing planimetric and topographic maps with ground cover details obtained from aerial photographs. The present investigation evaluates the feasibility of using computer maps of ground cover from satellite input tapes. Attention is given to the selection of test sites, a satellite data processing system, a multispectral image analyzer, general purpose computer-generated maps, the preliminary evaluation of computer maps, a test for areal correspondence, the preparation of overlays and acreage estimation of land cover types on the Landsat computer maps. There is every indication to suggest that digital multispectral image processing systems based on Landsat input data will play an increasingly important role in pattern recognition and mapping land cover in the years to come.

  7. HAL/S-360 compiler test activity report

    NASA Technical Reports Server (NTRS)

    Helmers, C. T.

    1974-01-01

    The levels of testing employed in verifying the HAL/S-360 compiler were as follows: (1) typical applications program case testing; (2) functional testing of the compiler system and its generated code; and (3) machine oriented testing of compiler implementation on operational computers. Details of the initial test plan and subsequent adaptation are reported, along with complete test results for each phase which examined the production of object codes for every possible source statement.

  8. Optimization and large scale computation of an entropy-based moment closure

    NASA Astrophysics Data System (ADS)

    Kristopher Garrett, C.; Hauck, Cory; Hill, Judith

    2015-12-01

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  9. Optimization and large scale computation of an entropy-based moment closure

    DOE PAGES

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. Lastly, these results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  10. Computer vision applied to herbarium specimens of German trees: testing the future utility of the millions of herbarium specimen images for automated identification.

    PubMed

    Unger, Jakob; Merhof, Dorit; Renner, Susanne

    2016-11-16

    Global Plants, a collaboration between JSTOR and some 300 herbaria, now contains about 2.48 million high-resolution images of plant specimens, a number that continues to grow, and collections that are digitizing their specimens at high resolution are allocating considerable resources to the maintenance of computer hardware (e.g., servers) and to acquiring digital storage space. We here apply machine learning, specifically the training of a Support-Vector-Machine, to classify specimen images into categories, ideally at the species level, using the 26 most common tree species in Germany as a test case. We designed an analysis pipeline and classification system consisting of segmentation, normalization, feature extraction, and classification steps and evaluated the system on two test sets, one with 26 species, the other with 17, in each case using 10 images per species of plants collected between 1820 and 1995, which simulates the empirical situation that most named species are represented in herbaria and databases, such as JSTOR, by few specimens. We achieved 73.21% accuracy of species assignments in the larger test set, and 84.88% in the smaller test set. The results of this first application of a computer vision algorithm trained on images of herbarium specimens show that, despite the problem of overlapping leaves, leaf-architectural features can be used to assign specimens to species with good accuracy. Computer vision is poised to play a significant role in future rapid identification, at least for frequently collected genera or species in the European flora.
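
    The segmentation/normalization/feature-extraction/classification pipeline described above maps naturally onto a scikit-learn pipeline. The sketch below assumes leaf-architectural features have already been extracted into a fixed-length vector per specimen; the feature matrix, labels, and SVM settings are placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 260 specimens (10 per species, 26 species), each reduced
# to a hypothetical 64-dimensional leaf-architecture feature vector.
rng = np.random.RandomState(0)
X = rng.rand(260, 64)
y = np.repeat(np.arange(26), 10)

# Normalization followed by a support-vector classifier; the kernel and
# regularization settings here are guesses, not the published choices.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2%}")
```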

  11. Computer programs for calculating two-dimensional potential flow in and about propulsion system inlets

    NASA Technical Reports Server (NTRS)

    Hawk, J. D.; Stockman, N. O.; Farrell, C. A., Jr.

    1978-01-01

    Calculations of incompressible potential flow, corrected for compressibility, in two-dimensional inlets at arbitrary operating conditions are presented. Included are a statement of the problem to be solved, a description of each of the computer programs, and sufficient documentation, including a test case, to enable a user to run the program.

  12. Computer-Assisted School Facility Planning with ONPASS.

    ERIC Educational Resources Information Center

    Urban Decision Systems, Inc., Los Angeles, CA.

    The analytical capabilities of ONPASS, an on-line computer-aided school facility planning system, are described by its developers. This report describes how, using the Canoga Park-Winnetka-Woodland Hills Planning Area as a test case, the Department of City Planning of the city of Los Angeles employed ONPASS to demonstrate how an on-line system can…

  13. Critical Thinking Outcomes of Computer-Assisted Instruction versus Written Nursing Process.

    ERIC Educational Resources Information Center

    Saucier, Bonnie L.; Stevens, Kathleen R.; Williams, Gail B.

    2000-01-01

    Nursing students (n=43) who used clinical case studies via computer-assisted instruction (CAI) were compared with 37 who used the written nursing process (WNP). California Critical Thinking Skills Test results did not show significant increases in critical thinking. The WNP method was more time consuming; the CAI group was more satisfied. Use of…

  14. IMPLEMENTATION OF THE IMPROVED QUASI-STATIC METHOD IN RATTLESNAKE/MOOSE FOR TIME-DEPENDENT RADIATION TRANSPORT MODELLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zachary M. Prince; Jean C. Ragusa; Yaqi Wang

    Because of the recent interest in reactor transient modeling and the restart of the Transient Reactor Test (TREAT) Facility, there has been a need for more efficient, robust methods in computational frameworks. This is the impetus for implementing the Improved Quasi-Static method (IQS) in the RATTLESNAKE/MOOSE framework. IQS has been implemented with CFEM diffusion by factorizing the flux into a time-dependent amplitude and a spatially dependent, weakly time-dependent shape. The shape evaluation is very similar to a flux diffusion solve and is computed at large (macro) time steps, while the amplitude evaluation is a PRKE solve, whose parameters depend on the shape, computed at small (micro) time steps. IQS has been tested with a custom one-dimensional example and the TWIGL ramp benchmark. These examples prove it to be a viable and effective method for highly transient cases. More complex cases will be run to further test the method and its implementation.
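
    Since the amplitude step in IQS is a point reactor kinetics equation (PRKE) solve, a minimal one-delayed-group PRKE integration conveys the flavor of that micro-step. The reactivity step and kinetics constants below are illustrative, not taken from the TWIGL benchmark or the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, lam, Lambda = 0.0065, 0.08, 1e-5  # delayed fraction, decay const, gen. time

def rho(t):
    """Illustrative step reactivity insertion of half a dollar at t = 1 s."""
    return 0.5 * beta if t >= 1.0 else 0.0

def prke(t, y):
    n, c = y  # amplitude and one-group delayed-neutron precursor concentration
    dn = (rho(t) - beta) / Lambda * n + lam * c
    dc = beta / Lambda * n - lam * c
    return [dn, dc]

# Start from equilibrium: c0 is chosen so that dc/dt = 0 at n = 1.
y0 = [1.0, beta / (lam * Lambda)]
sol = solve_ivp(prke, (0.0, 5.0), y0, method="LSODA", max_step=0.01)
print(f"amplitude at t = 5 s: {sol.y[0, -1]:.3f}")
```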

  15. Development of an Axisymmetric Afterbody Test Case for Turbulent Flow Separation Validation

    NASA Technical Reports Server (NTRS)

    Disotell, Kevin J.; Rumsey, Christopher L.

    2017-01-01

    As identified in the CFD Vision 2030 Study commissioned by NASA, validation of advanced RANS models and scale-resolving methods for computing turbulent flows must be supported by improvements in high-quality experiments designed specifically for CFD implementation. A new test platform referred to as the Axisymmetric Afterbody allows a range of flow behaviors to be studied on interchangeable afterbodies while facilitating access to higher Reynolds number facilities. A priori RANS computations are reported for a risk-reduction configuration to demonstrate critical variation among turbulence model results for a given afterbody, ranging from barely attached to mildly separated flow. The effects of body nose geometry and tunnel-wall boundary condition on the computed afterbody flow are explored to inform the design of an experimental test program.

  16. Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities

    NASA Astrophysics Data System (ADS)

    Garzoglio, Gabriele

    2012-12-01

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.

  17. Investigation of storage options for scientific computing on Grid and Cloud facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzoglio, Gabriele

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on bare metal nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.

  18. An automated procedure for developing hybrid computer simulations of turbofan engines

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.; Krosel, S. M.

    1980-01-01

    A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all of the calculations and data manipulations needed to transform user-supplied engine design information to a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design point information. A test case is described and comparisons between hybrid simulation and specified engine performance data are presented.

  19. Use Case Analysis for Adopting Cloud Computing in Army Test and Evaluation

    DTIC Science & Technology

    2010-09-01

    [No abstract available; the extracted fragments are table-of-contents and figure-list entries referencing the Federal Risk and Authorization Management Program (FedRAMP) and the NASA Nebula cloud computing platform.]

  20. Analysis of selected data from the triservice missile data base

    NASA Technical Reports Server (NTRS)

    Allen, Jerry M.; Shaw, David S.; Sawyer, Wallace C.

    1989-01-01

    An extremely large, systematic, axisymmetric-body/tail-fin data base has been gathered through tests of an innovative missile model design which is described herein. These data were originally obtained for incorporation into a missile aerodynamics code based on engineering methods (Program MISSILE3), but these data are also valuable as diagnostic test cases for developing computational methods because of the individual-fin data included in the data base. Detailed analyses of four sample cases from these data are presented to illustrate interesting individual-fin force and moment trends. These samples quantitatively show how bow shock, fin orientation, fin deflection, and body vortices can produce strong, unusual, and computationally challenging effects on individual fin loads. Flow-visualization photographs are examined to provide physical insight into the cause of these effects.

  1. Comparison of PLIF and CFD Results for the Orion CEV RCS Jets

    NASA Technical Reports Server (NTRS)

    Ivey, Christopher B.; Danehy, Paul M.; Bathel, Brett F.; Dyakonov, Artem A.; Inman, Jennifer A.; Jones, Stephen B.

    2011-01-01

    Nitric-oxide planar laser-induced fluorescence (NO PLIF) was used to visualize and measure centerline streamwise velocity of the Orion Crew Exploration Vehicle (CEV) Reaction Control System (RCS) jets at NASA Langley Research Center's 31-Inch Mach 10 Air wind tunnel. Fluorescence flow visualizations of pitch, roll, and yaw RCS jets were obtained using different plenum pressures and wind tunnel operating stagnation pressures. For two yaw RCS jet test cases, the PLIF visualizations were compared to computational flow imaging (CFI) images based on Langley Aerothermal Upwind Relaxation Algorithm (LAURA) computational fluid dynamics (CFD) simulations of the flowfield. For the same test cases, the streamwise velocity measurements were compared to CFD. The CFD solutions, while showing some unphysical artifacts, generally agree with the experimental measurements.

  2. Computation of multi-dimensional viscous supersonic flow

    NASA Technical Reports Server (NTRS)

    Buggeln, R. C.; Kim, Y. N.; Mcdonald, H.

    1986-01-01

    A method has been developed for two- and three-dimensional computations of viscous supersonic jet flows interacting with an external flow. The approach employs a reduced form of the Navier-Stokes equations which allows solution as an initial-boundary value problem in space, using an efficient noniterative forward marching algorithm. Numerical instability associated with forward marching algorithms for flows with embedded subsonic regions is avoided by approximation of the reduced form of the Navier-Stokes equations in the subsonic regions of the boundary layers. Supersonic and subsonic portions of the flow field are simultaneously calculated by a consistently split linearized block implicit computational algorithm. The results of computations for a series of test cases associated with supersonic jet flow are presented and compared with other calculations for axisymmetric cases. Demonstration calculations indicate that the computational technique has great promise as a tool for calculating a wide range of supersonic flow problems, including jet flow. Finally, a User's Manual is presented for the computer code used to perform the calculations.

  3. Supporting High School Student Accomplishment of Biology Content Using Interactive Computer-Based Curricular Case Studies

    NASA Astrophysics Data System (ADS)

    Oliver, Joseph Steve; Hodges, Georgia W.; Moore, James N.; Cohen, Allan; Jang, Yoonsun; Brown, Scott A.; Kwon, Kyung A.; Jeong, Sophia; Raven, Sara P.; Jurkiewicz, Melissa; Robertson, Tom P.

    2017-11-01

    Research into the efficacy of modules featuring dynamic visualizations, case studies, and interactive learning environments is reported here. This quasi-experimental 2-year study examined the implementation of three interactive computer-based instructional modules within a curricular unit covering cellular biology concepts in an introductory high school biology course. The modules featured dynamic visualizations and focused on three processes that underlie much of cellular biology: diffusion, osmosis, and filtration. Pre-tests and post-tests were used to assess knowledge growth across the unit. A mixture Rasch model analysis of the post-test data revealed two groups of students. In both years of the study, a large proportion of the students were classified as low-achieving based on their pre-test scores. The use of the modules in the Cell Unit in year 2 was associated with a much larger proportion of the students having transitioned to the high-achieving group than in year 1. In year 2, the same teachers taught the same concepts as year 1 but incorporated the interactive computer-based modules into the cell biology unit of the curriculum. In year 2, 67% of students initially classified as low-achieving were classified as high-achieving at the end of the unit. Examination of responses to assessments embedded within the modules as well as post-test items linked transition to the high-achieving group with correct responses to items that both referenced the visualization and the contextualization of that visualization within the module. This study points to the importance of dynamic visualization within contextualized case studies as a means to support student knowledge acquisition in biology.

  4. Relative value of computed tomography scanning and venous sampling in establishing the cause of primary hyperaldosteronism.

    PubMed

    Sheaves, R; Goldin, J; Reznek, R H; Chew, S L; Dacie, J E; Lowe, D G; Ross, R J; Wass, J A; Besser, G M; Grossman, A B

    1996-03-01

    The purpose of this study was to evaluate the relative merits of the postural stimulation test, adrenal computed tomography (CT) and venous sampling in the differential diagnosis of patients presenting with primary hyperaldosteronism. The records of 20 patients presenting with primary hyperaldosteronism were reviewed retrospectively. There were 15 patients with a unilateral aldosterone-producing adenoma (APA), four patients with idiopathic hyperaldosteronism (IHA) and one patient with primary adrenal hyperplasia (PAH). The postural stimulation test was based on measurements of plasma aldosterone and renin activity at 08.00 h and at noon after 4 h of ambulation. The CT scans of the adrenals were reviewed by a single radiologist. Bilateral venous sampling of adrenal veins was attempted in all patients and blood collected for aldosterone and cortisol assay. Plasma aldosterone concentration increased after 4 h of standing in all cases of hyperplasia, but a rise was also demonstrated in 10/15 patients with a surgically-proven APA. If one defines a significant postural rise as being greater than 30%, then 8/15 patients with APA can be considered posturally responsive. Computed tomography scanning correctly identified all 15 cases of APA and also classified correctly the remaining five cases of hyperplasia (four cases of IHA and one case of PAH). Venous sampling failed technically in 4/15 cases of APA and in one case of IHA: a total of 5/20 (25%). A correct diagnosis of APA or IHA was established in all the remaining cases. However, the one case of PAH was treated successfully by adrenalectomy following venous sampling, which suggested a unilateral adrenal lesion: this one result was the only instance where venous sampling altered clinical decision-making. Computed tomography scanning may be used alone to confirm the cause of hyperaldosteronism where postural studies suggest an adrenal adenoma, and such patients may be considered for early surgery. Venous catheterization studies are not necessary routinely, but may still be useful in selected patients, particularly when CT scanning shows no clear lesion.

  5. Wald Sequential Probability Ratio Test for Space Object Conjunction Assessment

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F Landis

    2014-01-01

    This paper shows how satellite owner/operators may use sequential estimates of collision probability, along with a prior assessment of the base risk of collision, in a compound hypothesis ratio test to inform decisions concerning collision risk mitigation maneuvers. The compound hypothesis test reduces to a simple probability ratio test, which appears to be a novel result. The test satisfies tolerances related to targeted false alarm and missed detection rates. This result is independent of the method one uses to compute the probability density that one integrates to compute collision probability. A well-established test case from the literature shows that this test yields acceptable results within the constraints of a typical operational conjunction assessment decision timeline. Another example illustrates the use of the test in a practical conjunction assessment scenario based on operations of the International Space Station.
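
    A minimal sketch of the Wald sequential probability ratio test logic described above, with the classical thresholds derived from the false-alarm rate alpha and missed-detection rate beta; the Gaussian likelihood model here is a generic stand-in, not the paper's collision-probability formulation.

```python
import math
import random

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.05):
    """Accumulate the log-likelihood ratio sample by sample and stop as soon
    as it crosses one of Wald's thresholds."""
    upper = math.log((1 - beta) / alpha)   # accept H1 (e.g., maneuver needed)
    lower = math.log(beta / (1 - alpha))   # accept H0 (no action needed)
    llr = 0.0
    for k, x in enumerate(samples, 1):
        # Log-likelihood ratio increment for Gaussian data with known sigma.
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "accept H1", k
        if llr <= lower:
            return "accept H0", k
    return "undecided", len(samples)

random.seed(0)
data = [random.gauss(1.0, 1.0) for _ in range(200)]  # data drawn under H1
print(sprt(data))
```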

  6. An open source software for fast grid-based data-mining in spatial epidemiology (FGBASE).

    PubMed

    Baker, David M; Valleron, Alain-Jacques

    2014-10-30

    Examining whether disease cases are clustered in space is an important part of epidemiological research. Another important part of spatial epidemiology is testing whether patients suffering from a disease are more, or less, exposed to environmental factors of interest than adequately defined controls. Both approaches involve determining the number of cases and controls (or population at risk) in specific zones. For cluster searches, this often must be done for millions of different zones. Doing this by calculating distances can lead to very lengthy computations. In this work we discuss the computational advantages of geographical grid-based methods, and introduce an open-source software package (FGBASE) which we have created for this purpose. Geographical grids based on the Lambert Azimuthal Equal Area projection are well suited for spatial epidemiology because they preserve area: each cell of the grid has the same area. We describe how data are projected onto such a grid, as well as grid-based algorithms for spatial epidemiological data-mining. The software program (FGBASE) that we have developed implements these grid-based methods. The grid-based algorithms perform extremely fast. This is particularly the case for cluster searches. When applied to a cohort of French Type 1 Diabetes (T1D) patients, as an example, the grid-based algorithms detected potential clusters in a few seconds on a modern laptop. This compares very favorably to an equivalent cluster search using distance calculations instead of a grid, which took over 4 hours on the same computer. In the case study we discovered 4 potential clusters of T1D cases near the cities of Le Havre, Dunkerque, Toulouse and Nantes. One example of environmental analysis with our software was to study whether a significant association could be found between disease status and distance to vineyards with heavy pesticide use. None was found. In both examples, the software facilitates the rapid testing of hypotheses. Grid-based algorithms for mining spatial epidemiological data provide advantages in terms of computational complexity, thus improving the speed of computations. We believe that these methods and this software tool (FGBASE) will lower the computational barriers to entry for those performing epidemiological research.
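
    The computational advantage described above comes from replacing pairwise distance calculations with counts over equal-area grid cells. A toy sketch of the binning step follows, with random coordinates standing in for projected case/control locations (the grid size and window are arbitrary choices, not FGBASE parameters).

```python
import numpy as np

rng = np.random.RandomState(0)
# Stand-ins for case and control coordinates already projected onto an
# equal-area plane (e.g., Lambert Azimuthal Equal Area), in kilometres.
cases = rng.rand(5000, 2) * 1000
controls = rng.rand(50000, 2) * 1000

# Bin both point sets once onto the same 10 km grid; any subsequent zone
# query is then a cheap sum over cells instead of a distance computation.
edges = np.arange(0, 1001, 10)
case_grid, _, _ = np.histogram2d(cases[:, 0], cases[:, 1], bins=[edges, edges])
ctrl_grid, _, _ = np.histogram2d(controls[:, 0], controls[:, 1], bins=[edges, edges])

# Example zone query: cases and controls inside a 50 km x 50 km window.
window = (slice(20, 25), slice(40, 45))
print(case_grid[window].sum(), ctrl_grid[window].sum())
```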

  7. Computer-Aided Grading of Lymphangioleiomyomatosis (LAM) using HRCT

    PubMed Central

    Yao, Jianhua; Avila, Nilo; Dwyer, Andrew; Taveira-DaSilva, Angelo M.; Hathaway, Olanda M.; Moss, Joel

    2010-01-01

    Lymphangioleiomyomatosis (LAM) is a multisystem disorder associated with proliferation of smooth muscle-like cells, which leads to destruction of lung parenchyma. Subjective grading of LAM on HRCT is imprecise and can be arduous especially in cases with severe involvement. We propose a computer-aided evaluation system that grades LAM involvement based on analysis of lung texture patterns. A committee of support vector machines is employed for classification. The system was tested on 36 patients. The computer grade demonstrates good correlation with subjective radiologist grade (R=0.91, p<0.0001) and pulmonary functional tests (R=0.85, p<0.0001). The grade also provides precise progression assessment of disease over time. PMID:21625320

  8. EM-ANIMATE: A Computer Program for Displaying and Animating Electromagnetic Near-Field and Surface-Current Solutions: Video Supplement to NASA Technical Memorandum 4539

    NASA Technical Reports Server (NTRS)

    Hom, Kam W.

    1994-01-01

    In this video, several examples of electromagnetic field and surface-current animation sequences are shown to demonstrate the visualization capabilities of the EM-ANIMATE computer program. These examples show the animation of total and scattered electric near fields from test bodies of a flat plate, a corner reflector, and a sphere. These test cases show the electric-field behavior caused by different scattering mechanisms through the animation of electromagnetic data from the EM-ANIMATE routine.

  9. Validation of High-Speed Turbulent Boundary Layer and Shock-Boundary Layer Interaction Computations with the OVERFLOW Code

    NASA Technical Reports Server (NTRS)

    Oliver, A. B.; Lillard, R. P.; Blaisdell, G. A.; Lyrintizis, A. S.

    2006-01-01

    The capability of the OVERFLOW code to accurately compute high-speed turbulent boundary layers and turbulent shock-boundary layer interactions is being evaluated. Configurations being investigated include a Mach 2.87 flat plate to compare experimental velocity profiles and boundary layer growth, a Mach 6 flat plate to compare experimental surface heat transfer, a direct numerical simulation (DNS) at Mach 2.25 for turbulent quantities, and several Mach 3 compression ramps to compare computations of shock-boundary layer interactions to experimental laser doppler velocimetry (LDV) data and hot-wire data. The present paper outlines the study and presents preliminary results for two of the flat plate cases and two small-angle compression corner test cases.

  10. Computer and laboratory simulation of interactions between spacecraft surfaces and charged-particle environments

    NASA Technical Reports Server (NTRS)

    Stevens, N. J.

    1979-01-01

    Cases where the charged-particle environment acts on the spacecraft (e.g., spacecraft charging phenomena) and cases where a system on the spacecraft causes the interaction (e.g., high voltage space power systems) are considered. Both categories were studied in ground simulation facilities to understand the processes involved and to measure the pertinent parameters. Computer simulations are based on the NASA Charging Analyzer Program (NASCAP) code. Analytical models are developed in this code and verified against the experimental data. Extrapolation from the small test samples to space conditions is made with this code. Typical results from laboratory and computer simulations are presented for both types of interactions. Extrapolations from these simulations to performance in space environments are discussed.

  11. Testing under the Microscope: How Common Core-Aligned Assessments Place Demands on Time, Technology, and Connectivity

    ERIC Educational Resources Information Center

    Stephens, Wendy

    2014-01-01

    School librarians and other specialists should take note of the new wave of computer-based assessments required by the Common Core State Standards (CCSS), testing that is only now beginning to get off the ground. The new testing necessitates that districts possess the right hardware, install the requisite software, and, in some cases, create…

  12. NASA/MSFC's Calculation for Test Case 1a of ATAC-FSDC Workshop on After-body and Nozzle Flows

    NASA Technical Reports Server (NTRS)

    Ruf, Joseph H.

    2006-01-01

    Mr. Ruf of NASA/MSFC executed the CHEM computational fluid dynamics (CFD) code to provide a prediction of test case 1a for the ATAC-FSDC Workshop on After-body and Nozzle Flows. CHEM is used extensively at MSFC for a wide variety of fluid dynamic problems, including injector element flows, nozzle flows, feed line flows, turbomachinery flows, solid rocket motor internal flows, and plume-vehicle flow interactions.

  13. Exact Green's function method of solar force-free magnetic-field computations with constant alpha. I - Theory and basic test cases

    NASA Technical Reports Server (NTRS)

    Chiu, Y. T.; Hilton, H. H.

    1977-01-01

    Exact closed-form solutions to the solar force-free magnetic-field boundary-value problem are obtained for constant alpha in Cartesian geometry by a Green's function approach. The uniqueness of the physical problem is discussed. Application of the exact results to practical solar magnetic-field calculations is free of series truncation errors and is at least as economical as the approximate methods currently in use. Results of some test cases are presented.
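
    For reference, the constant-alpha force-free boundary-value problem named in this record is governed by the standard textbook equations below; this is the generic formulation, not a reproduction of the paper's Green's function derivation.

```latex
% Constant-alpha force-free field: the current is parallel to B with uniform alpha.
\nabla \times \mathbf{B} = \alpha \mathbf{B}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad \alpha = \text{const.}
% Taking the curl once more yields a vector Helmholtz equation,
% which is what admits closed-form (Green's function) solutions:
\left( \nabla^{2} + \alpha^{2} \right) \mathbf{B} = \mathbf{0}.
```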

  14. Air Intakes for High Speed Vehicles (Prises d’Air pour Vehicules a Grande Vitesse)

    DTIC Science & Technology

    1991-09-01

    A number of test cases for which rather detailed experimental data were available were selected, and the Working Group wishes to express its sincere thanks to those contributors. Under the CFD techniques discussion (Section 3.3.6.3), this test case was attempted by six different research groups, using seven different codes, as noted; Figs. 3.6.2 and 3.6.3 show a comparison of the computed and experimental static pressure distributions on the ramp and cowl of the intake.

  15. Orthopaedic resident preparedness for closed reduction and pinning of pediatric supracondylar fractures is improved by e-learning: a multisite randomized controlled study.

    PubMed

    Hearty, Thomas; Maizels, Max; Pring, Maya; Mazur, John; Liu, Raymond; Sarwark, John; Janicki, Joseph

    2013-09-04

    There is a need to provide more efficient surgical training methods for orthopaedic residents. E-learning could possibly increase resident surgical preparedness, confidence, and comfort for surgery. Using closed reduction and pinning of pediatric supracondylar humeral fractures as the index case, we hypothesized that e-learning could increase resident knowledge acquisition for case preparation in the operating room. An e-learning surgical training module was created on the Computer Enhanced Visual Learning platform. The module provides a detailed and focused road map of the procedure utilizing a multimedia format. A multisite prospective randomized controlled study design compared residents who used a textbook for case preparation (control group) with residents who used the same textbook plus completed the e-learning module (test group). All subjects completed a sixty-question test on the theory and methods of the case. After completion of the test, the control group then completed the module as well. All subjects were surveyed on their opinion regarding the effectiveness of the module after performing an actual surgical case. Twenty-eight subjects with no previous experience in this surgery were enrolled at four academic centers. Subjects were randomized into two equal groups. The test group scored significantly better (p < 0.001) and demonstrated competence on the test compared with the control group; the mean correct test score (and standard deviation) was 90.9% ± 6.8% for the test group and 73.5% ± 6.4% for the control group. All residents surveyed (n = 27) agreed that the module is a useful supplement to traditional methods for case preparation and twenty-two of twenty-seven residents agreed that it reduced their anxiety during the case and improved their attention to surgical detail. E-learning using the Computer Enhanced Visual Learning platform significantly improved preparedness, confidence, and comfort with percutaneous closed reduction and pinning of a pediatric supracondylar humeral fracture. We believe that adapting such methods into residency training programs will improve efficiency in surgical training.
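
    The headline comparison above (90.9% +/- 6.8% versus 73.5% +/- 6.4%, fourteen residents per arm) can be reproduced approximately from the summary statistics alone. A sketch, assuming the reported +/- values are standard deviations and roughly equal variances:

```python
from scipy.stats import ttest_ind_from_stats

# Summary statistics from the abstract; 28 subjects split into two equal arms.
t_stat, p_value = ttest_ind_from_stats(
    mean1=90.9, std1=6.8, nobs1=14,
    mean2=73.5, std2=6.4, nobs2=14,
)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")  # expect p well below 0.001
```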

  16. Flight-Time Identification of a UH-60A Helicopter and Slung Load

    NASA Technical Reports Server (NTRS)

    Cicolani, Luigi S.; McCoy, Allen H.; Tischler, Mark B.; Tucker, George E.; Gatenio, Pinhas; Marmar, Dani

    1998-01-01

    This paper describes a flight test demonstration of a system for identifying the stability and handling qualities parameters of a helicopter-slung load configuration simultaneously with flight testing, and the results obtained. Tests were conducted with a UH-60A Black Hawk at speeds from hover to 80 kts. The principal test load was an instrumented 8 x 6 x 6 ft cargo container. The identification used frequency-domain analysis in the frequency range to 2 Hz, and focused on the longitudinal and lateral control axes since these are the axes most affected by the load pendulum modes in the frequency range of interest for handling qualities. Results were computed for stability margins, handling qualities parameters, and load pendulum stability. The computations took an average of 4 minutes before clearing the aircraft to the next test point. Important reductions in handling qualities were computed in some cases, depending on control axis and load-sling combination. A database, including load dynamics measurements, was accumulated for subsequent simulation development and validation.
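
    Frequency-domain identification of the kind described above typically estimates a frequency response as the ratio of cross- to input auto-spectra. A minimal sketch with synthetic input/output signals follows; the second-order dynamics, sample rate, and noise level are arbitrary stand-ins, not UH-60A data or the paper's method.

```python
import numpy as np
from scipy import signal

fs = 50.0                      # sample rate, Hz
t = np.arange(0, 120, 1 / fs)
rng = np.random.RandomState(0)
u = rng.randn(t.size)          # stand-in for a control-stick sweep input

# Stand-in aircraft response: a lightly damped second-order system plus noise.
system = signal.TransferFunction([1.0], [1.0, 0.8, 4.0])
_, y, _ = signal.lsim(system, u, t)
y = y + 0.05 * rng.randn(t.size)

# Frequency response estimate H(f) = Pxy / Pxx, as in frequency-domain ID.
f, Pxy = signal.csd(u, y, fs=fs, nperseg=1024)
_, Pxx = signal.welch(u, fs=fs, nperseg=1024)
H = Pxy / Pxx
band = f <= 2.0                # handling-qualities band of interest
print(np.abs(H[band]).max())   # peak gain below 2 Hz
```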

  17. Proof of live birth using postmortem multislice computed tomography (pmMSCT) in cases of suspected neonaticide: advantages of diagnostic imaging compared to conventional autopsy.

    PubMed

    Guddat, Saskia S; Gapert, René; Tsokos, Michael; Oesterhelweg, Lars

    2013-03-01

    Proof of live birth is of major importance in suspected neonaticide cases. Although not without controversy, the lung flotation test is the main method used to assess this in different jurisdictions worldwide. The present study examines the usefulness of postmortem multislice computed tomography (pmMSCT) in the detection of live birth signs. Body scans were conducted on four infants: one was stillborn, another died a day after birth, and the other two were classified as neonaticides. The appearance of the lungs, gastrointestinal tract and vascular system of the liver was compared in these cases. Clear differences were discernable between the lungs of the stillborn and the 1 day old infant. The aerated lungs and air in the stomach and duodenum were clearly visible in the latter case, while the stillborn infant lacked these signs. The two neonaticide cases demonstrated similarly aerated lung tissue to the 1 day old infant. The hepatic vessels did not show any putrefactive gas changes in any of the cases. The extent of aeration of the peripheral alveoli was easily observable on the pmMSCT, thus making it a useful tool in the possible differentiation between artificially and naturally aerated lungs. During the four autopsies the classic flotation tests were performed, and similar positive aeration of the lungs in the two neonaticides was shown. The stillborn's tests, on the other hand, were negative for aeration. The results of this study clearly demonstrate the advantages of using pmMSCT before commencing a conventional autopsy in cases of suspected neonaticide.

  18. Effect of Turbulence Models on Two Massively-Separated Benchmark Flow Cases

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2003-01-01

    Two massively-separated flow cases (the 2-D hill and the 3-D Ahmed body) were computed with several different turbulence models in the Reynolds-averaged Navier-Stokes code CFL3D as part of participation in a turbulence modeling workshop held in Poitiers, France in October, 2002. Overall, results were disappointing, but were consistent with results from other RANS codes and other turbulence models at the workshop. For the 2-D hill case, those turbulence models that predicted separation location accurately ended up yielding a too-long separation extent downstream. The one model that predicted a shorter separation extent in better agreement with LES data did so only by coincidence: its prediction of earlier reattachment was due to a too-late prediction of the separation location. For the Ahmed body, two slant angles were computed, and CFD performed fairly well for one of the cases (the larger slant angle). Both turbulence models tested in this case were very similar to each other. For the smaller slant angle, CFD predicted massive separation, whereas the experiment showed reattachment about half-way down the center of the face. These test cases serve as reminders that state-of-the-art CFD is currently not a reliable predictor of massively-separated flow physics, and that further validation studies in this area would be beneficial.

  19. Floating-point geometry: toward guaranteed geometric computations with approximate arithmetics

    NASA Astrophysics Data System (ADS)

    Bajard, Jean-Claude; Langlois, Philippe; Michelucci, Dominique; Morin, Géraldine; Revol, Nathalie

    2008-08-01

    Geometric computations can fail because of inconsistencies due to floating-point inaccuracy. For instance, the computed intersection point between two curves does not lie on the curves: this is unavoidable when the intersection point coordinates are not rational, and thus not representable in floating-point arithmetic. A popular heuristic approach tests equalities and nullities up to a tolerance ɛ. But transitivity of equality is lost: we can have A ≈ B and B ≈ C, but A ≉ C (where A ≈ B means ||A - B|| < ɛ for two floating-point values A, B). Interval arithmetic is another, self-validated, alternative; the difficulty is to limit the swell of the width of intervals with computations. Unfortunately interval arithmetic cannot decide equality nor nullity, even in cases where it is decidable by other means. A new approach, developed in this paper, consists in modifying the geometric problems and algorithms to account for the undecidability of the equality test and unavoidable inaccuracy. In particular, all curves come with a non-zero thickness, so two curves (generically) cut in a region with non-zero area, an inner and outer representation of which is computable. This last approach no longer assumes that an equality or nullity test is available. The question which arises is: which geometric problems can still be solved with this last approach, and which cannot? This paper begins with the description of some cases where every known arithmetic fails in practice. Then, for each arithmetic, some properties of the problems it can solve are given. We end this work by proposing the bases of a new approach which aims to fulfill the requirements of geometric computations.
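
    The loss of transitivity mentioned above is easy to demonstrate: with a tolerance-based equality, two chained "equal" pairs need not compose into a third. The tolerance and values below are arbitrary illustrations.

```python
EPS = 1e-9

def approx(a, b, eps=EPS):
    """Tolerance-based equality test, as in the heuristic approach above."""
    return abs(a - b) < eps

a, b, c = 0.0, 0.7e-9, 1.4e-9
print(approx(a, b), approx(b, c), approx(a, c))  # True True False
```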

  20. Validation of CFD/Heat Transfer Software for Turbine Blade Analysis

    NASA Technical Reports Server (NTRS)

    Kiefer, Walter D.

    2004-01-01

    I am an intern in the Turbine Branch of the Turbomachinery and Propulsion Systems Division. The division is primarily concerned with experimental and computational methods of calculating heat transfer effects of turbine blades during operation in jet engines and land-based power systems. These include modeling flow in internal cooling passages and film cooling, as well as calculating heat flux and peak temperatures to ensure safe and efficient operation. The branch is research-oriented, emphasizing the development of tools that may be used by gas turbine designers in industry. The branch has been developing a computational fluid dynamics (CFD) and heat transfer code called GlennHT to achieve the computational end of this analysis. The code was originally written in FORTRAN 77 and run on Silicon Graphics machines. However, the code has been rewritten and compiled in FORTRAN 90 to take advantage of more modern computer memory systems. In addition, the branch has made a switch in system architectures from SGIs to Linux PCs. The newly modified code therefore needs to be tested and validated. This is the primary goal of my internship. To validate the GlennHT code, it must be run using benchmark fluid mechanics and heat transfer test cases for which there are either analytical solutions or widely accepted experimental data. From the solutions generated by the code, comparisons can be made to the correct solutions to establish the accuracy of the code. To design and create these test cases, many steps and programs must be used. Before a test case can be run, pre-processing steps must be accomplished. These include generating a grid to describe the geometry, using a software package called GridPro. Also, various files required by the GlennHT code must be created, including a boundary condition file, a file for multi-processor computing, and a file to describe problem and algorithm parameters. A good deal of this internship will be spent becoming familiar with these programs and the structure of the GlennHT code. Additional information is included in the original extended abstract.

  1. Acoustic Source Bearing Estimation (ASBE) computer program development

    NASA Technical Reports Server (NTRS)

    Wiese, Michael R.

    1987-01-01

    A new bearing estimation algorithm (Acoustic Source Analysis Technique - ASAT) and an acoustic analysis computer program (Acoustic Source Bearing Estimation - ASBE) are described, which were developed by Computer Sciences Corporation for NASA Langley Research Center. The ASBE program is used by the Acoustics Division/Applied Acoustics Branch and the Instrument Research Division/Electro-Mechanical Instrumentation Branch to analyze acoustic data and estimate the azimuths from which the source signals radiated. Included are the input and output from a benchmark test case.

  2. A Comparative Study of Multi-material Data Structures for Computational Physics Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garimella, Rao Veerabhadra; Robey, Robert W.

    The data structures used to represent the multi-material state of a computational physics application can have a drastic impact on the performance of the application. We look at efficient data structures for sparse applications where there may be many materials, but only one or a few in most computational cells. We develop simple performance models for use in selecting possible data structures and programming patterns. We verify the analytic performance models through a small test program covering the representative cases.
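
    One common realization of the sparse layout studied above is a cell-centric compressed structure: pure cells store their single material inline, while the few mixed cells index into a short side list. A minimal sketch follows; the class and field names are invented for illustration, not taken from the report.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MultiMatMesh:
    """Cell-dominant storage: one entry per cell, with an overflow list
    holding the (material, volume-fraction) pairs of the few mixed cells."""
    cell_mat: List[int] = field(default_factory=list)   # material id, or -1 if mixed
    mix_start: List[int] = field(default_factory=list)  # per-cell offset into mix_* lists
    mix_len: List[int] = field(default_factory=list)
    mix_mat: List[int] = field(default_factory=list)
    mix_frac: List[float] = field(default_factory=list)

    def add_pure_cell(self, mat: int) -> None:
        self.cell_mat.append(mat)
        self.mix_start.append(-1)
        self.mix_len.append(0)

    def add_mixed_cell(self, mats: List[int], fracs: List[float]) -> None:
        self.cell_mat.append(-1)
        self.mix_start.append(len(self.mix_mat))
        self.mix_len.append(len(mats))
        self.mix_mat.extend(mats)
        self.mix_frac.extend(fracs)

    def materials_in(self, cell: int) -> List[int]:
        if self.cell_mat[cell] >= 0:
            return [self.cell_mat[cell]]       # pure cell: single inline material
        s, n = self.mix_start[cell], self.mix_len[cell]
        return self.mix_mat[s:s + n]           # mixed cell: slice of overflow list

mesh = MultiMatMesh()
mesh.add_pure_cell(0)
mesh.add_mixed_cell([0, 1], [0.4, 0.6])
print(mesh.materials_in(0), mesh.materials_in(1))  # [0] [0, 1]
```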

  3. Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach

    NASA Technical Reports Server (NTRS)

    Mak, Victor W. K.

    1986-01-01

    Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.

  4. Validation Test Report For The CRWMS Analysis and Logistics Visually Interactive Model Calvin Version 3.0, 10074-Vtr-3.0-00

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. Gillespie

    2000-07-27

    This report describes the tests performed to validate the CRWMS "Analysis and Logistics Visually Interactive" Model (CALVIN) Version 3.0 (V3.0) computer code (STN: 10074-3.0-00). To validate the code, a series of test cases was developed in the CALVIN V3.0 Validation Test Plan (CRWMS M&O 1999a) that exercises the principal calculation models and options of CALVIN V3.0. Twenty-five test cases were developed: 18 logistics test cases and 7 cost test cases. These cases test the features of CALVIN in a sequential manner, so that the validation of each test case is used to demonstrate the accuracy of the input to subsequent calculations. Where necessary, the test cases utilize reduced-size data tables to make the hand calculations used to verify the results more tractable, while still adequately testing the code's capabilities. Acceptance criteria were established for the logistics and cost test cases in the Validation Test Plan (CRWMS M&O 1999a). The logistics test cases were developed to test the following CALVIN calculation models: spent nuclear fuel (SNF) and reactivity calculations; options for altering reactor life; adjustment of commercial SNF (CSNF) acceptance rates for fiscal year calculations and mid-year acceptance start; fuel selection, transportation cask loading, and shipping to the Monitored Geologic Repository (MGR); transportation cask shipping to and storage at an Interim Storage Facility (ISF); reactor pool allocation options; and disposal options at the MGR. Two types of cost test cases were developed: cases to validate the detailed transportation costs, and cases to validate the costs associated with the Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M&O) and Regional Servicing Contractors (RSCs). For each test case, values calculated using Microsoft Excel 97 worksheets were compared to CALVIN V3.0 scenarios with the same input data and assumptions. All of the test case results compare with the CALVIN V3.0 results within the bounds of the acceptance criteria. Therefore, it is concluded that the CALVIN V3.0 calculation models and options tested in this report are validated.

  5. [Clinical observation of isolated congenital anosmia].

    PubMed

    Li, Li; Wei, Yong-xiang; Wang, Ning-yu; Miao, Xu-tao; Yang, Ling; Ge, Xiao-hui; Wu, Ying; Liu, Jia; Tian, Jun; Li, Kun-yan; Liu, Chun-li

    2013-12-01

    To report on 8 patients with isolated congenital anosmia and to discuss the clinical manifestations, imaging characteristics and family characteristics of this rarely seen disorder. Eight patients with isolated congenital anosmia treated between April 2007 and April 2012 were reviewed retrospectively. There were 4 males and 4 females. Detailed medical history collection, physical examination, nasal endoscopy, T&T and Sniffin' Sticks subjective olfactory function tests, olfactory event-related potentials, sinonasal computed tomography scans and sex hormone level monitoring were performed in all patients. Seven cases underwent magnetic resonance imaging of the olfactory pathway. All patients had anosmia without evidence of other defects. ENT physical examination, nasal endoscopy and computed tomography scans were normal, except for 4 cases with obvious nasal septum deviation and 2 cases with concha bullosa. Subjective olfactory testing indicated that all of the patients had anosmia. Olfactory event-related potentials were obtained in only 1 patient. Magnetic resonance imaging revealed smaller or atrophic olfactory bulbs and tracts in five cases, and the absence of olfactory bulbs and tracts in two cases. One female patient did not have an MRI examination because she wore an IUD. Sex hormone levels were normal in all 8 patients. Regarding family characteristics, 3 patients showed a familial inheritance pattern. The diagnosis of isolated congenital anosmia should be based on chief complaint, medical history, physical examination, nasal endoscopy, olfactory testing, olfactory imaging and olfactory event-related potentials. Magnetic resonance imaging of the olfactory pathway and olfactory event-related potentials have important value for the diagnosis. More attention should be paid to the genetic susceptibility of affected families.

  6. Developments and Validations of Fully Coupled CFD and Practical Vortex Transport Method for High-Fidelity Wake Modeling in Fixed and Rotary Wing Applications

    NASA Technical Reports Server (NTRS)

    Anusonti-Inthra, Phuriwat

    2010-01-01

    A novel Computational Fluid Dynamics (CFD) coupling framework using a conventional Reynolds-Averaged Navier-Stokes (RANS) solver to resolve the near-body flow field and a Particle-based Vorticity Transport Method (PVTM) to predict the evolution of the far-field wake is developed, refined, and evaluated for fixed and rotary wing cases. For the rotary wing case, the RANS/PVTM modules are loosely coupled to a Computational Structural Dynamics (CSD) module that provides blade motion and vehicle trim information. The PVTM module is refined by the addition of vortex diffusion, stretching, and reorientation models as well as an efficient memory model. Results from the coupled framework are compared with several experimental data sets (a fixed-wing wind tunnel test and a rotary-wing hover test).

  7. Study of tethered satellite active attitude control

    NASA Technical Reports Server (NTRS)

    Colombo, G.

    1982-01-01

    Existing software was adapted for the study of tethered subsatellite rotational dynamics, an analytic solution for a stable configuration of a tethered subsatellite was developed, the analytic and numerical integrator (computer) solutions for this "test case" were compared in a two-mass tether model program (DUMBEL), the existing multiple-mass tether model (SKYHOOK) was modified to include subsatellite rotational dynamics, the analytic "test case" was verified, and the use of the SKYHOOK rotational dynamics capability was demonstrated with a computer run showing the effect of a single off-axis thruster on the behavior of the subsatellite. Subroutines for specific attitude control systems are developed and applied to the study of the behavior of the tethered subsatellite under realistic on-orbit conditions. The effects of all tether "inputs," including pendular oscillations, air drag, and electrodynamic interactions, on the dynamic behavior of the tether are included.

  8. Test Cases for the Benchmark Active Controls: Spoiler and Control Surface Oscillations and Flutter

    NASA Technical Reports Server (NTRS)

    Bennett, Robert M.; Scott, Robert C.; Wieseman, Carol D.

    2000-01-01

    As a portion of the Benchmark Models Program at NASA Langley, a simple generic model was developed for active controls research and was called BACT, for Benchmark Active Controls Technology model. This model was based on the previously tested Benchmark Models rectangular wing with the NACA 0012 airfoil section that was mounted on the Pitch and Plunge Apparatus (PAPA) for flutter testing. The BACT model had an upper surface spoiler, a lower surface spoiler, and a trailing-edge control surface for use in flutter suppression and dynamic response excitation. Previous experience with flutter suppression indicated a need for measured control surface aerodynamics for accurate control law design. Three different types of flutter instability boundaries had also been determined for the NACA 0012/PAPA model: a classical flutter boundary, a transonic stall flutter boundary at angle of attack, and a plunge instability near M = 0.9. Therefore an extensive set of steady and control surface oscillation data was generated spanning the range of the three types of instabilities. This information was subsequently used to design control laws to suppress each flutter instability. There have been three tests of the BACT model. The objective of the first test, TDT Test 485, was to generate a data set of steady and unsteady control surface effectiveness data and to determine the open-loop dynamic characteristics of the control systems, including the actuators. Unsteady pressures, loads, and transfer functions were measured. The other two tests, TDT Test 502 and TDT Test 518, were primarily oriented towards active controls research, but some data supplementary to the first test were obtained. Dynamic response of the flexible system to control surface excitation and open-loop flutter characteristics were determined during Test 502. Loads were not measured during the last two tests. During these tests, a database of over 3000 data sets was obtained. A reasonably extensive subset of the data sets from the first two tests has been chosen for Test Cases for computational comparisons, concentrating on static conditions and cases with harmonically oscillating control surfaces. Several flutter Test Cases from both tests have also been included. Some aerodynamic comparisons with the BACT data have been made using computational fluid dynamics codes at the Navier-Stokes level (and in the accompanying chapter SC). Some mechanical and active control studies have been presented. In this report several Test Cases are selected to illustrate trends for a variety of different conditions with emphasis on transonic flow effects. Cases for static angles of attack and static trailing-edge and upper-surface spoiler deflections are included for a range of conditions near those for the oscillation cases. Cases for trailing-edge control and upper-surface spoiler oscillations for a range of Mach numbers, angles of attack, and static control deflections are included. Cases for all three types of flutter instability are selected. In addition, some cases are included for dynamic response measurements during forced oscillations of the controls on the flexible mount. An overview of the model and tests is given, and the standard formulary for these data is listed. Some sample data and sample results of calculations are presented. Only the static pressures and the first harmonic real and imaginary parts of the pressures are included in the data for the Test Cases, but digitized time histories have been archived. The data for the Test Cases are also available as separate electronic files.

  9. Rocket Combustion Modelling Test Case RCM-3. Numerical Calculation of MASCOTTE 60 bar Case with THESEE

    DTIC Science & Technology

    2001-03-01

    flame length is about 230 mm. Figure 10 shows three characteristic structures of a cryogenic flame: a first expansion cone of length L1 = 15xDlox ... correctly represented. However, the computed flame length is longer than the experimental data. This phenomenon is due to the droplet injection

  10. Experimental Evidence on the Effectiveness of Automated Essay Scoring in Teacher Education Cases

    ERIC Educational Resources Information Center

    Riedel, Eric; Dexter, Sara L.; Scharber, Cassandra; Doering, Aaron

    2006-01-01

    Research on computer-based writing evaluation has only recently focused on the potential for providing formative feedback rather than summative assessment. This study tests the impact of an automated essay scorer (AES) that provides formative feedback on essay drafts written as part of a series of online teacher education case studies. Seventy…

  11. Seismic waveform inversion best practices: regional, global and exploration test cases

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan; Tromp, Jeroen

    2016-09-01

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.
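
    The computational savings attributed to limited-memory BFGS come from approximating the inverse Hessian with only a short history of steps and gradient changes. A minimal sketch of the standard two-loop recursion (generic variable names; this is not the authors' implementation):

      import numpy as np

      def lbfgs_direction(grad, s_list, y_list):
          """Two-loop recursion: approximate -H^{-1} grad from the stored
          steps s_k = x_{k+1} - x_k and gradient changes y_k (oldest first)."""
          q = grad.copy()
          rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
          alphas = []
          for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
              alpha = rho * s.dot(q)
              alphas.append(alpha)
              q -= alpha * y
          s, y = s_list[-1], y_list[-1]
          q *= s.dot(y) / y.dot(y)           # initial Hessian scaling
          for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos),
                                        reversed(alphas)):
              beta = rho * y.dot(q)
              q += (alpha - beta) * s
          return -q                           # quasi-Newton search direction

      # Tiny illustrative call with a hypothetical two-step history:
      g = np.array([1.0, -2.0])
      s_hist = [np.array([0.1, 0.0]), np.array([0.0, 0.1])]
      y_hist = [np.array([0.2, 0.0]), np.array([0.0, 0.4])]
      print(lbfgs_direction(g, s_hist, y_hist))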

  12. Sex and race determination of crania by calipers and computer : a test of the Giles and Elliot discriminant functions in 52 forensic cases.

    DOT National Transportation Integrated Search

    1979-01-01

    The Giles and Elliot discriminant functions diagnosing sex and race from cranial measurements were tested on a series of forensically examined crania of known sex and race. Of 52 crania of known sex, 46 (88%) were correctly diagnosed. Racial diagnose...

  13. Using hybrid implicit Monte Carlo diffusion to simulate gray radiation hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Gentile, Nick

    This work describes how to couple a hybrid Implicit Monte Carlo Diffusion (HIMCD) method with a Lagrangian hydrodynamics code to evaluate the coupled radiation hydrodynamics equations. This HIMCD method dynamically applies Implicit Monte Carlo Diffusion (IMD) [1] to regions of a problem that are opaque and diffusive while applying standard Implicit Monte Carlo (IMC) [2] to regions where the diffusion approximation is invalid. We show that this method significantly improves the computational efficiency as compared to a standard IMC/Hydrodynamics solver, when optically thick diffusive material is present, while maintaining accuracy. Two test cases are used to demonstrate the accuracy and performance of HIMCD as compared to IMC and IMD. The first is the Lowrie semi-analytic diffusive shock [3]. The second is a simple test case where the source radiation streams through optically thin material and heats a thick diffusive region of material causing it to rapidly expand. We found that HIMCD proves to be accurate, robust, and computationally efficient for these test problems.

  14. Charliecloud

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Priedhorsky, Reid; Randles, Tim

    Charliecloud is a set of scripts to let users run a virtual cluster of virtual machines (VMs) on a desktop or supercomputer. Key functions include: 1. Creating (typically by installing an operating system from vendor media) and updating VM images; 2. Running a single VM; 3. Running multiple VMs in a virtual cluster. The virtual machines can talk to one another over the network and (in some cases) the outside world. This is accomplished by calling external programs such as QEMU and the Virtual Distributed Ethernet (VDE) suite. The goal is to let users have a virtual cluster containing nodes where they have privileged access, while isolating that privilege within the virtual cluster so it cannot affect the physical compute resources. Host configuration enforces security; this is not included in Charliecloud, though security guidelines are included in its documentation and Charliecloud is designed to facilitate such configuration. Charliecloud manages passing information from host computers into and out of the virtual machines, such as parameters of the virtual cluster, input data specified by the user, output data from virtual compute jobs, VM console display, and network connections (e.g., SSH or X11). Parameters for the virtual cluster (number of VMs, RAM and disk per VM, etc.) are specified by the user or gathered from the environment (e.g., SLURM environment variables). Example job scripts are included. These include computation examples (such as a "hello world" MPI job) as well as performance tests. They also include a security test script to verify that the virtual cluster is appropriately sandboxed. Tests include: 1. Pinging hosts inside and outside the virtual cluster to explore connectivity; 2. Port scans (again inside and outside) to see what services are available; 3. Sniffing tests to see what traffic is visible to running VMs; 4. IP address spoofing to test network functionality in this case; 5. File access tests to make sure host access permissions are enforced. This test script is not a comprehensive scanner and does not test for specific vulnerabilities. Importantly, no information about physical hosts or network topology is included in this script (or any of Charliecloud); while part of a sensible test, such information is specified by the user when the test is run. That is, one cannot learn anything about the LANL network or computing infrastructure by examining Charliecloud code.

  15. Numerical simulations of the flow in the HYPULSE expansion tube

    NASA Technical Reports Server (NTRS)

    Wilson, Gregory J.; Sussman, Myles A.; Bakos, Robert J.

    1995-01-01

    Axisymmetric numerical simulations with finite-rate chemistry are presented for two operating conditions in the HYPULSE expansion tube. The operating gas for these two cases is nitrogen and the computations are compared to experimental data. One test condition is at a total enthalpy of 15.2 MJ/kg and a relatively low static pressure of 2 kPa. This case is characterized by a laminar boundary layer and significant chemical nonequilibrium in the acceleration gas. The second test condition is at a total enthalpy of 10.2 MJ/kg and a static pressure of 38 kPa and is characterized by a turbulent boundary layer. For both cases, the time-varying test gas pressure predicted by the simulations is in good agreement with experimental data. The computations are also found to be in good agreement with Mirels' correlations for shock tube flow. It is shown that the nonuniformity of the test gas observed in the HYPULSE expansion tube is strongly linked to the boundary layer thickness. The turbulent flow investigated has a larger boundary layer and greater test gas nonuniformity. In order to investigate possibilities of improving expansion tube flow quality by reducing the boundary layer thickness, parametric studies showing the effect of density and turbulent transition point on the test conditions are also presented. Although an increase in the expansion tube operating pressure level would reduce the boundary layer thickness, the simulations indicate that the reduction would be less than what is predicted by flat plate boundary layer correlations.

  16. Testing and Analysis of Composite Skin/Stringer Debonding Under Multi-Axial Loading

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Cvitkovich, Michael K.; O'Brien, T. Kevin; Minguet, Pierre J.

    2000-01-01

    A consistent step-wise approach is presented to investigate the damage mechanism in composite bonded skin/stringer constructions under uniaxial and biaxial (in-plane/out-of-plane) loading conditions. The approach uses experiments to detect the failure mechanism, computational stress analysis to determine the location of first matrix cracking and computational fracture mechanics to investigate the potential for delamination growth. In a first step, tests were performed on specimens, which consisted of a tapered composite flange, representing a stringer or frame, bonded onto a composite skin. Tests were performed under monotonic loading conditions in tension, three-point bending, and combined tension/bending to evaluate the debonding mechanisms between the skin and the bonded stringer. For combined tension/bending testing, a unique servohydraulic load frame was used that was capable of applying both in-plane tension and out-of-plane bending loads simultaneously. Specimen edges were examined under the microscope to document the damage occurrence and to identify typical damage patterns. For all three load cases, observed failure initiated in the flange, near the flange tip, causing the flange to almost fully debond from the skin. In a second step, a two-dimensional plane-strain finite element model was developed to analyze the different test cases using a geometrically nonlinear solution. For all three loading conditions, computed principal stresses exceeded the transverse strength of the material in those areas of the flange where the matrix cracks had developed during the tests. In a third step, delaminations of various lengths were simulated in two locations where delaminations were observed during the tests. The analyses showed that at the loads corresponding to matrix ply crack initiation computed strain energy release rates exceeded the values obtained from a mixed mode failure criterion in one location. Hence, unstable delamination propagation is likely to occur as observed in the experiments.

  17. Testing and Analysis of Composite Skin/Stringer Debonding under Multi-Axial Loading

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Cvitkovich, Michael; O'Brien, Kevin; Minguet, Pierre J.

    2000-01-01

    A consistent step-wise approach is presented to investigate the damage mechanism in composite bonded skin/stringer constructions under uniaxial and biaxial (in-plane/out-of-plane) loading conditions. The approach uses experiments to detect the failure mechanism, computational stress analysis to determine the location of first matrix cracking and computational fracture mechanics to investigate the potential for delamination growth. In a first step, tests were performed on specimens, which consisted of a tapered composite flange, representing a stringer or frame, bonded onto a composite skin. Tests were performed under monotonic loading conditions in tension, three-point bending, and combined tension/bending to evaluate the debonding mechanisms between the skin and the bonded stringer. For combined tension/bending testing, a unique servohydraulic load frame was used that was capable of applying both in-plane tension and out-of-plane bending loads simultaneously. Specimen edges were examined under the microscope to document the damage occurrence and to identify typical damage patterns. For all three load cases, observed failure initiated in the flange, near the flange tip, causing the flange to almost fully debond from the skin. In a second step, a two-dimensional plane-strain finite element model was developed to analyze the different test cases using a geometrically nonlinear solution. For all three loading conditions, computed principal stresses exceeded the transverse strength of the material in those areas of the flange where the matrix cracks had developed during the tests. In a third step, delaminations of various lengths were simulated in two locations where delaminations were observed during the tests. The analyses showed that at the loads corresponding to matrix ply crack initiation computed strain energy release rates exceeded the values obtained from a mixed mode failure criterion in one location. Hence, unstable delamination propagation is likely to occur as observed in the experiments.

  18. Reliable semiclassical computations in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dine, Michael; Festuccia, Guido

    We revisit the question of whether or not one can perform reliable semiclassical QCD computations at zero temperature. We study correlation functions with no perturbative contributions, and organize the problem by means of the operator product expansion, establishing a precise criterion for the validity of a semiclassical calculation. For N_f > N, a systematic computation is possible; for N_f ...

  19. Evaluation of the discrete vortex wake cross flow model using vector computers. Part 1: Theory and application

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The objective of the current program was to modify a discrete vortex wake method to efficiently compute the aerodynamic forces and moments on high fineness ratio bodies (f approximately 10.0). The approach is to increase computational efficiency by structuring the program to take advantage of new computer vector software and by developing new algorithms where vector software cannot be used efficiently. An efficient program was written and substantial savings were achieved. Several test cases were run for fineness ratios up to f = 16.0 and angles of attack up to 50 degrees.

  20. The Influence of Computer-Assisted Instruction on Students' Conceptual Understanding of Chemical Bonding and Attitude toward Chemistry: A Case for Turkey

    ERIC Educational Resources Information Center

    Ozmen, Haluk

    2008-01-01

    In this study, the effect of computer-assisted instruction on conceptual understanding of chemical bonding and attitude toward chemistry was investigated. The study employed a quasi-experimental design involving grade 11 students: 25 in an experimental and 25 in a control group. The Chemical Bonding Achievement Test (CBAT) consisting of 15…

  1. Host computer software specifications for a zero-g payload manhandling simulator

    NASA Technical Reports Server (NTRS)

    Wilson, S. W.

    1986-01-01

    The HP PASCAL source code was developed for the Mission Planning and Analysis Division (MPAD) of NASA/JSC, and takes the place of detailed flow charts defining the host computer software specifications for MANHANDLE, a digital/graphical simulator that can be used to analyze the dynamics of on-orbit (zero-g) payload manhandling operations. Input and output data for representative test cases are included.

  2. A computable phenotype for asthma case identification in adult and pediatric patients: External validation in the Chicago Area Patient-Outcomes Research Network (CAPriCORN).

    PubMed

    Afshar, Majid; Press, Valerie G; Robison, Rachel G; Kho, Abel N; Bandi, Sindhura; Biswas, Ashvini; Avila, Pedro C; Kumar, Harsha Vardhan Madan; Yu, Byung; Naureckas, Edward T; Nyenhuis, Sharmilee M; Codispoti, Christopher D

    2017-10-13

    Comprehensive, rapid, and accurate identification of patients with asthma for clinical care and engagement in research efforts is needed. The original development and validation of a computable phenotype for asthma case identification occurred at a single institution in Chicago and demonstrated excellent test characteristics. However, its application in a diverse payer mix, across different health systems and multiple electronic health record vendors, and in both children and adults was not examined. The objective of this study is to externally validate the computable phenotype across diverse Chicago institutions to accurately identify pediatric and adult patients with asthma. A cohort of 900 asthma and control patients was identified from the electronic health record between January 1, 2012 and November 30, 2014. Two physicians at each site independently reviewed the patient chart to annotate cases. The inter-observer reliability between the physician reviewers had a κ-coefficient of 0.95 (95% CI 0.93-0.97). The accuracy, sensitivity, specificity, negative predictive value, and positive predictive value of the computable phenotype were all above 94% in the full cohort. The excellent positive and negative predictive values in this multi-center external validation study establish a useful tool to identify asthma cases in the electronic health record for research and care. This computable phenotype could be used in large-scale comparative-effectiveness trials.
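
    The reported test characteristics are the standard 2x2 diagnostic metrics. A minimal sketch of their computation (the counts below are hypothetical and are not the study's data):

      def test_characteristics(tp, fp, tn, fn):
          """Standard diagnostic metrics from a 2x2 confusion matrix."""
          return {
              "accuracy": (tp + tn) / (tp + fp + tn + fn),
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),     # positive predictive value
              "npv": tn / (tn + fn),     # negative predictive value
          }

      # Hypothetical counts for a 900-patient cohort:
      print(test_characteristics(tp=430, fp=15, tn=440, fn=15))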

  3. Crashworthiness of light aircraft fuselage structures: A numerical and experimental investigation

    NASA Technical Reports Server (NTRS)

    Nanyaro, A. P.; Tennyson, R. C.; Hansen, J. S.

    1984-01-01

    The dynamic behavior of aircraft fuselage structures subject to various impact conditions was investigated. An analytical model was developed based on a self-consistent finite element (CFE) formulation utilizing shell, curved beam, and stringer type elements. Equations of motion were formulated and linearized (i.e., for small displacements), although material nonlinearity was retained to treat local plastic deformation. The equations were solved using the implicit Newmark-Beta method with a frontal solver routine. Stiffened aluminum fuselage models were also tested in free flight using the UTIAS pendulum crash test facility. Data were obtained on dynamic strains, g-loads, and transient deformations (using high speed photography in the latter case) during the impact process. Correlations between tests and predicted results are presented, together with computer graphics, based on the CFE model. These results include level and oblique angle impacts as well as the free-flight crash test. Comparisons with a hybrid, lumped mass finite element computer model demonstrate that the CFE formulation provides the best overall agreement with impact test data for comparable computing costs.

  4. Parallel ALLSPD-3D: Speeding Up Combustor Analysis Via Parallel Processing

    NASA Technical Reports Server (NTRS)

    Fricker, David M.

    1997-01-01

    The ALLSPD-3D Computational Fluid Dynamics code for reacting flow simulation was run on a set of benchmark test cases to determine its parallel efficiency. These test cases included non-reacting and reacting flow simulations with varying numbers of processors. Also, the tests explored the effects of scaling the simulation with the number of processors in addition to distributing a constant-size problem over an increasing number of processors. The test cases were run on a cluster of IBM RS/6000 Model 590 workstations with Ethernet and ATM networking plus a shared-memory SGI Power Challenge L workstation. The results indicate that the network capabilities significantly influence the parallel efficiency, i.e., a shared-memory machine is fastest and ATM networking provides acceptable performance. The limitations of Ethernet greatly hamper the rapid calculation of flows using ALLSPD-3D.
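
    Parallel efficiency in a study like this is conventionally the strong-scaling ratio of serial to parallel wall-clock time. A minimal sketch (the timings below are hypothetical, not measured ALLSPD-3D data):

      def speedup_and_efficiency(t_serial, t_parallel, n_procs):
          """Classic strong-scaling metrics: S = T1/Tp, E = S/p."""
          s = t_serial / t_parallel
          return s, s / n_procs

      # Hypothetical wall-clock times in seconds:
      for p, tp in [(1, 1200.0), (4, 340.0), (8, 195.0)]:
          s, e = speedup_and_efficiency(1200.0, tp, p)
          print(f"p={p}: speedup={s:.2f}, efficiency={e:.2f}")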

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadgu, Teklu; Appel, Gordon John

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim, with Versions 9.60.300, 10.5 and 11.1.6, was installed on the cluster head node, and its distributed processing capability was mapped onto the cluster processors. Other supporting software was tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling case output generated in FY15 based on GoldSim Version 9.60.300, documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  6. Efficient fuzzy Bayesian inference algorithms for incorporating expert knowledge in parameter estimation

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad

    2016-05-01

    Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision essentially embedded in expert-provided information. In order to solve this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', which is the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features which make it an attractive approach for incorporating expert knowledge into the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert-provided information, (2) it allows uncertainty and imprecision to be modeled distinguishably, and (3) it presents a framework for fusing expert-provided information regarding the various inputs of the Bayesian inference algorithm. However, an important obstacle to employing fuzzy Bayesian inference in groundwater numerical modeling applications is the computational burden, as the required number of numerical model simulations often becomes prohibitively large and computationally infeasible. In this paper, a novel approach for accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. Then the proposed approach is applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert elicitation methodology is developed and applied to the real-world test case in order to provide a road map for the use of fuzzy Bayesian inference in groundwater modeling applications.
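
    The screening idea can be sketched compactly: a cheap surrogate of the model response filters candidate parameter sets, and only the survivors trigger full numerical simulations. In the sketch below, the radial-basis-function surrogate, the Gaussian misfit, the 3-sigma screening band, and the toy model are all illustrative assumptions, not the authors' algorithm:

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def expensive_model(theta):
          """Stand-in for a costly groundwater simulation (hypothetical)."""
          return np.sin(3 * theta[0]) + theta[1] ** 2

      # Train a cheap surrogate on a small design of full-model runs.
      rng = np.random.default_rng(0)
      design = rng.uniform(-1, 1, size=(40, 2))
      responses = np.array([expensive_model(t) for t in design])
      surrogate = RBFInterpolator(design, responses)

      observed, sigma = 0.3, 0.2
      candidates = rng.uniform(-1, 1, size=(10000, 2))

      # Screen: an approximate misfit from the surrogate keeps plausible candidates.
      approx_misfit = (surrogate(candidates) - observed) ** 2 / sigma ** 2
      survivors = candidates[approx_misfit < 9.0]   # ~3-sigma band

      # Only survivors incur full-model runs for exact posterior weights.
      exact = np.array([expensive_model(t) for t in survivors])
      weights = np.exp(-0.5 * (exact - observed) ** 2 / sigma ** 2)
      print(f"runs avoided: {len(candidates) - len(survivors)}, "
            f"total weight: {weights.sum():.3f}")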

  7. Mars Science Laboratory Flight Software Boot Robustness Testing Project Report

    NASA Technical Reports Server (NTRS)

    Roth, Brian

    2011-01-01

    On the surface of Mars, the Mars Science Laboratory will boot up its flight computers every morning, having charged the batteries through the night. This boot process is complicated, critical, and affected by numerous hardware states that can be difficult to test. The hardware test beds do not facilitate long runs of back-to-back unattended automated tests, and although the software simulation has provided the necessary functionality and fidelity for this boot testing, it has not supported the full flexibility necessary for this task. Therefore, to perform this testing, a framework has been built around the software simulation that supports running automated tests loading a variety of starting configurations for software and hardware states. This implementation has been tested against the nominal cases to validate the methodology, and support for configuring off-nominal cases is ongoing. The implication of this testing is that the introduction of input configurations that have so far proved difficult to test may reveal boot scenarios worth higher fidelity investigation, and in other cases increase confidence in the robustness of the flight software boot process.

  8. Big Data Processing for a Central Texas Groundwater Case Study

    NASA Astrophysics Data System (ADS)

    Cantu, A.; Rivera, O.; Martínez, A.; Lewis, D. H.; Gentle, J. N., Jr.; Fuentes, G.; Pierce, S. A.

    2016-12-01

    As computational methods improve, scientists are able to expand the level and scale of experimental simulation and testing that is completed for case studies. This study presents a comparative analysis of multiple models for the Barton Springs segment of the Edwards aquifer. Several numerical simulations using state-mandated MODFLOW models, run on Stampede, a High Performance Computing system housed at the Texas Advanced Computing Center, were performed for multiple-scenario testing. One goal of this multidisciplinary project is to visualize and compare the output data of the groundwater model using the statistical programming language R to find revealing data patterns produced by different pumping scenarios. Presenting data in a friendly post-processing format is covered in this paper. Visualization of the data and creating workflows applicable to the management of the data are tasks performed after data extraction. The resulting analyses provide an example of how supercomputing can be used to accelerate evaluation of scientific uncertainty and geological knowledge in relation to policy and management decisions. Understanding the aquifer's behavior helps policy makers avoid negative impacts on endangered species and environmental services, and aids in maximizing the aquifer yield.

  9. Multiphase Modeling of Secondary Atomization in a Shock Environment

    NASA Astrophysics Data System (ADS)

    St. Clair, Jeffrey; McGrath, Thomas; Balachandar, Sivaramakrishnan

    2017-06-01

    Understanding and developing accurate modeling strategies for shock-particulate interaction remains a challenging and important topic, with application to energetic materials development, volcanic eruptions, and safety/risk assessment. This work presents computational modeling of compressible multiphase flows with shock-induced droplet atomization. Droplet size has a strong influence on the interphase momentum and heat transfer. A test case is presented that is sensitive to this, requiring the dynamic modeling of the secondary atomization process occurring when the shock impacts the droplets. An Eulerian-Eulerian computational model that treats all phases as compressible, is hyperbolic and satisfies the 2nd Law of Thermodynamics is applied. Four different breakup models are applied to the test case in which a planar shock wave encounters a cloud of water droplets. The numerical results are compared with both experimental and previously-generated modeling results. The effect of the drag relation used is also investigated. The computed results indicate the necessity of using a droplet breakup model for this application, and the relative accuracy of results obtained with the different droplet breakup and drag models is discussed.

  10. Simulation of partially coherent light propagation using parallel computing devices

    NASA Astrophysics Data System (ADS)

    Magalhães, Tiago C.; Rebordão, José M.

    2017-08-01

    Light acquires or loses coherence, and coherence is one of the few optical observables. Spectra can be derived from coherence functions, and understanding any interferometric experiment also relies upon coherence functions. Beyond the two limiting cases (full coherence or incoherence), the coherence of light is always partial and it changes with propagation. We have implemented a code to compute the propagation of partially coherent light from the source plane to the observation plane using parallel computing devices (PCDs). In this paper, we restrict propagation to free space. To this end, we used the Open Computing Language (OpenCL) and the open-source toolkit PyOpenCL, which gives access to OpenCL parallel computation through Python. To test our code, we chose two coherence source models: an incoherent source and a Gaussian Schell-model source. In the former case, we considered two different source shapes: circular and rectangular. The results were compared to the theoretical values. Our implemented code allows one to choose between the PyOpenCL implementation and a standard one, i.e., using the CPU only. To test the computation time for each implementation (PyOpenCL and standard), we used several computer systems with different CPUs and GPUs. We used powers of two for the dimensions of the cross-spectral density matrix (e.g., 32^4, 64^4) and a significant speed increase is observed in the PyOpenCL implementation when compared to the standard one. This can be an important tool for studying new source models.
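
    The Gaussian Schell-model source named above has a closed-form cross-spectral density, which is what makes it a convenient test model. A 1D NumPy sketch (parameter values are arbitrary; the paper's PyOpenCL implementation is not reproduced here):

      import numpy as np

      def gsm_csd(x1, x2, sigma_s=1.0, delta=0.5):
          """Cross-spectral density of a 1D Gaussian Schell-model source:
          W(x1, x2) = exp(-(x1^2 + x2^2) / (4 sigma_s^2))
                      * exp(-(x1 - x2)^2 / (2 delta^2)),
          with sigma_s the intensity width and delta the coherence width."""
          return (np.exp(-(x1 ** 2 + x2 ** 2) / (4 * sigma_s ** 2))
                  * np.exp(-((x1 - x2) ** 2) / (2 * delta ** 2)))

      x = np.linspace(-3, 3, 64)
      X1, X2 = np.meshgrid(x, x)   # 64^2 point pairs in 1D; 2D sources give N^4
      W = gsm_csd(X1, X2)
      print(W.shape, W.max())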

  11. The PAC-MAN model: Benchmark case for linear acoustics in computational physics

    NASA Astrophysics Data System (ADS)

    Ziegelwanger, Harald; Reiter, Paul

    2017-10-01

    Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well known example of such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.
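
    For the limiting case of a full (uncut) rigid cylinder, the modal series that the PAC-MAN model generalizes is classical. A sketch of plane-wave scattering from a rigid cylinder (illustrative only; this is neither the cut-out geometry nor the authors' code):

      import numpy as np
      from scipy.special import hankel1, jvp, h1vp

      def scattered_pressure(r, theta, k, a, n_max=40):
          """Modal series for a plane wave exp(i k x) scattered by a rigid
          cylinder of radius a (Neumann condition dp/dr = 0 at r = a)."""
          p = np.zeros_like(r, dtype=complex)
          for n in range(n_max + 1):
              eps = 1.0 if n == 0 else 2.0
              coeff = -eps * (1j ** n) * jvp(n, k * a) / h1vp(n, k * a)
              p += coeff * hankel1(n, k * r) * np.cos(n * theta)
          return p

      # Field on a circle of radius 5a around a unit cylinder at ka = 2:
      theta = np.radians(np.arange(181.0))
      r = np.full_like(theta, 5.0)
      print(np.abs(scattered_pressure(r, theta, k=2.0, a=1.0)).max())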

  12. A parabolic velocity-decomposition method for wind turbines

    NASA Astrophysics Data System (ADS)

    Mittal, Anshul; Briley, W. Roger; Sreenivas, Kidambi; Taylor, Lafayette K.

    2017-02-01

    An economical parabolized Navier-Stokes approximation for steady incompressible flow is combined with a compatible wind turbine model to simulate wind turbine flows, both upstream of the turbine and in downstream wake regions. The inviscid parabolizing approximation is based on a Helmholtz decomposition of the secondary velocity vector and physical order-of-magnitude estimates, rather than an axial pressure gradient approximation. The wind turbine is modeled by distributed source-term forces incorporating time-averaged aerodynamic forces generated by a blade-element momentum turbine model. A solution algorithm is given whose dependent variables are streamwise velocity, streamwise vorticity, and pressure, with secondary velocity determined by two-dimensional scalar and vector potentials. In addition to laminar and turbulent boundary-layer test cases, solutions for a streamwise vortex-convection test problem are assessed by mesh refinement and comparison with Navier-Stokes solutions using the same grid. Computed results for a single turbine and a three-turbine array are presented using the NREL offshore 5-MW baseline wind turbine. These are also compared with an unsteady Reynolds-averaged Navier-Stokes solution computed with full rotor resolution. On balance, the agreement in turbine wake predictions for these test cases is very encouraging given the substantial differences in physical modeling fidelity and computer resources required.
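
    The Helmholtz decomposition of the secondary velocity referred to above can be stated compactly. A minimal sketch in LaTeX notation, assuming incompressible flow and neglecting streamwise-derivative contributions in the cross-plane (the symbols are generic and not necessarily the authors'):

      \mathbf{v}_\perp = \nabla_\perp \phi + \nabla_\perp \times \left( \psi \, \hat{\mathbf{e}}_s \right),
      \qquad
      \nabla_\perp^2 \phi = -\frac{\partial u_s}{\partial s},
      \qquad
      \nabla_\perp^2 \psi = -\omega_s,

    where u_s is the streamwise velocity and omega_s the streamwise vorticity; the first Poisson equation follows from continuity and the second from the definition of streamwise vorticity.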

  13. Computer aided system engineering and analysis (CASE/A) modeling package for ECLS systems - An overview

    NASA Technical Reports Server (NTRS)

    Dalee, Robert C.; Bacskay, Allen S.; Knox, James C.

    1990-01-01

    An overview of the CASE/A-ECLSS series modeling package is presented. CASE/A is an analytical tool that has improved engineering productivity during ECLSS design activities. A component verification program was performed to assure component modeling validity, based on test data from the Phase II comparative test program completed at the Marshall Space Flight Center. An integrated plotting feature has been added to the program which allows the operator to analyze on-screen data trends or obtain hard-copy plots from within the CASE/A operating environment. New command features in the areas of schematic, output, and model management, and component data editing have been incorporated to enhance the engineer's productivity during a modeling program.

  14. Research in Varying Burner Tilt Angle to Reduce Rear Pass Temperature in Coal Fired Boiler

    NASA Astrophysics Data System (ADS)

    Thrangaraju, Savithry K.; Munisamy, Kannan M.; Baskaran, Saravanan

    2017-04-01

    This research presents an investigation of one of the techniques used in the Manjung 700 MW tangentially fired coal power plant: finding the right tilt angle for the burners in the boiler to produce an efficient temperature distribution and combustion gas flow pattern, especially at the rear pass section. The main outcome of the project is the burner tilt angle that creates a temperature distribution and combustion gas flow pattern able to increase the efficiency of the boiler. The investigation is carried out using the Computational Fluid Dynamics (CFD) method, varying the burner tilt angle. The boiler model is drawn using the SolidWorks design software, and the CFD code Fluent is used to conduct the analysis on the boiler model. The analysis imitates the real combustion process in the real Manjung 700 MW boiler. Three burner tilt angles are selected: 0° as test case 1, +10° as test case 2 and -10° as test case 3. All three cases were run in the CFD software, and the resulting temperature distributions and velocity vectors were obtained to examine the changes among the three cases at the furnace and rear pass sections of the boiler. The results are compared in the analysis by plotting graphs to determine the tilt angle that reduces the rear pass temperature.

  15. The Impact of the Pre-Instructional Cognitive Profile on Learning Gain and Final Exam of Physics Courses: A Case Study

    ERIC Educational Resources Information Center

    Capizzo, Maria Concetta; Nuzzo, Silvana; Zarcone, Michelangelo

    2006-01-01

    The case study described in this paper investigates the relationship among some pre-instructional knowledge, the learning gain and the final physics performance of computing engineering students in the introductory physics course. The results of the entrance engineering test (EET) have been used as a measurement of reading comprehension, logic and…

  16. Capability Extension to the Turbine Off-Design Computer Program AXOD With Applications to the Highly Loaded Fan-Drive Turbines

    NASA Technical Reports Server (NTRS)

    Chen, Shu-cheng S.

    2011-01-01

    The axial flow turbine off-design computer program AXOD has been upgraded to include the outlet guide vane (OGV) among its acceptable turbine configurations. The mathematical bases and the techniques used for the code implementation are described and discussed at length in this paper. This extended capability is verified and validated with two cases of highly loaded fan-drive turbines, designed and tested in NASA's V/STOL program. The first case is a 4 1/2-stage turbine with an average stage loading factor of 4.66, designed by Pratt & Whitney Aircraft. The second case is a 3 1/2-stage turbine with an average loading factor of 4.0, designed in-house by the NASA Lewis Research Center (now the NASA Glenn Research Center). Both cases were experimentally tested in the turbine facility located at the Glenn Research Center. The processes conducted in these studies are described in detail in this paper, and the results are presented and discussed in comparison with the experimental data. The AXOD results are in excellent agreement with the experimental data.

  17. Neutron imaging with lithium indium diselenide: Surface properties, spatial resolution, and computed tomography

    NASA Astrophysics Data System (ADS)

    Lukosi, Eric D.; Herrera, Elan H.; Hamm, Daniel S.; Burger, Arnold; Stowe, Ashley C.

    2017-11-01

    An array of lithium indium diselenide (LISe) scintillators was investigated for application in neutron imaging. The sensors, varying in thickness and surface roughness, were tested using both reflective and anti-reflective mounting to an aluminum window. The spatial resolution of each LISe scintillator was calculated using the knife-edge test and a modulation transfer function analysis. It was found that the anti-reflective backing case yielded spatial resolutions higher by up to a factor of two over the reflective backing case, despite a reduction in measured light yield by an average factor of 1.97. In most cases, the use of an anti-reflective backing resulted in a higher spatial resolution than the 50 μm-thick ZnS(Cu):6LiF comparison scintillation screen. The effect of surface roughness was not directly correlated to measured light yield or observed spatial resolution, but weighting the reflective backing case by the random surface roughness revealed that a linear relationship exists between the fractional change (RB/ARB) of the two. Finally, the LISe scintillator array was used in neutron computed tomography to investigate the features of Halyomorpha halys with the reflective and anti-reflective backing.
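
    The knife-edge analysis mentioned above reduces to differentiating the measured edge-spread function (ESF) and Fourier-transforming the resulting line-spread function (LSF). A minimal sketch with a synthetic edge (the 10 μm pixel pitch and edge width are hypothetical, not the study's measurements):

      import numpy as np
      from scipy.special import erf

      def mtf_from_edge(esf, dx):
          """Knife-edge test: ESF -> LSF by differentiation, LSF -> MTF by FFT."""
          lsf = np.gradient(esf, dx)
          mtf = np.abs(np.fft.rfft(lsf))
          mtf /= mtf[0]                            # unity at zero frequency
          freqs = np.fft.rfftfreq(len(lsf), d=dx)  # cycles per unit length
          return freqs, mtf

      # Synthetic error-function edge sampled at 0.01 mm (10 um) pitch:
      x = np.arange(-64, 64) * 0.01
      esf = 0.5 * (1 + erf(x / 0.05))
      freqs, mtf = mtf_from_edge(esf, dx=0.01)
      print(freqs[mtf < 0.1][0])   # approximate frequency where MTF drops to 10%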

  18. An Overview of the NCC Spray/Monte-Carlo-PDF Computations

    NASA Technical Reports Server (NTRS)

    Raju, M. S.; Liu, Nan-Suey (Technical Monitor)

    2000-01-01

    This paper advances the state of the art in spray computations with some of our recent contributions involving scalar Monte Carlo PDF (Probability Density Function) methods, unstructured grids, and parallel computing. It provides a complete overview of the scalar Monte Carlo PDF and Lagrangian spray computer codes developed for application with unstructured grids and parallel computing. Detailed comparisons for the case of a reacting non-swirling spray clearly highlight the important role that chemistry/turbulence interactions play in the modeling of reacting sprays. The results from the PDF and non-PDF methods were found to be markedly different, and the PDF solution is closer to the reported experimental data. The PDF computations predict that some of the combustion occurs in a predominantly premixed-flame environment and the rest in a predominantly diffusion-flame environment. However, the non-PDF solution wrongly predicts that the combustion occurs in a vaporization-controlled regime. Near the premixed flame, the Monte Carlo particle temperature distribution shows two distinct peaks: one centered around the flame temperature and the other around the surrounding-gas temperature. Near the diffusion flame, the Monte Carlo particle temperature distribution shows a single peak. In both cases, the computed PDF's shape and strength are found to vary substantially depending upon the proximity to the flame surface. The results bring to the fore some of the deficiencies associated with the use of assumed-shape PDF methods in spray computations. Finally, we end the paper by demonstrating the computational viability of the present solution procedure for use in 3D combustor calculations by summarizing the results of a 3D test case with periodic boundary conditions. For the 3D case, the parallel performance of all three solvers (CFD, PDF, and spray) was found to be good when the computations were performed on a 24-processor SGI Origin workstation.

  19. Structural, Linguistic and Topic Variables in Verbal and Computational Problems in Elementary Mathematics.

    ERIC Educational Resources Information Center

    Beardslee, Edward C.; Jerman, Max E.

    Five structural, four linguistic and twelve topic variables are used in regression analyses on results of a 50-item achievement test. The test items are related to 12 topics from the third-grade mathematics curriculum. The items reflect one of two cases of the structural variable, cognitive level; the two levels are characterized, inductive…

  20. Ergonomics Factors in English as a Foreign Language Testing: The Case of PLEVALEX

    ERIC Educational Resources Information Center

    Garcia Laborda, Jesus; Magal-Royo, Teresa; de Siqueira Rocha, Jose Macario; Alvarez, Miguel Fernandez

    2010-01-01

    Although much has been said about ergonomics in interface and in computer tools and interface design, very few articles in major journals have addressed this topic in relation to language testing. This article describes an experiment carried out at the Polytechnic University of Valencia, Spain, in which 27 Media and Communication students provided…

  1. Experimental and Analytical Characterization of the Macromechanical Response for Triaxial Braided Composite Materials

    NASA Technical Reports Server (NTRS)

    Littell, Justin D.

    2013-01-01

    Increasingly, carbon composite structures are being used in aerospace applications. Their high-strength, high-stiffness, and low-weight properties make them good candidates for replacing many aerospace structures currently made of aluminum or steel. Recently, many of the aircraft engine manufacturers have developed new commercial jet engines that will use composite fan cases. Instead of using traditional composite layup techniques, these new fan cases will use a triaxially braided pattern, which improves case performance. The impact characteristics of composite materials for jet engine fan case applications have been an important research topic because Federal regulations require that an engine case be able to contain a blade and blade fragments during an engine blade-out event. Once the impact characteristics of these triaxial braided materials become known, computer models can be developed to simulate a jet engine blade-out event, thus reducing cost and time in the development of these composite jet engine cases. The two main problems that have arisen in this area of research are that the properties for these materials have not been fully determined and computationally efficient computer models, which incorporate much of the microscale deformation and failure mechanisms, are not available. The research reported herein addresses some of the deficiencies present in previous research regarding these triaxial braided composite materials. The current research develops new techniques to accurately quantify the material properties of the triaxial braided composite materials. New test methods are developed for the polymer resin composite constituent and representative composite coupons. These methods expand previous research by using novel specimen designs along with using a noncontact measuring system that is also capable of identifying and quantifying many of the microscale failure mechanisms present in the materials. Finally, using the data gathered, a new hybrid micro-macromechanical computer model is created to simulate the behavior of these composite material systems under static and ballistic impact loading using the test data acquired. The model also quantifies the way in which the fiber/matrix interface affects material response under static and impact loading. The results show that the test methods are capable of accurately quantifying the polymer resin under a variety of strain rates and temperatures for three loading conditions. The resin strength and stiffness data show a clear rate and temperature dependence. The data also show the hydrostatic stress effects and hysteresis, all of which can be used by researchers developing composite constitutive models for the resins. The results for the composite data reveal noticeable differences in strength, failure strain, and stiffness in the different material systems presented. The investigations into the microscale failure mechanisms provide information about the nature of the different material system behaviors. Finally, the developed computer model predicts composite static strength and stiffness to within 10 percent of the gathered test data and also agrees with composite impact data, where available.

  2. Application of the implicit MacCormack scheme to the PNS equations

    NASA Technical Reports Server (NTRS)

    Lawrence, S. L.; Tannehill, J. C.; Chaussee, D. S.

    1983-01-01

    The two-dimensional parabolized Navier-Stokes equations are solved using MacCormack's (1981) implicit finite-difference scheme. It is shown that this method for solving the parabolized Navier-Stokes equations does not require the inversion of block tridiagonal systems of algebraic equations and allows the original explicit scheme to be employed in those regions where implicit treatment is not needed. The finite-difference algorithm is discussed and the computational results for two laminar test cases are presented. Results obtained using this method for the case of a flat plate boundary layer are compared with those obtained using the conventional Beam-Warming scheme, as well as those obtained from a boundary layer code. The computed results for a more severe test of the method, the hypersonic flow past a 15 deg compression corner, are found to compare favorably with experiment and a numerical solution of the complete Navier-Stokes equations.
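
    The explicit MacCormack scheme that the implicit variant builds on is compact enough to sketch for 1D linear advection (illustrative only; this is not the PNS solver itself):

      import numpy as np

      def maccormack_step(u, c, dx, dt):
          """One explicit MacCormack step for u_t + c u_x = 0 on a periodic
          grid: forward-difference predictor, backward-difference corrector."""
          lam = c * dt / dx
          u_pred = u - lam * (np.roll(u, -1) - u)                   # predictor
          return 0.5 * (u + u_pred - lam * (u_pred - np.roll(u_pred, 1)))

      x = np.linspace(0.0, 1.0, 200, endpoint=False)
      u = np.exp(-200 * (x - 0.3) ** 2)    # initial Gaussian pulse
      c, dx = 1.0, x[1] - x[0]
      dt = 0.8 * dx / c                    # CFL number 0.8
      for _ in range(100):
          u = maccormack_step(u, c, dx, dt)
      print(u.max())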

  3. Parallel Unsteady Turbopump Simulations for Liquid Rocket Engines

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin C.; Kwak, Dochan; Chan, William

    2000-01-01

    This paper reports the progress being made towards complete turbo-pump simulation capability for liquid rocket engines. Space Shuttle Main Engine (SSME) turbo-pump impeller is used as a test case for the performance evaluation of the MPI and hybrid MPI/Open-MP versions of the INS3D code. Then, a computational model of a turbo-pump has been developed for the shuttle upgrade program. Relative motion of the grid system for rotor-stator interaction was obtained by employing overset grid techniques. Time-accuracy of the scheme has been evaluated by using simple test cases. Unsteady computations for SSME turbo-pump, which contains 136 zones with 35 Million grid points, are currently underway on Origin 2000 systems at NASA Ames Research Center. Results from time-accurate simulations with moving boundary capability, and the performance of the parallel versions of the code will be presented in the final paper.

  4. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1991-01-01

    Continuing studies associated with the development of the quasi-analytical (QA) sensitivity method for three-dimensional transonic flow about wings are presented. Furthermore, initial results using the quasi-analytical approach were obtained and compared to those computed using the finite difference (FD) approach. The basic goals achieved were: (1) carrying out various debugging operations pertaining to the quasi-analytical method; (2) addition of section design variables to the sensitivity equation in the form of multiple right-hand sides; (3) reconfiguring the analysis/sensitivity package in order to facilitate the execution of analysis/FD/QA test cases; and (4) enhancing the display of output data to allow careful examination of the results and to permit various comparisons of sensitivity derivatives obtained using the FD/QA methods to be conducted easily and quickly. In addition to discussing the above goals, the results of executing subcritical and supercritical test cases are presented.

  5. Touch-screen tablet user configurations and case-supported tilt affect head and neck flexion angles.

    PubMed

    Young, Justin G; Trudeau, Matthieu; Odell, Dan; Marinelli, Kim; Dennerlein, Jack T

    2012-01-01

    The aim of this study was to determine how head and neck postures vary when using two media tablet (slate) computers in four common user configurations. Fifteen experienced media tablet users completed a set of simulated tasks with two media tablets in four typical user configurations. The four configurations were: on the lap and held with the user's hands, on the lap and in a case, on a table and in a case, and on a table and in a case set at a high angle for watching movies. An infra-red LED marker based motion analysis system measured head/neck postures. Head and neck flexion significantly varied across the four configurations and across the two tablets tested. Head and neck flexion angles during tablet use were greater, in general, than angles previously reported for desktop and notebook computing. Postural differences between tablets were driven by case designs, which provided significantly different tilt angles, while postural differences between configurations were driven by gaze and viewing angles. Head and neck posture during tablet computing can be improved by placing the tablet higher to avoid low gaze angles (i.e. on a table rather than on the lap) and through the use of a case that provides optimal viewing angles.

  6. TVC actuator model [for the space shuttle main engine]

    NASA Technical Reports Server (NTRS)

    Baslock, R. W.

    1977-01-01

    A prototype Space Shuttle Main Engine (SSME) Thrust Vector Control (TVC) actuator analog model was successfully completed. The prototype, mounted on five printed circuit (PC) boards, was delivered to NASA, checked out, and tested using a modular replacement technique on an analog computer. In all cases, the prototype model performed within the recording accuracy of the analog computer, which is well within the tolerances of the specifications.

  7. Analyzing student conceptual understanding of resistor networks using binary, descriptive, and computational questions

    NASA Astrophysics Data System (ADS)

    Mujtaba, Abid H.

    2018-02-01

    This paper presents a case study assessing and analyzing student engagement with and responses to binary, descriptive, and computational questions testing the concepts underlying resistor networks (series and parallel combinations). The participants of the study were undergraduate students enrolled in a university in Pakistan. The majority of students struggled with the descriptive question, and while successfully answering the binary and computational ones, they failed to build an expectation for the answer and betrayed a significant lack of conceptual understanding in the process. The data collected were also used to analyze the relative efficacy of the three questions as a means of assessing conceptual understanding. The three questions were revealed to be uncorrelated and unlikely to be testing the same construct. The ability to answer the binary or computational question was observed to be divorced from a deeper understanding of the concepts involved.

  8. A Computational and Experimental Study of Slit Resonators

    NASA Technical Reports Server (NTRS)

    Tam, C. K. W.; Ju, H.; Jones, M. G.; Watson, W. R.; Parrott, T. L.

    2003-01-01

    Computational and experimental studies are carried out to offer validation of the results obtained from direct numerical simulation (DNS) of the flow and acoustic fields of slit resonators. The test cases include slits with 90-degree corners and slits with 45-degree bevel angle housed inside an acoustic impedance tube. Three slit widths are used. Six frequencies from 0.5 to 3.0 kHz are chosen. Good agreement is found between computed and measured reflection factors. In addition, incident sound waves having white noise spectrum and a prescribed pseudo-random noise spectrum are used in subsequent series of tests. The computed broadband results are again found to agree well with experimental data. It is believed the present results provide strong support that DNS can eventually be a useful and accurate prediction tool for liner aeroacoustics. The usage of DNS as a design tool is discussed and illustrated by a simple example.

  9. Performance of the MIR Cooperative Solar Array After 2.5 Years in Orbit

    NASA Technical Reports Server (NTRS)

    Kerslake, Thomas W.; Hoffman, David J.

    1999-01-01

    The Mir Cooperative Solar Array (MCSA) was developed jointly by the United States and Russia to produce 6 kW of power for the Russian space station Mir. Four multi-orbit test sequences were executed between June 1996 and December 1998 to measure MCSA electrical performance. A dedicated Fortran computer code was developed to analyze the detailed thermal-electrical performance of the MCSA. The computational performance results compared very favorably with the measured flight data in most cases. Minor performance degradation was detected in one current-generating section of the MCSA. Yet overall, the flight data indicated the MCSA was meeting and exceeding performance expectations. There was no precipitous performance loss due to contamination or other causes after 2.5 years of operation. In this paper, we review the MCSA flight electrical performance tests, data, and computational modeling and discuss findings from data comparisons with the computational results.

  10. A computer program for helicopter rotor noise using Lowson's formula in the time domain

    NASA Technical Reports Server (NTRS)

    Parks, C. L.

    1975-01-01

    A computer program (D3910) was developed to calculate both the far field and near field acoustic pressure signature of a tilted rotor in hover or uniform forward speed. The analysis, carried out in the time domain, is based on Lowson's formulation of the acoustic field of a moving force. The digital computer program is described, including methods used in the calculations, a flow chart, program D3910 source listing, instructions for the user, and two test cases with input and output listings and output plots.

  11. Automated chest-radiography as a triage for Xpert testing in resource-constrained settings: a prospective study of diagnostic accuracy and costs

    NASA Astrophysics Data System (ADS)

    Philipsen, R. H. H. M.; Sánchez, C. I.; Maduskar, P.; Melendez, J.; Peters-Bax, L.; Peter, J. G.; Dawson, R.; Theron, G.; Dheda, K.; van Ginneken, B.

    2015-07-01

    Molecular tests hold great potential for tuberculosis (TB) diagnosis, but are costly, time consuming, and HIV-infected patients are often sputum scarce. Therefore, alternative approaches are needed. We evaluated automated digital chest radiography (ACR) as a rapid and cheap pre-screen test prior to Xpert MTB/RIF (Xpert). 388 suspected TB subjects underwent chest radiography, Xpert and sputum culture testing. Radiographs were analysed by computer software (CAD4TB) and specialist readers, and abnormality scores were allocated. A triage algorithm was simulated in which subjects with a score above a threshold underwent Xpert. We computed sensitivity, specificity, cost per screened subject (CSS), cost per notified TB case (CNTBC) and throughput for different diagnostic thresholds. 18.3% of subjects had culture positive TB. For Xpert alone, sensitivity was 78.9%, specificity 98.1%, CSS $13.09 and CNTBC $90.70. In a pre-screening setting where 40% of subjects would undergo Xpert, CSS decreased to $6.72 and CNTBC to $54.34, with eight TB cases missed and throughput increased from 45 to 113 patients/day. Specialists, on average, read 57% of radiographs as abnormal, reducing CSS ($8.95) and CNTBC ($64.84). ACR pre-screening could substantially reduce costs, and increase daily throughput with few TB cases missed. These data inform public health policy in resource-constrained settings.
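
    The threshold-triage arithmetic described above is simple to reproduce. The sketch below recomputes sensitivity, specificity, CSS, and CNTBC for a chosen CAD score threshold; the unit costs, score distributions, and the treatment of Xpert's own error rates are illustrative assumptions, not the study's actual cost model or data.

      import numpy as np

      def triage_metrics(scores, has_tb, threshold,
                         cost_cxr=1.0, cost_xpert=12.0,
                         xpert_sens=0.789, xpert_spec=0.981):
          """Expected performance of 'refer to Xpert if CAD score >= threshold'."""
          scores = np.asarray(scores, dtype=float)
          has_tb = np.asarray(has_tb, dtype=bool)
          referred = scores >= threshold
          n = scores.size
          tp = (referred & has_tb).sum() * xpert_sens          # expected notified cases
          fp = (referred & ~has_tb).sum() * (1.0 - xpert_spec)
          total_cost = n * cost_cxr + referred.sum() * cost_xpert
          return {"sensitivity": tp / has_tb.sum(),
                  "specificity": 1.0 - fp / (~has_tb).sum(),
                  "CSS": total_cost / n,             # cost per screened subject
                  "CNTBC": total_cost / tp}          # cost per notified TB case

      # Illustrative run on synthetic scores with the study's 18.3% prevalence.
      rng = np.random.default_rng(0)
      tb = rng.random(388) < 0.183
      score = np.where(tb, rng.normal(0.7, 0.2, 388), rng.normal(0.3, 0.2, 388))
      print(triage_metrics(score, tb, threshold=0.5))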

  12. Verifying a computational method for predicting extreme ground motion

    USGS Publications Warehouse

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  13. User's guide to the NOZL3D and NOZLIC computer programs

    NASA Technical Reports Server (NTRS)

    Thomas, P. D.

    1980-01-01

    Complete FORTRAN listings and running instructions are given for a set of computer programs that perform an implicit numerical solution to the unsteady Navier-Stokes equations to predict the flow characteristics and performance of nonaxisymmetric nozzles. The set includes the NOZL3D program, which performs the flow computations; the NOZLIC program, which sets up the flow field initial conditions for general nozzle configurations, and also generates the computational grid for simple two dimensional and axisymmetric configurations; and the RGRIDD program, which generates the computational grid for complicated three dimensional configurations. The programs are designed specifically for the NASA-Langley CYBER 175 computer, and employ auxiliary disk files for primary data storage. Input instructions and computed results are given for four test cases that include two dimensional, three dimensional, and axisymmetric configurations.

  14. Slit scan radiographic system for intermediate size rocket motors

    NASA Astrophysics Data System (ADS)

    Bernardi, Richard T.; Waters, David D.

    1992-12-01

    The development of slit-scan radiography capability for the NASA Advanced Computed Tomography Inspection System (ACTIS) computed tomography (CT) scanner at MSFC is discussed. This allows for tangential case interface (bondline) inspection at 2 MeV of intermediate-size rocket motors like the Hawk. Motorized mounting fixture hardware was designed, fabricated, installed, and tested on ACTIS. The ACTIS linear array of x-ray detectors was aligned parallel to the tangent line of a horizontal Hawk motor case. A 5 mm thick x-ray fan beam was used. Slit-scan images were produced with continuous rotation of a horizontal Hawk motor. Image features along Hawk motor case interfaces were indicated. A motorized exit cone fixture for ACTIS slit-scan inspection was also provided. The results of this SBIR have shown that slit scanning is an alternative imaging technique for case interface inspection. More data is required to qualify the technique for bondline inspection.

  15. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2001-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65-degree delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  16. Further development of the dynamic gas temperature measurement system. Volume 2: Computer program user's manual

    NASA Technical Reports Server (NTRS)

    Stocks, Dana R.

    1986-01-01

    The Dynamic Gas Temperature Measurement System compensation software accepts digitized data from two different diameter thermocouples and computes a compensated frequency response spectrum for one of the thermocouples. Detailed discussions of the physical system, analytical model, and computer software are presented in this volume and in Volume 1 of this report under Task 3. Computer program software restrictions and test cases are also presented. Compensated and uncompensated data may be presented in either the time or frequency domain. Time domain data are presented as instantaneous temperature vs time. Frequency domain data may be presented in several forms such as power spectral density vs frequency.
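
    As a sketch of what such frequency-domain compensation amounts to: if the thermocouple is modelled as a first-order lag, the measured spectrum can be re-amplified by the inverse of the lag transfer function. The snippet below assumes the time constant tau is already known, whereas the NASA system infers it in situ from the two different-diameter probes; everything here is an illustrative assumption, not the delivered software.

      import numpy as np

      def compensate(signal, dt, tau):
          """Invert a first-order sensor lag, T_meas(f) = T_gas(f)/(1 + j*2*pi*f*tau)."""
          n = signal.size
          spectrum = np.fft.rfft(signal)
          freqs = np.fft.rfftfreq(n, d=dt)
          spectrum *= 1.0 + 2j * np.pi * freqs * tau   # re-amplify attenuated content
          return np.fft.irfft(spectrum, n=n)

      # Recover a 100 Hz fluctuation attenuated by a tau = 5 ms probe.
      dt, tau = 1e-4, 5e-3
      t = np.arange(4096) * dt
      true = np.sin(2 * np.pi * 100 * t)
      lagged = np.fft.irfft(np.fft.rfft(true) /
                            (1 + 2j * np.pi * np.fft.rfftfreq(t.size, dt) * tau), n=t.size)
      print(np.max(np.abs(compensate(lagged, dt, tau) - true)))  # ~machine precision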

  17. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares QR (LSQR) decomposition, a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstruction of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
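
    For orientation, the underlying problem is min_x ||Ax - b||^2 + lambda^2 ||x||^2. The sketch below solves it with SciPy's damped LSQR solver and picks lambda by a brute-force L-curve-like scan over a grid; the paper's contribution is precisely to avoid such repeated full solves by working in the LSQR-projected subspace, so treat this as background illustration on assumed data, not the authors' algorithm.

      import numpy as np
      from scipy.sparse.linalg import lsqr

      def tikhonov_scan(A, b, lambdas):
          """Return (lambda, x) minimizing a simple L-curve-like criterion."""
          best = None
          for lam in lambdas:
              x = lsqr(A, b, damp=lam)[0]      # solves min ||Ax-b||^2 + lam^2||x||^2
              # Illustrative selection rule: product of residual and solution norms.
              score = np.linalg.norm(A @ x - b) * np.linalg.norm(x)
              if best is None or score < best[0]:
                  best = (score, lam, x)
          return best[1], best[2]

      rng = np.random.default_rng(1)
      A = rng.normal(size=(200, 80))
      b = A @ rng.normal(size=80) + 0.1 * rng.normal(size=200)
      lam, x = tikhonov_scan(A, b, np.logspace(-3, 1, 20))
      print("chosen lambda:", lam)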

  18. Comparative Study of High-Order Positivity-Preserving WENO Schemes

    NASA Technical Reports Server (NTRS)

    Kotov, D. V.; Yee, H. C.; Sjogreen, B.

    2014-01-01

    In gas dynamics and magnetohydrodynamics flows, physically, the density ρ and the pressure p should both be positive. In a standard conservative numerical scheme, however, the computed internal energy is obtained by subtracting the kinetic energy from the total energy, resulting in a computed p that may be negative. Examples are problems in which the dominant energy is kinetic; negative ρ may often emerge in computing blast waves. In such situations the computed eigenvalues of the Jacobian will become imaginary, and consequently the initial value problem for the linearized system will be ill posed. This explains why failure to preserve positivity of density or pressure may cause blow-ups of the numerical algorithm. Ad hoc numerical strategies that modify the computed negative density and/or negative pressure to be positive are neither a conservative cure nor a stable solution; conservative positivity-preserving schemes are more appropriate for such flow problems. The ideas of Zhang & Shu (2012) and Hu et al. (2012) precisely address this issue. Zhang & Shu constructed a conservative positivity-preserving procedure to preserve positive density and pressure for high-order Weighted Essentially Non-Oscillatory (WENO) schemes using the Lax-Friedrichs flux (WENO/LLF). In general, however, WENO/LLF is too dissipative for flows such as turbulence with strong shocks computed in direct numerical simulations (DNS) and large eddy simulations (LES). The conservative positivity-preserving procedure proposed in Hu et al. (2012) can be used with any high-order shock-capturing scheme, including high-order WENO schemes using Roe's flux (WENO/Roe). The goal of this study is to compare the results obtained by non-positivity-preserving methods with the recently developed positivity-preserving schemes for representative test cases. In particular, the more difficult 3D Noh and Sedov problems are considered; these test cases are chosen because of the negative pressure/density most often exhibited by standard high-order shock-capturing schemes. The simulation of a hypersonic nonequilibrium viscous shock tube related to the NASA Electric Arc Shock Tube (EAST) is also included. EAST is a high-temperature, high Mach number viscous nonequilibrium flow consisting of 13 species. In addition, as most common shock-capturing schemes have been developed for problems without source terms, when applied to problems with nonlinear and/or stiff source terms these methods can result in spurious solutions, even when solving a conservative system of equations with a conservative scheme; this behavior can be observed for a scalar case as well as for a case consisting of two species and one reaction. The EAST example indicated that standard high-order shock-capturing methods exhibit instability of density/pressure in addition to grid-dependent discontinuity locations with insufficient grid points. The evaluation of these test cases is based on the stability of the numerical schemes together with the accuracy of the obtained solutions.
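
    A minimal sketch of the flavor of limiter at issue, assuming a Zhang-Shu-style scaling toward an admissible cell average for the 1D Euler equations (a simplification for illustration, not the WENO/LLF or WENO/Roe machinery evaluated in the paper):

      import numpy as np

      GAMMA, EPS = 1.4, 1e-13

      def pressure(u):
          rho, mom, ener = u
          return (GAMMA - 1.0) * (ener - 0.5 * mom**2 / rho)  # p = (gamma-1)(E - KE)

      def limit(u_avg, u_points):
          """Scale reconstructed point values toward the (admissible) cell average
          until density and pressure are positive at every point."""
          theta = 1.0
          for u in u_points:
              t = theta
              while t > 0.0 and (u_avg[0] + t * (u[0] - u_avg[0]) < EPS or
                                 pressure(u_avg + t * (u - u_avg)) < EPS):
                  t *= 0.5                      # shrink toward the cell average
              theta = min(theta, t)
          return [u_avg + theta * (u - u_avg) for u in u_points]

      u_avg = np.array([1.0, 0.0, 2.5])         # rho = 1, p = 1: admissible average
      bad = np.array([0.2, 2.0, 1.0])           # reconstructed value with p < 0
      print(pressure(limit(u_avg, [bad])[0]) > 0.0)   # True after limiting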

  19. Development of a Quantitative Decision Metric for Selecting the Most Suitable Discretization Method for SN Transport Problems

    NASA Astrophysics Data System (ADS)

    Schunert, Sebastian

    In this work we develop a quantitative decision metric for spatial discretization methods of the SN equations. The quantitative decision metric utilizes performance data from selected test problems for computing a fitness score that is used for the selection of the most suitable discretization method for a particular SN transport application. The fitness score is aggregated as a weighted geometric mean of single performance indicators representing various performance aspects relevant to the user. Thus, the fitness function can be adjusted to the particular needs of the code practitioner by adding/removing single performance indicators or changing their importance via the supplied weights. Within this work a special, broad class of methods is considered, referred to as nodal methods. This class is naturally comprised of the DGFEM methods of all function space families. Within this work it is also shown that the Higher Order Diamond Difference (HODD) method is a nodal method. Building on earlier findings that the Arbitrarily High Order Method of the Nodal type (AHOTN) is also a nodal method, a generalized finite-element framework is created to yield as special cases various methods that were developed independently using profoundly different formalisms. A selection of test problems, each related to a certain performance aspect, is considered: a Method of Manufactured Solutions (MMS) test suite for assessing accuracy and execution time, Lathrop's test problem for assessing resilience against the occurrence of negative fluxes, and a simple, homogeneous cube test problem to verify whether a method possesses the thick diffusive limit. The contending methods are implemented as efficiently as possible under a common SN transport code framework to level the playing field for a fair comparison of their computational load. Numerical results are presented for all three test problems, and a qualitative rating of each method's performance is provided separately for each aspect: accuracy/efficiency, resilience against negative fluxes, and possession of the thick diffusion limit. The choice of the most efficient method depends on the utilized error norm: in Lp error norms higher-order methods such as the AHOTN method of order three perform best, while for computing integral quantities the linear nodal (LN) method is most efficient. The most resilient method against the occurrence of negative fluxes is the simple corner balance (SCB) method. A validation of the quantitative decision metric is performed based on the NEA box-in-box suite of test problems. The validation exercise comprises two stages: first, prediction of the contending methods' performance via the decision metric, and second, computing the actual scores based on data obtained from the NEA benchmark problem. The comparison of predicted and actual scores via a penalty function (the ratio of the predicted best performer's score to the actual best score) completes the validation exercise. It is found that the decision metric is capable of very accurate predictions (penalty < 10%) in more than 83% of the considered cases and features penalties up to 20% for the remaining cases. An exception to this rule is the third test case, NEA-III, intentionally set up to incorporate a poor match between the benchmark and the "data" problems. However, even under these worst-case conditions the decision metric's suggestions are never detrimental. Suggestions for improving the decision metric's accuracy are to increase the pool of employed data, to refine the mapping of a given configuration to a case in the database, and to better characterize the desired target quantities.
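
    The aggregation rule itself is one line of arithmetic; a minimal sketch, with indicator names and weights invented for illustration:

      import numpy as np

      def fitness(indicators, weights):
          """Weighted geometric mean: prod(s_i ** w_i), weights normalized to sum to 1."""
          s = np.asarray(indicators, dtype=float)
          w = np.asarray(weights, dtype=float)
          w = w / w.sum()
          return float(np.exp(np.sum(w * np.log(s))))

      # accuracy/efficiency, negative-flux resilience, thick-diffusion-limit score
      print(fitness([0.9, 0.6, 1.0], [0.5, 0.3, 0.2]))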

  20. Case Series Investigations in Cognitive Neuropsychology

    PubMed Central

    Schwartz, Myrna F.; Dell, Gary S.

    2011-01-01

    Case series methodology involves the systematic assessment of a sample of related patients, with the goal of understanding how and why they differ from one another. This method has become increasingly important in cognitive neuropsychology, which has long been identified with single-subject research. We review case series studies dealing with impaired semantic memory, reading, and language production, and draw attention to the affinity of this methodology for testing theories that are expressed as computational models and for addressing questions about neuroanatomy. It is concluded that case series methods usefully complement single-subject techniques. PMID:21714756

  1. Automated Agatston score computation in non-ECG gated CT scans using deep learning

    NASA Astrophysics Data System (ADS)

    Cano-Espinosa, Carlos; González, Germán; Washko, George R.; Cazorla, Miguel; San José Estépar, Raúl

    2018-03-01

    Introduction: The Agatston score is a well-established metric of cardiovascular disease related to clinical outcomes. It is computed from CT scans by a) measuring the volume and intensity of the atherosclerotic plaques and b) aggregating such information in an index. Objective: To generate a convolutional neural network that inputs a non-contrast chest CT scan and outputs the Agatston score associated with it directly, without a prior segmentation of Coronary Artery Calcifications (CAC). Materials and methods: We use a database of 5973 non-contrast non-ECG gated chest CT scans where the Agatston score has been manually computed. The heart of each scan is cropped automatically using an object detector. The database is split into 4973 cases for training and 1000 for testing. We train a 3D deep convolutional neural network to regress the Agatston score directly from the extracted hearts. Results: The proposed method yields a Pearson correlation coefficient of r = 0.93 (p <= 0.0001) against the manual reference standard in the 1000 test cases. It further correctly stratifies 72.6% of the cases with respect to standard risk groups. This compares to more complex state-of-the-art methods based on prior segmentations of the CACs, which achieve r = 0.94 in ECG-gated pulmonary CT. Conclusions: A convolutional neural network can regress the Agatston score from the image of the heart directly, without a prior segmentation of the CACs. This is a new and simpler paradigm in Agatston score computation that yields results similar to the state-of-the-art literature.
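
    A minimal sketch of direct score regression from a volume, assuming a toy architecture and random data (the paper's actual network, input size, and training pipeline are not reproduced here):

      import torch
      import torch.nn as nn

      class ScoreRegressor(nn.Module):
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool3d(1),
              )
              self.head = nn.Linear(64, 1)   # regress the scalar score directly

          def forward(self, x):
              return self.head(self.features(x).flatten(1)).squeeze(1)

      model = ScoreRegressor()
      volume = torch.randn(2, 1, 64, 64, 64)        # batch of cropped heart volumes
      loss = nn.MSELoss()(model(volume), torch.tensor([120.0, 0.0]))
      loss.backward()                               # one illustrative training step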

  2. Implementing secure laptop-based testing in an undergraduate nursing program: a case study.

    PubMed

    Tao, Jinyuan; Lorentz, B Chris; Hawes, Stacey; Rugless, Fely; Preston, Janice

    2012-07-01

    This article presents the implementation of secure laptop-based testing in an undergraduate nursing program. Details on how to design, develop, implement, and secure tests are discussed. The laptop-based testing model is also compared with the computer-laboratory-based testing model. Five elements of the laptop-based testing model are illustrated: (1) it simulates the national board examination, (2) security is achievable, (3) it is convenient for both instructors and students, (4) it provides students with hands-on practice, and (5) continuous technical support is the key.

  3. Efficient analytical implementation of the DOT Riemann solver for the de Saint Venant-Exner morphodynamic model

    NASA Astrophysics Data System (ADS)

    Carraro, F.; Valiani, A.; Caleffi, V.

    2018-03-01

    Within the framework of the de Saint Venant equations coupled with the Exner equation for morphodynamic evolution, this work presents a new efficient implementation of the Dumbser-Osher-Toro (DOT) scheme for non-conservative problems. The DOT path-conservative scheme is a robust upwind method based on a complete Riemann solver, but it has the drawback of requiring expensive numerical computations. Indeed, to compute the non-linear time evolution in each time step, the DOT scheme requires numerical computation of the flux matrix eigenstructure (the totality of eigenvalues and eigenvectors) several times at each cell edge. In this work, an analytical and compact formulation of the eigenstructure for the de Saint Venant-Exner (dSVE) model is introduced and tested in terms of numerical efficiency and stability. Using the original DOT and PRICE-C (a very efficient FORCE-type method) as reference methods, we present a convergence analysis (error against CPU time) to study the performance of the DOT method with our new analytical implementation of eigenstructure calculations (A-DOT). In particular, the numerical performance of the three methods is tested in three test cases: a movable bed Riemann problem with analytical solution; a problem with smooth analytical solution; a test in which the water flow is characterised by subcritical and supercritical regions. For a given target error, the A-DOT method is always the most efficient choice. Finally, two experimental data sets and different transport formulae are considered to test the A-DOT model in more practical case studies.

  4. Differential computation method used to calibrate the angle-centroid relationship in coaxial reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Li, Xinji; Hui, Mei; Zhao, Zhu; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-05-01

    A differential computation method is presented to improve the precision of calibration for the coaxial reverse Hartmann test (RHT). In the calibration, the accuracy of the distance measurement greatly influences the surface shape test, as demonstrated in the mathematical analyses. However, high-precision absolute distance measurement is difficult in the calibration. Thus, a differential computation method that only requires the relative distance was developed. In the proposed method, a liquid crystal display screen successively displayed two regular dot matrix patterns with different dot spacing. In a special case, images on the detector exhibited similar centroid distributions during the reflector translation. Thus, the critical value of the relative displacement distance and the centroid distributions of the dots on the detector were utilized to establish the relationship between the rays at certain angles and the detector coordinates. Experiments revealed the approximately linear behavior of the centroid variation with the relative displacement distance. With the differential computation method, the precision of the traditional calibration was increased to 10^-5 rad root mean square, and the precision of the RHT was increased by approximately 100 nm.
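
    Given the reported near-linear centroid-versus-displacement behavior, the calibration reduces to a least-squares line fit; a minimal sketch with synthetic placeholder numbers, not the experiment's data:

      import numpy as np

      displacement = np.array([0.0, 2.0, 4.0, 6.0, 8.0])          # relative shift, mm
      centroid_shift = np.array([0.01, 1.98, 4.03, 5.97, 8.02])   # detector pixels
      slope, intercept = np.polyfit(displacement, centroid_shift, 1)
      residual_rms = np.sqrt(np.mean(
          (np.polyval([slope, intercept], displacement) - centroid_shift) ** 2))
      print(slope, intercept, residual_rms)   # slope maps displacement to centroid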

  5. A Case-Series Test of the Interactive Two-Step Model of Lexical Access: Predicting Word Repetition from Picture Naming

    ERIC Educational Resources Information Center

    Dell, Gary S.; Martin, Nadine; Schwartz, Myrna F.

    2007-01-01

    Lexical access in language production, and particularly pathologies of lexical access, are often investigated by examining errors in picture naming and word repetition. In this article, we test a computational approach to lexical access, the two-step interactive model, by examining whether the model can quantitatively predict the repetition-error…

  6. MicroHH 1.0: a computational fluid dynamics code for direct numerical simulation and large-eddy simulation of atmospheric boundary layer flows

    NASA Astrophysics Data System (ADS)

    van Heerwaarden, Chiel C.; van Stratum, Bart J. H.; Heus, Thijs; Gibbs, Jeremy A.; Fedorovich, Evgeni; Mellado, Juan Pedro

    2017-08-01

    This paper describes MicroHH 1.0, a new and open-source (www.microhh.org) computational fluid dynamics code for the simulation of turbulent flows in the atmosphere. It is primarily made for direct numerical simulation but also supports large-eddy simulation (LES). The paper covers the description of the governing equations, their numerical implementation, and the parameterizations included in the code. Furthermore, the paper presents the validation of the dynamical core in the form of convergence and conservation tests, and comparison of simulations of channel flows and slope flows against well-established test cases. The full numerical model, including the associated parameterizations for LES, has been tested for a set of cases under stable and unstable conditions, under the Boussinesq and anelastic approximations, and with dry and moist convection under stationary and time-varying boundary conditions. The paper presents performance tests showing good scaling from 256 to 32 768 processes. The graphical processing unit (GPU)-enabled version of the code can reach a speedup of more than an order of magnitude for simulations that fit in the memory of a single GPU.

  7. Utilising three-dimensional printing techniques when providing unique assistive devices: A case report.

    PubMed

    Day, Sarah Jane; Riley, Shaun Patrick

    2018-02-01

    The evolution of three-dimensional printing into prosthetics has opened conversations about the availability and cost of prostheses. This report discusses how a prosthetic team incorporated additive manufacture techniques into the treatment of a patient with a partial hand amputation to create and test a unique assistive device which he could use to hold his French horn. Case description and methods: Using a process of shape capture, photogrammetry, computer-aided design and finite element analysis, a suitable assistive device was designed and tested. The design was fabricated using three-dimensional printing. Patient satisfaction was measured using a Pugh's Matrix™, and a cost comparison was made between the process used and traditional manufacturing. Findings and outcomes: Patient satisfaction was high. The three-dimensional printed devices were 56% cheaper to fabricate than a similar laminated device. Computer-aided design and three-dimensional printing proved to be an effective method for designing, testing and fabricating a unique assistive device. Clinical relevance: CAD and 3D printing techniques can enable devices to be designed, tested and fabricated more cheaply than with traditional techniques. This may lead to improvements in quality and accessibility.

  8. Airside HVAC BESTEST: HVAC Air-Distribution System Model Test Cases for ASHRAE Standard 140

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judkoff, Ronald; Neymark, Joel; Kennedy, Mike D.

    This paper summarizes recent work to develop new airside HVAC equipment model analytical verification test cases for ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs. The analytical verification test method allows comparison of simulation results from a wide variety of building energy simulation programs with quasi-analytical solutions, further described below. Standard 140 is widely cited for evaluating software for use with performance-path energy efficiency analysis, in conjunction with well-known energy-efficiency standards including ASHRAE Standard 90.1, the International Energy Conservation Code, and other international standards. Airside HVAC equipment is a common area of modelling not previously explicitly tested by Standard 140. Integration of the completed test suite into Standard 140 is in progress.

  9. Evaluation of Critical Bandwidth Using Digitally Processed Speech.

    DTIC Science & Technology

    1982-05-12

    ... observed after repeating the two tests on persons with confirmed cases of sensorineural hearing impairment. Again, the plotted speech discrimination ... quantifying the critical bandwidth of persons on a clinical or pre-employment level. The complex portion of the test design (the computer generation of ... "super" normal hearing individuals (i.e., those persons with narrower-than-normal critical bands). This ability of the test shows promise as a valuable ...

  10. A Micro-Computer Model for Army Air Defense Training.

    DTIC Science & Technology

    1985-03-01

    ... generator. The period is 32763 numbers generated before a repetitive sequence is encountered on the development system. Chi-squared tests for frequency ... tests periodicity ... positions in the test array. This was done with several different random number seeds. In each case 32763 random numbers were generated before a ...
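
    The period check these snippets describe is easy to reproduce for any congruential generator; a minimal sketch with arbitrary placeholder constants, not the report's generator:

      def period(seed, a=75, c=74, m=2**16 + 1):
          """Iterate x -> (a*x + c) mod m until a state repeats; return cycle length."""
          seen = {}
          x, step = seed, 0
          while x not in seen:
              seen[x] = step
              x = (a * x + c) % m
              step += 1
          return step - seen[x]     # length of the repeating cycle

      print(period(1))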

  11. Numerical predictions and measurements in the lubrication of aeronautical engine and transmission components

    NASA Astrophysics Data System (ADS)

    Moraru, Laurentiu Eugen

    2005-11-01

    This dissertation treats a variety of aspects of the lubrication of mechanical components encountered in aeronautical engines and transmissions. The study covers dual clearance squeeze film dampers, mixed elastohydrodynamic lubrication (EHL) cases, and thermal elastohydrodynamic contacts. The dual clearance squeeze film damper (SFD) invented by Fleming is investigated both theoretically and experimentally, for cases when the sleeve that separates the two oil films is free to float and for cases when the separating sleeve is supported by a squirrel cage. The Reynolds equation is developed to handle each of these cases and is solved analytically for short bearings. A rotordynamic model of a test rig is developed for both the single and dual SFD cases, and a computer code is written to calculate the motion of the test rig rotor. Experiments are performed in order to validate the theoretical results; rotordynamics computations are found to agree favorably with measured data. A probabilistic model for mixed EHL is developed and implemented. The surface roughness of gears is measured and processed. The mixed EHL model incorporates the average flow model of Patir and Cheng and the elasto-plastic contact mechanics model of Chang, Etsion, and Bogy. The current algorithm allows for the computation of the load supported by an oil film and of the load supported by the elasto-plastically deformed asperities. This work also presents a way to incorporate the effect of the fluid-induced roughness deformation by utilizing the "amplitude reduction" results provided by the deterministic analyses. The Lobatto point Gaussian integration algorithm of Elrod and Brewe was extended to thermal lubrication problems involving compressible lubricants and implemented in thermal elastohydrodynamic cases. The unknown variables across the film are written as series of Legendre polynomials. The thermal Reynolds equation is obtained in terms of the series coefficients, and it is proven that it can only explicitly contain the information from the first three Legendre polynomials. A computer code was written to implement the Lobatto point algorithm for an EHL line contact. Use of the Lobatto point calculation method has resulted in greater accuracy without the use of a larger number of grid points.
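
    As an illustration of the cross-film Legendre expansion described above, the sketch below projects a profile onto P_0..P_3 by quadrature. Ordinary Gauss-Legendre nodes are used to keep it to stock numpy, whereas the dissertation works with Lobatto points, and the profile itself is an invented placeholder.

      import numpy as np
      from numpy.polynomial import legendre

      def legendre_coeffs(f, n_terms, n_quad=16):
          """Project f on P_0..P_{n_terms-1} over z in [-1, 1]."""
          z, w = legendre.leggauss(n_quad)
          coeffs = []
          for k in range(n_terms):
              pk = legendre.legval(z, [0] * k + [1])      # P_k at the quadrature nodes
              # c_k = (2k+1)/2 * integral of f(z) P_k(z) dz, by orthogonality
              coeffs.append((2 * k + 1) / 2.0 * np.sum(w * f(z) * pk))
          return np.array(coeffs)

      f = lambda z: 1.0 + 0.5 * z - 0.3 * z**2            # toy cross-film profile
      c = legendre_coeffs(f, 4)
      print(np.allclose(legendre.legval(0.3, c), f(0.3))) # series reproduces f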

  12. Distributed storage and cloud computing: a test case

    NASA Astrophysics Data System (ADS)

    Piano, S.; Della Ricca, G.

    2014-06-01

    Since 2003 the computing farm hosted by the INFN Tier3 facility in Trieste has supported the activities of many scientific communities. Hundreds of jobs from 45 different VOs, including those of the LHC experiments, are processed simultaneously. Given that the requirements of the different computational communities are normally not synchronized, the probability that at any given time the resources owned by one of the participants are not fully utilized is quite high. A balanced compensation should in principle allocate the free resources to other users, but there are limits to this mechanism. In fact, the Trieste site may not hold the amount of data needed to attract enough analysis jobs, and even in that case there could be a lack of bandwidth for their access. The Trieste ALICE and CMS computing groups, in collaboration with other Italian groups, aim to overcome the limitations of existing solutions using two approaches: sharing the data among all the participants taking full advantage of GARR-X wide area networks (10 Gb/s), and integrating the resources dedicated to batch analysis with the ones reserved for dynamic interactive analysis, through modern solutions such as cloud computing.

  13. Case study of supply induced demand: the case of provision of imaging scans (computed tomography and magnetic resonance) at Unimed-Manaus.

    PubMed

    Andrade, Edson de Oliveira; Andrade, Elizabeth Nogueira de; Gallo, José Hiran

    2011-01-01

    To present the experience of a health plan operator (Unimed-Manaus) in Manaus, Amazonas, Brazil, with the accreditation of imaging services and the demand induced by the supply of new services (Roemer's Law). This is a retrospective study of a time series covering the period from January 1998 to June 2004, in which computed tomography and magnetic resonance imaging services were implemented as part of the services offered by the health plan operator. Statistical analysis consisted of a descriptive and an inferential part, the latter using parametric mean tests (Student's t-test and ANOVA) and the Pearson correlation test. A 5% alpha and a 95% confidence interval were adopted. At Unimed-Manaus, the supply of new imaging services was, by itself, identified as capable of generating an increased service demand, thus characterizing the phenomenon described by Roemer. The results underscore the need to be aware that the supply of new health services could bring about their increased use without a real demand.

  14. Estimating Skin Cancer Risk: Evaluating Mobile Computer-Adaptive Testing.

    PubMed

    Djaja, Ngadiman; Janda, Monika; Olsen, Catherine M; Whiteman, David C; Chien, Tsair-Wei

    2016-01-22

    Response burden is a major detriment to questionnaire completion rates. Computer adaptive testing may offer advantages over non-adaptive testing, including reduction of the number of items required for precise measurement. Our aim was to compare the efficiency of non-adaptive testing (NAT) and computer adaptive testing (CAT) facilitated by Partial Credit Model (PCM)-derived calibration to estimate skin cancer risk. We used a random sample from a population-based Australian cohort study of skin cancer risk (N=43,794). All 30 items of the skin cancer risk scale were calibrated with the Rasch PCM. A total of 1000 cases following a normal distribution (mean [SD] 0 [1]) were simulated using three Rasch models with three fixed-item scenarios (dichotomous, rating scale, and partial credit), respectively. We calculated the comparative efficiency and precision of CAT and NAT (shortening of questionnaire length and a count difference number ratio less than 5% using independent t tests). We found that use of CAT led to a smaller person standard error of the estimated measure than NAT, with substantially higher efficiency but no loss of precision, reducing response burden by 48%, 66%, and 66% for the dichotomous, Rating Scale Model, and PCM models, respectively. CAT-based administration of the skin cancer risk scale could substantially reduce participant burden without compromising measurement precision. A mobile computer adaptive test was developed to help people efficiently assess their skin cancer risk.
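
    A minimal sketch of the adaptive loop CAT relies on, assuming a dichotomous Rasch model for brevity (the study calibrated with the Partial Credit Model): pick the most informative remaining item at the current ability estimate, record the response, re-estimate, repeat.

      import numpy as np

      def prob(theta, b):
          return 1.0 / (1.0 + np.exp(-(theta - b)))     # Rasch success probability

      def estimate_theta(bs, xs, iters=20):
          """Newton-Raphson MLE of ability from administered items/responses."""
          theta, bs, xs = 0.0, np.asarray(bs, float), np.asarray(xs, float)
          for _ in range(iters):
              p = prob(theta, bs)
              grad, hess = np.sum(xs - p), -np.sum(p * (1.0 - p))
              theta = float(np.clip(theta - grad / hess, -4.0, 4.0))  # keep bounded
          return theta

      def run_cat(bank, answer, n_items=10):
          administered, responses, theta = [], [], 0.0
          remaining = list(range(len(bank)))
          for _ in range(n_items):
              info = [prob(theta, bank[i]) * (1 - prob(theta, bank[i]))
                      for i in remaining]
              pick = remaining.pop(int(np.argmax(info)))  # most informative item
              administered.append(bank[pick])
              responses.append(answer(pick))
              theta = estimate_theta(administered, responses)
          se = 1.0 / np.sqrt(sum(prob(theta, b) * (1 - prob(theta, b))
                                 for b in administered))
          return theta, se

      rng = np.random.default_rng(2)
      bank = rng.normal(0, 1, 50)
      theta, se = run_cat(bank, lambda i: rng.random() < prob(1.2, bank[i]))
      print(round(theta, 2), round(se, 2))   # estimate near the true ability of 1.2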

  15. Blade loss transient dynamics analysis, volume 1. Task 1: Survey and perspective. [aircraft gas turbine engines

    NASA Technical Reports Server (NTRS)

    Gallardo, V. C.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.

    1981-01-01

    An analytical technique was developed to predict the behavior of a rotor system subjected to sudden unbalance. The technique is implemented in the Turbine Engine Transient Rotor Analysis (TETRA) computer program using the component element method. The analysis was particularly aimed toward blade-loss phenomena in gas turbine engines. A dual-rotor, casing, and pylon structure can be modeled by the computer program. Blade tip rubs, Coriolis forces, and mechanical clearances are included. The analytical system was verified by modeling and simulating actual test conditions for a rig test as well as a full-engine, blade-release demonstration.

  16. An implicit higher-order spatially accurate scheme for solving time dependent flows on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Tomaro, Robert F.

    1998-07-01

    The present research is aimed at developing a higher-order, spatially accurate scheme for both steady and unsteady flow simulations using unstructured meshes. The resulting scheme must work on a variety of general problems to ensure the creation of a flexible, reliable and accurate aerodynamic analysis tool. To calculate the flow around complex configurations, unstructured grids and the associated flow solvers have been developed. Efficient simulations require the minimum use of computer memory and computational time. Unstructured flow solvers typically require more computer memory than a structured flow solver due to the indirect addressing of the cells. The approach taken in the present research was to modify an existing three-dimensional unstructured flow solver to first decrease the computational time required for a solution and then to increase the spatial accuracy. The terms required to simulate flow involving non-stationary grids were also implemented. First, an implicit solution algorithm was implemented to replace the existing explicit procedure. Several test cases, including internal and external, inviscid and viscous, two-dimensional, three-dimensional and axisymmetric problems, were simulated for comparison between the explicit and implicit solution procedures. The increased efficiency and robustness of the modified code due to the implicit algorithm were demonstrated. Two unsteady test cases, a plunging airfoil and a wing undergoing bending and torsion, were simulated using the implicit algorithm modified to include the terms required for a moving and/or deforming grid. Secondly, a higher than second-order spatially accurate scheme was developed and implemented into the baseline code. Third- and fourth-order spatially accurate schemes were implemented and tested. The original dissipation was modified to include higher-order terms and modified near shock waves to limit pre- and post-shock oscillations. The unsteady cases were repeated using the higher-order spatially accurate code, and the new solutions were compared with those obtained using the second-order spatially accurate scheme. Finally, the increased efficiency of using an implicit solution algorithm in a production Computational Fluid Dynamics flow solver was demonstrated for steady and unsteady flows. A third- and fourth-order spatially accurate scheme has been implemented, creating a basis for a state-of-the-art aerodynamic analysis tool.

  17. Analysis of Test Case Computations and Experiments for the First Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Schuster, David M.; Heeg, Jennifer; Wieseman, Carol D.; Chwalowski, Pawel

    2013-01-01

    This paper compares computational and experimental data from the Aeroelastic Prediction Workshop (AePW) held in April 2012. This workshop was designed as a series of technical interchange meetings to assess the state of the art of computational methods for predicting unsteady flowfields and static and dynamic aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques to simulate aeroelastic problems and to identify computational and experimental areas needing additional research and development. Three subject configurations were chosen from existing wind-tunnel data sets where there is pertinent experimental data available for comparison. Participant researchers analyzed one or more of the subject configurations, and results from all of these computations were compared at the workshop.

  18. Computation of transonic flow past projectiles at angle of attack

    NASA Technical Reports Server (NTRS)

    Reklis, R. P.; Sturek, W. B.; Bailey, F. R.

    1978-01-01

    Aerodynamic properties of artillery shell such as normal force and pitching moment reach peak values in a narrow transonic Mach number range. In order to compute these quantities, numerical techniques have been developed to obtain solutions to the three-dimensional transonic small disturbance equation about slender bodies at angle of attack. The computation is based on a plane relaxation technique involving Fourier transforms to partially decouple the three-dimensional difference equations. Particular care is taken to assure accurate solutions near corners found in shell designs. Computed surface pressures are compared to experimental measurements for circular arc and cone cylinder bodies which have been selected as test cases. Computed pitching moments are compared to range measurements for a typical projectile shape.

  19. Determining mode excitations of vacuum electronics devices via three-dimensional simulations using the SOS code

    NASA Technical Reports Server (NTRS)

    Warren, Gary

    1988-01-01

    The SOS code is used to compute the resonance modes (frequency-domain information) of sample devices and, separately, to compute the transient behavior of the same devices. A code, DOT, is created to compute appropriate dot products of the time-domain and frequency-domain results. The transient behavior of individual modes in the device is then plotted. Modes in a coupled-cavity traveling-wave tube (CCTWT) section excited by a beam are analyzed in separate simulations. Mode energy vs. time and mode phase vs. time are computed, and it is determined whether the transient waves are forward or backward waves for each case. Finally, the hot-test mode frequencies of the CCTWT section are computed.

  20. Similarity based false-positive reduction for breast cancer using radiographic and pathologic imaging features

    NASA Astrophysics Data System (ADS)

    Pai, Akshay; Samala, Ravi K.; Zhang, Jianying; Qian, Wei

    2010-03-01

    Mammography reading by radiologists and breast tissue image interpretation by pathologists often lead to high False Positive (FP) rates. Similarly, current Computer Aided Diagnosis (CADx) methods tend to concentrate more on sensitivity, thus increasing the FP rates. A novel method is introduced here which employs a similarity-based approach to decrease the FP rate in the diagnosis of microcalcifications. This method employs Principal Component Analysis (PCA) and similarity metrics in order to achieve the proposed goal. The training and testing sets are divided into generalized (Normal and Abnormal) and more specific (Abnormal, Normal, Benign) classes. The performance of this method as a standalone classification system is evaluated in both cases (general and specific). In another approach, the probability of each case belonging to a particular class is calculated. If the probabilities are too close to classify, the augmented CADx system can be instructed to perform a detailed analysis of such cases. For normal cases with high probability, no further processing is necessary, thus reducing the computation time. Hence, this novel method can be employed in cascade with CADx to reduce the FP rate and also avoid unnecessary computational time. Using this methodology, false positive rates of 8% and 11% were achieved for mammography and cellular images, respectively.
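
    A minimal sketch of the decision structure described above, assuming PCA feature reduction, cosine similarity to class centroids, and a "too close to call" band that defers borderline cases for detailed CADx analysis; the features, thresholds, and data are placeholders, not the paper's pipeline.

      import numpy as np
      from sklearn.decomposition import PCA

      def fit(train_X, train_y, n_components=10):
          pca = PCA(n_components=n_components).fit(train_X)
          Z = pca.transform(train_X)
          centroids = {c: Z[train_y == c].mean(axis=0) for c in np.unique(train_y)}
          return pca, centroids

      def classify(pca, centroids, x, margin=0.05):
          z = pca.transform(x.reshape(1, -1))[0]
          sims = {c: np.dot(z, m) / (np.linalg.norm(z) * np.linalg.norm(m) + 1e-12)
                  for c, m in centroids.items()}
          labels = list(sims.keys())
          probs = np.exp(list(sims.values()))
          probs /= probs.sum()                        # pseudo-probabilities
          order = np.argsort(probs)[::-1]
          if probs[order[0]] - probs[order[1]] < margin:
              return "defer"                          # hand off for detailed analysis
          return labels[order[0]]

      rng = np.random.default_rng(3)
      X = np.vstack([rng.normal(0, 1, (50, 40)), rng.normal(1.5, 1, (50, 40))])
      y = np.array(["normal"] * 50 + ["abnormal"] * 50)
      pca, cents = fit(X, y)
      print(classify(pca, cents, X[0]))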

  1. Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: a primer and applications.

    PubMed

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E

    2014-04-01

    This article presents a d-statistic for single-case designs (SCDs) that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments, and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between-case variance to total variance (between-case plus within-case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed and random effect average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
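
    A heavily simplified sketch of the variance decomposition the abstract describes, for AB single-case data pooled across at least three cases: the mean phase difference is standardized by the square root of between-case plus within-case variance. This is a crude illustration, not the authors' small-sample-corrected estimator or their SPSS/R machinery.

      import numpy as np

      def scd_d(cases):
          """cases: list of (baseline_A, treatment_B) arrays, one pair per case."""
          assert len(cases) >= 3, "computation requires at least three cases"
          effects = [b.mean() - a.mean() for a, b in cases]
          # within-case variance: pooled spread of observations about phase means
          within = np.mean([np.concatenate([a - a.mean(), b - b.mean()]).var(ddof=1)
                            for a, b in cases])
          # between-case variance: spread of case-level means across cases
          between = np.var([np.concatenate([a, b]).mean() for a, b in cases], ddof=1)
          return np.mean(effects) / np.sqrt(between + within)

      rng = np.random.default_rng(4)
      data = [(rng.normal(0, 1, 8), rng.normal(1, 1, 10)) for _ in range(4)]
      print(round(scd_d(data), 2))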

  2. Validation of a numerical method for interface-resolving simulation of multicomponent gas-liquid mass transfer and evaluation of multicomponent diffusion models

    NASA Astrophysics Data System (ADS)

    Woo, Mino; Wörner, Martin; Tischer, Steffen; Deutschmann, Olaf

    2018-03-01

    The multicomponent model and the effective diffusivity model are well-established diffusion models for the numerical simulation of single-phase flows consisting of several components, but so far they have seldom been used for two-phase flows. In this paper, a specific numerical model for interfacial mass transfer by means of a continuous single-field concentration formulation is combined with the multicomponent model and the effective diffusivity model and is validated for multicomponent mass transfer. For this purpose, several test cases for one-dimensional physical or reactive mass transfer of ternary mixtures are considered. The numerical results are compared with analytical or numerical solutions of the Maxwell-Stefan equations and/or experimental data. The composition-dependent elements of the diffusivity matrix of the multicomponent and effective diffusivity models are found to differ substantially under non-dilute conditions. The species mole fraction or concentration profiles computed with both diffusion models are, however, very similar for all test cases and in good agreement with the analytical/numerical solutions or measurements. For practical computations, the effective diffusivity model is recommended due to its simplicity and lower computational cost.
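
    For reference, one common (Wilke-type) form of the effective diffusivity, given here as background rather than as the paper's exact model variant, expresses each species' effective diffusivity in terms of the Maxwell-Stefan binary diffusivities \mathcal{D}_{ij} and mole fractions x_j:

      D_{i,\mathrm{eff}} = \frac{1 - x_i}{\sum_{j \ne i} x_j / \mathcal{D}_{ij}}

    The dependence on the x_j is what makes the resulting diffusivity matrix composition dependent, which is the behavior the paper examines under non-dilute conditions.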

  3. Application of an Upwind High Resolution Finite-Differencing Scheme and Multigrid Method in Steady-State Incompressible Flow Simulations

    NASA Technical Reports Server (NTRS)

    Yang, Cheng I.; Guo, Yan-Hu; Liu, C.- H.

    1996-01-01

    The analysis and design of a submarine propulsor requires the ability to predict the characteristics of both laminar and turbulent flows to a higher degree of accuracy. This report presents results of certain benchmark computations based on an upwind, high-resolution, finite-differencing Navier-Stokes solver. The purpose of the computations is to evaluate the ability, the accuracy and the performance of the solver in the simulation of detailed features of viscous flows. Features of interest include flow separation and reattachment, surface pressure and skin friction distributions. Those features are particularly relevant to the propulsor analysis. Test cases with a wide range of Reynolds numbers are selected; therefore, the effects of the convective and the diffusive terms of the solver can be evaluated separately. Test cases include flows over bluff bodies, such as circular cylinders and spheres, at various low Reynolds numbers, flows over a flat plate with and without turbulence effects, and turbulent flows over axisymmetric bodies with and without propulsor effects. Finally, to enhance the iterative solution procedure, a full approximation scheme V-cycle multigrid method is implemented. Preliminary results indicate that the method significantly reduces the computational effort.

  4. Predicting bioactive conformations and binding modes of macrocycles

    NASA Astrophysics Data System (ADS)

    Anighoro, Andrew; de la Vega de León, Antonio; Bajorath, Jürgen

    2016-10-01

    Macrocyclic compounds experience increasing interest in drug discovery. It is often thought that these large and chemically complex molecules provide promising candidates to address difficult targets and interfere with protein-protein interactions. From a computational viewpoint, these molecules are difficult to treat. For example, flexible docking of macrocyclic compounds is hindered by the limited ability of current docking approaches to optimize conformations of extended ring systems for pose prediction. Herein, we report predictions of bioactive conformations of macrocycles using conformational search and of binding modes using docking. Conformational ensembles generated using a specialized search technique contained accurate bioactive conformations for about 70% of the tested macrocycles. However, these conformations were difficult to identify on the basis of conformational energies. Moreover, docking calculations with limited ligand flexibility starting from individual low-energy conformations rarely yielded highly accurate binding modes. In about 40% of the test cases, binding modes were approximated with reasonable accuracy. However, when conformational ensembles were subjected to rigid-body docking, an increase in meaningful binding mode predictions to more than 50% of the test cases was observed. Electrostatic effects did not contribute to these predictions in a positive or negative manner. Rather, achieving shape complementarity at macrocycle-target interfaces was a decisive factor. In summary, a combined computational protocol using pre-computed conformational ensembles of macrocycles as a starting point for docking shows promise in modeling binding modes of macrocyclic compounds.

  5. Semi-automatic forensic approach using mandibular midline lingual structures as fingerprint: a pilot study.

    PubMed

    Shaheen, E; Mowafy, B; Politis, C; Jacobs, R

    2017-12-01

    Previous research proposed the use of the mandibular midline neurovascular canal structures as a forensic fingerprint. In that observer study, an average correct identification rate of 95% was reached, which triggered this study. The aims were to present a semi-automatic computer recognition approach to replace the observers and to validate the accuracy of this newly proposed method. Imaging data from Computed Tomography (CT) and Cone Beam Computed Tomography (CBCT) of mandibles scanned at two different moments were collected to simulate an AM and PM situation, where the first scan represented AM and the second scan was used to simulate PM. Ten cases with 20 scans were used to build a classifier which relies on voxel-based matching and results in classification into one of two groups: "Unmatched" and "Matched". This protocol was then tested using five other scans from the database. Unpaired t-testing was applied and the accuracy of the computerized approach was determined. A significant difference was found between the "Unmatched" and "Matched" classes, with means of 0.41 and 0.86 respectively. Furthermore, the testing phase showed an accuracy of 100%. The validation of this method pushes the protocol further toward a fully automatic victim identification procedure based on the mandibular midline canal structures alone, in cases with available AM and PM CBCT/CT data.

  6. Experimental and Computational Aerothermodynamics of a Mars Entry Vehicle

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.

    1996-01-01

    An aerothermodynamic database has been generated through both experimental testing and computational fluid dynamics simulations for a 70 deg sphere-cone configuration based on the NASA Mars Pathfinder entry vehicle. The aerothermodynamics of several related parametric configurations were also investigated. Experimental heat-transfer data were obtained at hypersonic test conditions in both a perfect gas air wind tunnel and in a hypervelocity, high-enthalpy expansion tube in which both air and carbon dioxide were employed as test gases. In these facilities, measurements were made with thin-film temperature-resistance gages on both the entry vehicle models and on the support stings of the models. Computational results for freestream conditions equivalent to those of the test facilities were generated using an axisymmetric/2D laminar Navier-Stokes solver with both perfect-gas and nonequilibrium thermochemical models. Forebody computational and experimental heating distributions agreed to within the experimental uncertainty for both the perfect-gas and high-enthalpy test conditions. In the wake, quantitative differences between experimental and computational heating distributions for the perfect-gas conditions indicated transition of the free shear layer near the reattachment point on the sting. For the high enthalpy cases, agreement to within, or slightly greater than, the experimental uncertainty was achieved in the wake except within the recirculation region, where further grid resolution appeared to be required. Comparisons between the perfect-gas and high-enthalpy results indicated that the wake remained laminar at the high-enthalpy test conditions, for which the Reynolds number was significantly lower than that of the perfect-gas conditions.

  7. Three-dimensional multigrid Navier-Stokes computations for turbomachinery applications

    NASA Astrophysics Data System (ADS)

    Subramanian, S. V.

    1989-07-01

    The fully three-dimensional, time-dependent, compressible Navier-Stokes equations in cylindrical coordinates are used, in conjunction with a multistage Runge-Kutta numerical integration scheme for the governing flow equations, to simulate complex flowfields within turbomachinery components; the pertinent effects encompass viscosity, compressibility, blade rotation, and tip clearance. Computed results are presented for selected cascades, emphasizing the code's capability for accurate prediction of features such as airfoil loadings, exit flow angles, shocks, and secondary flows. Computations for several test cases have been performed on a Cray Y-MP, using nearly 90,000 grid points.
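
    A minimal sketch of a multistage Runge-Kutta time step of the kind widely used in such CFD codes; the four stage coefficients shown are the classic Jameson-type choice and the residual function is an assumed placeholder for the spatial discretization:

      import numpy as np

      def rk_multistage(u, residual, dt, alphas=(1/4, 1/3, 1/2, 1.0)):
          # Each stage re-evaluates the spatial residual R(u) from the previous
          # stage's state, always starting the update from u^n.
          u0 = u.copy()
          for a in alphas:
              u = u0 + a * dt * residual(u)
          return u

      # usage with a trivial linear "residual" du/dt = -u:
      u_next = rk_multistage(np.ones(10), lambda u: -u, dt=0.01)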

  8. Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code

    NASA Technical Reports Server (NTRS)

    Weinberg, B. C.; Mcdonald, H.

    1980-01-01

    There is considerable interest in developing a numerical scheme for solving the time-dependent viscous compressible three-dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three-dimensional unsteady approximate form of the Navier-Stokes equations, employing a linearized block implicit technique in conjunction with a QR operator scheme, is described. Results of calculations for several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.

  9. Thermal/Structural Tailoring of Engine Blades (T/STAEBL) User's manual

    NASA Technical Reports Server (NTRS)

    Brown, K. W.

    1994-01-01

    The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a computer code that performs numerical optimizations of cooled jet engine turbine blades and vanes. These optimizations seek an airfoil design of minimum operating cost that satisfies realistic design constraints. This report documents the organization of the T/STAEBL computer program, its design and analysis procedure, and its optimization procedure, and provides an overview of the input required to run the program as well as the computer resources required for its effective use. Additionally, usage of the program is demonstrated through a validation test case.

  10. Plans for Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Ballmann, Josef; Bhatia, Kumar; Blades, Eric; Boucke, Alexander; Chwalowski, Pawel; Dietz, Guido; Dowell, Earl; Florance, Jennifer P.; Hansen, Thorsten

    2011-01-01

    This paper summarizes the plans for the first Aeroelastic Prediction Workshop. The workshop is designed to assess the state of the art of computational methods for predicting unsteady flow fields and aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques, and to identify computational and experimental areas needing additional research and development. Three subject configurations have been chosen from existing wind tunnel data sets for which pertinent experimental data are available for comparison. For each case chosen, the wind tunnel testing was conducted using forced oscillation of the model at specified frequencies.

  11. Thermal/Structural Tailoring of Engine Blades (T/STAEBL): User's manual

    NASA Astrophysics Data System (ADS)

    Brown, K. W.

    1994-03-01

    The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a computer code that performs numerical optimizations of cooled jet engine turbine blades and vanes. These optimizations seek an airfoil design of minimum operating cost that satisfies realistic design constraints. This report documents the organization of the T/STAEBL computer program, its design and analysis procedure, and its optimization procedure, and provides an overview of the input required to run the program as well as the computer resources required for its effective use. Additionally, usage of the program is demonstrated through a validation test case.

  12. Performance of the Widely-Used CFD Code OVERFLOW on the Pleiades Supercomputer

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2017-01-01

    Computational performance studies were made for NASA's widely used Computational Fluid Dynamics code OVERFLOW on the Pleiades Supercomputer. Two test cases were considered: a full launch vehicle with a grid of 286 million points and a full rotorcraft model with a grid of 614 million points. Computations using up to 8000 cores were run on Sandy Bridge and Ivy Bridge nodes. Performance was monitored using times reported in the day files from the Portable Batch System utility. Results for two grid topologies are presented and compared in detail. Observations and suggestions for future work are made.

  13. HOMAR: A computer code for generating homotopic grids using algebraic relations: User's manual

    NASA Technical Reports Server (NTRS)

    Moitra, Anutosh

    1989-01-01

    A computer code for fast automatic generation of quasi-three-dimensional grid systems for aerospace configurations is described. The code employs a homotopic method to algebraically generate two-dimensional grids in cross-sectional planes, which are stacked to produce a three-dimensional grid system. Implementation of the algebraic equivalents of the homotopic relations for generating body geometries and grids is explained. Procedures for controlling grid orthogonality and distortion are described. Test cases with descriptions and specification of inputs are presented in detail. The FORTRAN computer program and notes on implementation and use are included.
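
    A minimal sketch of the underlying idea, assuming the simplest linear homotopy between an inner (body) curve and an outer boundary curve; HOMAR's actual relations additionally control orthogonality and distortion, which this placeholder omits:

      import numpy as np

      def homotopic_grid(inner, outer, n_layers):
          # inner, outer: (npts, 2) curves sampled at matching parameter values.
          # H(u, v) = (1 - v) * inner(u) + v * outer(u), stacked over v.
          v = np.linspace(0.0, 1.0, n_layers)[:, None, None]
          return (1.0 - v) * inner[None, :, :] + v * outer[None, :, :]

      theta = np.linspace(0.0, 2 * np.pi, 73)
      body  = np.stack([np.cos(theta), 0.5 * np.sin(theta)], axis=1)    # ellipse
      far   = np.stack([5 * np.cos(theta), 5 * np.sin(theta)], axis=1)  # far field
      grid = homotopic_grid(body, far, n_layers=33)                     # (33, 73, 2)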

  14. RANS computations of tip vortex cavitation

    NASA Astrophysics Data System (ADS)

    Decaix, Jean; Balarac, Guillaume; Dreyer, Matthieu; Farhat, Mohamed; Münch, Cécile

    2015-12-01

    The present study is related to the development of tip vortex cavitation in Kaplan turbines. The investigation is carried out on a simplified test case consisting of a NACA0009 blade with a gap between the blade tip and the side wall. Computations with and without cavitation are performed using RANS modelling and a transport equation for the liquid volume fraction. Compared with experimental data, the RANS computations turn out to be able to capture accurately the development of the tip vortex. The simulations have also highlighted the influence of cavitation on the tip vortex trajectory.

  15. Sociopathic Knowledge Bases: Correct Knowledge Can Be Harmful Even Given Unlimited Computation

    DTIC Science & Technology

    1989-08-01

    positive, as false positives generated by a medical program can often be caught by a physician upon further testing. False negatives, however, may be...improvement over the knowledge base tested is obtained. Although our work is largely theoretical research, one example of experiments is...knowledge base, improves the performance by about 10%...of tests. First, we divide the cases into a training set and a validation set with 70% vs. 30% each

  16. Observations of the structure and vertical transport of the polar upper ionosphere with the EISCAT VHF radar. I - Is EISCAT able to determine O(+) and H(+) polar wind characteristics? A simulation study

    NASA Technical Reports Server (NTRS)

    Blelly, Pierre-Louis; Barakat, Abdullah R.; Fontanari, Jean; Alcayde, Denis; Blanc, Michel; Wu, Jian; Lathuillere, C.

    1992-01-01

    A method presented by Wu et al. (1992) for computing the H(+) vertical velocity from the main ionospheric parameters measured by the EISCAT VHF radar is tested in a fully controlled sequence, which consists of generating an ideal ionospheric model by solving the coupled continuity and momentum equations for a two-ion plasma (O(+) and H(+)). Synthetic autocorrelation functions are generated from this model with the radar characteristics and used as actual measurements to compute the H(+) vertical velocities. Results of these simulations are shown and discussed for three cases of typical and low SNR and for low and increased mixing ratios. In most cases, general agreement is found between the computed H(+) velocities and the generic ones within the altitude range considered, i.e., 200-1000 km. The method is shown to be reliable.

  17. Computer-Based Instruction's (CBI) Rediscovered Role in K-12: An Evaluation Case Study of One High School's Use of CBI to Improve Pass Rates on High-Stakes Tests

    ERIC Educational Resources Information Center

    Hannafin, Robert D.; Foshay, Wellesley R.

    2008-01-01

    Patriot High School (PHS) adopted a remediation strategy to help its 10th-grade students at risk of failing the Math portion of MCAS, the state's end-of-year competency exam. The centerpiece of that strategy was a computer-based instructional (CBI) course. PHS used a commercially available CBI product to align the course content with the…

  18. One-Loop Test of Quantum Black Holes in anti–de Sitter Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, James T.; Pando Zayas, Leopoldo A.; Rathee, Vimal

    Within 11-dimensional supergravity we compute the logarithmic correction to the entropy of magnetically charged asymptotically AdS4 black holes with arbitrary horizon topology. We find perfect agreement with the expected microscopic result arising from the dual field theory computation of the topologically twisted index. Our result relies crucially on a particular limit to the extremal black hole case and clarifies some aspects of quantum corrections in asymptotically AdS spacetimes.

  19. One-Loop Test of Quantum Black Holes in anti–de Sitter Space

    DOE PAGES

    Liu, James T.; Pando Zayas, Leopoldo A.; Rathee, Vimal; ...

    2018-06-01

    Within 11-dimensional supergravity we compute the logarithmic correction to the entropy of magnetically charged asymptotically AdS4 black holes with arbitrary horizon topology. We find perfect agreement with the expected microscopic result arising from the dual field theory computation of the topologically twisted index. Our result relies crucially on a particular limit to the extremal black hole case and clarifies some aspects of quantum corrections in asymptotically AdS spacetimes.

  20. One-Loop Test of Quantum Black Holes in anti-de Sitter Space

    NASA Astrophysics Data System (ADS)

    Liu, James T.; Pando Zayas, Leopoldo A.; Rathee, Vimal; Zhao, Wenli

    2018-06-01

    Within 11-dimensional supergravity we compute the logarithmic correction to the entropy of magnetically charged asymptotically AdS4 black holes with arbitrary horizon topology. We find perfect agreement with the expected microscopic result arising from the dual field theory computation of the topologically twisted index. Our result relies crucially on a particular limit to the extremal black hole case and clarifies some aspects of quantum corrections in asymptotically AdS spacetimes.

  1. One-Loop Test of Quantum Black Holes in anti-de Sitter Space.

    PubMed

    Liu, James T; Pando Zayas, Leopoldo A; Rathee, Vimal; Zhao, Wenli

    2018-06-01

    Within 11-dimensional supergravity we compute the logarithmic correction to the entropy of magnetically charged asymptotically AdS_{4} black holes with arbitrary horizon topology. We find perfect agreement with the expected microscopic result arising from the dual field theory computation of the topologically twisted index. Our result relies crucially on a particular limit to the extremal black hole case and clarifies some aspects of quantum corrections in asymptotically AdS spacetimes.

  2. Dynamic Stall Measurements and Computations for a VR-12 Airfoil with a Variable Droop Leading Edge

    NASA Technical Reports Server (NTRS)

    Martin, P. B.; McAlister, K. W.; Chandrasekhara, M. S.; Geissler, W.

    2003-01-01

    High density-altitude operations of helicopters with advanced performance and maneuver capabilities have led to fundamental research on active high-lift system concepts for rotor blades. The requirement for this type of system was to improve the sectional lift-to-drag ratio by alleviating dynamic stall on the retreating blade while simultaneously reducing the transonic drag rise of the advancing blade. Both measured and computational results showed that a Variable Droop Leading Edge (VDLE) airfoil is a viable concept for application to a rotor high-lift system. Results are presented for a series of 2D compressible dynamic stall wind tunnel tests, with supporting CFD results for selected test cases. These measurements and computations show a dramatic decrease in the drag and pitching moment associated with severe dynamic stall when the VDLE concept is applied to the Boeing VR-12 airfoil. Test results also show an elimination of the negative pitch damping observed in the baseline moment hysteresis curves.

  3. On-Line Mu Method for Robust Flutter Prediction in Expanding a Safe Flight Envelope for an Aircraft Model Under Flight Test

    NASA Technical Reports Server (NTRS)

    Lind, Richard C. (Inventor); Brenner, Martin J.

    2001-01-01

    A structured singular value (mu) analysis method computes flutter margins from the robust stability of a linear aeroelastic model with uncertainty operators (Delta). Flight data are used to update the uncertainty operators so that they accurately account for errors in the computed model and for the observed range of aircraft dynamics caused by time-varying aircraft parameters, nonlinearities, and flight anomalies such as test nonrepeatability. This mu-based approach computes predicted flutter margins that are worst case with respect to the modeling uncertainty, for use in determining when the aircraft is approaching a flutter condition and in defining an expanded safe flight envelope that is accepted with more confidence than envelopes from traditional methods, which do not update the analysis with flight data. Introducing mu as a flutter margin parameter presents several advantages over tracking damping trends as a measure of a tendency toward instability.

  4. Improved ATLAS HammerCloud Monitoring for Local Site Administration

    NASA Astrophysics Data System (ADS)

    Böhler, M.; Elmsheuser, J.; Hönig, F.; Legger, F.; Mancinelli, V.; Sciacca, G.

    2015-12-01

    Every day hundreds of tests are run on the Worldwide LHC Computing Grid for the ATLAS and CMS experiments in order to evaluate the performance and reliability of the different computing sites. All this activity is steered, controlled, and monitored by the HammerCloud testing infrastructure. Sites with failing functionality tests are automatically excluded from the ATLAS computing grid, so it is essential to provide a detailed and well-organized web interface for the local site administrators such that they can easily spot and promptly solve site issues. Additional functionality has been developed to extract and visualize the most relevant information. The site administrators can now be pointed easily to major site issues that lead to site blacklisting, as well as to possible minor issues that are usually not conspicuous enough to warrant the blacklisting of a specific site but can still cause undesired effects such as a non-negligible job failure rate. This paper summarizes the different developments and optimizations of the HammerCloud web interface and gives an overview of typical use cases.

  5. Nuclear Ensemble Approach with Importance Sampling.

    PubMed

    Kossoski, Fábris; Barbatti, Mario

    2018-06-12

    We show that the importance sampling technique can effectively augment the range of problems where the nuclear ensemble approach can be applied. A sampling probability distribution function initially determines the collection of initial conditions for which calculations are performed, as usual. Then, results for a distinct target distribution are computed by introducing compensating importance sampling weights for each sampled point. This mapping between the two probability distributions can be performed whenever they are both explicitly constructed. Perhaps most notably, this procedure allows for the computation of temperature dependent observables. As a test case, we investigated the UV absorption spectra of phenol, which has been shown to have a marked temperature dependence. Application of the proposed technique to a range that covers 500 K provides results that converge to those obtained with conventional sampling. We further show that an overall improved rate of convergence is obtained when sampling is performed at intermediate temperatures. The comparison between calculated and the available measured cross sections is very satisfactory, as the main features of the spectra are correctly reproduced. As a second test case, one of Tully's classical models was revisited, and we show that the computation of dynamical observables also profits from the importance sampling technique. In summary, the strategy developed here can be employed to assess the role of temperature for any property calculated within the nuclear ensemble method, with the same computational cost as doing so for a single temperature.
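
    A minimal sketch of the generic reweighting step described here, under the stated condition that both the sampling and target densities can be evaluated explicitly; the function names are illustrative, not the paper's code:

      import numpy as np

      def reweighted_mean(obs, log_p_sample, log_p_target):
          # obs and the two log-density arrays are evaluated at the same sampled
          # points; weights map the sampling distribution onto the target one
          # (e.g., the same ensemble reweighted to a different temperature).
          log_w = log_p_target - log_p_sample
          w = np.exp(log_w - log_w.max())   # stabilize before normalizing
          w /= w.sum()
          return float(np.sum(w * obs))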

  6. Open angle glaucoma effects on preattentive visual search efficiency for flicker, motion displacement and orientation pop-out tasks.

    PubMed

    Loughman, James; Davison, Peter; Flitcroft, Ian

    2007-11-01

    Preattentive visual search (PAVS) describes rapid and efficient retinal and neural processing capable of immediate target detection in the visual field. Damage to the nerve fibre layer or visual pathway might reduce the efficiency with which the visual system performs such analysis. The purpose of this study was to test the hypothesis that patients with glaucoma are impaired on parallel search tasks, and that this would serve to distinguish glaucoma in early cases. Three groups of observers (glaucoma patients, suspect, and normal individuals) were examined using computer-generated flicker, orientation, and vertical motion displacement targets to assess PAVS efficiency. The task required rapid and accurate localisation of a singularity embedded in a field of 119 homogeneous distractors on either the left- or right-hand side of a computer monitor. All subjects also completed a choice reaction time (CRT) task. Independent-sample t-tests revealed PAVS efficiency to be significantly impaired in the glaucoma group compared with both normal and suspect individuals. Performance was impaired in all types of glaucoma tested. Analysis between normal and suspect individuals revealed a significant difference only for motion displacement response times. Similar analysis using a PAVS/CRT index confirmed the glaucoma findings but also showed statistically significant differences between suspect and normal individuals across all target types. A test of PAVS efficiency appears capable of differentiating early glaucoma from both normal and suspect cases. Analysis incorporating a PAVS/CRT index enhances the diagnostic capacity to differentiate normal from suspect cases.

  7. Computer-aided detection of bladder wall thickening in CT urography (CTU)

    NASA Astrophysics Data System (ADS)

    Cha, Kenny H.; Hadjiiski, Lubomir M.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.; Weizer, Alon Z.; Gordon, Marshall N.; Samala, Ravi K.

    2018-02-01

    We are developing a computer-aided detection system for bladder cancer in CT urography (CTU). Bladder wall thickening is a manifestation of bladder cancer and its detection is more challenging than the detection of bladder masses. We first segmented the inner and outer bladder walls using our method that combined deep-learning convolutional neural network with level sets. The non-contrast-enhanced region was separated from the contrast-enhanced region with a maximum-intensity-projection-based method. The non-contrast region was smoothed and gray level threshold was applied to the contrast and non-contrast regions separately to extract the bladder wall and potential lesions. The bladder wall was transformed into a straightened thickness profile, which was analyzed to identify regions of wall thickening candidates. Volume-based features of the wall thickening candidates were analyzed with linear discriminant analysis (LDA) to differentiate bladder wall thickenings from false positives. A data set of 112 patients, 87 with wall thickening and 25 with normal bladders, was collected retrospectively with IRB approval, and split into independent training and test sets. Of the 57 training cases, 44 had bladder wall thickening and 13 were normal. Of the 55 test cases, 43 had wall thickening and 12 were normal. The LDA classifier was trained with the training set and evaluated with the test set. FROC analysis showed that the system achieved sensitivities of 93.2% and 88.4% for the training and test sets, respectively, at 0.5 FPs/case.
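
    A minimal sketch of the final false-positive reduction step, assuming scikit-learn's linear discriminant analysis in place of the paper's in-house LDA implementation; the feature matrices are random placeholders for the volume-based candidate features:

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(120, 8))     # candidates x volume-based features
      y_train = rng.integers(0, 2, size=120)  # 1 = true wall thickening
      X_test = rng.normal(size=(40, 8))

      lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
      scores = lda.decision_function(X_test)  # thresholding this score trades
                                              # sensitivity against FPs/case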

  8. Polymorphic Electronic Circuits

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian

    2004-01-01

    Polymorphic electronics is a nascent technological discipline that involves, among other things, designing the same circuit to perform different analog and/or digital functions under different conditions. For example, a circuit can be designed to function as an OR gate or an AND gate, depending on the temperature (see figure). Polymorphic electronics can also be considered a subset of polytronics, which is a broader technological discipline in which optical and possibly other information- processing systems could also be designed to perform multiple functions. Polytronics is an outgrowth of evolvable hardware (EHW). The basic concepts and some specific implementations of EHW were described in a number of previous NASA Tech Briefs articles. To recapitulate: The essence of EHW is to design, construct, and test a sequence of populations of circuits that function as incrementally better solutions of a given design problem through the selective, repetitive connection and/or disconnection of capacitors, transistors, amplifiers, inverters, and/or other circuit building blocks. The evolution is guided by a search-and-optimization algorithm (in particular, a genetic algorithm) that operates in the space of possible circuits to find a circuit that exhibits an acceptably close approximation of the desired functionality. The evolved circuits can be tested by computational simulation (in which case the evolution is said to be extrinsic), tested in real hardware (in which case the evolution is said to be intrinsic), or tested in random sequences of computational simulation and real hardware (in which case the evolution is said to be mixtrinsic).

  9. A Human Proximity Operations System test case validation approach

    NASA Astrophysics Data System (ADS)

    Huber, Justin; Straub, Jeremy

    A Human Proximity Operations System (HPOS) poses numerous risks in a real-world environment. These risks range from mundane tasks, such as avoiding walls and fixed obstacles, to the critical need to keep people and processes safe in the context of the HPOS's situation-specific decision making. Validating the performance of an HPOS, which must operate in a real-world environment, is an ill-posed problem due to the complexity introduced by erratic (non-computer) actors. In order to prove the HPOS's usefulness, test cases must be generated to simulate possible actions of these actors, so the HPOS can be shown to be able to perform safely in the environments where it will be operated. The HPOS must demonstrate its ability to be as safe as a human across a wide range of foreseeable circumstances. This paper evaluates the use of test cases to validate HPOS performance and utility. It considers an HPOS's safe performance in the context of a common human activity, moving through a crowded corridor, and extrapolates from this to the suitability of using test cases for AI validation in other areas of prospective application.

  10. Advanced information processing system: Fault injection study and results

    NASA Technical Reports Server (NTRS)

    Burkhardt, Laura F.; Masotto, Thomas K.; Lala, Jaynarayan H.

    1992-01-01

    The objective of the AIPS program is to achieve a validated fault tolerant distributed computer system. The goals of the AIPS fault injection study were: (1) to present the fault injection study components addressing the AIPS validation objective; (2) to obtain feedback for fault removal from the design implementation; (3) to obtain statistical data regarding fault detection, isolation, and reconfiguration responses; and (4) to obtain data regarding the effects of faults on system performance. The parameters are described that must be varied to create a comprehensive set of fault injection tests, the subset of test cases selected, the test case measurements, and the test case execution. Both pin level hardware faults using a hardware fault injector and software injected memory mutations were used to test the system. An overview is provided of the hardware fault injector and the associated software used to carry out the experiments. Detailed specifications are given of fault and test results for the I/O Network and the AIPS Fault Tolerant Processor, respectively. The results are summarized and conclusions are given.

  11. An Analysis of Navigation Algorithms for Smartphones Using J2ME

    NASA Astrophysics Data System (ADS)

    Santos, André C.; Tarrataca, Luís; Cardoso, João M. P.

    Embedded systems are considered one of the areas with the greatest potential for future innovations. Two embedded fields that will most certainly take a primary role in future innovations are mobile robotics and mobile computing. Mobile robots and smartphones are growing in number and functionality, becoming a presence in our daily lives. In this paper, we study the current feasibility of a smartphone executing navigation algorithms. As a test case, we use a smartphone to control an autonomous mobile robot. We tested three navigation problems: mapping, localization, and path planning. For each of these problems, an algorithm was chosen, developed in J2ME, and tested in the field. Results show the current capacity of mobile Java for executing computationally demanding algorithms and reveal the real possibility of using smartphones for autonomous navigation.

  12. Impact of Virtual Patients as Optional Learning Material in Veterinary Biochemistry Education.

    PubMed

    Kleinsorgen, Christin; von Köckritz-Blickwede, Maren; Naim, Hassan Y; Branitzki-Heinemann, Katja; Kankofer, Marta; Mándoki, Míra; Adler, Martin; Tipold, Andrea; Ehlers, Jan P

    2018-01-01

    Biochemistry and physiology teachers from veterinary faculties in Hannover, Budapest, and Lublin prepared innovative, computer-based, integrative clinical case scenarios as optional learning materials for teaching and learning in basic sciences. These learning materials were designed to enhance attention and increase interest and intrinsic motivation for learning, thus strengthening autonomous, active, and self-directed learning. We investigated learning progress and success by administering a pre-test before exposure to the virtual patients (vetVIP) cases, offered vetVIP cases alongside regular biochemistry courses, and then administered a complementary post-test. We analyzed improvement in cohort performance and level of confidence in rating questions. Results of the performance in biochemistry examinations in 2014, 2015, and 2016 were correlated with the use of and performance in vetVIP cases throughout biochemistry courses in Hannover. Surveys of students reflected that interactive cases helped them understand the relevance of basic sciences in veterinary education. Differences between identical pre- and post-tests revealed knowledge improvement (correct answers: +28% in Hannover, +9% in Lublin) and enhanced confidence in decision making ("I don't know" answers: -20% in Hannover, -7.5% in Lublin). High case usage and voluntary participation (use of vetVIP cases in Hannover and Lublin >70%, Budapest <1%; response rates in pre-test 72% and post-test 48%) indicated a good increase in motivation for the subject of biochemistry. Despite increased motivation, there was only a weak correlation between performance in final exams and performance in the vetVIP cases. Case-based e-learning could be extended and generated cases should be shared across veterinary faculties.

  13. Evaluation of Big Data Containers for Popular Storage, Retrieval, and Computation Primitives in Earth Science Analysis

    NASA Astrophysics Data System (ADS)

    Das, K.; Clune, T.; Kuo, K. S.; Mattmann, C. A.; Huang, T.; Duffy, D.; Yang, C. P.; Habermann, T.

    2015-12-01

    Data containers are infrastructures that facilitate storage, retrieval, and analysis of data sets. Big data applications in Earth Science require a mix of processing techniques, data sources and storage formats that are supported by different data containers. Some of the most popular data containers used in Earth Science studies are Hadoop, Spark, SciDB, AsterixDB, and RasDaMan. These containers optimize different aspects of the data processing pipeline and are, therefore, suitable for different types of applications. These containers are expected to undergo rapid evolution and the ability to re-test, as they evolve, is very important to ensure the containers are up to date and ready to be deployed to handle large volumes of observational data and model output. Our goal is to develop an evaluation plan for these containers to assess their suitability for Earth Science data processing needs. We have identified a selection of test cases that are relevant to most data processing exercises in Earth Science applications and we aim to evaluate these systems for optimal performance against each of these test cases. The use cases identified as part of this study are (i) data fetching, (ii) data preparation for multivariate analysis, (iii) data normalization, (iv) distance (kernel) computation, and (v) optimization. In this study we develop a set of metrics for performance evaluation, define the specifics of governance, and test the plan on current versions of the data containers. The test plan and the design mechanism are expandable to allow repeated testing with both new containers and upgraded versions of the ones mentioned above, so that we can gauge their utility as they evolve.
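
    A minimal, container-agnostic sketch of the kind of timing harness such an evaluation plan implies; the harness and the best-of-N metric are illustrative assumptions, and each primitive (data fetching, normalization, distance computation, etc.) would be implemented separately per container:

      import time

      def benchmark(primitive, *args, repeats=5):
          # Return the best-of-N wall-clock time for one test primitive,
          # reducing the influence of transient system noise.
          best = float("inf")
          for _ in range(repeats):
              t0 = time.perf_counter()
              primitive(*args)
              best = min(best, time.perf_counter() - t0)
          return best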

  14. Computational Simulations of Inferior Vena Cava (IVC) Filter Placement and Hemodynamics in Patient-Specific Geometries

    NASA Astrophysics Data System (ADS)

    Aycock, Kenneth; Sastry, Shankar; Kim, Jibum; Shontz, Suzanne; Campbell, Robert; Manning, Keefe; Lynch, Frank; Craven, Brent

    2013-11-01

    A computational methodology for simulating inferior vena cava (IVC) filter placement and IVC hemodynamics was developed and tested on two patient-specific IVC geometries: a left-sided IVC, and an IVC with a retroaortic left renal vein. Virtual IVC filter placement was performed with finite element analysis (FEA) using non-linear material models and contact modeling, yielding maximum vein displacements of approximately 10% of the IVC diameters. Blood flow was then simulated using computational fluid dynamics (CFD) with four cases for each patient IVC: 1) an IVC only, 2) an IVC with a placed filter, 3) an IVC with a placed filter and a model embolus, all at resting flow conditions, and 4) an IVC with a placed filter and a model embolus at exercise flow conditions. Significant hemodynamic differences were observed between the two patient IVCs, with the development of a right-sided jet (all cases) and a larger stagnation region (cases 3-4) in the left-sided IVC. These results support further investigation of the effects of IVC filter placement on a patient-specific basis.

  15. Two Cases of Lethal Complications Following Ultrasound-Guided Percutaneous Fine-Needle Biopsy of the Liver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drinkovic, Ivan; Brkljacic, Boris

    1996-09-15

    Two cases with lethal complications are reported among 1750 ultrasound (US)-guided percutaneous fine-needle liver biopsies performed in our department. The first patient had angiosarcoma of the liver which was not suspected after computed tomography (CT) and US studies had been performed. The other patient had hepatocellular carcinoma in advanced hepatic cirrhosis. Death was due to bleeding in both cases. Pre-procedure laboratory tests did not reveal the existence of major bleeding disorders in either case. Normal liver tissue was interposed in the needle track between the liver capsule and the lesions which were targeted.

  16. Computer-aided detection of bladder masses in CT urography (CTU)

    NASA Astrophysics Data System (ADS)

    Cha, Kenny H.; Hadjiiski, Lubomir M.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.; Weizer, Alon; Samala, Ravi K.

    2017-03-01

    We are developing a computer-aided detection system for bladder cancer in CT urography (CTU). We have previously developed methods for detection of bladder masses within the contrast-enhanced and the non-contrastenhanced regions of the bladder individually. In this study, we investigated methods for detection of bladder masses within the entire bladder. The bladder was segmented using our method that combined deep-learning convolutional neural network with level sets. The non-contrast-enhanced region was separated from the contrast-enhanced region with a maximum-intensity-projection-based method. The non-contrast region was smoothed and gray level threshold was applied to the contrast and non-contrast regions separately to extract the bladder wall and potential masses. The bladder wall was transformed into a straightened thickness profile, which was analyzed to identify lesion candidates in a prescreening step. The candidates were mapped back to the 3D CT volume and segmented using our auto-initialized cascaded level set (AI-CALS) segmentation method. Twenty-seven morphological features were extracted for each candidate. A data set of 57 patients with 71 biopsy-proven bladder lesions was used, which was split into independent training and test sets: 42 training cases with 52 lesions, and 15 test cases with 19 lesions. Using the training set, feature selection was performed and a linear discriminant (LDA) classifier was designed to merge the selected features for classification of bladder lesions and false positives. The trained classifier was evaluated with the test set. FROC analysis showed that the system achieved a sensitivity of 86.5% at 3.3 FPs/case for the training set, and 84.2% at 3.7 FPs/case for the test set.

  17. Towards a framework for testing general relativity with extreme-mass-ratio-inspiral observations

    NASA Astrophysics Data System (ADS)

    Chua, A. J. K.; Hee, S.; Handley, W. J.; Higson, E.; Moore, C. J.; Gair, J. R.; Hobson, M. P.; Lasenby, A. N.

    2018-07-01

    Extreme-mass-ratio-inspiral observations from future space-based gravitational-wave detectors such as LISA will enable strong-field tests of general relativity with unprecedented precision, but at prohibitive computational cost if existing statistical techniques are used. In one such test that is currently employed for LIGO black hole binary mergers, generic deviations from relativity are represented by N deformation parameters in a generalized waveform model; the Bayesian evidence for each of its 2N combinatorial submodels is then combined into a posterior odds ratio for modified gravity over relativity in a null-hypothesis test. We adapt and apply this test to a generalized model for extreme-mass-ratio inspirals constructed on deformed black hole spacetimes, and focus our investigation on how computational efficiency can be increased through an evidence-free method of model selection. This method is akin to the algorithm known as product-space Markov chain Monte Carlo, but uses nested sampling and improved error estimates from a rethreading technique. We perform benchmarking and robustness checks for the method, and find order-of-magnitude computational gains over regular nested sampling in the case of synthetic data generated from the null model.
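
    A minimal sketch of how the submodel evidences could be combined into the posterior odds ratio described above, assuming equal prior weight on each of the 2^N - 1 non-trivial deformed submodels; the function is illustrative and the log-evidences are placeholder inputs:

      import numpy as np

      def posterior_odds(log_z_submodels, log_z_gr):
          # Average the submodel evidences stably in log space, then compare
          # against the evidence of the relativity (null) model.
          log_z_submodels = np.asarray(log_z_submodels, dtype=float)
          log_mean = np.logaddexp.reduce(log_z_submodels) - np.log(log_z_submodels.size)
          return np.exp(log_mean - log_z_gr)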

  18. Ambiguity resolution for satellite Doppler positioning systems

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Marini, J.

    1979-01-01

    The implementation of satellite-based Doppler positioning systems frequently requires the recovery of transmitter position from a single pass of Doppler data. The least-squares approach to the problem yields conjugate solutions on either side of the satellite subtrack. It is important to develop a procedure for choosing the proper solution which is correct in a high percentage of cases. A test for ambiguity resolution which is the most powerful in the sense that it maximizes the probability of a correct decision is derived. When systematic error sources are properly included in the least-squares reduction process to yield an optimal solution the test reduces to choosing the solution which provides the smaller valuation of the least-squares loss function. When systematic error sources are ignored in the least-squares reduction, the most powerful test is a quadratic form comparison with the weighting matrix of the quadratic form obtained by computing the pseudoinverse of a reduced-rank square matrix. A formula for computing the power of the most powerful test is provided. Numerical examples are included in which the power of the test is computed for situations that are relevant to the design of a satellite-aided search and rescue system.
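
    A minimal sketch of the decision rule described above; in the optimal-solution case it reduces to comparing unweighted least-squares losses, while the weighted variant takes a quadratic-form weight matrix W (derived in the paper from a pseudoinverse of a reduced-rank matrix, not reproduced here):

      import numpy as np

      def pick_solution(residuals_a, residuals_b, W=None):
          # Choose between the two conjugate single-pass solutions by the
          # smaller (optionally weighted) quadratic form of their residuals.
          def loss(r):
              return float(r @ W @ r) if W is not None else float(r @ r)
          return "A" if loss(residuals_a) <= loss(residuals_b) else "B"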

  19. Towards a framework for testing general relativity with extreme-mass-ratio-inspiral observations

    NASA Astrophysics Data System (ADS)

    Chua, A. J. K.; Hee, S.; Handley, W. J.; Higson, E.; Moore, C. J.; Gair, J. R.; Hobson, M. P.; Lasenby, A. N.

    2018-04-01

    Extreme-mass-ratio-inspiral observations from future space-based gravitational-wave detectors such as LISA will enable strong-field tests of general relativity with unprecedented precision, but at prohibitive computational cost if existing statistical techniques are used. In one such test that is currently employed for LIGO black-hole binary mergers, generic deviations from relativity are represented by N deformation parameters in a generalised waveform model; the Bayesian evidence for each of its 2N combinatorial submodels is then combined into a posterior odds ratio for modified gravity over relativity in a null-hypothesis test. We adapt and apply this test to a generalised model for extreme-mass-ratio inspirals constructed on deformed black-hole spacetimes, and focus our investigation on how computational efficiency can be increased through an evidence-free method of model selection. This method is akin to the algorithm known as product-space Markov chain Monte Carlo, but uses nested sampling and improved error estimates from a rethreading technique. We perform benchmarking and robustness checks for the method, and find order-of-magnitude computational gains over regular nested sampling in the case of synthetic data generated from the null model.

  20. Numerical propulsion system simulation

    NASA Technical Reports Server (NTRS)

    Lytle, John K.; Remaklus, David A.; Nichols, Lester D.

    1990-01-01

    The cost of implementing new technology in aerospace propulsion systems is becoming prohibitively expensive. One of the major contributors to the high cost is the need to perform many large-scale system tests. Extensive testing is used to capture the complex interactions among the multiple disciplines and the multiple components inherent in complex systems. The objective of the Numerical Propulsion System Simulation (NPSS) is to provide insight into these complex interactions through computational simulations. This will allow for comprehensive evaluation of new concepts early in the design phase, before a commitment to hardware is made. It will also allow for rapid assessment of field-related problems, particularly in cases where operational problems were encountered during conditions that would be difficult to simulate experimentally. The tremendous progress taking place in computational engineering and the rapid increase in computing power expected through parallel processing make this concept feasible within the near future. However, it is critical that the framework for such simulations be put in place now to serve as a focal point for the continued developments in computational engineering and computing hardware and software. The NPSS concept described here will provide that framework.

  1. Reduction of false-positive recalls using a computerized mammographic image feature analysis scheme

    NASA Astrophysics Data System (ADS)

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-08-01

    The high false-positive recall rate is one of the major dilemmas that significantly reduce the efficacy of screening mammography, which harms a large fraction of women and increases healthcare cost. This study aims to investigate the feasibility of helping reduce false-positive recalls by developing a new computer-aided diagnosis (CAD) scheme based on the analysis of global mammographic texture and density features computed from four-view images. Our database includes full-field digital mammography (FFDM) images acquired from 1052 recalled women (669 positive for cancer and 383 benign). Each case has four images: two craniocaudal (CC) and two mediolateral oblique (MLO) views. Our CAD scheme first computed global texture features related to the mammographic density distribution on the segmented breast regions of four images. Second, the computed features were given to two artificial neural network (ANN) classifiers that were separately trained and tested in a ten-fold cross-validation scheme on CC and MLO view images, respectively. Finally, two ANN classification scores were combined using a new adaptive scoring fusion method that automatically determined the optimal weights to assign to both views. CAD performance was tested using the area under a receiver operating characteristic curve (AUC). The AUC = 0.793  ±  0.026 was obtained for this four-view CAD scheme, which was significantly higher at the 5% significance level than the AUCs achieved when using only CC (p = 0.025) or MLO (p = 0.0004) view images, respectively. This study demonstrates that a quantitative assessment of global mammographic image texture and density features could provide useful and/or supplementary information to classify between malignant and benign cases among the recalled cases, which may eventually help reduce the false-positive recall rate in screening mammography.
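
    One plausible reading of the adaptive scoring fusion step, as a hedged sketch: scan the CC/MLO mixing weight and keep the value that maximizes validation AUC. The grid search is an assumption (the abstract does not specify the weighting algorithm), and scikit-learn's roc_auc_score stands in for the paper's ROC analysis:

      import numpy as np
      from sklearn.metrics import roc_auc_score

      def fuse_view_scores(s_cc, s_mlo, labels, n_grid=101):
          # Find the convex combination of the two ANN view scores that
          # maximizes AUC on a validation set.
          best_w, best_auc = 0.5, -1.0
          for w in np.linspace(0.0, 1.0, n_grid):
              auc = roc_auc_score(labels, w * s_cc + (1.0 - w) * s_mlo)
              if auc > best_auc:
                  best_w, best_auc = w, auc
          return best_w, best_auc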

  2. IGB grid: User's manual (A turbomachinery grid generation code)

    NASA Technical Reports Server (NTRS)

    Beach, T. A.; Hoffman, G.

    1992-01-01

    A grid generation code called IGB is presented for use in computational investigations of turbomachinery flowfields. It contains a combination of algebraic and elliptic techniques coded for use on an interactive graphics workstation. The instructions for use and a test case are included.

  3. Closed Environment Module - Modularization and extension of the Virtual Habitat

    NASA Astrophysics Data System (ADS)

    Plötner, Peter; Czupalla, Markus; Zhukov, Anton

    2013-12-01

    The Virtual Habitat (V-HAB) is a Life Support System (LSS) simulation created to perform dynamic simulations of LSSs for future human spaceflight missions. It allows the testing of LSS robustness by means of computer simulations, e.g., of worst-case scenarios.

  4. Instructional Technologies in the Workforce: Case Studies from the Nuclear Industry.

    ERIC Educational Resources Information Center

    Widen, William C.; Roth, Gene L.

    1992-01-01

    Describes six types of instructional technology used in the nuclear industry: Study Pacs, computerized test banks, computer-based training, interactive videodisc, artificial intelligence, and full-scope simulation. Each description presents the need, training device, outcomes, and limitations or constraints on use. (SK)

  5. Teaching Clinical Neurology with the PLATO IV Computer System

    ERIC Educational Resources Information Center

    Parker, Alan; Trynda, Richard

    1975-01-01

    A "Neurox" program entitled "Canine Neurological Diagnosis" developed at the University of Illinois College of Veterinary Medicine enables a student to obtain the results of 78 possible neurological tests or associated questions on a single case. A lesson and possible adaptations are described. (LBH)

  6. 40 CFR 51.354 - Adequate tools and resources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... assurance, data analysis and reporting, and the holding of hearings and adjudication of cases. A portion of... supply of vehicles for covert auditing, test equipment and facilities for program evaluation, and computers capable of data processing, analysis, and reporting. Equipment or equivalent services may be...

  7. Using a Drug Interaction Program (Drug Interactions Advisor™) in a Community Hospital

    PubMed Central

    Harvey, A. C.; Diehl, G. R.; Finlayson, W. B.

    1987-01-01

    To test the usefulness of a drug-interaction program in a community hospital, one hundred patients in three medical wards were surveyed with respect to their drug regimens. The drugs listed for each patient were entered into Drug Interactions Advisor™, a commercial interactions program running on an Apple IIe. Interacting drugs were listed with the severity of the interaction in each case. Of the one hundred patients, fifty-one had drugs that could potentially interact, and in fifty-one percent of cases a change in therapy would have been advised by Drug Interactions Advisor™. The completeness of the database was assessed as to its inclusion of the drugs actually given; it covered eighty-nine percent of them. The program was tested against ten known interactions and identified six. Multiple drug therapy is a major problem nowadays and will increase with the aging of the population. Drug-interaction programs exploit computer technology to make drug surveillance easier; without computers, such surveillance is difficult if not impossible.

  8. Software Defined Networking challenges and future direction: A case study of implementing SDN features on OpenStack private cloud

    NASA Astrophysics Data System (ADS)

    Shamugam, Veeramani; Murray, I.; Leong, J. A.; Sidhu, Amandeep S.

    2016-03-01

    Cloud computing provides services on demand instantly, such as access to network infrastructure consisting of computing hardware, operating systems, network storage, databases, and applications. Network usage and demands are growing at a very fast rate, and to meet current requirements there is a need for automatic infrastructure scaling. Traditional networks are difficult to automate because their decision making for switching and routing is distributed, collocated with the data plane on each device. Managing complex environments using traditional networks is time-consuming and expensive, especially in the case of generating virtual machines, migration, and network configuration. To mitigate these challenges, network operations require efficient, flexible, agile, and scalable software defined networks (SDN). This paper discusses various issues in SDN and suggests how to mitigate network-management-related issues. A private cloud prototype test bed was set up to implement SDN on the OpenStack platform in order to test and evaluate network performance under the various configurations.

  9. Demonstration of the range over which the Langley Research Center digital computer charring ablation program (CHAP) can be used with confidence: Comparisons of CHAP predictions and test data for three ablation materials

    NASA Technical Reports Server (NTRS)

    Moyer, C. B.; Green, K. A.

    1972-01-01

    Comparisons of ablation calculations from the charring ablation computer code with ablation test data are presented over a wide range of environmental conditions in air for three materials: low-density nylon phenolic, Avcoat 5026-39HC/G, and a filled silicone elastomer. Heat fluxes considered range from over 500 Btu/sq ft-sec to less than 50 Btu/sq ft-sec. Pressures range from 0.5 atm to 0.004 atm. Enthalpies range from about 2000 Btu/lb to 18000 Btu/lb. Predictions of recession, pyrolysis penetration, and thermocouple responses are considered. Recession predictions for nylon phenolic are good as steady state is approached, but strongly transient cases are underpredicted. Pyrolysis penetrations and thermocouple responses are very well predicted. Recession amounts for Avcoat and the silicone elastomer are less well predicted, although high heat flux cases near steady state are fairly satisfactory; pyrolysis penetrations and thermocouple responses are again very well predicted.

  10. Comparative Study on High-Order Positivity-preserving WENO Schemes

    NASA Technical Reports Server (NTRS)

    Kotov, D. V.; Yee, H. C.; Sjogreen, B.

    2013-01-01

    In gas dynamics and magnetohydrodynamics flows, the density and the pressure p should, physically, both be positive. In a standard conservative numerical scheme, however, the computed internal energy is obtained by subtracting the kinetic energy from the total energy, resulting in a computed p that may be negative. Examples are problems in which the dominant energy is kinetic; negative p may often emerge in computing blast waves. In such situations the computed eigenvalues of the Jacobian become imaginary, and consequently the initial value problem for the linearized system is ill posed. This explains why failure to preserve positivity of density or pressure may cause blow-ups of the numerical algorithm. Ad hoc numerical strategies that simply reset a computed negative density and/or negative pressure to a positive value are neither a conservative cure nor a stable solution. Conservative positivity-preserving schemes are more appropriate for such flow problems. The ideas of Zhang & Shu (2012) and Hu et al. (2012) precisely address this issue. Zhang & Shu constructed a new conservative positivity-preserving procedure to preserve positive density and pressure for high-order WENO schemes with the Lax-Friedrichs flux (WENO/LLF). In general, WENO/LLF is too dissipative for flows such as turbulence with strong shocks computed in direct numerical simulations (DNS) and large eddy simulations (LES). The new conservative positivity-preserving procedure proposed by Hu et al. (2012) can be used with any high-order shock-capturing scheme, including high-order WENO schemes using Roe's flux (WENO/Roe). The goal of this study is to compare results obtained by non-positivity-preserving methods with the recently developed positivity-preserving schemes for representative test cases. In particular, the more difficult 3D Noh and Sedov problems are considered; these test cases are chosen because of the negative pressure/density most often exhibited by standard high-order shock-capturing schemes. The simulation of a hypersonic nonequilibrium viscous shock tube related to the NASA Electric Arc Shock Tube (EAST) is also included. EAST is a high-temperature, high-Mach-number viscous nonequilibrium flow consisting of 13 species. In addition, as most common shock-capturing schemes have been developed for problems without source terms, these methods can produce spurious solutions when applied to problems with nonlinear and/or stiff source terms, even when solving a conservative system of equations with a conservative scheme. This kind of behavior can be observed even for a scalar case (LeVeque & Yee 1990), as well as for a case consisting of two species and one reaction (Wang et al. 2012). For further information concerning this issue see LeVeque & Yee (1990), Griffiths et al. (1992), Lafon & Yee (1996), and Yee et al. (2012). The EAST example indicates that standard high-order shock-capturing methods exhibit density/pressure instability in addition to grid-dependent discontinuity locations with insufficient grid points. The evaluation of these test cases is based on the stability of the numerical schemes together with the accuracy of the obtained solutions.
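
    A worked illustration of why positivity fails: in a conservative scheme the pressure is recovered as p = (gamma - 1)(E - m^2 / (2 rho)), so when the kinetic energy nearly exhausts the total energy, discretization error can push the computed p below zero. The numbers below are illustrative only:

      def pressure(rho, m, E, gamma=1.4):
          # Recover pressure from the conservative variables (density rho,
          # momentum m, total energy E); internal energy = total minus kinetic.
          return (gamma - 1.0) * (E - 0.5 * m * m / rho)

      # E = 10.0, rho = 1.0, m = 4.5: kinetic energy 10.125 exceeds E,
      # so the recovered pressure is (unphysically) negative.
      assert pressure(1.0, 4.5, 10.0) < 0.0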

  11. Computational considerations for the simulation of shock-induced sound

    NASA Technical Reports Server (NTRS)

    Casper, Jay; Carpenter, Mark H.

    1996-01-01

    The numerical study of aeroacoustic problems places stringent demands on the choice of a computational algorithm, because it requires the ability to propagate disturbances of small amplitude and short wavelength. The demands are particularly high when shock waves are involved, because the chosen algorithm must also resolve discontinuities in the solution. The extent to which a high-order-accurate shock-capturing method can be relied upon for aeroacoustics applications that involve the interaction of shocks with other waves has not been previously quantified. Such a study is initiated in this work. A fourth-order-accurate essentially nonoscillatory (ENO) method is used to investigate the solutions of inviscid, compressible flows with shocks in a quasi-one-dimensional nozzle flow. The design order of accuracy is achieved in the smooth regions of a steady-state test case. However, in an unsteady test case, only first-order results are obtained downstream of a sound-shock interaction. The difficulty in obtaining a globally high-order-accurate solution in such a case with a shock-capturing method is demonstrated through the study of a simplified, linear model problem. Some of the difficult issues and ramifications for aeroacoustics simulations of flows with shocks that are raised by these results are discussed.

  12. Nonlinear Boltzmann equation for the homogeneous isotropic case: Minimal deterministic Matlab program

    NASA Astrophysics Data System (ADS)

    Asinari, Pietro

    2010-10-01

    The homogeneous isotropic Boltzmann equation (HIBE) is a fundamental dynamic model for many applications in thermodynamics, econophysics and sociodynamics. Despite recent hardware improvements, the solution of the Boltzmann equation remains extremely challenging from the computational point of view, in particular by deterministic methods (free of stochastic noise). This work aims to improve a deterministic direct method recently proposed [V.V. Aristov, Kluwer Academic Publishers, 2001] for solving the HIBE with a generic collisional kernel and, in particular, for taking care of the late dynamics of the relaxation towards the equilibrium. Essentially (a) the original problem is reformulated in terms of particle kinetic energy (exact particle number and energy conservation during microscopic collisions) and (b) the computation of the relaxation rates is improved by the DVM-like correction, where DVM stands for Discrete Velocity Model (ensuring that the macroscopic conservation laws are exactly satisfied). Both these corrections make possible to derive very accurate reference solutions for this test case. Moreover this work aims to distribute an open-source program (called HOMISBOLTZ), which can be redistributed and/or modified for dealing with different applications, under the terms of the GNU General Public License. The program has been purposely designed in order to be minimal, not only with regards to the reduced number of lines (less than 1000), but also with regards to the coding style (as simple as possible). Program summaryProgram title: HOMISBOLTZ Catalogue identifier: AEGN_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGN_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License No. of lines in distributed program, including test data, etc.: 23 340 No. of bytes in distributed program, including test data, etc.: 7 635 236 Distribution format: tar.gz Programming language: Tested with Matlab version ⩽6.5. However, in principle, any recent version of Matlab or Octave should work Computer: All supporting Matlab or Octave Operating system: All supporting Matlab or Octave RAM: 300 MBytes Classification: 23 Nature of problem: The problem consists in integrating the homogeneous Boltzmann equation for a generic collisional kernel in case of isotropic symmetry, by a deterministic direct method. Difficulties arise from the multi-dimensionality of the collisional operator and from satisfying the conservation of particle number and energy (momentum is trivial for this test case) as accurately as possible, in order to preserve the late dynamics. Solution method: The solution is based on the method proposed by Aristov (2001) [1], but with two substantial improvements: (a) the original problem is reformulated in terms of particle kinetic energy (this allows one to ensure exact particle number and energy conservation during microscopic collisions) and (b) a DVM-like correction (where DVM stands for Discrete Velocity Model) is adopted for improving the relaxation rates (this allows one to satisfy exactly the conservation laws at macroscopic level, which is particularly important for describing the late dynamics in the relaxation towards the equilibrium). Both these corrections make possible to derive very accurate reference solutions for this test case. 
    Restrictions: The nonlinear Boltzmann equation is extremely challenging from the computational point of view, in particular for deterministic methods, despite the increased computational power of recent hardware. In this work, only the homogeneous isotropic case is considered, to make possible the development of a minimal program (in a simple scripting language) and to allow the user to check the advantages of the proposed improvements over Aristov's (2001) method [1]. The initial conditions are assumed to be parameterized according to a fixed analytical expression, but this can easily be modified.
    Running time: From minutes to hours (depending on the adopted discretization of the kinetic energy space). For example, on a 64-bit workstation with an Intel Core™ i7-820Q quad-core CPU at 1.73 GHz and 8 GBytes of RAM, the provided test run (with the corresponding binary data file storing the pre-computed relaxation rates) requires 154 seconds.
    References: [1] V.V. Aristov, Direct Methods for Solving the Boltzmann Equation and Study of Nonequilibrium Flows, Kluwer Academic Publishers, 2001.
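
    A minimal sketch of the two ideas highlighted above, in Python rather than the program's Matlab, and assuming a BGK-style relaxation as a stand-in for the full nonlinear collision operator that HOMISBOLTZ actually integrates (the grid, weights and names are illustrative): the distribution lives on a kinetic-energy grid, and the discrete equilibrium is re-fitted so that particle number and energy are conserved exactly on that grid, which is what protects the late dynamics.

    import numpy as np

    # Energy grid and quadrature weights for an isotropic 3D gas: dN ~ sqrt(E) f(E) dE
    E = np.linspace(0.01, 10.0, 200)
    w = np.gradient(E) * np.sqrt(E)

    def moments(f):
        """Particle number and total energy of the discrete distribution."""
        return np.sum(w * f), np.sum(w * f * E)

    def discrete_equilibrium(n0, u0):
        """A*exp(-E/T) whose *discrete* moments match (n0, u0) exactly,
        mirroring the idea of enforcing conservation at the grid level."""
        T = 2.0 * u0 / (3.0 * n0)                  # continuum initial guess
        for _ in range(50):                        # secant iteration on T
            g = np.exp(-E / T)
            u = n0 * np.sum(w * g * E) / np.sum(w * g)
            dT = 1e-7 * T
            g2 = np.exp(-E / (T + dT))
            u2 = n0 * np.sum(w * g2 * E) / np.sum(w * g2)
            T -= (u - u0) * dT / (u2 - u)
            if abs(u - u0) <= 1e-13 * u0:
                break
        g = np.exp(-E / T)
        return (n0 / np.sum(w * g)) * g

    def relax(f, dt, tau):
        """Relax toward the conservation-matched equilibrium, so number and
        energy are invariant by construction (BGK caricature, not Aristov)."""
        n0, u0 = moments(f)
        return f + (dt / tau) * (discrete_equilibrium(n0, u0) - f)

    f = np.exp(-((E - 3.0) ** 2))                  # arbitrary non-equilibrium start
    for _ in range(100):
        f = relax(f, dt=0.05, tau=1.0)
    print(moments(f))                              # invariants preserved to round-off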

  13. Computer-aided diagnosis of lung cancer: the effect of training data sets on classification accuracy of lung nodules.

    PubMed

    Gong, Jing; Liu, Ji-Yu; Sun, Xi-Wen; Zheng, Bin; Nie, Sheng-Dong

    2018-02-05

    This study aims to develop a computer-aided diagnosis (CADx) scheme for classification between malignant and benign lung nodules, and to assess whether CADx performance changes in detecting nodules associated with early and advanced stage lung cancer. The study involves 243 biopsy-confirmed pulmonary nodules. Among them, 76 are benign, 81 are stage I and 86 are stage III malignant nodules. The cases are separated into three data sets involving: (1) all nodules, (2) benign and stage I malignant nodules, and (3) benign and stage III malignant nodules. A CADx scheme is applied to segment lung nodules depicted on computed tomography images, and we initially computed 66 3D image features. Then, three machine learning models, namely a support vector machine, a naïve Bayes classifier and linear discriminant analysis, are separately trained and tested using the three data sets and a leave-one-case-out cross-validation method embedded with a Relief-F feature selection algorithm. When the three data sets are used separately to train and test the three classifiers, the average areas under the receiver operating characteristic curve (AUC) are 0.94, 0.90 and 0.99, respectively. When using the classifiers trained on the data set with all nodules, average AUC values are 0.88 and 0.99 for detecting early and advanced stage nodules, respectively. AUC values computed from the three classifiers trained on the same data set are consistent, with no statistically significant difference (p > 0.05). This study demonstrates (1) the feasibility of applying a CADx scheme to accurately distinguish between benign and malignant lung nodules, and (2) a positive trend between CADx performance and cancer progression stage. Thus, in order to increase CADx performance in detecting subtle and early cancer, training data sets should include more diverse early stage cancer cases.

  14. Computer-aided diagnosis of lung cancer: the effect of training data sets on classification accuracy of lung nodules

    NASA Astrophysics Data System (ADS)

    Gong, Jing; Liu, Ji-Yu; Sun, Xi-Wen; Zheng, Bin; Nie, Sheng-Dong

    2018-02-01

    This study aims to develop a computer-aided diagnosis (CADx) scheme for classification between malignant and benign lung nodules, and to assess whether CADx performance changes in detecting nodules associated with early and advanced stage lung cancer. The study involves 243 biopsy-confirmed pulmonary nodules. Among them, 76 are benign, 81 are stage I and 86 are stage III malignant nodules. The cases are separated into three data sets involving: (1) all nodules, (2) benign and stage I malignant nodules, and (3) benign and stage III malignant nodules. A CADx scheme is applied to segment lung nodules depicted on computed tomography images, and we initially computed 66 3D image features. Then, three machine learning models, namely a support vector machine, a naïve Bayes classifier and linear discriminant analysis, are separately trained and tested using the three data sets and a leave-one-case-out cross-validation method embedded with a Relief-F feature selection algorithm. When the three data sets are used separately to train and test the three classifiers, the average areas under the receiver operating characteristic curve (AUC) are 0.94, 0.90 and 0.99, respectively. When using the classifiers trained on the data set with all nodules, average AUC values are 0.88 and 0.99 for detecting early and advanced stage nodules, respectively. AUC values computed from the three classifiers trained on the same data set are consistent, with no statistically significant difference (p > 0.05). This study demonstrates (1) the feasibility of applying a CADx scheme to accurately distinguish between benign and malignant lung nodules, and (2) a positive trend between CADx performance and cancer progression stage. Thus, in order to increase CADx performance in detecting subtle and early cancer, training data sets should include more diverse early stage cancer cases.
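
    The pipeline in the two records above (66 features, Relief-F selection inside a leave-one-case-out loop, then an SVM / naïve Bayes / LDA classifier) maps almost directly onto scikit-learn. A sketch under stated assumptions: the data below are random placeholders, and SelectKBest stands in for Relief-F, which scikit-learn does not ship. The essential point is that feature selection sits inside the pipeline, so it is re-fitted in every leave-one-out fold and cannot leak test-case information.

    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, f_classif  # stand-in for Relief-F
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import roc_auc_score

    # X: (n_cases, 66) 3D image features; y: 0 = benign, 1 = malignant (placeholders)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(157, 66))        # e.g., the benign + stage I subset
    y = rng.integers(0, 2, size=157)

    clf = Pipeline([
        ("scale", StandardScaler()),
        ("select", SelectKBest(f_classif, k=10)),  # selection happens inside each fold
        ("svm", SVC(kernel="rbf", probability=True)),
    ])

    # leave-one-case-out: each case is scored by a model trained on all the others
    scores = cross_val_predict(clf, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
    print("AUC:", roc_auc_score(y, scores))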

  15. Embedded object concept: case balancing two-wheeled robot

    NASA Astrophysics Data System (ADS)

    Vallius, Tero; Röning, Juha

    2007-09-01

    This paper presents the Embedded Object Concept (EOC) and a telepresence robot system which is a test case for the EOC. The EOC utilizes common object-oriented methods used in software by applying them to combined Lego-like software-hardware entities. These entities represent objects in object-oriented design methods, and they are the building blocks of embedded systems. The goal of the EOC is to make the designing of embedded systems faster and easier. This concept enables people without comprehensive knowledge in electronics design to create new embedded systems, and for experts it shortens the design time of new embedded systems. We present the current status of a telepresence robot created with Atomi-objects, which is the name for our implementation of the embedded objects. The telepresence robot is a relatively complex test case for the EOC. The robot has been constructed using incremental device development, which is made possible by the architecture of the EOC. The robot contains video and audio exchange capability and a controlling system for driving with two wheels. The robot consists of Atomi-objects, demonstrating the suitability of the EOC for prototyping and easy modifications, and proving the capabilities of the EOC by realizing a function that normally requires a computer. The computer counterpart is a regular PC with audio and video capabilities running with a robot control application. The robot is functional and successfully tested.

  16. A New Time Domain Formulation for Broadband Noise Predictions

    NASA Technical Reports Server (NTRS)

    Casper, J.; Farassat, F.

    2002-01-01

    A new analytic result in acoustics called "Formulation 1B," proposed by Farassat, is used to compute the loading noise from an unsteady surface pressure distribution on a thin airfoil in the time domain. This formulation is a new solution of the Ffowcs Williams-Hawkings equation with the loading source term. The formulation contains a far field surface integral that depends on the time derivative and the surface gradient of the pressure on the airfoil, as well as a contour integral on the boundary of the airfoil surface. As a first test case, the new formulation is used to compute the noise radiated from a flat plate, moving through a sinusoidal gust of constant frequency. The unsteady surface pressure for this test case is analytically specified from a result based on linear airfoil theory. This test case is used to examine the velocity scaling properties of Formulation 1B and to demonstrate its equivalence to Formulation 1A of Farassat. The new acoustic formulation, again with an analytic surface pressure, is then used to predict broadband noise radiated from an airfoil immersed in homogeneous, isotropic turbulence. The results are compared with experimental data previously reported by Paterson and Amiet. Good agreement between predictions and measurements is obtained. Finally, an alternative form of Formulation 1B is described for statistical analysis of broadband noise.

  17. A New Time Domain Formulation for Broadband Noise Predictions

    NASA Technical Reports Server (NTRS)

    Casper, Jay H.; Farassat, Fereidoun

    2002-01-01

    A new analytic result in acoustics called "Formulation 1B," proposed by Farassat, is used to compute the loading noise from an unsteady surface pressure distribution on a thin airfoil in the time domain. This formulation is a new solution of the Ffowcs Williams-Hawkings equation with the loading source term. The formulation contains a far field surface integral that depends on the time derivative and the surface gradient of the pressure on the airfoil, as well as a contour integral on the boundary of the airfoil surface. As a first test case, the new formulation is used to compute the noise radiated from a flat plate, moving through a sinusoidal gust of constant frequency. The unsteady surface pressure for this test case is analytically specified from a result based on linear airfoil theory. This test case is used to examine the velocity scaling properties of Formulation 1B and to demonstrate its equivalence to Formulation 1A of Farassat. The new acoustic formulation, again with an analytic surface pressure, is then used to predict broadband noise radiated from an airfoil immersed in homogeneous, isotropic turbulence. The results are compared with experimental data previously reported by Paterson and Amiet. Good agreement between predictions and measurements is obtained. Finally, an alternative form of Formulation 1B is described for statistical analysis of broadband noise.

  18. Estimation of tunnel blockage from wall pressure signatures: A review and data correlation

    NASA Technical Reports Server (NTRS)

    Hackett, J. E.; Wilsden, D. J.; Lilley, D. E.

    1979-01-01

    A method is described for estimating low-speed wind tunnel blockage, including model volume, bubble separation and viscous wake effects. A tunnel-centerline source/sink distribution is derived from measured wall pressure signatures using fast algorithms to solve the inverse problem in three dimensions. Blockage may then be computed throughout the test volume. Correlations using scaled models or tests in two tunnels were made in all cases. In many cases model reference area exceeded 10% of the tunnel cross-sectional area. Good correlations were obtained for model surface pressures, lift, drag and pitching moment. It is shown that blockage-induced velocity variations across the test section are relatively unimportant, but axial gradients should be considered when model size is determined.
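
    A toy version of the inversion described above, assuming a 2D tunnel, a linearized signature Cp ≈ -2u'/U, and invented geometry (the paper's method is three-dimensional and uses fast special-purpose algorithms): place candidate sources on the centerline, build their wall influence matrix, and recover the strengths by least squares.

    import numpy as np

    U, h = 1.0, 1.0                                 # tunnel speed, wall offset
    xs = np.linspace(-2.0, 2.0, 9)                  # candidate singularity stations
    xw = np.linspace(-6.0, 6.0, 80)                 # wall pressure tap locations

    def wall_u(x0):
        """Axial velocity perturbation at the wall taps from a unit 2D source at x0."""
        dx = xw - x0
        return dx / (2.0 * np.pi * (dx ** 2 + h ** 2))

    A = np.column_stack([wall_u(x0) for x0 in xs])  # influence matrix (taps x sources)

    # synthesize a "measured" signature from a known source/sink pair plus noise
    q_true = np.zeros(9)
    q_true[3], q_true[5] = 0.8, -0.8
    cp = -2.0 * (A @ q_true) / U + 1e-4 * np.random.default_rng(0).normal(size=80)

    # invert the linearized relation Cp = -2 u'/U for the strengths
    q_hat, *_ = np.linalg.lstsq(A, -U * cp / 2.0, rcond=None)
    print(np.round(q_hat, 2))                       # recovers the source/sink pattern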

  19. Aeroacoustic simulation of a linear cascade by a prefactored compact scheme

    NASA Astrophysics Data System (ADS)

    Ghillani, Pietro

    This work documents the development of a three-dimensional high-order prefactored compact finite-difference solver for computational aeroacoustics (CAA) based on the inviscid Euler equations. This time-explicit scheme is applied to representative problems of sound generation by flow interacting with solid boundaries. Four aeroacoustic problems are explored and the results validated against available reference analytical solutions. Selected mesh convergence studies are conducted to determine the effective order of accuracy of the complete scheme. The first test case simulates the noise emitted by a still cylinder in an oscillating field. It provides a simple validation for the CAA-compatible solid wall condition used in the remainder of the work. The following test cases are increasingly complex versions of the turbomachinery rotor-stator interaction problem taken from NASA CAA workshops. In all cases the results are compared against the available literature. The numerical method features several appreciable contributions to computational aeroacoustics. A reduced data exchange technique for parallel computations is implemented, which requires the exchange of just two values for each boundary node, independently of the size of the zone overlap. A modified version of the non-reflecting buffer layer by Chen is used to allow aerodynamic perturbations at the through-flow boundaries. The Giles subsonic boundary conditions are extended to three-dimensional curvilinear coordinates. These advances have made it possible to resolve the aerodynamic noise generation and near-field propagation on a representative cascade geometry with a time-marching scheme, with accuracy similar to that of spectral methods.

  20. Wind Tunnel Interference Effects on Tilt Rotor Testing Using Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Koning, Witold J. F.

    2015-01-01

    Experimental techniques to measure rotorcraft aerodynamic performance are widely used. However, most of them are either unable to capture interference effects from bodies, or require an extremely large computational budget. The objective of the present research is to develop an XV-15 Tilt Rotor Research Aircraft rotor model for investigation of wind tunnel wall interference using a novel Computational Fluid Dynamics (CFD) solver for rotorcraft, RotCFD. In RotCFD, a mid-fidelity URANS solver is used with an incompressible flow model and a realizable k-ε turbulence model. The rotor is, however, not modeled using a computationally expensive, unsteady viscous body-fitted grid, but is instead modeled using a blade element model with a momentum source approach. Various flight modes of the XV-15 isolated rotor, including hover, tilt and airplane mode, have been simulated and correlated to existing experimental and theoretical data. The rotor model is subsequently used for wind tunnel wall interference simulations in the National Full-Scale Aerodynamics Complex (NFAC) at NASA Ames Research Center in California. The results from the validation of the isolated rotor performance showed good correlation with experimental and theoretical data, and were on par with known theoretical analyses. In RotCFD the setup, grid generation and running of cases are faster than in many CFD codes, which makes it a useful engineering tool. Performance predictions need not be as accurate as those of high-fidelity CFD codes, as long as wall effects can be properly simulated. For both test sections of the NFAC, wall interference was examined by simulating the XV-15 rotor in the test section of the wind tunnel and with an identical grid but extended boundaries in free field. Both cases were also examined with an isolated rotor or with the rotor mounted on the modeled geometry of the Tiltrotor Test Rig (TTR). A 'quasi-linear trim' was used to trim the rotor thrust so that power could be compared as a single variable. Power differences between free-field and wind tunnel cases ranged from -7% to 0% in the 80- by 120-Foot Wind Tunnel test section and from -1.6% to 4.8% in the 40- by 80-Foot Wind Tunnel, depending on the TTR orientation, tunnel velocity and blade setting. The TTR will be used in 2016 to test the Bell 609 rotor in a similar fashion to the research in this report.

  1. Calculus domains modelled using an original bool algebra based on polygons

    NASA Astrophysics Data System (ADS)

    Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.

    2016-08-01

    Analytical and numerical computer-based models require analytical definitions of the calculus domains. The paper presents a method to model a calculus domain using a Boolean algebra of solid and hollow polygons. The general calculus relations for the geometrical characteristics that are widely used in mechanical engineering are tested on several shapes of the calculus domain, in order to draw conclusions regarding the most effective ways to discretize the domain. The paper also tests the results of several commercial CAD software applications that are able to compute the geometrical characteristics, and interesting conclusions are drawn. The tests also targeted the accuracy of the results vs. the number of nodes on the curved boundary of the cross section. The study required the development of an original software package consisting of more than 1700 lines of computer code. In comparison with other calculus methods, discretization using convex polygons is a simpler approach. Moreover, this method does not lead to the very large numbers produced by the spline approximation, which required special software packages offering multiple, arbitrary precision. The knowledge resulting from this study may be used to develop complex computer-based models in engineering.
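
    For orientation, the core of the solid/hollow polygon idea is that section properties of a composite domain are signed sums of per-polygon shoelace integrals. A minimal sketch (the function names and the square-with-hole example are illustrative, not the paper's 1700-line code):

    def area_and_centroid(poly):
        """Signed area and centroid of a polygon given as (x, y) vertices
        (shoelace formulas; area is positive for counter-clockwise order)."""
        a = cx = cy = 0.0
        n = len(poly)
        for i in range(n):
            x0, y0 = poly[i]
            x1, y1 = poly[(i + 1) % n]
            cross = x0 * y1 - x1 * y0
            a += cross
            cx += (x0 + x1) * cross
            cy += (y0 + y1) * cross
        a *= 0.5
        return a, (cx / (6.0 * a), cy / (6.0 * a))

    def section_properties(solids, hollows):
        """Area and centroid of a domain built as solid polygons minus hollow
        ones, mirroring the solid/hollow Boolean idea described above."""
        A = Sx = Sy = 0.0
        for sign, polys in ((+1.0, solids), (-1.0, hollows)):
            for p in polys:
                a, (x, y) = area_and_centroid(p)
                a = abs(a) * sign
                A += a
                Sx += a * x
                Sy += a * y
        return A, (Sx / A, Sy / A)

    # unit square with a centered 0.5 x 0.5 hole
    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    hole = [(0.25, 0.25), (0.75, 0.25), (0.75, 0.75), (0.25, 0.75)]
    print(section_properties([square], [hole]))   # area 0.75, centroid (0.5, 0.5)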

  2. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT, which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data, is described. It implements the maximum likelihood method, maximizing the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
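
    A minimal sketch of the output-error idea behind such tools, under stated assumptions: a scalar stand-in model x' = a*x + b*u, forward-Euler simulation, and a concentrated Gaussian likelihood (SCIDNT's actual multivariable rotorcraft formulation is more elaborate, and all names here are invented).

    import numpy as np
    from scipy.optimize import minimize

    dt = 0.02
    t = np.arange(0.0, 10.0, dt)
    u = np.sign(np.sin(0.5 * t))                       # doublet-like test input

    def simulate(a, b):
        """Propagate the linear constant-coefficient model x' = a*x + b*u."""
        x = np.zeros_like(t)
        for k in range(len(t) - 1):
            x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
        return x

    rng = np.random.default_rng(0)
    z = simulate(-0.8, 2.0) + 0.05 * rng.normal(size=t.size)   # "measured" output

    def neg_log_like(p):
        # Gaussian likelihood with the noise variance concentrated out:
        # maximizing it is minimizing N/2 * log(sum of squared residuals)
        r = z - simulate(*p)
        return 0.5 * t.size * np.log(np.sum(r ** 2))

    fit = minimize(neg_log_like, x0=[-0.1, 0.5], method="Nelder-Mead")
    print(np.round(fit.x, 2))                          # ~ [-0.8, 2.0]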

  3. Evaluation of Rock Joint Coefficients

    NASA Astrophysics Data System (ADS)

    Audy, Ondřej; Ficker, Tomáš

    2017-10-01

    A computer method for the evaluation of rock joint coefficients is described and several applications are presented. The method is based on two absolute numerical indicators formed from Fourier replicas of rock joint profiles. The first indicator quantifies the vertical depth of the profiles and the second classifies their wavy character. The absolute indicators replace the formerly used relative indicators, which showed artificial behavior in some cases. This contribution focuses on practical computations testing the functionality of the newly introduced indicators.

  4. Personal supercomputing by using transputer and Intel 80860 in plasma engineering

    NASA Astrophysics Data System (ADS)

    Ido, S.; Aoki, K.; Ishine, M.; Kubota, M.

    1992-09-01

    A transputer (T800) and the 64-bit RISC Intel 80860 (i860) added to a personal computer can be used as accelerators. When 32-bit T800s in a parallel system or 64-bit i860s are used, scientific calculations are carried out several tens of times as fast as on commonly used 32-bit personal computers or UNIX workstations. Benchmark tests and examples of physical simulations using T800s and the i860 are reported.

  5. Subscale Fast Cookoff Testing and Modeling for the Hazard Assessment of Large Rocket Motors

    DTIC Science & Technology

    2001-03-01

    [The indexed text for this record consists of report front matter rather than an abstract: a list-of-tables fragment (heats-of-vaporization parameters for two-liner phase transformation, complete liner sublimation and/or combined liner cases), acronym definitions (1-D: one-dimensional; 2-D: two-dimensional; ALE3D: Arbitrary-Lagrange-Eulerian (3-D) computer code; ALEGRA: 3-D Arbitrary-Lagrange-Eulerian computer code), and a sentence fragment concerning case-liner bond areas and the grain inner bore during the pre-ignition, ignition, and burning phases of rocket motor fast cookoff.]

  6. High-resolution computed tomography findings in eight patients with hantavirus pulmonary syndrome.

    PubMed

    Barbosa, Diego de Lacerda; Hochhegger, Bruno; Souza, Arthur Soares; Zanetti, Gláucia; Escuissato, Dante Luiz; Meirelles, Gustavo de Souza Portes; Funari, Marcelo Buarque de Gusmão; Marchiori, Edson

    2017-01-01

    The purpose of this study was to describe the high-resolution computed tomography (HRCT) findings in patients with hantavirus pulmonary syndrome (HPS). We retrospectively reviewed HRCT findings from eight cases of HPS. All patients were men, aged 19-70 (mean, 41.7) years. Diagnoses were established by serological test (enzyme-linked immunosorbent assay) in all patients. Two chest radiologists analyzed the images and reached decisions by consensus. The predominant HRCT findings were ground-glass opacities (GGOs) and smooth inter- and intralobular septal thickening, found in all eight cases; however, the crazy-paving pattern was found in only three cases. Pleural effusion and peribronchovascular thickening were observed in five patients. The abnormalities were bilateral in all patients. The predominant HRCT findings in patients with HPS were GGOs and smooth inter- and intralobular septal thickening, which probably correlate with the histopathologic findings of pulmonary edema.

  7. Testing by artificial intelligence: computational alternatives to the determination of mutagenicity.

    PubMed

    Klopman, G; Rosenkranz, H S

    1992-08-01

    In order to develop methods for evaluating the predictive performance of computer-driven structure-activity methods (SAR) as well as to determine the limits of predictivity, we investigated the behavior of two Salmonella mutagenicity data bases: (a) a subset from the Genetox Program and (b) one from the U.S. National Toxicology Program (NTP). For molecules common to the two data bases, the experimental concordance was 76% when "marginals" were included and 81% when they were excluded. Three SAR methods were evaluated: CASE, MULTICASE and CASE/Graph Indices (CASE/GI). The programs "learned" the Genetox data base and used it to predict NTP molecules that were not present in the Genetox compilation. The concordances were 72, 80 and 47% respectively. Obviously, the MULTICASE version is superior and approaches the 85% interlaboratory variability observed for the Salmonella mutagenicity assays when the latter was carried out under carefully controlled conditions.

  8. Anterior uveitis after treatment of hepatitis C with alpha interferon: the recurrence of a previous inflammatory process due to presumed ocular toxocariasis.

    PubMed

    Damasceno, Eduardo F; Damasceno, Nadyr A

    2012-02-01

    To report a case of recurrent unilateral presumed ocular toxocariasis after treatment of hepatitis C. Case study. Clinical findings, ultrasonography, computed tomography, and serological tests were performed. Once the diagnosis was made, effective treatment was administered. A 46-year-old woman with a long history of decreased unilateral visual acuity presented with anterior uveitis after the use of interferon alpha and ribavirin for treatment of hepatitis C. A biomicroscopic examination revealed active anterior uveitis, with ultrasonography and computed tomography demonstrating a central granuloma due to partially calcified toxocariasis. After treatment with corticosteroids and cycloplegics, the symptoms were alleviated. Immunostimulation could cause a relapse of the inflammatory reaction found in uveitis due to toxocariasis.

  9. A case of systemic arterial supply to the right lower lobe of the lung: imaging findings and review of the literature.

    PubMed

    Mautone, Marcela; Naidoo, Parm

    2014-03-01

    Systemic arterialization of the lung without pulmonary sequestration is the rarest form of anomalous systemic arterial supply to the lung. This condition is characterised by an aberrant arterial branch arising from the aorta which supplies an area of lung parenchyma with normal bronchopulmonary anatomy. It is often diagnosed following investigation of an incidental cardiac murmur or based on abnormal imaging, as most patients are asymptomatic or minimally symptomatic. Thoracic computed tomography and computed tomography angiography are generally the most useful diagnostic tests. We present a case of a 22-year-old female who was diagnosed with systemic arterial supply to a portion of an otherwise normal right lower lobe following investigation of low volume haemoptysis.

  10. An approximate Riemann solver for hypervelocity flows

    NASA Technical Reports Server (NTRS)

    Jacobs, Peter A.

    1991-01-01

    We describe an approximate Riemann solver for the computation of hypervelocity flows in which there are strong shocks and viscous interactions. The scheme has three stages, the first of which computes the intermediate states assuming isentropic waves. A second stage, based on the strong shock relations, may then be invoked if the pressure jump across either wave is large. The third stage interpolates the interface state from the two initial states and the intermediate states. The solver is used as part of a finite-volume code and is demonstrated on two test cases. The first is a high Mach number flow over a sphere while the second is a flow over a slender cone with an adiabatic boundary layer. In both cases the solver performs well.
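
    The staging described above can be illustrated with the standard adaptive star-pressure estimate from Toro's textbook, which follows the same logic: an isentropic (two-rarefaction) guess first, replaced by a two-shock estimate when the pressure jump is large. This is a generic sketch of that logic, not Jacobs' solver; the symbols are the usual 1D Euler primitives (density, velocity, pressure on each side).

    import numpy as np

    GAMMA = 1.4

    def star_pressure(rl, ul, pl, rr, ur, pr, switch=2.0):
        """Staged star-region pressure estimate for the 1D Euler equations
        (Toro's adaptive noniterative solver, used here to mirror the staging
        in the abstract)."""
        al = np.sqrt(GAMMA * pl / rl)              # left/right sound speeds
        ar = np.sqrt(GAMMA * pr / rr)
        z = (GAMMA - 1.0) / (2.0 * GAMMA)
        # stage 1: two-rarefaction (isentropic waves) estimate
        p_tr = ((al + ar - 0.5 * (GAMMA - 1.0) * (ur - ul)) /
                (al / pl**z + ar / pr**z)) ** (1.0 / z)
        if max(pl, pr) / min(pl, pr) < switch:
            return p_tr
        # stage 2: two-shock estimate, seeded with the stage-1 pressure
        gl = np.sqrt((2.0 / ((GAMMA + 1.0) * rl)) /
                     (p_tr + (GAMMA - 1.0) / (GAMMA + 1.0) * pl))
        gr = np.sqrt((2.0 / ((GAMMA + 1.0) * rr)) /
                     (p_tr + (GAMMA - 1.0) / (GAMMA + 1.0) * pr))
        return (gl * pl + gr * pr - (ur - ul)) / (gl + gr)

    # Sod-like states (mild jump) vs. a strong-shock case
    print(star_pressure(1.0, 0.0, 1.0, 0.125, 0.0, 0.1))
    print(star_pressure(1.0, 0.0, 1000.0, 1.0, 0.0, 0.01))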

  11. Visuospatial skills and computer game experience influence the performance of virtual endoscopy.

    PubMed

    Enochsson, Lars; Isaksson, Bengt; Tour, René; Kjellin, Ann; Hedman, Leif; Wredmark, Torsten; Tsai-Felländer, Li

    2004-11-01

    Advanced medical simulators have been introduced to facilitate surgical and endoscopic training and thereby improve patient safety. Residents trained in the Procedicus Minimally Invasive Surgical Trainer-Virtual Reality (MIST-VR) laparoscopic simulator perform laparoscopic cholecystectomy more safely and faster than a control group. Little has been reported regarding whether factors like gender, computer experience, and visuospatial tests can predict performance with a medical simulator. Our aim was to investigate whether such factors influence the performance of simulated gastroscopy. Seventeen medical students were asked about computer gaming experience. Before virtual endoscopy, they performed the visuospatial test PicSOr, which discriminates the ability of the tested person to create a three-dimensional image from a two-dimensional presentation. Each student performed one gastroscopy (level 1, case 1) in the GI Mentor II, Simbionix, and several variables related to performance were registered. Percentage of time spent with a clear view in the endoscope correlated well with performance on the PicSOr test (r = 0.56, P < 0.001). Efficiency of screening also correlated with PicSOr (r = 0.23, P < 0.05). In students with computer gaming experience, the efficiency of screening increased (33.6% ± 3.1% versus 22.6% ± 2.8%, P < 0.05) and the duration of the examination decreased by 1.5 minutes (P < 0.05). A similar trend was seen in men compared with women. The visuospatial test PicSOr predicts the results with the endoscopic simulator GI Mentor II. Two-dimensional image experience, as in computer games, also seems to affect the outcome.

  12. TRL - A FORMAL TEST REPRESENTATION LANGUAGE AND TOOL FOR FUNCTIONAL TEST DESIGNS

    NASA Technical Reports Server (NTRS)

    Hops, J. M.

    1994-01-01

    A Formal Test Representation Language and Tool for Functional Test Designs (TRL) is an automatic tool and a formal language that is used to implement the Category-Partition Method and produce the specification of test cases in the testing phase of software development. The Category-Partition Method is particularly useful in defining the inputs, outputs and purpose of the test design phase and combines the benefits of choosing normal cases with error exposing properties. Traceability can be maintained quite easily by creating a test design for each objective in the test plan. The effort to transform the test cases into procedures is simplified by using an automatic tool to create the cases based on the test design. The method allows the rapid elimination of undesired test cases from consideration, and easy review of test designs by peer groups. The first step in the category-partition method is functional decomposition, in which the specification and/or requirements are decomposed into functional units that can be tested independently. A secondary purpose of this step is to identify the parameters that affect the behavior of the system for each functional unit. The second step, category analysis, carries the work done in the previous step further by determining the properties or sub-properties of the parameters that would make the system behave in different ways. The designer should analyze the requirements to determine the features or categories of each parameter and how the system may behave if the category were to vary its value. If the parameter undergoing refinement is a data-item, then categories of this data-item may be any of its attributes, such as type, size, value, units, frequency of change, or source. After all the categories for the parameters of the functional unit have been determined, the next step is to partition each category's range space into mutually exclusive values that the category can assume. In choosing partition values, all possible kinds of values should be included, especially the ones that will maximize error detection. The purpose of the final step, partition constraint analysis, is to refine the test design specification so that only the technically effective and economically feasible test cases are implied. TRL is written in C-language to be machine independent. It has been successfully implemented on an IBM PC compatible running MS DOS, a Sun4 series computer running SunOS, an HP 9000/700 series workstation running HP-UX, a DECstation running DEC RISC ULTRIX, and a DEC VAX series computer running VMS. TRL requires 1Mb of disk space and a minimum of 84K of RAM. The documentation is available in electronic form in Word Perfect format. The standard distribution media for TRL is a 5.25 inch 360K MS-DOS format diskette. Alternate distribution media and formats are available upon request. TRL was developed in 1993 and is a copyrighted work with all copyright vested in NASA.
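
    The four steps described above mechanize naturally: enumerate the cross product of each category's partition values, then prune infeasible combinations during partition-constraint analysis. A toy sketch (the "file copy" unit, its categories, and the constraint are invented for illustration; TRL itself is a formal language with its own syntax):

    from itertools import product

    # Category-partition spec for a hypothetical "file copy" functional unit:
    # each parameter has categories whose partitions list the values to combine.
    categories = {
        "source_size":   ["empty", "small", "huge"],
        "source_exists": ["yes", "no"],
        "dest_writable": ["yes", "no"],
    }

    def feasible(case):
        """Partition-constraint analysis: a missing source has no meaningful
        size, so keep only one representative combination for that situation."""
        return not (case["source_exists"] == "no" and case["source_size"] != "empty")

    frames = [dict(zip(categories, combo)) for combo in product(*categories.values())]
    test_cases = [f for f in frames if feasible(f)]
    for i, tc in enumerate(test_cases, 1):
        print(i, tc)        # 8 feasible cases instead of the raw 12 combinations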

  13. Evaluation of direct and indirect additive manufacture of maxillofacial prostheses.

    PubMed

    Eggbeer, Dominic; Bibb, Richard; Evans, Peter; Ji, Lu

    2012-09-01

    The efficacy of computer-aided technologies in the design and manufacture of maxillofacial prostheses has not been fully proven. This paper presents research into the evaluation of direct and indirect additive manufacture of a maxillofacial prosthesis against conventional laboratory-based techniques. An implant/magnet-retained nasal prosthesis case from a UK maxillofacial unit was selected as a case study. A benchmark prosthesis was fabricated using conventional laboratory-based techniques for comparison against additive manufactured prostheses. For the computer-aided workflow, photogrammetry, computer-aided design and additive manufacture (AM) methods were evaluated in direct prosthesis body fabrication and indirect production using an additively manufactured mould. Qualitative analysis of position, shape, colour and edge quality was undertaken. Mechanical testing to ISO standards was also used to compare the silicone rubber used in the conventional prosthesis with the AM material. Critical evaluation has shown that utilising a computer-aided work-flow can produce a prosthesis body that is comparable to that produced using existing best practice. Technical limitations currently prevent the direct fabrication method demonstrated in this paper from being clinically viable. This research helps prosthesis providers understand the application of a computer-aided approach and guides technology developers and researchers to address the limitations identified.

  14. A first-principle calculation of the XANES spectrum of Cu2+ in water

    NASA Astrophysics Data System (ADS)

    La Penna, G.; Minicozzi, V.; Morante, S.; Rossi, G. C.; Stellato, F.

    2015-09-01

    The progress in high performance computing we are witnessing today offers the possibility of accurate electron density calculations of systems in realistic physico-chemical conditions. In this paper, we present a strategy aimed at performing a first-principle computation of the low energy part of the X-ray Absorption Spectroscopy (XAS) spectrum based on the density functional theory calculation of the electronic potential. To test its effectiveness, we apply the method to the computation of the X-ray absorption near edge structure part of the XAS spectrum in the paradigmatic, but simple case of Cu2+ in water. In order to take into account the effect of metal-site structure fluctuations in determining the experimental signal, the theoretical spectrum is evaluated as the average over the computed spectra of a statistically significant number of simulated metal site configurations. The comparison of experimental data with theoretical calculations suggests that Cu2+ lives preferentially in a square-pyramidal geometry. The remarkable success of this approach in the interpretation of XAS data makes us optimistic about the possibility of extending the computational strategy we have outlined to the more interesting case of molecules of biological relevance bound to transition metal ions.

  15. On the Bayesian Treed Multivariate Gaussian Process with Linear Model of Coregionalization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konomi, Bledar A.; Karagiannis, Georgios; Lin, Guang

    2015-02-01

    The Bayesian treed Gaussian process (BTGP) has gained popularity in recent years because it provides a straightforward mechanism for modeling non-stationary data and can alleviate computational demands by fitting models to less data. The extension of BTGP to the multivariate setting requires us to model the cross-covariance and to propose efficient algorithms that can deal with trans-dimensional MCMC moves. In this paper we extend the cross-covariance of the Bayesian treed multivariate Gaussian process (BTMGP) to linear model of coregionalization (LMC) cross-covariances. Different strategies have been developed to improve the MCMC mixing and to invert smaller matrices in the Bayesian inference. Moreover, we compare the proposed BTMGP with existing multiple BTGP and BTMGP models on test cases and on a multiphase flow computer experiment in a full-scale regenerator of a carbon capture unit. The BTMGP with LMC cross-covariance predicted the computer experiments better than the existing competitors. The proposed model has a wide variety of applications, such as computer experiments and environmental data. For computer experiments we also develop an adaptive sampling strategy for the BTMGP with LMC cross-covariance function.
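
    For orientation, the LMC builds a valid multivariate cross-covariance as a sum of coregionalization matrices times scalar kernels, K(x, x') = sum_q B_q k_q(x, x'). A minimal sketch of that construction for a two-output Gaussian process (the kernel choices and matrices below are illustrative; this is not the BTMGP implementation):

    import numpy as np

    def rbf(x1, x2, ell):
        """Squared-exponential kernel on 1D inputs."""
        d = x1[:, None] - x2[None, :]
        return np.exp(-0.5 * (d / ell) ** 2)

    def lmc_covariance(x, B_list, ells):
        """LMC cross-covariance of a 2-output GP: sum of Kronecker products
        of coregionalization matrices B_q with scalar kernels k_q."""
        n = len(x)
        K = np.zeros((2 * n, 2 * n))
        for B, ell in zip(B_list, ells):
            K += np.kron(B, rbf(x, x, ell))
        return K

    x = np.linspace(0.0, 1.0, 25)
    # coregionalization matrices must be positive semi-definite, e.g. B = a a^T
    B1 = np.outer([1.0, 0.6], [1.0, 0.6])
    B2 = np.outer([0.3, -0.8], [0.3, -0.8])
    K = lmc_covariance(x, [B1, B2], [0.1, 0.4])
    print(K.shape, np.all(np.linalg.eigvalsh(K) > -1e-10))   # (50, 50) True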

  16. Applications of Taylor-Galerkin finite element method to compressible internal flow problems

    NASA Technical Reports Server (NTRS)

    Sohn, Jeong L.; Kim, Yongmo; Chung, T. J.

    1989-01-01

    A two-step Taylor-Galerkin finite element method with Lapidus' artificial viscosity scheme is applied to several test cases of internal compressible inviscid flow. The effect of supersonic/subsonic inlet and outlet boundary conditions on the computational results is particularly emphasized.
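
    As a rough illustration of the ingredients named above: the finite-difference twin of the two-step Taylor-Galerkin scheme is the two-step (Richtmyer) Lax-Wendroff method, and Lapidus' viscosity adds a nonlinear smoothing with coefficient of order C*dx^2*|du/dx|. A minimal sketch for 1D linear advection of a square wave (the periodic toy problem and constants are assumptions; the paper's solver is a finite element method for the compressible Euler equations):

    import numpy as np

    N, a, C = 200, 1.0, 1.0
    x = np.linspace(0.0, 1.0, N, endpoint=False)
    dx = 1.0 / N
    dt = 0.4 * dx / a
    u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)     # square wave initial data

    for _ in range(200):
        # step 1: midpoint (half-step) values at i+1/2
        um = 0.5 * (u + np.roll(u, -1)) - 0.5 * a * dt / dx * (np.roll(u, -1) - u)
        # step 2: full update from the midpoint fluxes
        u = u - a * dt / dx * (um - np.roll(um, 1))
        # Lapidus viscosity nu ~ C*dx^2*|u_x|, applied as a conservative correction
        d = np.roll(u, -1) - u
        g = np.abs(d) * d
        u = u + C * (dt / dx) * (g - np.roll(g, 1))
    # u now carries the advected square wave with oscillations damped near the jumps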

  17. Implementing a frame representation in CLIPS/COOL

    NASA Technical Reports Server (NTRS)

    Myers, Leonard; Snyder, James

    1991-01-01

    An implementation of frames in COOL is described and evaluated. The test case is a frame-based semantic network previously implemented in CLIPS (C Language Integrated Production System) Version 4.3 as part of the Intelligent Computer Aided Design System (ICADS) and reported at the first CLIPS conference.

  18. The Impact of Computed Tomography on Decision Making in Tibial Plateau Fractures.

    PubMed

    Castiglia, Marcello Teixeira; Nogueira-Barbosa, Marcello Henrique; Messias, Andre Marcio Vieira; Salim, Rodrigo; Fogagnolo, Fabricio; Schatzker, Joseph; Kfuri, Mauricio

    2018-02-14

    Schatzker introduced one of the most widely used classification systems for tibial plateau fractures, based on plain radiographs. Computed tomography brought to attention the importance of coronal plane-oriented fractures. The goal of our study was to determine whether the addition of computed tomography would affect the decision making of surgeons who usually use the Schatzker classification to assess tibial plateau fractures. Image studies of 70 patients who sustained tibial plateau fractures were uploaded to a dedicated homepage. Every patient was linked to a folder which contained two radiographic projections (anteroposterior and lateral), three interactive videos of computed tomography (axial, sagittal, and coronal), and eight pictures depicting three-dimensional reconstructions of the tibial plateau. Ten attending orthopaedic surgeons, who were blinded to the cases, were granted access to the homepage and assessed each set of images in two different rounds separated by an interval of 2 weeks. Each case was evaluated in three steps, in which surgeons had access, respectively, to radiographs, two-dimensional videos of computed tomography, and three-dimensional reconstruction images. After every step, surgeons were asked to state how they would classify the case using the Schatzker system and which surgical approaches would be appropriate. We evaluated the inter- and intraobserver reliability of the Schatzker classification using the kappa concordance coefficient, as well as the impact of computed tomography on decision making regarding the surgical approach for each case, using the chi-square test and likelihood ratio. The interobserver concordance kappa coefficients after each assessment step were, respectively, 0.58, 0.62, and 0.64. For the intraobserver analysis, the coefficients were, respectively, 0.76, 0.75, and 0.78. Computed tomography changed the surgical approach selection for Schatzker types II, V, and VI (p < 0.01). The addition of computed tomography scans to plain radiographs improved the interobserver reliability of the Schatzker classification. Computed tomography had a statistically significant impact on the selection of surgical approaches for the lateral tibial plateau.
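
    For reference, the interobserver statistic quoted above is Cohen's kappa, which discounts chance agreement between two raters. A textbook implementation (the toy ratings are invented, and the study's own computation may differ in detail, e.g. weighted kappa or multi-rater variants):

    import numpy as np

    def cohens_kappa(r1, r2, k):
        """Cohen's kappa for two raters assigning one of k classes."""
        r1, r2 = np.asarray(r1), np.asarray(r2)
        n = len(r1)
        cm = np.zeros((k, k))
        for a, b in zip(r1, r2):
            cm[a, b] += 1
        po = np.trace(cm) / n                         # observed agreement
        pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n ** 2   # chance agreement
        return (po - pe) / (1.0 - pe)

    # two raters classifying 10 fractures into Schatzker types coded 0..5 (toy data)
    print(cohens_kappa([0, 1, 1, 3, 4, 5, 2, 2, 0, 1],
                       [0, 1, 2, 3, 4, 5, 2, 1, 0, 1], 6))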

  19. Towards Test Driven Development for Computational Science with pFUnit

    NASA Technical Reports Server (NTRS)

    Rilee, Michael L.; Clune, Thomas L.

    2014-01-01

    Developers working in Computational Science & Engineering (CSE)/High Performance Computing (HPC) must contend with constant change due to advances in computing technology and science. Test Driven Development (TDD) is a methodology that mitigates software development risks due to change, at the cost of adding comprehensive and continuous testing to the development process. Testing frameworks tailored for CSE/HPC, like pFUnit, can lower the barriers to such testing, yet CSE software faces unique constraints foreign to the broader software engineering community. Effective testing of numerical software requires a comprehensive suite of oracles, i.e., use cases with known answers, as well as robust estimates for the unavoidable numerical errors associated with implementation in finite-precision arithmetic. At first glance these concerns often seem exceedingly challenging or even insurmountable for real-world scientific applications. However, we argue that this common perception is incorrect and driven by (1) a conflation between model validation and software verification and (2) the general tendency in the scientific community to develop relatively coarse-grained, large procedures that compound numerous algorithmic steps. We believe TDD can be applied routinely to numerical software if developers pursue fine-grained implementations that permit testing, neatly side-stepping concerns about needing nontrivial oracles as well as the accumulation of errors. We present an example of a successful, complex legacy CSE/HPC code whose development process shares some aspects with TDD, which we contrast with current and potential capabilities. A mix of our proposed methodology and framework support should enable everyday use of TDD by CSE-expert developers.
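
    To make the argument concrete: a fine-grained numerical kernel admits both an analytic oracle and an a priori error bound, so TDD-style tests need not demand exact answers. A sketch using Python's unittest as a stand-in for the Fortran framework pFUnit (the kernel, bounds, and tolerances are illustrative):

    import unittest
    import numpy as np

    def trapezoid(f, a, b, n):
        """Composite trapezoid rule: a small, independently testable kernel."""
        x = np.linspace(a, b, n + 1)
        y = f(x)
        h = (b - a) / n
        return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

    class TestTrapezoid(unittest.TestCase):
        def test_analytic_oracle_within_error_bound(self):
            # oracle: integral of sin on [0, pi] is exactly 2; the rule's error
            # bound (b-a)*h^2*max|f''|/12 says how close the kernel must come
            n = 1000
            bound = np.pi * (np.pi / n) ** 2 / 12.0
            self.assertLess(abs(trapezoid(np.sin, 0.0, np.pi, n) - 2.0), 2.0 * bound)

        def test_second_order_convergence(self):
            # halving h should shrink the error by a factor of about 4
            e1 = abs(trapezoid(np.sin, 0.0, np.pi, 100) - 2.0)
            e2 = abs(trapezoid(np.sin, 0.0, np.pi, 200) - 2.0)
            self.assertAlmostEqual(e1 / e2, 4.0, delta=0.2)

    if __name__ == "__main__":
        unittest.main()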

  20. Adaptive Offset Correction for Intracortical Brain Computer Interfaces

    PubMed Central

    Homer, Mark L.; Perge, János A.; Black, Michael J.; Harrison, Matthew T.; Cash, Sydney S.; Hochberg, Leigh R.

    2014-01-01

    Intracortical brain computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user's ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called MOCA, was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors (10.6 ± 10.1%; p < 0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs. PMID:24196868

  1. Adaptive offset correction for intracortical brain-computer interfaces.

    PubMed

    Homer, Mark L; Perge, Janos A; Black, Michael J; Harrison, Matthew T; Cash, Sydney S; Hochberg, Leigh R

    2014-03-01

    Intracortical brain-computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user's ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called multiple offset correction algorithm (MOCA), was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors (10.6 ± 10.1%; p < 0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs.
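
    The two records above describe the same algorithm. A caricature of the idea in Python (the update rule, window length, and penalty below are assumptions for illustration, not the published penalized-maximum-likelihood estimator): track the innovations, i.e. the residuals between incoming observations and the current offset estimate, and periodically nudge the offset toward their mean while a penalty keeps it near the calibration value.

    import numpy as np

    def adaptive_offset_update(b, innovations, lam=0.7, ridge=1e-2):
        """Move the observation-offset estimate toward the recent mean
        innovation (exponential forgetting), with an L2-style penalty that
        shrinks it back toward the calibration value of zero."""
        drift = np.mean(innovations)            # systematic residual = offset drift
        b_new = b + (1.0 - lam) * drift
        return b_new / (1.0 + ridge)

    # toy loop: a scalar "neural feature" whose baseline jumps mid-session
    rng = np.random.default_rng(1)
    b_hat, window = 0.0, []
    for t in range(400):
        true_offset = 0.0 if t < 200 else 2.0
        z = true_offset + rng.normal()          # observation with drifting baseline
        window.append(z - b_hat)                # innovation w.r.t. current estimate
        if len(window) == 20:                   # update every 20 samples (~seconds)
            b_hat = adaptive_offset_update(b_hat, np.array(window))
            window.clear()
    print("estimated offset after the shift:", round(b_hat, 2))   # tracks ~2.0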

  2. Comparison of diagnosis of early retinal lesions of diabetic retinopathy between a computer system and human experts.

    PubMed

    Lee, S C; Lee, E T; Kingsley, R M; Wang, Y; Russell, D; Klein, R; Warn, A

    2001-04-01

    To investigate whether a computer vision system is comparable with humans in detecting early retinal lesions of diabetic retinopathy using color fundus photographs. A computer system has been developed using image processing and pattern recognition techniques to detect early lesions of diabetic retinopathy (hemorrhages and microaneurysms, hard exudates, and cotton-wool spots). Color fundus photographs obtained from American Indians in Oklahoma were used in developing and testing the system. A set of 369 color fundus slides were used to train the computer system using 3 diagnostic categories: lesions present, questionable, or absent (Y/Q/N). A different set of 428 slides were used to test and evaluate the system, and its diagnostic results were compared with those of 2 human experts-the grader at the University of Wisconsin Fundus Photograph Reading Center (Madison) and a general ophthalmologist. The experiments included comparisons using 3 (Y/Q/N) and 2 diagnostic categories (Y/N) (questionable cases excluded in the latter). In the training phase, the agreement rates, sensitivity, and specificity in detecting the 3 lesions between the retinal specialist and the computer system were all above 90%. The kappa statistics were high (0.75-0.97), indicating excellent agreement between the specialist and the computer system. In the testing phase, the results obtained between the computer system and human experts were consistent with those of the training phase, and they were comparable with those between the human experts. The performance of the computer vision system in diagnosing early retinal lesions was comparable with that of human experts. Therefore, this mobile, electronically easily accessible, and noninvasive computer system, could become a mass screening tool and a clinical aid in diagnosing early lesions of diabetic retinopathy.

  3. An Analysis of Cloud Computing with Amazon Web Services for the Atmospheric Science Data Center

    NASA Astrophysics Data System (ADS)

    Gleason, J. L.; Little, M. M.

    2013-12-01

    NASA science and engineering efforts rely heavily on compute and data handling systems. The nature of NASA science data is such that it is not restricted to NASA users; instead it is widely shared across a globally distributed user community including scientists, educators, policy decision makers, and the public. Therefore NASA science computing is a candidate use case for cloud computing, where compute resources are outsourced to an external vendor. Amazon Web Services (AWS) is a commercial cloud computing service developed to use excess computing capacity at Amazon, and potentially provides an alternative to costly and potentially underutilized dedicated acquisitions whenever NASA scientists or engineers require additional data processing. AWS desires to provide a simplified avenue for NASA scientists and researchers to share large, complex data sets with external partners and the public. AWS has been extensively used by JPL for a wide range of computing needs and was previously tested on a NASA Agency basis during the Nebula testing program. Its ability to support the needs of the Langley Science Directorate, and the maturity that would come with such use, must be evaluated by integrating it with real-world operational workloads across NASA. The strengths and weaknesses of this architecture and its ability to support general science and engineering applications were demonstrated during the previous testing. The Langley Office of the Chief Information Officer, in partnership with the Atmospheric Science Data Center (ASDC), has established a pilot business interface to utilize AWS cloud computing resources on an organization- and project-level, pay-per-use model. This poster discusses an effort to evaluate the feasibility of the pilot business interface from a project-level perspective by specifically using a processing scenario involving the Clouds and the Earth's Radiant Energy System (CERES) project.

  4. Parallel-vector unsymmetric Eigen-Solver on high performance computers

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Jiangning, Qin

    1993-01-01

    The popular QR algorithm for computing all eigenvalues of an unsymmetric matrix is reviewed. Among the basic components of the QR algorithm, this study concluded that the reduction of an unsymmetric matrix to Hessenberg form (before applying the QR algorithm itself) can be done effectively by exploiting the vector speed and multiple processors offered by modern high-performance computers. Numerical examples from several test cases have indicated that the proposed parallel-vector algorithm for converting a given unsymmetric matrix to Hessenberg form offers computational advantages over the existing algorithm. The time saving obtained by the proposed methods increases with problem size.
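
    For context, the reduction step the abstract targets is conventionally done with Householder reflections applied as similarity transforms; a plain serial sketch follows (the parallel-vector organization is the paper's contribution and is not reproduced here).

    import numpy as np

    def hessenberg(A):
        """Reduce a square matrix to upper Hessenberg form by Householder
        similarity transforms (serial textbook version)."""
        H = A.astype(float).copy()
        n = H.shape[0]
        for k in range(n - 2):
            x = H[k + 1:, k]
            v = x.copy()
            v[0] += np.copysign(np.linalg.norm(x), x[0])
            nv = np.linalg.norm(v)
            if nv == 0.0:
                continue
            v /= nv
            # apply P = I - 2 v v^T from the left and the right
            H[k + 1:, k:] -= 2.0 * np.outer(v, v @ H[k + 1:, k:])
            H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v)
        return H

    A = np.random.default_rng(0).normal(size=(6, 6))
    H = hessenberg(A)
    print(np.allclose(np.tril(H, -2), 0.0))                 # Hessenberg structure
    print(np.allclose(sorted(np.linalg.eigvals(H).real),
                      sorted(np.linalg.eigvals(A).real)))   # spectrum preserved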

  5. Large Scale Flutter Data for Design of Rotating Blades Using Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2012-01-01

    A procedure to compute flutter boundaries of rotating blades is presented, based on (a) the Navier-Stokes equations and (b) a frequency-domain method compatible with industry practice. The procedure is first validated against (a) unsteady loads from a flapping-wing experiment and (b) a flutter boundary from a fixed-wing experiment. Large-scale flutter computation is then demonstrated for a rotating blade: (a) a single job submission script; (b) a flutter boundary in 24 hours wall-clock time with 100 cores; (c) linear scalability with the number of cores. Tested with 1000 cores, the procedure produced data for 10 flutter boundaries in 25 hours. Further wall-clock speed-up is possible by performing parallel computations within each case.

  6. Bifilar analysis users manual, volume 2

    NASA Technical Reports Server (NTRS)

    Cassarino, S. J.

    1980-01-01

    The digital computer program developed to study the vibration response of a coupled rotor/bifilar/airframe system is described. The theoretical development of the rotor/airframe system equations of motion is provided, and the fuselage and bifilar absorber equations of motion are discussed. The modular block approach used in the make-up of this computer program is described, along with the input data needed to run the rotor and bifilar absorber analyses. Sample output formats are presented and discussed. Results are presented for four test cases that exercise the major logic paths of the computer program. The overall program structure and the FORTRAN subroutines are described in detail.

  7. Extending the eigCG algorithm to nonsymmetric Lanczos for linear systems with multiple right-hand sides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Rehim, A M; Stathopoulos, Andreas; Orginos, Kostas

    2014-08-01

    The technique that was used to build the EigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, similarly to the symmetric case, we can build an algorithm that is capable of computing a few smallest magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of EigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental EigBiCG followed by appropriately restarted and deflated BiCGStab provides a competitive method for systems with multiple right-hand sides.

  8. Acoustic environmental accuracy requirements for response determination

    NASA Technical Reports Server (NTRS)

    Pettitt, M. R.

    1983-01-01

    A general purpose computer program was developed for the prediction of vehicle interior noise. This program, named VIN, has both modal and statistical energy analysis capabilities for structural/acoustic interaction analysis. The analytic models and their computer implementation were verified through simple test cases with well-defined experimental results. The model was also applied in a space shuttle payload bay launch acoustics prediction study. The computer program processes large and small problems with equal efficiency because all arrays are dynamically sized by program input variables at run time. A data base is built and easily accessed for design studies. The data base significantly reduces the computational costs of such studies by allowing the reuse of the still-valid calculated parameters of previous iterations.

  9. Revisiting the Rossby Haurwitz wave test case with contour advection

    NASA Astrophysics Data System (ADS)

    Smith, Robert K.; Dritschel, David G.

    2006-09-01

    This paper re-examines a basic test case used for spherical shallow-water numerical models, and underscores the need for accurate, high resolution models of atmospheric and ocean dynamics. The Rossby-Haurwitz test case, first proposed by Williamson et al. [D.L. Williamson, J.B. Drake, J.J. Hack, R. Jakob, P.N. Swarztrauber, A standard test set for numerical approximations to the shallow-water equations on the sphere, J. Comput. Phys. (1992) 221-224], has been examined using a wide variety of shallow-water models in previous papers. Here, two contour-advective semi-Lagrangian (CASL) models are considered, and results are compared with previous test results. We go further by modifying this test case in a simple way to initiate a rapid breakdown of the basic wave state. This breakdown is accompanied by the formation of sharp potential vorticity gradients (fronts), placing far greater demands on the numerics than the original test case does. We also examine other dynamical fields besides the height and potential vorticity, to assess how well the models deal with gravity waves. Such waves are sensitive to the presence or absence of sharp potential vorticity gradients, as well as to numerical parameter settings. In particular, large time steps (convenient for semi-Lagrangian schemes) can seriously affect gravity waves and can also have an adverse impact on the primary fields of height and velocity. These problems are exacerbated by a poor resolution of potential vorticity gradients.

  10. Using computational fluid dynamics to test functional and ecological hypotheses in fossil taxa

    NASA Astrophysics Data System (ADS)

    Rahman, Imran

    2016-04-01

    Reconstructing how ancient organisms moved and fed is a major focus of study in palaeontology. Traditionally, this has been hampered by a lack of objective data on the functional morphology of extinct species, especially those without a clear modern analogue. However, cutting-edge techniques for characterizing specimens digitally and in three dimensions, coupled with state-of-the-art computer models, now provide a robust framework for testing functional and ecological hypotheses even in problematic fossil taxa. One such approach is computational fluid dynamics (CFD), a method for simulating fluid flows around objects that has primarily been applied to complex engineering-design problems. Here, I will present three case studies of CFD applied to fossil taxa, spanning a range of specimen sizes, taxonomic groups and geological ages. First, I will show how CFD enabled a rigorous test of hypothesized feeding modes in an enigmatic Ediacaran organism with three-fold symmetry, revealing previously unappreciated complexity of pre-Cambrian ecosystems. Second, I will show how CFD was used to evaluate hydrodynamic performance and feeding in Cambrian stem-group echinoderms, shedding light on the probable feeding strategy of the latest common ancestor of all deuterostomes. Third, I will show how CFD allowed us to explore the link between form and function in Mesozoic ichthyosaurs. These case studies serve to demonstrate the enormous potential of CFD for addressing long-standing hypotheses for a variety of fossil taxa, opening up an exciting new avenue in palaeontological studies of functional morphology.

  11. UV-C decontamination of hand-held tablet devices in the healthcare environment using the Codonics D6000™ disinfection system.

    PubMed

    Muzslay, M; Yui, S; Ali, S; Wilson, A P R

    2018-04-09

    Mobile phones and tablet computers may be contaminated with microorganisms and become a potential reservoir for cross-transmission of pathogens between healthcare workers and patients. There is no generally accepted guidance on how to reduce contamination of mobile devices in healthcare settings. Our aim was to determine the efficacy of the Codonics D6000™ UV-C disinfection device. Daily disinfection significantly reduced contamination on screens and on protective cases (test), but not all cases (control) could be decontaminated. The median aerobic colony count on the control and test cases was 52 (IQR 33-89) cfu/25 cm² and 22 (IQR 10.5-41) cfu/25 cm², respectively, before disinfection.

  12. Improving the performance of lesion-based computer-aided detection schemes of breast masses using a case-based adaptive cueing method

    NASA Astrophysics Data System (ADS)

    Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Qian, Wei; Zheng, Bin

    2016-03-01

    Current commercialized CAD schemes have high false-positive (FP) detection rates and also correlate highly with radiologists in positive lesion detection. Thus, we recently investigated a new approach to improve the efficacy of applying CAD to assist radiologists in reading and interpreting screening mammograms. Namely, we developed a new global-feature-based CAD approach/scheme that can cue a warning sign on cases with a high risk of being positive. In this study, we investigate the possibility of fusing global-feature or case-based scores with the local or lesion-based CAD scores using an adaptive cueing method. We hypothesize that the information from the global feature extraction (features extracted from the whole breast regions) is different from, and can provide supplementary information to, the locally extracted features (computed from the segmented lesion regions only). On a large and diverse full-field digital mammography (FFDM) testing dataset with 785 cases (347 negative and 438 cancer cases with masses only), we ran our lesion-based and case-based CAD schemes "as is" on the whole dataset. To assess the supplementary information provided by the global features, we used an adaptive cueing method to adaptively adjust the original CAD-generated detection scores (Sorg) of a detected suspicious mass region based on the computed case-based score (Scase) of the case associated with this detected region. Using the adaptive cueing method, better sensitivity results were obtained at lower FP rates (<= 1 FP per image). Namely, increases in sensitivity (in the FROC curves) of up to 6.7% and 8.2% were obtained for the ROI- and case-based results, respectively.
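
    The abstract does not spell out the fusion rule, so the following is only a minimal sketch of one plausible adaptive cueing scheme in which the case-based score Scase scales the lesion-based scores Sorg up or down; the linear weighting and the parameter alpha are hypothetical, not the authors' method.

      import numpy as np

      def adaptive_cueing(s_org, s_case, alpha=0.5):
          """Adjust lesion-based CAD scores using a case-based risk score.

          s_org  : array of original lesion-based detection scores in [0, 1]
          s_case : scalar case-based (global-feature) risk score in [0, 1]
          alpha  : hypothetical cueing strength; 0 reproduces s_org exactly
          """
          s_org = np.asarray(s_org, dtype=float)
          # Boost scores in high-risk cases, suppress them in low-risk cases.
          adjusted = s_org * (1.0 + alpha * (s_case - 0.5))
          return np.clip(adjusted, 0.0, 1.0)

      # Example: the same detections cued differently by case risk.
      print(adaptive_cueing([0.3, 0.6, 0.9], s_case=0.9))  # boosted
      print(adaptive_cueing([0.3, 0.6, 0.9], s_case=0.1))  # suppressed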

  13. Unsteady Computational Tests of a Non-Equilibrium Turbulence Model

    NASA Astrophysics Data System (ADS)

    Jirasek, Adam; Hamlington, Peter; Lofthouse, Andrew; Usafa Collaboration; Cu Boulder Collaboration

    2017-11-01

    A non-equilibrium turbulence model is assessed in simulations of three practically relevant unsteady test cases: oscillating channel flow, transonic flow around an oscillating airfoil, and transonic flow around the Benchmark Super-Critical Wing. The first case is related to piston-driven flows, while the remaining cases are relevant to unsteady aerodynamics at high angles of attack and transonic speeds. Non-equilibrium turbulence effects arise in each of these cases in the form of a lag between the mean strain rate and the Reynolds stresses, resulting in reduced kinetic energy production compared to classical equilibrium turbulence models that are based on the gradient transport (or Boussinesq) hypothesis. As a result of the improved representation of unsteady flow effects, the non-equilibrium model provides substantially better agreement with available experimental data than do classical equilibrium turbulence models. This suggests that the non-equilibrium model may be ideally suited for simulations of modern high-speed, high-angle-of-attack aerodynamics problems.

  14. [Health technology assessment report: Computer-assisted Pap test for cervical cancer screening].

    PubMed

    Della Palma, Paolo; Moresco, Luca; Giorgi Rossi, Paolo

    2012-01-01

    HEALTH PROBLEM: Cervical cancer is a disease that is highly preventable by means of Pap test screening for precancerous lesions, which can be easily treated. Furthermore, in the near future, control of the disease will be enhanced by vaccination, which prevents infection by the human papillomavirus types that cause the vast majority of cervical cancers. The effectiveness of screening in drastically reducing cervical cancer incidence has been clearly demonstrated. The epidemiology of cervical cancer in industrialised countries is now determined mostly by the Pap test coverage of the female population and by the ability of health systems to assure appropriate follow-up after an abnormal Pap test. Today there are two fully automated systems for the computer-assisted Pap test: the BD FocalPoint and the Hologic Imager. Recently, the Hologic Integrated Imager, a semi-automated system, was launched. The two fully automated systems are composed of a central scanner, where the machine examines the cytologic slide, and of one or more review stations, where the cytologists analyze the previously scanned slides. The software used by the two systems identifies the fields of interest so that the cytologists can look only at those points, automatically pointed out by the review station. Furthermore, the FocalPoint system classifies the slides according to their level of risk of containing signs of relevant lesions. Those in the upper classes--about one fifth of the slides--are labelled as « further review », while those in the lower level of risk, i.e. slides that have such a low level of risk that they can be considered as negative with no human review, are labelled as « no further review ». The aim of the computer-assisted Pap test is to reduce slide examination time and to increase productivity. Furthermore, the number of errors due to lack of attention may decrease. Both systems can be applied to liquid-based cytology, while only the BD FocalPoint can be used on conventional smears. Cytology screening has some critical points: there is a shortage of cytologists/cytotechnicians; the quality strongly depends on the experience and ability of the cytologist; there is a subjective component in the cytological diagnosis; and in highly screened populations, the prevalence of lesions is very low and the activity of cytologists is very monotonous. On the other hand, a progressive shift to molecular screening using the HPV-DNA test as the primary screening test is very likely in the near future; cytology will be used as a triage test, dramatically reducing the number of slides to process and increasing the prevalence of lesions in those Pap tests. In this Report we assume that the diagnostic accuracy of the computer-assisted Pap test is equal to the accuracy of the manual Pap test and, consequently, that screening using the computer-assisted Pap test has the same efficacy in reducing cervical cancer incidence and mortality. Under this assumption, the effectiveness/benefit/utility is the same for the two screening modes, i.e. the economic analysis will be a cost minimization study. Furthermore, the screening process is identical for the two modalities in all phases except slide interpretation. The cost minimization analysis will therefore be limited to the only phase differing between the two modes, i.e. the study will be a differential cost analysis between a labour-intensive strategy (traditional Pap test) and a technology-intensive strategy (the computer-assisted Pap test).
Briefly, the objectives of this HTA Report are: to determine the break-even point of computer-assisted Pap test systems, i.e. the volume of slides processed per year at which putting in place a computer-assisted Pap test system becomes economically convenient; to quantify the cost per Pap test in different scenarios according to screening centre activity volume, productivity of cytologists, and type of cytology (conventional smear or liquid-based, fully automated or semi-automated computer-assisted); to analyse the computer-assisted Pap test in the Italian context, through a survey of the centres using the technology, collecting data useful for the sensitivity analysis of the economic evaluation; to evaluate the acceptability of the technology in the screening services; to evaluate the organizational and financial impact of the computer-assisted Pap test in different scenarios; and to illustrate the ideal organization for implementing the computer-assisted Pap test in terms of volume of activity, productivity, and human and technological resources. To produce this Report, the following process was adopted: application to the Ministry of Health for the grant « Analysis of the impact of professional involvement in evidence generation for the HTA process »; within this project, the sub-project « Cost effectiveness evaluation of the computer-assisted Pap test in the Italian screening programmes » was financed; constitution of the Working Group, which included the project coordinator, the principal investigator, and the health economist; identification of the centres using the computer-assisted Pap test that had published scientific reports on the subject; and identification of the Consulting Committee (stakeholders), which included screening programme managers, pathologists, economists, health policy-makers, citizen organizations, and manufacturers. Once the evaluation was concluded, a plenary meeting of the Working Group and the Consulting Committee was held. The Working Group drafted the final version of this Report, which took into account the comments received. The fully automated computer-assisted Pap test has an important financial and organizational impact on screening programmes. The assessment of this health technology reached the following conclusions: according to the survey results, after some distrust, cytologists accepted the use of the machine and appreciated the reduction in interpretation time and the reliability in identifying the fields of interest; from an economic point of view, the automated computer-assisted Pap test can be convenient only with conventional smears, if the screening centre has a volume of more than 49,000 slides/year and cytologist productivity increases about threefold. It must be highlighted that adopting the automated Pap test is not in itself sufficient to reach such an increase in productivity; the laboratory must be organised or re-organised to optimise the use of the review stations and of person-time. In the case of liquid-based cytology, the adoption of the automated computer-assisted Pap test can only increase the costs. In fact, liquid-based cytology increases the cost of consumable materials but reduces the interpretation time, even in manual screening. Consequently, the reduction of human costs is smaller in the case of computer-assisted screening.
Liquid-based cytology has other implications and advantages not linked to the use of the computer-assisted Pap test that should be taken into account and are beyond the scope of this Report. Given that the computer-assisted Pap test reduces human costs, it may be more advantageous where the cost of cytologists is higher. Given the relatively small volume of activity of screening centres in Italy, the computer-assisted Pap test may be reasonable for a network using only one central scanner and several remote review stations; the use of the automated computer-assisted Pap test only for quality control in a single centre is not economically sustainable. In this case as well, several centres, for example at the regional level, may form a consortium to reach a volume of slides sufficient to achieve the break-even point. Regarding the use of a machine rather than human intelligence to interpret the slides, some ethical issues were initially raised, but both the scientific community and healthcare professionals have accepted this technology. The identification of fields of interest by the machine is highly reproducible, reducing subjectivity in the diagnostic process. The Hologic system always includes a check by the human eye, while the FocalPoint system identifies about one fifth of the slides as No Further Review. Several studies, some of which were conducted in Italy, confirmed the reliability of this classification. There is still some resistance to accepting the practice of No Further Review; a check of previous slides and clinical data can be useful to make the cytologist and the clinician more confident. The computer-assisted automated Pap test may be introduced only if there is a need to increase the volume of slides screened to cover the screening target population and sufficient human resources are not available. Switching a programme using conventional slides to automatic scanning can lead to a reduction in costs only if the volume exceeds 49,000 slides per year and cytologist productivity is optimised to more than 20,000 slides per year. At a productivity of 15,000 slides per year or fewer, the automated computer-assisted Pap test cannot be convenient. Switching from manual screening with conventional slides to automatic scanning with liquid-based cytology cannot generate any economic saving, but the system could increase output with a given number of staff. The transition from manual to computer-assisted automated screening of liquid-based cytology will not generate savings, and the increase in productivity will be lower than that of the switch from manual/conventional to automated/conventional. The use of biologists or pathologists as cytologists is more costly than the use of cytoscreeners. Given that the automated computer-assisted Pap test reduces human resource costs, its adoption in a model using only biologists and pathologists for screening is more economically advantageous. (ABSTRACT TRUNCATED)
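
    The Report's break-even reasoning reduces to comparing the annualized cost of the scanner and review stations against the labour saved per slide. The sketch below shows the shape of that computation with entirely hypothetical cost figures; the Report's threshold of about 49,000 slides/year comes from its survey data, not from these numbers.

      def break_even_volume(annual_system_cost, manual_cost_per_slide,
                            assisted_cost_per_slide):
          """Slides/year at which computer-assisted reading becomes cheaper.

          All arguments are hypothetical inputs: the annualized cost of the
          scanner plus review stations, and the per-slide labour cost under
          manual and computer-assisted reading.
          """
          saving_per_slide = manual_cost_per_slide - assisted_cost_per_slide
          if saving_per_slide <= 0:
              raise ValueError("assisted reading must save labour per slide")
          return annual_system_cost / saving_per_slide

      # Hypothetical figures, for illustration only (EUR).
      print(break_even_volume(annual_system_cost=150_000,
                              manual_cost_per_slide=6.0,
                              assisted_cost_per_slide=3.0))  # -> 50000.0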

  15. Direct numerical simulation of particulate flows with an overset grid method

    NASA Astrophysics Data System (ADS)

    Koblitz, A. R.; Lovett, S.; Nikiforakis, N.; Henshaw, W. D.

    2017-08-01

    We evaluate an efficient overset grid method for two-dimensional and three-dimensional particulate flows for small numbers of particles at finite Reynolds number. The rigid particles are discretised using moving overset grids overlaid on a Cartesian background grid. This allows for strongly-enforced boundary conditions and local grid refinement at particle surfaces, thereby accurately capturing the viscous boundary layer at modest computational cost. The incompressible Navier-Stokes equations are solved with a fractional-step scheme which is second-order-accurate in space and time, while the fluid-solid coupling is achieved with a partitioned approach including multiple sub-iterations to increase stability for light, rigid bodies. Through a series of benchmark studies we demonstrate the accuracy and efficiency of this approach compared to other boundary conformal and static grid methods in the literature. In particular, we find that fully resolving boundary layers at particle surfaces is crucial to obtain accurate solutions to many common test cases. With our approach we are able to compute accurate solutions using as little as one third the number of grid points as uniform grid computations in the literature. A detailed convergence study shows a 13-fold decrease in CPU time over a uniform grid test case whilst maintaining comparable solution accuracy.

  16. Some practical turbulence modeling options for Reynolds-averaged full Navier-Stokes calculations of three-dimensional flows

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.

    1993-01-01

    New turbulence modeling options recently implemented for the 3-D version of Proteus, a Reynolds-averaged compressible Navier-Stokes code, are described. The implemented turbulence models include: the Baldwin-Lomax algebraic model, the Baldwin-Barth one-equation model, the Chien k-epsilon model, and the Launder-Sharma k-epsilon model. Features of this turbulence modeling package include: well documented and easy to use turbulence modeling options, uniform integration of turbulence models from different classes, automatic initialization of turbulence variables for calculations using one- or two-equation turbulence models, treatment of multiple solid boundaries, and a fully vectorized L-U solver for one- and two-equation models. Validation test cases include the incompressible and compressible flat plate turbulent boundary layers, turbulent developing S-duct flow, and glancing shock wave/turbulent boundary layer interaction. Good agreement is obtained between the computational results and experimental data. The sensitivity of the compressible turbulent solutions to the method of y+ computation, the turbulent length scale correction, and some compressibility corrections is examined in detail. The test cases show that the highly optimized one- and two-equation turbulence models can be used in routine 3-D Navier-Stokes computations with no significant increase in CPU time as compared with the Baldwin-Lomax algebraic model.
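
    Because the abstract flags sensitivity to the method of y+ computation, the standard definition is worth restating: y+ = u_tau * y / nu, with friction velocity u_tau = sqrt(tau_w / rho). A minimal check with arbitrary illustrative inputs:

      import math

      def y_plus(y, tau_wall, rho, nu):
          """Nondimensional wall distance y+ = u_tau * y / nu."""
          u_tau = math.sqrt(tau_wall / rho)  # friction velocity, m/s
          return u_tau * y / nu

      # Arbitrary example values: first grid point 1e-5 m off the wall.
      print(y_plus(y=1e-5, tau_wall=2.0, rho=1.2, nu=1.5e-5))  # ~0.86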

  17. Development of a PC-based ground support system for a small satellite instrument

    NASA Astrophysics Data System (ADS)

    Deschambault, Robert L.; Gregory, Philip R.; Spenler, Stephen; Whalen, Brian A.

    1993-11-01

    The importance of effective ground support for the remote control and data retrieval of a satellite instrument cannot be overstated. Problems with ground support may include the need to base personnel at a ground tracking station for extended periods, and the delay between the instrument observation and the processing of the data by the science team. Flexible solutions to such problems in the case of small satellite systems are provided by using low-cost, powerful personal computers and off-the-shelf software for data acquisition and processing, and by using the Internet as a communication pathway to enable scientists to view and manipulate satellite data in real time at any ground location. The personal computer based ground support system is illustrated for the case of the cold plasma analyzer flown on the Freja satellite. Commercial software was used as building blocks for writing the ground support equipment software. Several levels of hardware support, including unit tests and development, functional tests, and integration, were provided by portable and desktop personal computers. Satellite stations in Saskatchewan and Sweden were linked to the science team via phone lines and the Internet, which provided remote control through a central point. These successful strategies will be used on future small satellite space programs.

  18. GPU-based ultra-fast dose calculation using a finite size pencil beam model.

    PubMed

    Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B

    2009-10-21

    Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using a NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.
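
    The data-parallel structure that makes GPUs attractive here is visible in a scalar sketch of pencil-beam superposition: each voxel's dose is an independent sum over beamlets, so voxels map naturally onto GPU threads. The Gaussian kernel below is a placeholder for illustration, not the FSPB model of the paper.

      import numpy as np

      def dose_superposition(voxels, beamlets, weights, sigma=5.0):
          """Toy pencil-beam superposition: D(v) = sum_b w_b * k(|v - b|).

          voxels   : (N, 3) voxel centre coordinates (mm)
          beamlets : (M, 3) beamlet reference points (mm)
          weights  : (M,) beamlet intensities
          sigma    : placeholder Gaussian kernel width (mm), not the FSPB kernel
          """
          # (N, M) distances; each row is independent -> one GPU thread per voxel.
          d = np.linalg.norm(voxels[:, None, :] - beamlets[None, :, :], axis=2)
          kernel = np.exp(-0.5 * (d / sigma) ** 2)
          return kernel @ weights

      rng = np.random.default_rng(0)
      vox = rng.uniform(0, 100, size=(1000, 3))
      blt = rng.uniform(0, 100, size=(50, 3))
      print(dose_superposition(vox, blt, np.ones(50)).shape)  # (1000,)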

  19. An Open Architecture to Support Social and Health Services in a Smart TV Environment.

    PubMed

    Costa, Carlos Rivas; Anido-Rifon, Luis E; Fernandez-Iglesias, Manuel J

    2017-03-01

    To design, implement, and test a solution to provide social and health services for the elderly at home, based on smart TV technologies, with access to all services. The architecture proposed is based on an open software platform and standard personal computing hardware. This provides great flexibility to develop new applications over the underlying infrastructure or to integrate new devices, for instance to monitor a broad range of vital signs in those cases where home monitoring is required. An actual system was designed, implemented, and deployed as a proof of concept. Applications range from social network clients to vital signs monitoring, and from interactive TV contests to conventional online care applications such as medication reminders or telemedicine. In both cases, the results have been very positive, confirming the initial perception of the TV as a convenient, easy-to-use technology to provide social and health care. The TV set is a much more familiar computing interface for most senior users, and as a consequence, smart TVs become a most convenient solution for the design and implementation of applications and services targeted to this user group. This proposal has been tested in a real setting with 62 senior people in their homes. Users included both individuals with experience using computers and others reluctant to use them.

  20. A general method for calculating three-dimensional compressible laminar and turbulent boundary layers on arbitrary wings

    NASA Technical Reports Server (NTRS)

    Cebeci, T.; Kaups, K.; Ramsey, J. A.

    1977-01-01

    The method described utilizes a nonorthogonal coordinate system for boundary-layer calculations. It includes a geometry program that represents the wing analytically, and a velocity program that computes the external velocity components from a given experimental pressure distribution when the external velocity distribution is not computed theoretically. The boundary-layer method is general, however, and can also be used with an external velocity distribution computed theoretically. Several test cases were computed by this method, and the results were checked against other numerical calculations and against experiments when available. A typical computation time (CPU) on an IBM 370/165 computer for one surface of a wing, which roughly consists of 30 spanwise stations and 25 streamwise stations with 30 points across the boundary layer, is less than 30 seconds for an incompressible flow and a little more for a compressible flow.

  1. Probability Distributions of Minkowski Distances between Discrete Random Variables.

    ERIC Educational Resources Information Center

    Schroger, Erich; And Others

    1993-01-01

    Minkowski distances are used to indicate the similarity of two vectors in an N-dimensional space. The paper shows how to compute the probability function, the expectation, and the variance for Minkowski distances and for the special cases of city-block distance and Euclidean distance. Critical values for tests of significance are presented in tables. (SLD)
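
    For reference, the Minkowski distance of order p reduces to the city-block distance at p = 1 and the Euclidean distance at p = 2; a direct implementation:

      def minkowski(x, y, p):
          """Minkowski distance of order p between equal-length vectors."""
          if len(x) != len(y) or p < 1:
              raise ValueError("need equal-length vectors and p >= 1")
          return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

      x, y = [1, 2, 3], [4, 6, 3]
      print(minkowski(x, y, 1))  # city-block distance: 7.0
      print(minkowski(x, y, 2))  # Euclidean distance: 5.0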

  2. Error Detection in Mechanized Classification Systems

    ERIC Educational Resources Information Center

    Hoyle, W. G.

    1976-01-01

    When documentary material is indexed by a mechanized classification system, and the results judged by trained professionals, the number of documents in disagreement, after suitable adjustment, defines the error rate of the system. In a test case disagreement was 22 percent and, of this 22 percent, the computer correctly identified two-thirds of…

  3. Equitability of Treatment in Army Judicial Proceedings (ETAJUP)

    DTIC Science & Technology

    1993-12-01

    difference associated with Black offenders has a MIN of -0.8 percent at the "More than 75%" in charges level. (c) Comment on Factor. In general, most cases...system. The analysis seeks to identify the most significant variables drawn from court-martial case records which distinguish membership in these groups...means and variances of the two classes are computed for each variable. The variable with the most statistically significant t-test is selected to

  4. Multi-GPU implementation of a VMAT treatment plan optimization algorithm.

    PubMed

    Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B

    2015-06-01

    Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in the PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and the MP are implemented on the CPU or a single GPU owing to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP. A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The authors' multi-GPU implementation can finish the optimization process within ∼1 min for the H&N patient case. S1 leads to an inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of the VMAT cases tested in this paper, the optimization time needed in a commercial TPS system on a CPU was found to be on the order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality.
The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
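
    The core data-layout decision described in the abstract (COO storage on the CPU, per-beam-angle CSR submatrices on the GPUs) can be sketched on the CPU side with SciPy; the device placement itself is omitted, and the round-robin angle-to-shard assignment below is a simplification of the authors' four-way split.

      import numpy as np
      from scipy.sparse import coo_matrix

      def split_ddc_by_angle(ddc_coo, beam_angle_of_col, n_gpus=4):
          """Split a sparse DDC matrix into per-GPU CSR shards by beam angle.

          ddc_coo           : SciPy COO matrix, rows = voxels, cols = beamlets
          beam_angle_of_col : (n_cols,) beam-angle index of each beamlet column
          In the paper each shard lives on one GPU; this CPU-side sketch only
          shows the partitioning, with a simplified round-robin assignment.
          """
          csc = ddc_coo.tocsc()  # column slicing is cheap in CSC format
          shards = []
          for g in range(n_gpus):
              cols = np.where(beam_angle_of_col % n_gpus == g)[0]
              shards.append(csc[:, cols].tocsr())
          return shards

      m = coo_matrix(np.eye(8))
      angles = np.arange(8)  # one beamlet per angle, purely for illustration
      print([s.shape for s in split_ddc_by_angle(m, angles)])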

  5. Prediction of the backflow and recovery regions in the backward facing step at various Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Michelassi, V.; Durbin, P. A.; Mansour, N. N.

    1996-01-01

    A four-equation model of turbulence is applied to the numerical simulation of flows with massive separation induced by a sudden expansion. The model constants are a function of the flow parameters, and two different formulations for these functions are tested. The results are compared with experimental data for a high Reynolds-number case and with experimental and DNS data for a low Reynolds-number case. The computations show that the recovery region downstream of the massive separation is properly modeled only in the high-Re case. The problems in the low-Re case stem from the gradient diffusion hypothesis, which underestimates the turbulent diffusion.

  6. Convergence issues in domain decomposition parallel computation of hovering rotor

    NASA Astrophysics Data System (ADS)

    Xiao, Zhongyun; Liu, Gang; Mou, Bin; Jiang, Xiong

    2018-05-01

    The implicit LU-SGS time integration algorithm has been widely used in parallel computation in spite of its lack of information from adjacent domains. When applied to parallel computation of hovering rotor flows in a rotating frame, it brings about convergence issues. To remedy the problem, three LU factorization-based implicit schemes (LU-SGS, DP-LUR, and HLU-SGS) are investigated comparatively. A test case of pure grid rotation is designed to verify these algorithms, and it shows that the LU-SGS algorithm introduces errors on boundary cells. When partition boundaries are circumferential, errors arise in proportion to grid speed, accumulate along with the rotation, and ultimately lead to computational failure. Meanwhile, the DP-LUR and HLU-SGS methods show good convergence owing to their boundary treatment, which is desirable in domain decomposition parallel computations.

  7. Determination of lung segments in computed tomography images using the Euclidean distance to the pulmonary artery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoecker, Christina; Moltz, Jan H.; Lassen, Bianca

    Purpose: Computed tomography (CT) imaging is the modality of choice for lung cancer diagnostics. With the increasing number of lung interventions on the sublobar level in recent years, determining and visualizing pulmonary segments in CT images and, in oncological cases, reliable segment-related information about the location of tumors have become increasingly desirable. Computer-assisted identification of lung segments in CT images is the subject of this work. Methods: The authors present a new interactive approach for the segmentation of lung segments that uses the Euclidean distance of each point in the lung to the segmental branches of the pulmonary artery. The aim is to analyze the potential of the method. Detailed manual pulmonary artery segmentations are used to achieve the best possible segment approximation results. A detailed description of the method and its evaluation on 11 CT scans from clinical routine are given. Results: An accuracy of 2-3 mm is measured for the segment boundaries computed by the pulmonary artery-based method. On average, maximum deviations of 8 mm are observed. 135 intersegmental pulmonary veins detected in the 11 test CT scans serve as reference data. Furthermore, a comparison of the presented pulmonary artery-based approach to a similar approach that uses the Euclidean distance to the segmental branches of the bronchial tree is presented. It shows a significantly higher accuracy for the pulmonary artery-based approach in lung regions at least 30 mm distal to the lung hilum. Conclusions: A pulmonary artery-based determination of lung segments in CT images is promising. In the tests, the pulmonary artery-based determination has been shown to be superior to the bronchial tree-based determination. The suitability of the segment approximation method for application in the planning of segment resections in clinical practice has already been verified in experimental cases. However, automation of the method, accompanied by an evaluation on a larger number of test cases, is required before application in the daily clinical routine.
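
    The segment-approximation rule itself is simple to state: each lung voxel is assigned to the segmental artery branch whose centerline is nearest in Euclidean distance. A minimal nearest-neighbour sketch, with synthetic stand-ins for the centerline points and labels that a real segmentation would provide:

      import numpy as np
      from scipy.spatial import cKDTree

      def assign_segments(lung_voxels, branch_points, branch_labels):
          """Label each lung voxel with the nearest segmental artery branch.

          lung_voxels   : (N, 3) voxel coordinates inside the lung mask (mm)
          branch_points : (M, 3) centerline points of segmental branches (mm)
          branch_labels : (M,) segment id of each centerline point
          """
          tree = cKDTree(branch_points)
          _, nearest = tree.query(lung_voxels)  # index of closest branch point
          return branch_labels[nearest]

      rng = np.random.default_rng(1)
      voxels = rng.uniform(0, 100, size=(500, 3))   # synthetic lung voxels
      points = rng.uniform(0, 100, size=(40, 3))    # synthetic centerlines
      labels = rng.integers(1, 11, size=40)         # ten segments per lung
      print(np.bincount(assign_segments(voxels, points, labels)))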

  8. [Computer-assisted education in problem-solving in neurology; a randomized educational study].

    PubMed

    Weverling, G J; Stam, J; ten Cate, T J; van Crevel, H

    1996-02-24

    To determine the effect of computer-based medical teaching (CBMT) as a supplementary method to teach clinical problem-solving during the clerkship in neurology. Randomized controlled blinded study. Academic Medical Centre, Amsterdam, the Netherlands. 103 Students were assigned at random to a group with access to CBMT and a control group. CBMT consisted of 20 computer-simulated patients with neurological diseases, and was permanently available during five weeks to students in the CBMT group. The ability to recognize and solve neurological problems was assessed with two free-response tests, scored by two blinded observers. The CBMT students scored significantly better on the test related to the CBMT cases (mean score 7.5 on a zero to 10 point scale; control group 6.2; p < 0.001). There was no significant difference on the control test not related to the problems practised with CBMT. CBMT can be an effective method for teaching clinical problem-solving, when used as a supplementary teaching facility during a clinical clerkship. The increased ability to solve problems learned by CBMT had no demonstrable effect on the performance with other neurological problems.

  9. Test Cases for a Rectangular Supercritical Wing Undergoing Pitching Oscillations

    NASA Technical Reports Server (NTRS)

    Bennett, Robert M.

    2000-01-01

    Steady and unsteady measured pressures for a Rectangular Supercritical Wing (RSW) undergoing pitching oscillations have been presented. From the several hundred compiled data points, 27 static and 36 pitching oscillation cases have been proposed as computational Test Cases to illustrate the trends with Mach number, reduced frequency, and angle of attack. The wing was designed to be a simple configuration for Computational Fluid Dynamics (CFD) comparisons. The wing had an unswept rectangular planform plus a tip of revolution, a panel aspect ratio of 2.0, a twelve-percent-thick supercritical airfoil section, and no twist. The model was tested over a wide range of Mach numbers, from 0.27 to 0.90, corresponding to low subsonic flows up to strong transonic flows. The higher Mach numbers are well beyond the design Mach number, such as might be required for flutter verification beyond cruise conditions. The pitching oscillations covered a broad range of reduced frequencies. Some early calculations for this wing are given for lifting pressure as calculated from a linear lifting surface program and from a transonic small perturbation program. The unsteady results were given primarily for a mild transonic condition at M = 0.70. For these cases the agreement with the data was only fair, possibly resulting from the omission of viscous effects. Supercritical airfoil sections are known to be sensitive to viscous effects (for example, in one cited case). Calculations using a higher-level code with the full potential equations, and with the Euler equations, have been presented for one of the same cases. The agreement around the leading edge was improved, but overall the agreement was not completely satisfactory. Typically for low-aspect-ratio rectangular wings, transonic shock waves on the wing tend to sweep forward from root to tip such that there are strong three-dimensional effects. It might also be noted that for most of the test the model was run with free transition, but a few points were taken with an added transition strip for comparison. Some unpublished results for a rigid wing of the same airfoil and planform that was tested on the pitch and plunge apparatus (PAPA) mount system showed effects of the lower-surface transition strip on flutter at the lower subsonic Mach numbers. Significant effects of a transition strip were also obtained on a wing with a thicker supercritical section on the PAPA mount system. Both of these flutter tests on the PAPA resulted in very low reduced frequencies, which may be a factor in this influence of the transition strip. However, these results indicate that correlation studies for the RSW may require some attention to the estimation of transition location to accurately treat viscous effects. In this report several Test Cases are selected to illustrate trends for a variety of different conditions, with emphasis on transonic flow effects. An overview of the model and tests is given and the standard formulary for these data is listed. Sample data points are presented in both tabular and graphical form. A complete tabulation and plotting of all the Test Cases is given. Only the static pressures and the real and imaginary parts of the first harmonic of the unsteady pressures are available. All the data for the test are available in electronic file form. The Test Cases are also available as separate electronic files.

  10. Parallel Dynamics Simulation Using a Krylov-Schwarz Linear Solution Scheme

    DOE PAGES

    Abhyankar, Shrirang; Constantinescu, Emil M.; Smith, Barry F.; ...

    2016-11-07

    Fast dynamics simulation of large-scale power systems is a computational challenge because of the need to solve a large set of stiff, nonlinear differential-algebraic equations at every time step. The main bottleneck in dynamic simulations is the solution of a linear system during each nonlinear iteration of Newton's method. In this paper, we present a parallel Krylov-Schwarz linear solution scheme that uses the Krylov subspace-based iterative linear solver GMRES with an overlapping restricted additive Schwarz preconditioner. Performance tests of the proposed Krylov-Schwarz scheme for several large test cases ranging from 2,000 to 20,000 buses, including a real utility network, show good scalability on different computing architectures.
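
    The solver structure, GMRES wrapped around a domain-local preconditioner, can be sketched with SciPy. For brevity this uses non-overlapping block Jacobi, i.e. additive Schwarz with zero overlap, rather than the restricted overlapping variant used in the paper, and a toy tridiagonal system in place of a power-grid Jacobian.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n, nblocks = 200, 4
      A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
      b = np.ones(n)

      # Factor each diagonal block once; apply all blocks per GMRES iteration.
      size = n // nblocks
      lus = [spla.splu(A[i*size:(i+1)*size, i*size:(i+1)*size].tocsc())
             for i in range(nblocks)]

      def apply_block_jacobi(r):
          # Solve each diagonal block independently (one subdomain per block).
          z = np.empty_like(r)
          for i, lu in enumerate(lus):
              z[i*size:(i+1)*size] = lu.solve(r[i*size:(i+1)*size])
          return z

      M = spla.LinearOperator((n, n), matvec=apply_block_jacobi)
      x, info = spla.gmres(A, b, M=M)
      print(info, np.linalg.norm(A @ x - b))  # 0 means converged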

  11. Parallel Dynamics Simulation Using a Krylov-Schwarz Linear Solution Scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhyankar, Shrirang; Constantinescu, Emil M.; Smith, Barry F.

    Fast dynamics simulation of large-scale power systems is a computational challenge because of the need to solve a large set of stiff, nonlinear differential-algebraic equations at every time step. The main bottleneck in dynamic simulations is the solution of a linear system during each nonlinear iteration of Newton's method. In this paper, we present a parallel Krylov-Schwarz linear solution scheme that uses the Krylov subspace-based iterative linear solver GMRES with an overlapping restricted additive Schwarz preconditioner. Performance tests of the proposed Krylov-Schwarz scheme for several large test cases ranging from 2,000 to 20,000 buses, including a real utility network, show good scalability on different computing architectures.

  12. A weighted generalized score statistic for comparison of predictive values of diagnostic tests.

    PubMed

    Kosinski, Andrzej S

    2013-03-15

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations that are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we presented, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic that incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, always reduces to the score statistic in the independent samples situation, and preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe that the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the WGS test statistic in a general GEE setting. Copyright © 2012 John Wiley & Sons, Ltd.
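
    The quantities being compared are positive and negative predictive values estimated for two tests on the same patients. The sketch below computes those estimands on simulated paired data; the WGS statistic itself is not reproduced here, and the simulation parameters are arbitrary.

      import numpy as np

      def predictive_values(disease, test):
          """PPV and NPV of one binary test against a disease indicator."""
          disease, test = np.asarray(disease, bool), np.asarray(test, bool)
          ppv = disease[test].mean()        # P(disease | test positive)
          npv = (~disease)[~test].mean()    # P(no disease | test negative)
          return ppv, npv

      # Paired design: every patient receives both diagnostic tests.
      rng = np.random.default_rng(2)
      disease = rng.random(400) < 0.3
      test1 = disease & (rng.random(400) < 0.9) | (rng.random(400) < 0.10)
      test2 = disease & (rng.random(400) < 0.8) | (rng.random(400) < 0.05)
      print(predictive_values(disease, test1))
      print(predictive_values(disease, test2))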

  13. A weighted generalized score statistic for comparison of predictive values of diagnostic tests

    PubMed Central

    Kosinski, Andrzej S.

    2013-01-01

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations which are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we present, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic which incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, it always reduces to the score statistic in the independent samples situation, and it preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the weighted generalized score test statistic in a general GEE setting. PMID:22912343

  14. Virtual stress testing of fracture stability in soldiers with severely comminuted tibial fractures.

    PubMed

    Petfield, Joseph L; Hayeck, Garry T; Kopperdahl, David L; Nesti, Leon J; Keaveny, Tony M; Hsu, Joseph R

    2017-04-01

    Virtual stress testing (VST) provides a non-invasive estimate of the strength of a healing bone through a biomechanical analysis of a patient's computed tomography (CT) scan. We asked whether VST could improve management of patients who had a tibia fracture treated with external fixation. In a retrospective case-control study of 65 soldier-patients who had tibia fractures treated with an external fixator, we performed VST utilizing CT scans acquired prior to fixator removal. The strength of the healing bone and the amount of tissue damage after application of an overload were computed for various virtual loading cases. Logistic regression identified computed outcomes with the strongest association to clinical events related to nonunion within 2 months after fixator removal. Clinical events (n = 9) were associated with a low tibial strength for compression loading (p < 0.05, AUC = 0.74) or a low proportion of failed cortical bone tissue for torsional loading (p < 0.005, AUC = 0.84). Using post-hoc thresholds of a compressive strength of four times body weight and a proportion of failed cortical bone tissue of 5%, the test identified all nine patients who failed clinically (100% sensitivity; 40.9% positive predictive value) and over three fourths of those (43 of 56) who progressed to successful healing (76.8% specificity; 100% negative predictive value). In this study, VST identified all patients who progressed to full, uneventful union after fixator removal; thus, we conclude that this new test has the potential to provide a quantitative, objective means of identifying tibia-fracture patients who can safely resume weight bearing. © 2016 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 35:805-811, 2017.

  15. Long-range interactions and parallel scalability in molecular simulations

    NASA Astrophysics Data System (ADS)

    Patra, Michael; Hyvönen, Marja T.; Falck, Emma; Sabouri-Ghomi, Mohsen; Vattulainen, Ilpo; Karttunen, Mikko

    2007-01-01

    Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium-4, IBM Power 4, and Apple/IBM G5) for single-processor and parallel performance up to 8 nodes. We have also tested the scalability on four different networks, namely Infiniband, Gigabit Ethernet, Fast Ethernet, and a nearly uniform memory architecture in which communication between CPUs is possible by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of sizes 128, 512 and 2048 lipid molecules were used as the test systems representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs. These results should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.

  16. Comparison of Computational and Experimental Results for a Transonic Variable-Speed Power-Turbine Blade Operating with Low Inlet Turbulence Levels

    NASA Technical Reports Server (NTRS)

    Booth, David; Flegel, Ashlie

    2015-01-01

    A computational assessment of the aerodynamic performance of the midspan section of a variable-speed power-turbine blade is described. The computation comprises a periodic single blade that represents the 2-D midspan section of the VSPT blade that was tested in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. Commercial, off-the-shelf (COTS) software packages, Pointwise and CFD++, were used for the grid generation and for the RANS and URANS computations. The CFD code, which offers flexibility in terms of turbulence and transition modeling options, was assessed in terms of blade loading, loss, and turning against test data from the transonic tunnel. Simulations were assessed at positive and negative incidence angles that represent the turbine cruise and take-off design conditions. The results indicate that the secondary flow induced at the positive-incidence cruise condition results in a highly loaded case, and transitional flow on the blade is observed. The negative-incidence take-off condition is unloaded and the flow is very two-dimensional. The computational results demonstrate the predictive capability of the gridding technique and COTS software for a linear transonic turbine blade cascade with large incidence angle variation.

  17. Comparison of Computational and Experimental Results for a Transonic Variable-speed Power-Turbine Blade Operating with Low Inlet Turbulence Levels

    NASA Technical Reports Server (NTRS)

    Booth, David T.; Flegel, Ashlie B.

    2015-01-01

    A computational assessment of the aerodynamic performance of the midspan section of a variable-speed power-turbine blade is described. The computation comprises a periodic single blade that represents the 2-D midspan section of the VSPT blade that was tested in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. Commercial, off-the-shelf (COTS) software packages, Pointwise and CFD++, were used for the grid generation and for the RANS and URANS computations. The CFD code, which offers flexibility in terms of turbulence and transition modeling options, was assessed in terms of blade loading, loss, and turning against test data from the transonic tunnel. Simulations were assessed at positive and negative incidence angles that represent the turbine cruise and take-off design conditions. The results indicate that the secondary flow induced at the positive-incidence cruise condition results in a highly loaded case, and transitional flow on the blade is observed. The negative-incidence take-off condition is unloaded and the flow is very two-dimensional. The computational results demonstrate the predictive capability of the gridding technique and COTS software for a linear transonic turbine blade cascade with large incidence angle variation.

  18. A Big Data Approach to Analyzing Market Volatility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Bethel, E. Wes; Gu, Ming

    2013-06-05

    Understanding the microstructure of the financial market requires the processing of a vast amount of data related to individual trades, and sometimes even multiple levels of quotes. Analyzing such a large volume of data requires tremendous computing power that is not easily available to financial academics and regulators. Fortunately, publicly funded High Performance Computing (HPC) power is widely available at the National Laboratories in the US. In this paper we demonstrate that HPC resources and the techniques for data-intensive sciences can be used to greatly accelerate the computation of an early warning indicator called Volume-synchronized Probability of Informed Trading (VPIN). The test data used in this study contains five and a half years' worth of trading data for about 100 of the most liquid futures contracts, includes about 3 billion trades, and takes 140 GB as text files. By using (1) a more efficient file format for storing the trading records, (2) more effective data structures and algorithms, and (3) parallelizing the computations, we are able to explore 16,000 different ways of computing VPIN in less than 20 hours on a 32-core IBM DataPlex machine. Our test demonstrates that a modest computer is sufficient to monitor a vast number of trading activities in real time, an ability that could be valuable to regulators. Our test results also confirm that VPIN is a strong predictor of liquidity-induced volatility. With appropriate parameter choices, the false positive rates are about 7% averaged over all the futures contracts in the test data set. More specifically, when VPIN values rise above a threshold (CDF > 0.99), the volatility in the subsequent time windows is higher than the average in 93% of the cases.
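
    For orientation, VPIN is typically formed by grouping trades into equal-volume buckets, classifying each bucket's volume as buyer- or seller-initiated, and averaging the absolute order-flow imbalance over a rolling window of buckets. The sketch below assumes the buy/sell classification has already been done, which in practice requires a separate bulk volume classification step, and its parameter choices are illustrative rather than those explored in the paper.

      import numpy as np

      def vpin(buy_vol, sell_vol, window=50):
          """VPIN over equal-volume buckets.

          buy_vol, sell_vol : per-bucket classified volumes (each bucket has
                              buy_vol + sell_vol equal to the bucket size)
          window            : number of buckets in the rolling average
          """
          buy_vol, sell_vol = np.asarray(buy_vol), np.asarray(sell_vol)
          imbalance = np.abs(buy_vol - sell_vol) / (buy_vol + sell_vol)
          # Rolling mean of the order-flow imbalance over `window` buckets.
          kernel = np.ones(window) / window
          return np.convolve(imbalance, kernel, mode="valid")

      rng = np.random.default_rng(3)
      buys = rng.uniform(0.2, 0.8, size=500)   # buy fraction per unit bucket
      print(vpin(buys, 1.0 - buys)[:5])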

  19. Preliminary Computational Analysis of the (HIRENASD) Configuration in Preparation for the Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Chwalowski, Pawel; Florance, Jennifer P.; Heeg, Jennifer; Wieseman, Carol D.; Perry, Boyd P.

    2011-01-01

    This paper presents preliminary computational aeroelastic analysis results generated in preparation for the first Aeroelastic Prediction Workshop (AePW). These results were produced using FUN3D software developed at NASA Langley and are compared against the experimental data generated during the HIgh REynolds Number Aero-Structural Dynamics (HIRENASD) Project. The HIRENASD wind-tunnel model was tested in the European Transonic Windtunnel in 2006 by Aachen University's Department of Mechanics with funding from the German Research Foundation. The computational effort discussed here was performed (1) to obtain a preliminary assessment of the ability of the FUN3D code to accurately compute physical quantities experimentally measured on the HIRENASD model and (2) to translate the lessons learned from the FUN3D analysis of HIRENASD into a set of initial guidelines for the first AePW, which includes test cases for the HIRENASD model and its experimental data set. This paper compares the computational and experimental results obtained at Mach 0.8 for a Reynolds number of 7 million based on chord, corresponding to the HIRENASD test conditions No. 132 and No. 159. Aerodynamic loads and static aeroelastic displacements are compared at two levels of the grid resolution. Harmonic perturbation numerical results are compared with the experimental data using the magnitude and phase relationship between pressure coefficients and displacement. A dynamic aeroelastic numerical calculation is presented at one wind-tunnel condition in the form of the time history of the generalized displacements. Additional FUN3D validation results are also presented for the AGARD 445.6 wing data set. This wing was tested in the Transonic Dynamics Tunnel and is commonly used in the preliminary benchmarking of computational aeroelastic software.

  20. The CMC/3DPNS computer program for prediction of three-dimensional, subsonic, turbulent aerodynamic juncture region flow. Volume 3: Programmers' manual

    NASA Technical Reports Server (NTRS)

    Orzechowski, J. A.

    1982-01-01

    The CMC fluid mechanics program system was developed to transmit the theoretical evolution of finite element numerical solution methodology, applied to nonlinear field problems, into a versatile computer code for comprehensive flow field analysis. A detailed view of the code from the standpoint of a computer programmer's use is presented. A system macro flow chart and detailed flow charts of several routines necessary for a theoretician/user to modify the operation of this program are presented. All subroutines and details of their usage, primarily for input and output routines, are described. Integer and real scalars are outlined, together with a cross-reference list denoting subroutine usage for these scalars. Entry points in the dynamic storage vector IZ are described, with the lengths of each vector accompanying the scalar definitions. A listing of the routines peculiar to the standard test case and a listing of the input deck and printout for this case are included.

  1. Decomposition of Magnetic Field Boundary Conditions into Parts Produced by Internal and External Sources

    NASA Astrophysics Data System (ADS)

    Lazanja, David; Boozer, Allen

    2006-10-01

    Given the total magnetic field on a toroidal plasma surface, a method for decomposing the field into a part due to internal currents (often the plasma) and a part due to external currents is presented. The method exploits Laplace theory which is valid in the vacuum region between the plasma surface and the chamber walls. The method is developed for the full three dimensional case which is necessary for studying stellarator plasma configurations. A change in the plasma shape is produced by the total normal field perturbation on the plasma surface. This method allows a separation of the total normal field perturbation into a part produced by external currents and a part produced by the plasma response. There are immediate applications to coil design. The computational procedure is based on Merkel's 1986 work on vacuum field computations. Several test cases are presented for toroidal surfaces which verify the method and computational robustness of the code.
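
    In symbols, and stated as the standard vacuum-region formulation rather than the authors' exact notation (ignoring the multivalued, net-current parts of the potential that arise in toroidal geometry): in the current-free shell between the plasma surface S and the wall,

      \mathbf{B} = \nabla\Phi, \qquad \nabla^{2}\Phi = 0, \qquad
      \Phi = \Phi_{\mathrm{int}} + \Phi_{\mathrm{ext}}, \qquad
      \left.\mathbf{B}\cdot\hat{\mathbf{n}}\right|_{S}
        = \left.\nabla\bigl(\Phi_{\mathrm{int}} + \Phi_{\mathrm{ext}}\bigr)\cdot\hat{\mathbf{n}}\right|_{S},

    where Phi_int has its sources in the internal (plasma) currents and Phi_ext in the external (coil) currents; the decomposition described in the abstract recovers the two parts separately from the measured total normal field on S.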

  2. Short arc orbit determination and imminent impactors in the Gaia era

    NASA Astrophysics Data System (ADS)

    Spoto, F.; Del Vigna, A.; Milani, A.; Tommei, G.; Tanga, P.; Mignard, F.; Carry, B.; Thuillot, W.; David, P.

    2018-06-01

    Short-arc orbit determination is crucial when an asteroid is first discovered, because in these cases the observations are usually so few that the differential correction procedure may not converge. We developed an initial orbit computation method based on systematic ranging, an orbit determination technique that systematically explores a raster of points in topocentric range and range-rate inside the admissible region. We obtain a fully rigorous computation of the probability that the asteroid could impact the Earth within a few days of discovery, without any a priori assumption. We tested our method on the two past impactors 2008 TC3 and 2014 AA, on some very well known cases, and on two particular objects observed by the European Space Agency Gaia mission.
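
    The admissible-region constraint that bounds the raster can be illustrated in a deliberately simplified geometry: a grid of topocentric range and range-rate values is kept only where the implied heliocentric two-body energy is negative (a bound orbit). Real systematic ranging uses the full observation geometry; here the observer sits at 1 au with the line of sight pointing radially away from the Sun, purely for illustration.

      import numpy as np

      MU = 1.327e20        # Sun's GM, m^3/s^2
      AU = 1.496e11        # astronomical unit, m
      V_OBS = 29.78e3      # observer's (Earth's) orbital speed, m/s

      # Raster in topocentric range (au) and range-rate (km/s).
      rho = np.linspace(0.01, 3.0, 300) * AU
      rho_dot = np.linspace(-50.0, 50.0, 200) * 1e3
      R, RD = np.meshgrid(rho, rho_dot)

      # Simplified geometry: line of sight radially away from the Sun,
      # observer velocity perpendicular to it.
      r = AU + R                 # heliocentric distance of the object
      v2 = RD**2 + V_OBS**2      # heliocentric speed squared

      # Bound heliocentric orbits satisfy E = v^2/2 - mu/r < 0.
      energy = 0.5 * v2 - MU / r
      admissible = energy < 0.0
      print(f"admissible fraction of raster: {admissible.mean():.2f}")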

  3. Getting ready for petaflop capacities and beyond: a utility perspective

    NASA Astrophysics Data System (ADS)

    Hamelin, J. F.; Berthou, J. Y.

    2008-07-01

    Why should EDF, the leading producer and marketer of electricity in Europe, start adding teraflops to its terawatt-hours and become involved in high-performance computing (HPC)? In this paper we answer this question through examples of major opportunities that HPC brings to our business today and, we hope, well into the future of petaflop and exaflop computing. Five cases are presented dealing with nondestructive testing, nuclear fuel management, mechanical behavior of nuclear fuel assemblies, water management, and energy management. For each case we show the benefits brought by HPC, describe the current level of numerical simulation performance, and discuss the perspectives for future steps. We also present the general background that explains why EDF is moving to this technology and briefly comment on the development of user-oriented simulation platforms.

  4. Wind Tunnel Interference Effects on Tilt Rotor Testing Using Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Koning, Witold J. F.

    2016-01-01

    Experimental techniques to measure rotorcraft aerodynamic performance are widely used. However, most of them are either unable to capture interference effects from bodies or require an extremely large computational budget. The objective of the present research is to develop an XV-15 Tiltrotor Research Aircraft rotor model for investigation of wind tunnel wall interference using a novel Computational Fluid Dynamics (CFD) solver for rotorcraft, RotCFD. In RotCFD, a mid-fidelity Unsteady Reynolds-Averaged Navier-Stokes (URANS) solver is used with an incompressible flow model and a realizable k-ε turbulence model. The rotor is, however, not modeled using a computationally expensive, unsteady, viscous, body-fitted grid, but is instead modeled using a blade-element model (BEM) with a momentum source approach. Various flight modes of the XV-15 isolated rotor, including hover, tilt, and airplane mode, have been simulated and correlated to existing experimental and theoretical data. The rotor model is subsequently used for wind tunnel wall interference simulations in the National Full-Scale Aerodynamics Complex (NFAC) at Ames Research Center in California. The results from the validation of the isolated rotor performance showed good correlation with experimental and theoretical data and were on par with known theoretical analyses. In RotCFD, the setup, grid generation, and running of cases are faster than in many CFD codes, which makes it a useful engineering tool. Performance predictions need not be as accurate as those of high-fidelity CFD codes, as long as wall effects can be properly simulated. For both test sections of the NFAC, wall interference was examined by simulating the XV-15 rotor in the test section of the wind tunnel and with an identical grid but extended boundaries in free field. Both cases were also examined with an isolated rotor or with the rotor mounted on the modeled geometry of the Tiltrotor Test Rig (TTR). A "quasi-linear trim" was used to trim the thrust for the rotor so that power could be compared as the unique variable. Power differences between the free-field and wind-tunnel cases ranged from -7 to 0 percent in the 80- by 120-Foot Wind Tunnel and from -1.6 to 4.8 percent in the 40- by 80-Foot Wind Tunnel, depending on the TTR orientation, tunnel velocity, and blade setting. The TTR will be used in 2016 to test the Bell 609 rotor in a similar fashion to the research in this report.

  5. Space shuttle main engine fault detection using neural networks

    NASA Technical Reports Server (NTRS)

    Bishop, Thomas; Greenwood, Dan; Shew, Kenneth; Stevenson, Fareed

    1991-01-01

    A method for on-line Space Shuttle Main Engine (SSME) anomaly detection and fault typing using a feedback neural network is described. The method involves the computation of features representing time-variance of SSME sensor parameters, using historical test case data. The network is trained, using backpropagation, to recognize a set of fault cases. The network is then able to diagnose new fault cases correctly. An essential element of the training technique is the inclusion of randomly generated data along with the real data, in order to span the entire input space of potential non-nominal data.

  6. Source Stacking for Numerical Wavefield Computations - Application to Global Scale Seismic Mantle Tomography

    NASA Astrophysics Data System (ADS)

    MacLean, L. S.; Romanowicz, B. A.; French, S.

    2015-12-01

    Seismic wavefield computations using the Spectral Element Method are now regularly used to recover tomographic images of the upper mantle and crust at the local, regional, and global scales (e.g. Fichtner et al., GJI, 2009; Tape et al., Science, 2010; Lekic and Romanowicz, GJI, 2011; French and Romanowicz, GJI, 2014). However, the computational burden remains a challenge and contributes to limiting the resolution of the produced images. Source stacking, as suggested by Capdeville et al. (GJI, 2005), can considerably speed up the process by reducing the wavefield computations to only one per set of N sources. This method was demonstrated through synthetic tests on low-frequency datasets and therefore should work for global mantle tomography. However, the large amplitudes of the surface waves dominate the stacked seismograms, and the individual phases can no longer be separated by windowing in the time domain. We have developed a processing approach that helps address this issue and demonstrate its usefulness through a series of synthetic tests performed at long periods (T > 60 s) on toy upper mantle models. The summed synthetics are computed using the CSEM code (Capdeville et al., 2002). For the inverse part of the procedure, we use a quasi-Newton method, computing the Frechet derivatives and the Hessian using normal mode perturbation theory.
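
    The speed-up rests on the linearity of the wave equation: a single simulation driven by the summed source term reproduces the sum of the N individual simulations. A minimal sketch with a stand-in linear forward operator (not the CSEM code):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_grid, n_t, n_sources = 64, 128, 8

    # Stand-in linear forward operator: maps a source field on the grid to
    # an n_t-sample seismogram (think: one wavefield computation).
    L = rng.standard_normal((n_t, n_grid))
    def forward(source_field):
        return L @ source_field

    sources = rng.standard_normal((n_sources, n_grid))  # N individual events

    # Conventional approach: N forward runs, one per source, summed after.
    summed_records = sum(forward(s) for s in sources)

    # Source stacking: one forward run driven by the summed source term.
    stacked_record = forward(sources.sum(axis=0))

    print(np.allclose(summed_records, stacked_record))  # True: 1 run, not N
    ```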

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghaei, Faranak; Tan, Maxine; Liu, Hong

    Purpose: To identify a new clinical marker based on quantitative kinetic image features analysis and assess its feasibility to predict tumor response to neoadjuvant chemotherapy. Methods: The authors assembled a dataset involving breast MR images acquired from 68 cancer patients before undergoing neoadjuvant chemotherapy. Among them, 25 patients had complete response (CR) and 43 had partial and nonresponse (NR) to chemotherapy based on the response evaluation criteria in solid tumors. The authors developed a computer-aided detection scheme to segment breast areas and tumors depicted on the breast MR images and computed a total of 39 kinetic image features from both tumor and background parenchymal enhancement regions. The authors then applied and tested two approaches to classify between CR and NR cases. The first one analyzed each individual feature and applied a simple feature fusion method that combines classification results from multiple features. The second approach tested an attribute selected classifier that integrates an artificial neural network (ANN) with a wrapper subset evaluator, which was optimized using a leave-one-case-out validation method. Results: In the pool of 39 features, 10 yielded relatively higher classification performance with the areas under receiver operating characteristic curves (AUCs) ranging from 0.61 to 0.78 to classify between CR and NR cases. Using a feature fusion method, the maximum AUC = 0.85 ± 0.05. Using the ANN-based classifier, AUC value significantly increased to 0.96 ± 0.03 (p < 0.01). Conclusions: This study demonstrated that quantitative analysis of kinetic image features computed from breast MR images acquired prechemotherapy has potential to generate a useful clinical marker in predicting tumor response to chemotherapy.
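
    The validation strategy described here, a neural-network classifier tuned with leave-one-case-out cross-validation and scored by AUC, follows a standard pattern. The sketch below is a generic reconstruction with synthetic data and a scikit-learn MLP as a stand-in for the authors' ANN-with-wrapper scheme; only the feature and case counts echo the abstract.

    ```python
    import numpy as np
    from sklearn.model_selection import LeaveOneOut
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((68, 39))   # 39 kinetic features per case (synthetic)
    y = rng.integers(0, 2, 68)          # 1 = complete response, 0 = NR (synthetic)

    scores = np.zeros(len(y))
    for train, test in LeaveOneOut().split(X):
        model = make_pipeline(
            StandardScaler(),
            MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
        )
        model.fit(X[train], y[train])
        scores[test] = model.predict_proba(X[test])[:, 1]

    print(f"leave-one-case-out AUC: {roc_auc_score(y, scores):.2f}")
    ```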

  8. Use of adaptive walls in 2D tests

    NASA Technical Reports Server (NTRS)

    Archambaud, J. P.; Chevallier, J. P.

    1984-01-01

    A new method for computing wall effects gives precise answers to some questions arising in applications of the adaptive wall concept: the length of the adapted regions, the fairings with the upstream and downstream regions, the effects of residual misadjustments, and the reference conditions. The acceleration of the convergence of the iterative process and the development of an efficient technology used in the CERT T2 wind tunnel yield the required test conditions in a single run. Samples taken from CAST 7 tests demonstrate the efficiency of the whole process in obtaining significant results, with consideration given to the extension to the three-dimensional case.

  9. A standard test case suite for two-dimensional linear transport on the sphere: results from a collection of state-of-the-art schemes

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Ullrich, P. A.; Jablonowski, C.; Bosler, P. A.; Calhoun, D.; Conley, A. J.; Enomoto, T.; Dong, L.; Dubey, S.; Guba, O.; Hansen, A. B.; Kaas, E.; Kent, J.; Lamarque, J.-F.; Prather, M. J.; Reinert, D.; Shashkin, V. V.; Skamarock, W. C.; Sørensen, B.; Taylor, M. A.; Tolstykh, M. A.

    2013-09-01

    Recently, a standard test case suite for 2-D linear transport on the sphere was proposed to assess important aspects of accuracy in geophysical fluid dynamics with a "minimal" set of idealized model configurations/runs/diagnostics. Here we present results from 19 state-of-the-art transport scheme formulations based on finite-difference/finite-volume methods as well as emerging (in the context of atmospheric/oceanographic sciences) Galerkin methods. Discretization grids range from traditional regular latitude-longitude grids to more isotropic domain discretizations such as icosahedral and cubed-sphere tessellations of the sphere. The schemes are evaluated using a wide range of diagnostics in idealized flow environments. Accuracy is assessed in single- and two-tracer configurations using conventional error norms as well as novel diagnostics designed for climate and climate-chemistry applications. In addition, algorithmic considerations that may be important for computational efficiency are reported on; the latter is inevitably computing platform dependent. The ensemble of results from a wide variety of schemes presented here helps shed light on the ability of the test case suite diagnostics and flow settings to discriminate between algorithms and provide insights into accuracy in the context of global atmospheric/ocean modeling. A library of benchmark results is provided to facilitate scheme intercomparison and model development. Simple software and data sets are made available to facilitate the process of model evaluation and scheme intercomparison.
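
    For reference, the conventional error norms used by such transport test suites have a simple form: l2 = sqrt(I[(q - qT)^2] / I[qT^2]) and linf = max|q - qT| / max|qT|, where I[.] denotes a global area-weighted integral and qT the reference tracer field. A minimal sketch, assuming a regular latitude-longitude grid with cosine-of-latitude area weights:

    ```python
    import numpy as np

    def l2_error(q, q_true, w):
        # Normalized l2 error norm: sqrt(I[(q - qT)^2] / I[qT^2]),
        # with I[.] approximated by an area-weighted sum.
        return np.sqrt(np.sum(w * (q - q_true)**2) / np.sum(w * q_true**2))

    def linf_error(q, q_true):
        # Normalized l-infinity error norm.
        return np.max(np.abs(q - q_true)) / np.max(np.abs(q_true))

    lat = np.linspace(-np.pi / 2, np.pi / 2, 91)
    lon = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
    LAT, LON = np.meshgrid(lat, lon, indexing='ij')
    w = np.cos(LAT)                          # area weights on the sphere

    q_true = np.cos(LAT) * np.sin(LON)       # "exact" tracer field (toy)
    q_num = q_true + 1e-3 * np.sin(3 * LON)  # perturbed numerical solution

    print(l2_error(q_num, q_true, w), linf_error(q_num, q_true))
    ```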

  11. Broadband Noise Predictions Based on a New Aeroacoustic Formulation

    NASA Technical Reports Server (NTRS)

    Casper, J.; Farassat, F.

    2002-01-01

    A new analytic result in acoustics called 'Formulation 1B,' proposed by Farassat, is used to compute the loading noise from an unsteady surface pressure distribution on a thin airfoil in the time domain. This formulation is a new solution of the Ffowcs Williams-Hawkings equation with the loading source term. The formulation contains a far-field surface integral that depends on the time derivative and the surface gradient of the pressure on the airfoil, as well as a contour integral on the boundary of the airfoil surface. As a first test case, the new formulation is used to compute the noise radiated from a flat plate moving through a sinusoidal gust of constant frequency. The unsteady surface pressure for this test case is specified analytically from a result that is based on linear airfoil theory. This test case is used to examine the velocity scaling properties of Formulation 1B, and to demonstrate its equivalence to Formulation 1A of Farassat. The new acoustic formulation, again with an analytic surface pressure, is then used to predict broadband noise radiated from an airfoil immersed in homogeneous turbulence. The results are compared with experimental data previously reported by Paterson and Amiet. Good agreement between predictions and measurements is obtained. The predicted results also agree very well with those of Paterson and Amiet, who used a frequency-domain approach. Finally, an alternative form of Formulation 1B is described for statistical analysis of broadband noise.

  12. Test methods for optical disk media characteristics (for 356 mm ruggedized magneto-optic media)

    NASA Technical Reports Server (NTRS)

    Podio, Fernando L.

    1991-01-01

    Standard test methods for computer storage media characteristics are essential and allow for conformance to media interchange standards. The test methods were developed for 356 mm two-sided laminated glass substrate media with a magneto-optic active layer. These test methods may be used for testing other media types, but in each case their applicability must be evaluated. Test methods are included for a series of different media characteristics, including operational, nonoperational, and storage environments; mechanical and physical characteristics; and substrate, recording layer, and preformat characteristics. Tests for environmental qualification and media lifetimes are also included. The test methods include testing conditions, testing procedures, a description of the testing setup, and the required calibration procedures.

  13. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  14. The paramedian diencephalic syndrome: a dynamic phenomenon.

    PubMed

    Meissner, I; Sapir, S; Kokmen, E; Stein, S D

    1987-01-01

    The paramedian diencephalic syndrome is characterized by a clinical triad: hypersomnolent apathy, amnesic syndrome, and impaired vertical gaze. We studied 4 cases with computed tomography evidence of bilateral diencephalic infarctions. Each case began abruptly with hypersomnolent apathy followed by fluctuations from appropriate affect, full orientation, and alertness to labile mood, confabulation, and apathy. Speech varied from hypophonia to normal; handwriting varied from legible script to gross scrawl. Psychological testing revealed poor learning and recall, with low performance scores. In 3 patients the predominant abnormality was in downward gaze.

  15. An arbitrary grid CFD algorithm for configuration aerodynamics analysis. Volume 1: Theory and validations

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Iannelli, G. S.; Manhardt, Paul D.; Orzechowski, J. A.

    1993-01-01

    This report documents the user input and output data requirements for the FEMNAS finite element Navier-Stokes code for real-gas simulations of external aerodynamics flowfields. This code was developed for the configuration aerodynamics branch of NASA ARC, under SBIR Phase 2 contract NAS2-124568 by Computational Mechanics Corporation (COMCO). This report is in two volumes. Volume 1 contains the theory for the derived finite element algorithm and describes the test cases used to validate the computer program described in the Volume 2 user guide.

  16. JETSPIN: A specific-purpose open-source software for simulations of nanofiber electrospinning

    NASA Astrophysics Data System (ADS)

    Lauricella, Marco; Pontrelli, Giuseppe; Coluzza, Ivan; Pisignano, Dario; Succi, Sauro

    2015-12-01

    We present the open-source computer program JETSPIN, specifically designed to simulate the electrospinning process of nanofibers. Its capabilities are shown with proper reference to the underlying model, as well as a description of the relevant input variables and associated test-case simulations. The various interactions included in the electrospinning model implemented in JETSPIN are discussed in detail. The code is designed to exploit different computational architectures, from single to parallel processor workstations. This paper provides an overview of JETSPIN, focusing primarily on its structure, parallel implementations, functionality, performance, and availability.

  17. Simulation of blast action on civil structures using ANSYS Autodyn

    NASA Astrophysics Data System (ADS)

    Fedorova, N. N.; Valger, S. A.; Fedorov, A. V.

    2016-10-01

    The paper presents the results of 3D numerical simulations of shock wave propagation in a cityscape area. The ANSYS Autodyn software is used for the computations. Different test cases are investigated numerically. On the basis of the computations, the complex transient flowfield structure formed in the vicinity of prismatic bodies was obtained and analyzed. The simulation results have been compared to experimental data. The ability of two numerical schemes to correctly predict the pressure history at several gauges placed on the walls of the obstacles is studied.

  18. An arbitrary grid CFD algorithm for configuration aerodynamics analysis. Volume 2: FEMNAS user guide

    NASA Technical Reports Server (NTRS)

    Manhardt, Paul D.; Orzechowski, J. A.; Baker, A. J.

    1992-01-01

    This report documents the user input and output data requirements for the FEMNAS finite element Navier-Stokes code for real-gas simulations of external aerodynamics flowfields. This code was developed for the configuration aerodynamics branch of NASA ARC, under SBIR Phase 2 contract NAS2-124568 by Computational Mechanics Corporation (COMCO). This report is in two volumes. Volume 1 contains the theory for the derived finite element algorithm and describes the test cases used to validate the computer program described in the Volume 2 user guide.

  19. The investigation of tethered satellite system dynamics

    NASA Technical Reports Server (NTRS)

    Lorenzini, E.

    1985-01-01

    Progress in tethered satellite system dynamics research is reported. A retrieval rate control law with no angular feedback was studied to investigate the system's dynamic response. The initial conditions for the computer code which simulates the satellite's rotational dynamics were extended to a generic orbit. The model of the satellite thrusters was modified to simulate a pulsed thrust by making the SKYHOOK integrator suitable for dealing with delta functions without losing computational efficiency. Tether breaks were simulated with the high-resolution computer code SLACK3. Shuttle maneuvers were tested. The electric potential around a severed conductive tether with insulator, in the case of a tether breakage at 20 km from the Shuttle, was computed. The electrodynamic hazards due to the breakage of the TSS electrodynamic tether in a plasma are evaluated.

  20. Computation of multi-dimensional viscous supersonic jet flow

    NASA Technical Reports Server (NTRS)

    Kim, Y. N.; Buggeln, R. C.; Mcdonald, H.

    1986-01-01

    A new method has been developed for two- and three-dimensional computations of viscous supersonic flows with embedded subsonic regions adjacent to solid boundaries. The approach employs a reduced form of the Navier-Stokes equations which allows solution as an initial-boundary value problem in space, using an efficient noniterative forward marching algorithm. Numerical instability associated with forward marching algorithms for flows with embedded subsonic regions is avoided by approximation of the reduced form of the Navier-Stokes equations in the subsonic regions of the boundary layers. Supersonic and subsonic portions of the flow field are simultaneously calculated by a consistently split, linearized, block-implicit computational algorithm. The results of computations for a series of test cases relevant to internal supersonic flow are presented and compared with data. Comparisons between data and computation are in general excellent, indicating that the computational technique has great promise as a tool for calculating supersonic flow with embedded subsonic regions. Finally, a User's Manual is presented for the computer code used to perform the calculations.

  1. Detecting Genomic Clustering of Risk Variants from Sequence Data: Cases vs. Controls

    PubMed Central

    Schaid, Daniel J.; Sinnwell, Jason P.; McDonnell, Shannon K.; Thibodeau, Stephen N.

    2013-01-01

    As the ability to measure dense genetic markers approaches the limit of the DNA sequence itself, taking advantage of possible clustering of genetic variants in, and around, a gene would benefit genetic association analyses and likely provide biological insights. The greatest benefit might be realized when multiple rare variants cluster in a functional region. Several statistical tests have been developed, one of which is based on the popular Kulldorff scan statistic for spatial clustering of disease. We extended another popular spatial clustering method – Tango's statistic – to genomic sequence data. An advantage of Tango's method is that it is rapid to compute, and when a single test statistic is computed, its distribution is well approximated by a scaled chi-square distribution, making computation of p-values very rapid. We compared the Type-I error rates and power of several clustering statistics, as well as the omnibus sequence kernel association test (SKAT). Although our version of Tango's statistic, which we call the “Kernel Distance” statistic, took approximately half the time to compute as the Kulldorff scan statistic, it had slightly less power than the scan statistic. Our results showed that the Ionita-Laza version of Kulldorff's scan statistic had the greatest power over a range of clustering scenarios. PMID:23842950
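
    The computational shortcut mentioned above, referring a single quadratic-form statistic to a scaled chi-square distribution, is standard moment matching. The sketch below illustrates the generic machinery on toy data rather than the authors' Kernel Distance implementation: a quadratic form Q = r'Kr is approximated as a*chi2(d), with a and d chosen to match the mean and variance of Q under the null.

    ```python
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    n = 50                                       # number of variant sites (toy)

    # Hypothetical kernel of pairwise genomic proximity between sites
    pos = np.sort(rng.uniform(0.0, 1e5, n))
    K = np.exp(-np.abs(pos[:, None] - pos[None, :]) / 5e3)

    V = np.eye(n)                                # toy null covariance of residuals
    r = rng.multivariate_normal(np.zeros(n), V)  # case-vs-control residuals (toy)

    Q = r @ K @ r                                # Tango-type quadratic form

    # Satterthwaite moment matching: under the null, Q ~ a * chi2(d)
    KV = K @ V
    mean, var = np.trace(KV), 2.0 * np.trace(KV @ KV)
    a, d = var / (2.0 * mean), 2.0 * mean**2 / var

    print(f"Q = {Q:.1f}, p = {chi2.sf(Q / a, d):.3f}")
    ```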

  2. An analytical procedure for evaluating shuttle abort staging aerodynamic characteristics

    NASA Technical Reports Server (NTRS)

    Meyer, R.

    1973-01-01

    An engineering analysis and computer code (AERSEP) for predicting Space Shuttle Orbiter - HO Tank longitudinal aerodynamic characteristics during abort separation has been developed. Computed results are applicable at Mach numbers above 2 for angles of attack between plus and minus 10 degrees. No practical restrictions on orbiter-tank relative positioning are indicated for tank-under-orbiter configurations. Input data requirements and computer running times are minimal, facilitating program use for parametric studies, test planning, and trajectory analysis. In a majority of cases, AERSEP orbiter-tank interference predictions are as accurate as state-of-the-art estimates for interference-free or isolated-vehicle configurations. AERSEP isolated-orbiter predictions also show excellent correlation with data.

  3. Utility of an emulation and simulation computer model for air revitalization system hardware design, development, and test

    NASA Technical Reports Server (NTRS)

    Yanosy, J. L.; Rowell, L. F.

    1985-01-01

    Efforts to make increasing use of suitable computer programs in the design of hardware have the potential to reduce expenditures. In this context, NASA has evaluated the benefits provided by software tools through an application to the Environmental Control and Life Support (ECLS) system. The present paper is concerned with the benefits obtained by employing simulation tools in the case of the Air Revitalization System (ARS) of a Space Station life support system. Attention is given to the ARS functions and components, a computer program overview, a SAND (solid amine water desorbed) bed model description, a model validation, and details regarding the simulation benefits.

  4. Electron number probability distributions for correlated wave functions.

    PubMed

    Francisco, E; Martín Pendás, A; Blanco, M A

    2007-03-07

    Efficient formulas for computing the probability of finding exactly an integer number of electrons in an arbitrarily chosen volume are only known for single-determinant wave functions [E. Cances et al., Theor. Chem. Acc. 111, 373 (2004)]. In this article, an algebraic method is presented that extends these formulas to the case of multideterminant wave functions and any number of disjoint volumes. The derived expressions are applied to compute the probabilities within the atomic domains derived from the space partitioning based on the quantum theory of atoms in molecules. Results for a series of test molecules are presented, paying particular attention to the effects of electron correlation and of some numerical approximations on the computed probabilities.

  5. Transition of a Three-Dimensional Unsteady Viscous Flow Analysis from a Research Environment to the Design Environment

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne; Dorney, Daniel J.; Huber, Frank; Sheffler, David A.; Turner, James E. (Technical Monitor)

    2001-01-01

    The advent of advanced computer architectures and parallel computing have led to a revolutionary change in the design process for turbomachinery components. Two- and three-dimensional steady-state computational flow procedures are now routinely used in the early stages of design. Unsteady flow analyses, however, are just beginning to be incorporated into design systems. This paper outlines the transition of a three-dimensional unsteady viscous flow analysis from the research environment into the design environment. The test case used to demonstrate the analysis is the full turbine system (high-pressure turbine, inter-turbine duct and low-pressure turbine) from an advanced turboprop engine.

  6. Artificial viscosity to cure the carbuncle phenomenon: The three-dimensional case

    NASA Astrophysics Data System (ADS)

    Rodionov, Alexander V.

    2018-05-01

    The carbuncle phenomenon (also known as the shock instability) has remained a serious computational challenge since it was first noticed and described [1,2]. In [3] the author presented a summary on this subject and proposed a new technique for curing the problem. The idea is to introduce dissipation, in the form of the right-hand sides of the Navier-Stokes equations, into the basic method of solving the Euler equations; in doing so, the molecular viscosity coefficient is replaced by an artificial viscosity coefficient. The new cure for the carbuncle flaw was tested and tuned for the case of first-order schemes in two-dimensional simulations, and its efficiency was demonstrated on several well-known test problems. In this paper we extend the technique of [3] to the case of three-dimensional simulations.

  7. Computing Integrated Ratings from Heterogeneous Phenotypic Assessments: A Case Study of Lettuce Postharvest Quality and Downy Mildew Resistance

    USDA-ARS?s Scientific Manuscript database

    Comparing performance of a large number of accessions simultaneously is not always possible. Typically, only subsets of all accessions are tested in separate trials with only some (or none) of the accessions overlapping between subsets. Using standard statistical approaches to combine data from such...

  8. To Enhance Collaborative Learning and Practice Network Knowledge with a Virtualization Laboratory and Online Synchronous Discussion

    ERIC Educational Resources Information Center

    Hwang, Wu-Yuin; Kongcharoen, Chaknarin; Ghinea, Gheorghita

    2014-01-01

    Recently, various computer networking courses have included additional laboratory classes in order to enhance students' learning achievement. However, these classes need to establish a suitable laboratory where each student can connect network devices to configure and test functions within different network topologies. In this case, the Linux…

  9. Why Computational Models Are Better than Verbal Theories: The Case of Nonword Repetition

    ERIC Educational Resources Information Center

    Jones, Gary; Gobet, Fernand; Freudenthal, Daniel; Watson, Sarah E.; Pine, Julian M.

    2014-01-01

    Tests of nonword repetition (NWR) have often been used to examine children's phonological knowledge and word learning abilities. However, theories of NWR primarily explain performance either in terms of phonological working memory or long-term knowledge, with little consideration of how these processes interact. One theoretical account that…

  10. FLEXWAL: A computer program for predicting the wall modifications for two-dimensional, solid, adaptive-wall tunnels

    NASA Technical Reports Server (NTRS)

    Everhart, J. L.

    1983-01-01

    A program called FLEXWAL for calculating wall modifications for solid, adaptive-wall wind tunnels is presented. The method used is the iterative technique of NASA TP-2081 and is applicable to subsonic and transonic test conditions. The program usage, program listing, and a sample case are given.

  11. Aerothermal modeling program, phase 1

    NASA Technical Reports Server (NTRS)

    Sturgess, G. J.

    1983-01-01

    The physical modeling embodied in computational fluid dynamics codes is discussed. The objectives were to identify shortcomings in the models and to provide a program plan to improve their quantitative accuracy. The physical models studied were those for turbulent mass and momentum transport, heat release, liquid fuel spray, and gaseous radiation. The approach adopted was to test the models against appropriate benchmark-quality test cases, drawn from experiments in the literature, for the constituent flows that together make up the real combustor flow.

  12. User's manual for a 0.3-m TCT wall interference assessment/correction procedure: 8- by 24-inch airfoil test section

    NASA Technical Reports Server (NTRS)

    Gumbert, C. R.

    1985-01-01

    A transonic Wall-Interference Assessment/Correction (WIAC) procedure has been developed and verified for the 8- by 24-inch airfoil test section of the Langley 0.3-m Transonic Cryogenic Tunnel. This report is a user's manual for the correction procedure. It includes a listing of the computer procedure file as well as input for and results from a step-by-step sample case.

  13. MONTE CARLO SIMULATIONS OF PERIODIC PULSED REACTOR WITH MOVING GEOMETRY PARTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Yan; Gohar, Yousry

    2015-11-01

    In a periodic pulsed reactor, the reactor state varies periodically from slightly subcritical to slightly prompt supercritical for producing periodic power pulses. Such periodic state change is accomplished by a periodic movement of specific reactor parts, such as control rods or reflector sections. The analysis of such a reactor is difficult to perform with current reactor physics computer programs. Based on past experience, the utilization of the point kinetics approximations gives considerable errors in predicting the magnitude and the shape of the power pulse if the reactor has significantly different neutron lifetimes in different zones. To accurately simulate the dynamics of this type of reactor, a Monte Carlo procedure using the transfer function TRCL/TR of the MCNP/MCNPX computer programs is utilized to model the movable reactor parts. In this paper, two algorithms simulating the geometry part movements during a neutron history tracking have been developed. Several test cases have been developed to evaluate these procedures. The numerical test cases have shown that the developed algorithms can be utilized to simulate the reactor dynamics with movable geometry parts.

  14. A new multistage groundwater transport inverse method: presentation, evaluation, and implications

    USGS Publications Warehouse

    Anderman, Evan R.; Hill, Mary C.

    1999-01-01

    More computationally efficient methods of using concentration data are needed to estimate groundwater flow and transport parameters. This work introduces and evaluates a three-stage nonlinear-regression-based iterative procedure in which trial advective-front locations link decoupled flow and transport models. Method accuracy and efficiency are evaluated by comparing results to those obtained when flow- and transport-model parameters are estimated simultaneously. The new method is evaluated as conclusively as possible by using a simple test case that includes distinct flow and transport parameters, but does not include any approximations that are problem dependent. The test case is analytical; the only flow parameter is a constant velocity, and the transport parameters are longitudinal and transverse dispersivity. Any difficulties detected using the new method in this ideal situation are likely to be exacerbated in practical problems. Monte Carlo analysis of observation error ensures that no specific error realization obscures the results. Results indicate that, while this, and probably other, multistage methods do not always produce optimal parameter estimates, the computational advantage may make them useful in some circumstances, perhaps as a precursor to using a simultaneous method.

  15. Development of new flux splitting schemes. [computational fluid dynamics algorithms

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Christopher J., Jr.

    1992-01-01

    Maximizing both accuracy and efficiency has been the primary objective in designing a numerical algorithm for computational fluid dynamics (CFD). This is especially important for solutions of complex three-dimensional systems of Navier-Stokes equations, which often include turbulence modeling and chemistry effects. Recently, upwind schemes have been well received for their capability in resolving discontinuities. With this in mind, two new flux splitting techniques for upwind differencing are presented. The first method is based on High-Order Polynomial Expansions (HOPE) of the mass flux vector. The second new flux splitting is based on the Advection Upwind Splitting Method (AUSM). The calculation of hypersonic conical flow demonstrates the accuracy of the splitting in resolving the flow in the presence of strong gradients. A second series of tests, involving two-dimensional inviscid flow over a NACA 0012 airfoil, demonstrates the ability of the AUSM to resolve the shock discontinuity at transonic speed. A third case calculates a series of supersonic flows over a circular cylinder. Finally, the fourth case deals with tests of a two-dimensional shock wave/boundary layer interaction.
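
    The AUSM idea is to split the interface Mach number and pressure and to upwind the convected quantities by the resulting interface Mach number. The sketch below is a minimal 1D Euler version following the commonly published first-degree splitting formulas, not the code used in this work:

    ```python
    import numpy as np

    GAMMA = 1.4

    def ausm_flux(rhoL, uL, pL, rhoR, uR, pR):
        # 1D AUSM-type flux for the Euler equations (minimal sketch).
        aL, aR = np.sqrt(GAMMA * pL / rhoL), np.sqrt(GAMMA * pR / rhoR)
        ML, MR = uL / aL, uR / aR

        # Split Mach numbers: quadratic in the subsonic range, linear outside
        Mp = 0.25 * (ML + 1.0)**2 if abs(ML) <= 1.0 else max(ML, 0.0)
        Mm = -0.25 * (MR - 1.0)**2 if abs(MR) <= 1.0 else min(MR, 0.0)
        m_half = Mp + Mm                      # interface Mach number

        # Split pressures (first-degree splitting)
        pp = 0.5 * pL * (1.0 + ML) if abs(ML) <= 1.0 else pL * (ML > 0.0)
        pm = 0.5 * pR * (1.0 - MR) if abs(MR) <= 1.0 else pR * (MR < 0.0)

        # Convected vector Phi = (rho*a, rho*a*u, rho*a*H), upwinded by m_half
        HL = GAMMA * pL / ((GAMMA - 1.0) * rhoL) + 0.5 * uL**2  # total enthalpy
        HR = GAMMA * pR / ((GAMMA - 1.0) * rhoR) + 0.5 * uR**2
        PhiL = np.array([rhoL * aL, rhoL * aL * uL, rhoL * aL * HL])
        PhiR = np.array([rhoR * aR, rhoR * aR * uR, rhoR * aR * HR])
        Phi = PhiL if m_half >= 0.0 else PhiR

        return m_half * Phi + np.array([0.0, pp + pm, 0.0])

    # Example: interface flux across a Sod-like jump
    print(ausm_flux(1.0, 0.0, 1.0, 0.125, 0.0, 0.1))
    ```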

  16. A-priori testing of sub-grid models for chemically reacting nonpremixed turbulent shear flows

    NASA Technical Reports Server (NTRS)

    Jimenez, J.; Linan, A.; Rogers, M. M.; Higuera, F. J.

    1996-01-01

    The beta-assumed-pdf approximation of Cook & Riley (1994) is tested as a subgrid model for the LES computation of nonpremixed turbulent reacting flows, in the limit of cold infinitely fast chemistry, for two plane turbulent mixing layers with different degrees of intermittency. Excellent results are obtained for the computation of integral properties such as the product mass fraction, and the model is applied to other quantities such as powers of the temperature and the pdf of the scalar itself. Even in these cases the errors are small enough to be useful in practical applications. The analysis is extended to problems slightly out of equilibrium, such as the generation of radicals, and is formulated in terms of the pdf of the scalar gradients. It is shown that the conditional gradient distribution is universal in a wide range of cases whose limits are established. Within those limits, engineering approximations to the radical concentration are also possible. It is argued that the experiments in this paper are essentially in the limit of infinite Reynolds number.

  17. A comparison of traditional textbook and interactive computer learning of neuromuscular block.

    PubMed

    Ohrn, M A; van Oostrom, J H; van Meurs, W L

    1997-03-01

    We designed an educational software package, RELAX, for teaching first-year anesthesiology residents about the pharmacology and clinical management of neuromuscular blockade. The software uses an interactive, problem-based approach and moves the user through cases in an operating room environment. It can be run on personal computers with Microsoft Windows (Microsoft Corp., Redmond, WA) and combines video, graphics, and text with mouse-driven user input. We utilized test scores 1) to determine whether our software was beneficial to the educational progress of anesthesiology residents and 2) to compare computer-based learning with textbook learning. Twenty-three residents were divided into two groups matched for age and sex, and a pretest was administered to all 23 residents. There was no significant difference (P > 0.05) in the pretest scores of the two groups. Three weeks later, both groups were subjected to an educational intervention; one with our computer software and the other with selected textbooks. Both groups took a posttest immediately after the intervention. The test scores of the computer group improved significantly more (P < 0.05) than those of the textbook group. Although prior to the study the two groups showed no statistical difference in their familiarity with computers, the computer group reported much higher satisfaction with their learning experience than did the textbook group (P < 0.0001).

  18. Applying a CAD-generated imaging marker to assess short-term breast cancer risk

    NASA Astrophysics Data System (ADS)

    Mirniaharikandehei, Seyedehnafiseh; Zarafshani, Ali; Heidari, Morteza; Wang, Yunzhi; Aghaei, Faranak; Zheng, Bin

    2018-02-01

    Although whether using computer-aided detection (CAD) helps improve radiologists' performance in reading and interpreting mammograms remains controversial due to higher false-positive detection rates, the objective of this study is to investigate and test a new hypothesis: that CAD-generated false-positives, in particular the bilateral summation of false-positives, are a potential imaging marker associated with short-term breast cancer risk. An image dataset involving negative screening mammograms acquired from 1,044 women was retrospectively assembled. Each case involves 4 images: the craniocaudal (CC) and mediolateral oblique (MLO) views of the left and right breasts. In the next subsequent mammography screening, 402 cases were positive for cancer and 642 remained negative. A CAD scheme was applied to process all "prior" negative mammograms. Several features were extracted from the CAD output, including detection seeds, the total number of false-positive regions, the average detection score, and the sum of detection scores in the CC and MLO view images. The features computed from the two bilateral images of the left and right breasts, from either the CC or MLO view, were then combined. In order to predict the likelihood of each testing case being positive in the next subsequent screening, two logistic regression models were trained and tested using a leave-one-case-out cross-validation method. Data analysis demonstrated a maximum prediction accuracy with an area under the ROC curve of AUC = 0.65 +/- 0.017 and a maximum adjusted odds ratio of 4.49 with a 95% confidence interval of [2.95, 6.83]. The results also illustrated an increasing trend in the adjusted odds ratio and risk prediction scores (p < 0.01). Thus, the study showed that CAD-generated false-positives might provide a new quantitative imaging marker to help assess short-term breast cancer risk.

  19. Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2005-01-01

    A source term model for steady micro jets was implemented in a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparing with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well compared with the two-dimensional plate computation, which used a steady mass flow boundary condition to simulate the steady micro jet. The model was also compared to two three-dimensional flat plate cases using a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet. The case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of velocity distribution were made before and after the jet, and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or several steady micro jets. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
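
    The principle, adding the jet's mass flow and momentum to the discretized governing equations of the cells hosting the orifice, can be sketched compactly. Everything below (cell volume, jet parameters, array layout) is an assumed toy setup, not OVERFLOW's implementation:

    ```python
    import numpy as np

    mdot = 1.0e-4                          # jet mass flow [kg/s] (assumed)
    v_jet = np.array([0.0, 50.0, 0.0])     # jet exit velocity [m/s] (assumed)
    cell_volume = 1.0e-6                   # volume of the host cell [m^3] (assumed)

    def add_jet_sources(rhs_mass, rhs_momentum, cell_index):
        # Augment the right-hand sides of the discretized continuity and
        # momentum equations with the jet's per-unit-volume contributions.
        rhs_mass[cell_index] += mdot / cell_volume
        rhs_momentum[cell_index] += mdot * v_jet / cell_volume

    rhs_mass = np.zeros(10)                # toy 10-cell residual arrays
    rhs_momentum = np.zeros((10, 3))
    add_jet_sources(rhs_mass, rhs_momentum, cell_index=4)
    print(rhs_mass[4], rhs_momentum[4])
    ```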

  20. Wind-US Results for the AIAA 2nd Propulsion Aerodynamics Workshop

    NASA Technical Reports Server (NTRS)

    Dippold, Vance III; Foster, Lancert; Mankbadi, Mina

    2014-01-01

    This presentation contains Wind-US results presented at the 2nd Propulsion Aerodynamics Workshop. The workshop was organized by the American Institute of Aeronautics and Astronautics, Air Breathing Propulsion Systems Integration Technical Committee with the purpose of assessing the accuracy of computational fluid dynamics for air breathing propulsion applications. Attendees included representatives from government, industry, academia, and commercial software companies. Participants were encouraged to explore and discuss all aspects of the simulation process including the effects of mesh type and refinement, solver numerical schemes, and turbulence modeling. The first set of challenge cases involved computing the thrust and discharge coefficients for a 25deg conical nozzle for a range of nozzle pressure ratios between 1.4 and 7.0. Participants were also asked to simulate two cases in which the 25deg conical nozzle was bifurcated by a solid plate, resulting in vortex shedding (NPR=1.6) and shifted plume shock (NPR=4.0). A second set of nozzle cases involved computing the discharge and thrust coefficients for a convergent dual stream nozzle for a range of subsonic nozzle pressure ratios. The workshop committee also compared the plume mixing of these cases across various codes and models. The final test case was a serpentine inlet diffuser with an outlet to inlet area ratio of 1.52 and an offset of 1.34 times the inlet diameter. Boundary layer profiles, wall static pressure, and total pressure at downstream rake locations were examined.

  1. Root-like enamel pearl: a case report

    PubMed Central

    2014-01-01

    Introduction In general, enamel pearls are found in maxillary molars as a small globule of enamel. However, this case report describes an enamel pearl with a prolate spheroid shape which is 1.8mm wide and 8mm long. The different type of enamel pearl found in my clinic has significantly improved our understanding of enamel pearl etiology and pathophysiology. Case presentation A 42-year-old Han Chinese woman with severe toothache received treatment in my Department of Endodontics. She had no significant past medical history. A dental examination revealed extensive distal decay in her left mandibular first molar, tenderness to percussion and palpation of the periradicular zone, and found a deep periodontal pocket on the buccal lateral. Vitality testing was negative. Periapical radiographic images revealed radiolucency around the mesial apex. Cone beam computed tomography detected an opaque enamel pearl in the furcation area with a prolate spheroid shape of 1.8mm wide and 8mm long. Conclusion The enamel pearl described in this case report is like a very long dental root. Cone beam computed tomography may be used for evaluating enamel pearls. PMID:25008098

  2. Zig-zag tape influence in NREL Phase VI wind turbine

    NASA Astrophysics Data System (ADS)

    Gomez-Iradi, Sugoi; Munduate, Xabier

    2014-06-01

    A two-bladed, 10-metre-diameter wind turbine was tested in the 24.4 m × 36.6 m NASA-Ames wind tunnel (Phase VI). These experiments have been used extensively for validation of CFD and other engineering tools. The free-transition case (S) has been, and remains, the most employed for validation purposes; it consists of a 3° pitch case at a rotational speed of 72 rpm in upwind configuration, with and without yaw misalignment. However, there is another, less visited case (M) in which an identical configuration was tested but with the inclusion of a zig-zag tape; this was called the transition-fixed sequence. This paper shows the differences between the free-transition and transition-fixed cases, the latter being more appropriate for comparison with fully turbulent simulations. Steady k-ω SST fully turbulent computations performed with the WMB CFD method are compared with the experiments, showing better predictions in the attached-flow region when compared with the transition-fixed experiments. This work aims to prove the utility of the M case (transition fixed) and to show its differences with respect to the S case (free transition) for validation purposes.

  3. Development of a thermal and structural analysis procedure for cooled radial turbines

    NASA Technical Reports Server (NTRS)

    Kumar, Ganesh N.; Deanna, Russell G.

    1988-01-01

    A procedure for computing the rotor temperature and stress distributions in a cooled radial turbine is considered. Existing codes for modeling the external mainstream flow and the internal cooling flow are used to compute boundary conditions for the heat transfer and stress analyses. An inviscid, quasi-three-dimensional code computes the external free-stream velocity. The external velocity is then used in a boundary layer analysis to compute the external heat transfer coefficients. Coolant temperatures are computed by a viscous one-dimensional internal flow code solving the momentum and energy equations. These boundary conditions are input to a three-dimensional heat conduction code for calculation of rotor temperatures. The rotor stress distribution may be determined for the given thermal, pressure, and centrifugal loading. The procedure is applied to a cooled radial turbine which will be tested at the NASA Lewis Research Center. Representative results from this case are included.

  4. gpuSPHASE-A shared memory caching implementation for 2D SPH using CUDA

    NASA Astrophysics Data System (ADS)

    Winkler, Daniel; Meister, Michael; Rezavand, Massoud; Rauch, Wolfgang

    2017-04-01

    Smoothed particle hydrodynamics (SPH) is a meshless Lagrangian method that has been successfully applied to computational fluid dynamics (CFD), solid mechanics, and many other multi-physics problems. Using the method to solve transport phenomena in process engineering requires the simulation of several days to weeks of physical time. Given the high computational demand of CFD, such simulations in 3D would need a computation time of years, so that a reduction to a 2D domain is inevitable. In this paper gpuSPHASE, a new open-source 2D SPH solver implementation for graphics devices, is developed. It is optimized for simulations that must be executed with thousands of frames per second to be computed in reasonable time. A novel caching algorithm for Compute Unified Device Architecture (CUDA) shared memory is proposed and implemented. The software is validated and its performance is evaluated on the well-established dam-break test case.
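
    A feel for the per-pair work such a solver performs (and caches) comes from the standard 2D cubic spline kernel and the SPH density summation. A minimal NumPy sketch; the kernel is standard, while the particle data are made up:

    ```python
    import numpy as np

    def cubic_spline_W(r, h):
        # Standard 2D cubic spline SPH kernel with support radius 2h.
        sigma = 10.0 / (7.0 * np.pi * h**2)   # 2D normalization constant
        q = np.asarray(r, dtype=float) / h
        w = np.zeros_like(q)
        inner = q <= 1.0
        outer = (q > 1.0) & (q <= 2.0)
        w[inner] = 1.0 - 1.5 * q[inner]**2 + 0.75 * q[inner]**3
        w[outer] = 0.25 * (2.0 - q[outer])**3
        return sigma * w

    # SPH density summation: rho_i = sum_j m_j * W(|x_i - x_j|, h)
    h, m = 0.02, 0.25
    x = np.array([[0.0, 0.0], [0.015, 0.0], [0.0, 0.025], [0.03, 0.03]])
    r = np.linalg.norm(x - x[0], axis=1)
    print(f"density at particle 0: {np.sum(m * cubic_spline_W(r, h)):.2f}")
    ```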

  5. Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization

    NASA Technical Reports Server (NTRS)

    Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.

    2014-01-01

    Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial Design of Experiments (DoE) is selected appropriately.
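
    The Expected Improvement acquisition at the heart of EGO has a closed form: for a Gaussian surrogate prediction with mean mu and standard deviation sigma, EI = (mu - y_best) * Phi(z) + sigma * phi(z), where z = (mu - y_best) / sigma. A minimal sketch with hypothetical surrogate outputs, not the SAGE III thermal model:

    ```python
    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, y_best, maximize=True):
        # EI of a Gaussian prediction (mu, sigma) over the incumbent y_best.
        imp = (mu - y_best) if maximize else (y_best - mu)
        s = np.maximum(sigma, 1e-12)       # guard against sigma == 0
        z = imp / s
        ei = imp * norm.cdf(z) + s * norm.pdf(z)
        return np.where(sigma > 0.0, ei, np.maximum(imp, 0.0))

    # Hypothetical surrogate predictions of peak temperature [K] at candidate orbits
    mu = np.array([310.0, 325.0, 318.0])
    sigma = np.array([2.0, 8.0, 0.0])
    print(expected_improvement(mu, sigma, y_best=320.0))  # run the max-EI orbit next
    ```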

  6. Exploring a new bilateral focal density asymmetry based image marker to predict breast cancer risk

    NASA Astrophysics Data System (ADS)

    Aghaei, Faranak; Mirniaharikandehei, Seyedehnafiseh; Hollingsworth, Alan B.; Wang, Yunzhi; Qiu, Yuchen; Liu, Hong; Zheng, Bin

    2017-03-01

    Although breast density has been widely considered an important breast cancer risk factor, it is not very effective for predicting the risk of developing breast cancer in the short term or of harboring cancer in mammograms. Building on our recent studies of short-term breast cancer risk stratification models based on bilateral mammographic density asymmetry, we in this study explored a new quantitative image marker based on bilateral focal density asymmetry to predict the risk of harboring cancers in mammograms. For this purpose, we assembled a testing dataset involving 100 positive and 100 negative cases. In each positive case, no solid masses are visible on the mammograms. We developed a computer-aided detection (CAD) scheme to automatically detect focal dense regions depicted on the two bilateral mammograms of the left and right breasts. CAD selects the focal dense region with the maximum size on each image and computes its asymmetry ratio. We used this focal density asymmetry as a new imaging marker to divide the testing cases into two groups of higher and lower focal density asymmetry. The first group included 70 cases of which 62.9% are positive, while the second group included 130 cases of which 43.1% are positive. The odds ratio is 2.24. As a result, this preliminary study supported the feasibility of applying a new focal density asymmetry based imaging marker to predict the risk of having mammography-occult cancers. The goal is to assist radiologists in more effectively and accurately detecting early subtle cancers using mammography and/or other adjunctive imaging modalities in the future.
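
    As a quick arithmetic check, the quoted odds ratio follows directly from the stated group sizes and positive fractions (assuming rounded counts):

    ```python
    # Reproduce the quoted odds ratio from the stated group compositions.
    n1, frac1 = 70, 0.629    # higher focal-density-asymmetry group
    n2, frac2 = 130, 0.431   # lower focal-density-asymmetry group

    pos1 = round(n1 * frac1); neg1 = n1 - pos1   # 44 positive, 26 negative
    pos2 = round(n2 * frac2); neg2 = n2 - pos2   # 56 positive, 74 negative

    odds_ratio = (pos1 / neg1) / (pos2 / neg2)
    print(f"odds ratio: {odds_ratio:.2f}")       # ~2.24, matching the abstract
    ```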

  7. Broadband Fan Noise Prediction System for Turbofan Engines. Volume 3; Validation and Test Cases

    NASA Technical Reports Server (NTRS)

    Morin, Bruce L.

    2010-01-01

    Pratt & Whitney has developed a Broadband Fan Noise Prediction System (BFaNS) for turbofan engines. This system computes the noise generated by turbulence impinging on the leading edges of the fan and fan exit guide vane, and noise generated by boundary-layer turbulence passing over the fan trailing edge. BFaNS has been validated on three fan rigs that were tested during the NASA Advanced Subsonic Technology Program (AST). The predicted noise spectra agreed well with measured data. The predicted effects of fan speed, vane count, and vane sweep also agreed well with measurements. The noise prediction system consists of two computer programs: Setup_BFaNS and BFaNS. Setup_BFaNS converts user-specified geometry and flow-field information into a BFaNS input file. From this input file, BFaNS computes the inlet and aft broadband sound power spectra generated by the fan and FEGV. The output file from BFaNS contains the inlet, aft and total sound power spectra from each noise source. This report is the third volume of a three-volume set documenting the Broadband Fan Noise Prediction System: Volume 1: Setup_BFaNS User's Manual and Developer's Guide; Volume 2: BFaNS User's Manual and Developer's Guide; and Volume 3: Validation and Test Cases. The present volume begins with an overview of the Broadband Fan Noise Prediction System, followed by validation studies that were done on three fan rigs. It concludes with recommended improvements and additional studies for BFaNS.

  8. Risk-Based, Hypothesis-Driven Framework for Hydrological Field Campaigns with Case Studies

    NASA Astrophysics Data System (ADS)

    Harken, B.; Rubin, Y.

    2014-12-01

    There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration or plume travel time. These predictions often have significant bearing on a decision that must be made. Examples include: how to allocate limited remediation resources between contaminated groundwater sites or where to place a waste repository site. Answering such questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in EPM predictions stems from uncertainty in model parameters, which can be reduced by measurements taken in field campaigns. The costly nature of field measurements motivates a rational basis for determining a measurement strategy that is optimal with respect to the uncertainty in the EPM prediction. The tool of hypothesis testing allows this uncertainty to be quantified by computing the significance of the test resulting from a proposed field campaign. The significance of the test gives a rational basis for determining the optimality of a proposed field campaign. This hypothesis testing framework is demonstrated and discussed using various synthetic case studies. This study involves contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a specified location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical amount of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. The optimality of different field campaigns is assessed by computing the significance of the test resulting from each one. Evaluating the level of significance caused by a field campaign involves steps including likelihood-based inverse modeling and semi-analytical conditional particle tracking.

  9. Improving performance of breast cancer risk prediction using a new CAD-based region segmentation scheme

    NASA Astrophysics Data System (ADS)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Qiu, Yuchen; Zheng, Bin

    2018-02-01

    The objective of this study is to develop and test a new computer-aided detection (CAD) scheme with improved region of interest (ROI) segmentation, combined with an image feature extraction framework, to improve performance in predicting short-term breast cancer risk. A dataset involving 570 sets of "prior" negative mammography screening cases was retrospectively assembled. In the next sequential "current" screening, 285 cases were positive and 285 cases remained negative. A CAD scheme was applied to all 570 "prior" negative images to stratify cases into high- and low-risk groups for having cancer detected in the "current" screening. First, a new ROI segmentation algorithm was used to automatically remove useless areas of the mammograms. Second, from the matched bilateral craniocaudal view images, a set of 43 image features related to the frequency characteristics of the ROIs was computed from the discrete cosine transform and the spatial domain of the images. Third, a support vector machine (SVM) classifier was used to optimally classify the selected image features and build a CAD-based risk prediction model. The classifier was trained using a leave-one-case-out cross-validation method. Applying this improved CAD scheme to the testing dataset yielded an area under the ROC curve of AUC = 0.70 ± 0.04, significantly higher than extracting features directly from the dataset without the improved ROI segmentation step (AUC = 0.63 ± 0.04). This study demonstrated that the proposed approach could improve accuracy in predicting short-term breast cancer risk, which may play an important role in helping eventually establish an optimal personalized breast cancer screening paradigm.
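
    The evaluation protocol described (an SVM trained with leave-one-case-out cross-validation and scored by AUC) can be sketched in a few lines of Python with scikit-learn; the random matrix below merely stands in for the 43 ROI features, so the printed AUC is a placeholder rather than a reproduction of the study's result.

      import numpy as np
      from sklearn.model_selection import LeaveOneOut
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(570, 43))     # placeholder for the 43 image features per case
      y = np.repeat([0, 1], 285)         # 285 "current"-negative and 285 "current"-positive cases

      scores = np.empty(len(y))
      for train, test in LeaveOneOut().split(X):   # 570 fits: expensive but faithful to the protocol
          clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
          clf.fit(X[train], y[train])
          scores[test] = clf.predict_proba(X[test])[:, 1]

      print(f"AUC = {roc_auc_score(y, scores):.2f}")   # ~0.5 here, since the features are random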

  10. Study on the effects of flow in the volute casing on the performance of a sirocco fan

    NASA Astrophysics Data System (ADS)

    Adachi, Tsutomu; Sugita, Naohiro; Ohomori, Satoshi

    2004-08-01

    The flow at the exit from the runner blade of a centrifugal fan with forward-curved blades (a sirocco fan) sometimes separates and becomes unstable. We have conducted many studies on the impeller shape of a sirocco fan, in which proper inlet and exit blade angles were considered to obtain optimum performance. In this paper, the casing shape was varied by changing the circumferential angle, the magnifying angle, and the width; 21 kinds of casings were used. Performance tests were conducted, and the inner flow velocity and pressure distributions were measured as well. Computational fluid dynamics calculations were also made and compared with the experimental results. Finally, the most suitable casing shape for best performance is considered.

  11. Automatic computation of the travelling wave solutions to nonlinear PDEs

    NASA Astrophysics Data System (ADS)

    Liang, Songxin; Jeffrey, David J.

    2008-05-01

    Various extensions of the tanh-function method and their implementations for finding explicit travelling wave solutions to nonlinear partial differential equations (PDEs) have been reported in the literature. However, some solutions are often missed by these packages. In this paper, a new algorithm and its implementation called TWS for solving single nonlinear PDEs are presented. TWS is implemented in MAPLE 10. It turns out that, for PDEs whose balancing numbers are not positive integers, TWS works much better than existing packages. Furthermore, TWS obtains more solutions than existing packages for most cases.
    Program summary
    Program title: TWS
    Catalogue identifier: AEAM_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAM_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 1250
    No. of bytes in distributed program, including test data, etc.: 78 101
    Distribution format: tar.gz
    Programming language: Maple 10
    Computer: A laptop with 1.6 GHz Pentium CPU
    Operating system: Windows XP Professional
    RAM: 760 Mbytes
    Classification: 5
    Nature of problem: Finding the travelling wave solutions to single nonlinear PDEs.
    Solution method: Based on the tanh-function method.
    Restrictions: The current version of this package can only deal with single autonomous PDEs or ODEs, not systems of PDEs or ODEs. However, the PDEs can have any finite number of independent space variables in addition to time t.
    Unusual features: For PDEs whose balancing numbers are not positive integers, TWS works much better than existing packages. Furthermore, TWS obtains more solutions than existing packages for most cases.
    Additional comments: It is easy to use.
    Running time: Less than 20 seconds for most cases, between 20 and 100 seconds for some cases, over 100 seconds for a few cases.
    References:
    [1] E.S. Cheb-Terrab, K. von Bulow, Comput. Phys. Comm. 90 (1995) 102.
    [2] S.A. Elwakil, S.K. El-Labany, M.A. Zahran, R. Sabry, Phys. Lett. A 299 (2002) 179.
    [3] E. Fan, Phys. Lett. 277 (2000) 212.
    [4] W. Malfliet, Amer. J. Phys. 60 (1992) 650.
    [5] W. Malfliet, W. Hereman, Phys. Scripta 54 (1996) 563.
    [6] E.J. Parkes, B.R. Duffy, Comput. Phys. Comm. 98 (1996) 288.
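
    To make the tanh-function idea concrete, here is a minimal Python/SymPy sketch (TWS itself is a Maple package; the equation choice and names below are ours) applying the ansatz U = a0 + a1*tanh(xi) to Burgers' equation:

      import sympy as sp

      xi, T = sp.symbols('xi T')
      a0, a1, c, nu = sp.symbols('a0 a1 c nu')

      # Burgers: u_t + u*u_x = nu*u_xx; with u(x, t) = U(xi), xi = x - c*t,
      # it reduces to the ODE  -c*U' + U*U' - nu*U'' = 0
      U = a0 + a1 * sp.tanh(xi)
      ode = -c * sp.diff(U, xi) + U * sp.diff(U, xi) - nu * sp.diff(U, xi, 2)

      # Since d(tanh)/dxi = 1 - tanh**2, the ODE is polynomial in T = tanh(xi);
      # setting each coefficient to zero gives the algebraic system for a0, a1
      poly = sp.Poly(sp.expand(ode).subs(sp.tanh(xi), T), T)
      print(sp.solve(poly.coeffs(), [a0, a1], dict=True))
      # nontrivial solution: a0 = c, a1 = -2*nu, i.e. u = c - 2*nu*tanh(x - c*t)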

  12. Patients' acceptance of Internet-based home asthma telemonitoring.

    PubMed

    Finkelstein, J; Hripcsak, G; Cabrera, M R

    1998-01-01

    We studied asthma patients from a low-income inner-city community without previous computer experience. The patients were given portable spirometers to perform spirometry tests and palmtop computers to enter symptoms in a diary, to exchange messages with their physician, and to review test results. The self-testing was performed at home on a daily basis. The results were transmitted to the hospital information system immediately after completion of each test. Physicians could review the results using an Internet Web browser from any location. A constantly active decision support server monitored all data traffic and dispatched alerts when certain clinical conditions were met. Seventeen of the 19 patients invited agreed to participate in the study and were monitored for three weeks; they were then surveyed using a standardized questionnaire. Most of the patients (82.4%) characterized self-testing procedures as "not complicated at all." In 70.6% of cases self-testing did not interfere with usual activities, and 82.4% of patients felt the self-testing required a "very little" amount of their time. All patients stated that it is important for them to know that the results can be reviewed by professional staff in a timely manner. However, only 29.5% of patients reviewed their results at least once a week at home independently. The majority of the patients (94.1%) were strongly interested in using home asthma telemonitoring in the future. We concluded that Internet-based home asthma telemonitoring can be successfully implemented in a group of patients without previous computer background.

  13. A uniform object-oriented solution to the eigenvalue problem for real symmetric and Hermitian matrices

    NASA Astrophysics Data System (ADS)

    Castro, María Eugenia; Díaz, Javier; Muñoz-Caro, Camelia; Niño, Alfonso

    2011-09-01

    We present a system of classes, SHMatrix, to deal in a unified way with the computation of eigenvalues and eigenvectors of real symmetric and Hermitian matrices. Two descendant classes, one for the real symmetric case and the other for the Hermitian case, override the abstract methods defined in a base class. The use of the inheritance relationship and polymorphism allows handling objects of any descendant class using a single reference of the base class. The system of classes is intended to be the core element of more sophisticated methods to deal with large eigenvalue problems, such as those arising in the variational treatment of realistic quantum mechanical problems. The present system of classes allows computing a subset of all the possible eigenvalues and, optionally, the corresponding eigenvectors. Comparison with well-established solutions for analogous eigenvalue problems, such as those included in LAPACK, shows that the present solution is competitive with them.
    Program summary
    Program title: SHMatrix
    Catalogue identifier: AEHZ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHZ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 2616
    No. of bytes in distributed program, including test data, etc.: 127 312
    Distribution format: tar.gz
    Programming language: Standard ANSI C++.
    Computer: PCs and workstations.
    Operating system: Linux, Windows.
    Classification: 4.8.
    Nature of problem: The treatment of problems involving eigensystems is a central topic in the quantum mechanical field. Here, the use of the variational approach leads to the computation of eigenvalues and eigenvectors of real symmetric and Hermitian Hamiltonian matrices. Realistic models with several degrees of freedom lead to large (sometimes very large) matrices. Different techniques, such as divide and conquer, can be used to factorize the matrices in order to apply a parallel computing approach. However, it is still interesting to have a core procedure able to tackle the computation of eigenvalues and eigenvectors once the matrix has been factorized into pieces of sufficiently small size. Several available software packages, such as LAPACK, tackle this problem under the traditional imperative programming paradigm. In order to ease the modelling of complex quantum mechanical models it is interesting to apply an object-oriented approach to the treatment of the eigenproblem. This approach offers the advantage of a single, uniform treatment for the real symmetric and Hermitian cases.
    Solution method: To reach the above goals, we have developed a system of classes: SHMatrix. SHMatrix is composed of an abstract base class and two descendant classes, one for real symmetric matrices and the other for the Hermitian case. The object-oriented characteristics of inheritance and polymorphism allow handling both cases using a single reference of the base class. The basic computing strategy applied in SHMatrix allows computing subsets of eigenvalues and (optionally) eigenvectors. The tests performed show that SHMatrix is competitive with, and for large matrices more efficient than, the equivalent routines of the LAPACK package.
    Running time: The examples included in the distribution take only a couple of seconds to run.
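
    SHMatrix itself is ANSI C++, but its design (an abstract base class, two descendants, and polymorphic use through a base-class reference) can be sketched in Python; the scipy call computing an eigenvalue subset is our stand-in for the package's internal solver.

      import numpy as np
      from abc import ABC, abstractmethod
      from scipy.linalg import eigh

      class SHMatrixBase(ABC):
          """Uniform interface for real symmetric and Hermitian eigenproblems."""
          def __init__(self, a):
              self.a = np.asarray(a)
              self.check()

          @abstractmethod
          def check(self):
              """Verify the matrix has the structure the subclass assumes."""

          def eigen(self, lo, hi, vectors=False):
              # Eigenvalues with indices lo..hi (ascending), optionally with vectors
              return eigh(self.a, subset_by_index=(lo, hi), eigvals_only=not vectors)

      class RealSymmetric(SHMatrixBase):
          def check(self):
              assert np.isrealobj(self.a) and np.allclose(self.a, self.a.T)

      class Hermitian(SHMatrixBase):
          def check(self):
              assert np.allclose(self.a, self.a.conj().T)

      # Both cases handled through a single base-class reference
      for m in (RealSymmetric(np.diag([3.0, 1.0, 2.0])),
                Hermitian(np.array([[2.0, 1j], [-1j, 2.0]]))):
          print(m.eigen(0, 1))          # the two lowest eigenvalues of each matrix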

  14. A transient laboratory method for determining the hydraulic properties of 'tight' rocks-I. Theory

    USGS Publications Warehouse

    Hsieh, P.A.; Tracy, J.V.; Neuzil, C.E.; Bredehoeft, J.D.; Silliman, Stephen E.

    1981-01-01

    Transient pulse testing has been employed increasingly in the laboratory to measure the hydraulic properties of rock samples with low permeability. Several investigators have proposed a mathematical model in terms of an initial-boundary value problem to describe fluid flow in a transient pulse test. However, the solution of this problem has not been available. In analyzing data from the transient pulse test, previous investigators have either employed analytical solutions that are derived with the use of additional, restrictive assumptions, or have resorted to numerical methods. In Part I of this paper, a general, analytical solution for the transient pulse test is presented. This solution is graphically illustrated by plots of dimensionless variables for several cases of interest. The solution is shown to contain, as limiting cases, the more restrictive analytical solutions that the previous investigators have derived. A method of computing both the permeability and specific storage of the test sample from experimental data will be presented in Part II. © 1981.

  15. SU-E-T-175: Clinical Evaluations of Monte Carlo-Based Inverse Treatment Plan Optimization for Intensity Modulated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chi, Y; Li, Y; Tian, Z

    2015-06-15

    Purpose: Pencil-beam or superposition-convolution type dose calculation algorithms are routinely used in inverse plan optimization for intensity modulated radiation therapy (IMRT). However, due to their limited accuracy in some challenging cases, e.g. lung, the resulting dose may lose its optimality after being recomputed using an accurate algorithm, e.g. Monte Carlo (MC). It is the objective of this study to evaluate the feasibility and advantages of a new method to include MC in the treatment planning process. Methods: We developed a scheme to iteratively perform MC-based beamlet dose calculations and plan optimization. In the MC stage, a GPU-based dose engine was used and the particle number sampled from a beamlet was proportional to its optimized fluence from the previous step. We tested this scheme in four lung cancer IMRT cases. For each case, the original plan dose, the plan dose re-computed by MC, and the dose optimized by our scheme were obtained. Clinically relevant dosimetric quantities in these three plans were compared. Results: Although the original plan achieved a satisfactory PTV dose coverage, after re-computing doses using the MC method it was found that the PTV D95% was reduced by 4.60%–6.67%. After re-optimizing these cases with our scheme, the PTV coverage was improved to the same level as in the original plan, while the critical OAR coverages were maintained at clinically acceptable levels. Regarding the computation time, it took on average 144 sec per case using only one GPU card, including both MC-based beamlet dose calculation and treatment plan optimization. Conclusion: The achieved dosimetric gains and high computational efficiency indicate the feasibility and advantages of the proposed MC-based IMRT optimization method. Comprehensive validations in more patient cases are in progress.
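
    The alternation described (MC beamlet doses whose statistical noise shrinks where the current fluence, and hence the sampled particle count, is large, followed by fluence re-optimization) can be caricatured in Python with a toy quadratic objective; the stand-in dose kernels and every constant below are invented.

      import numpy as np

      rng = np.random.default_rng(1)
      n_beamlets, n_voxels = 40, 200
      d_true = rng.random((n_voxels, n_beamlets))    # stand-in beamlet dose kernels
      d_presc = np.full(n_voxels, 1.0)               # prescribed voxel doses

      w = np.ones(n_beamlets)                        # beamlet fluence weights
      for outer in range(5):
          # "MC" stage: noisy kernels; error shrinks for beamlets given more particles,
          # i.e. those with larger current fluence (particle count proportional to w)
          noise = rng.normal(0.0, 1.0, (n_voxels, n_beamlets)) / np.sqrt(1.0 + 1e3 * w)
          D = d_true + 0.05 * noise
          # Optimization stage: projected-gradient least squares with w >= 0
          for _ in range(200):
              grad = D.T @ (D @ w - d_presc) / n_voxels
              w = np.maximum(w - 1e-3 * grad, 0.0)
          print(outer, np.linalg.norm(D @ w - d_presc))   # residual shrinks across outer loops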

  16. Automating usability of ATLAS Distributed Computing resources

    NASA Astrophysics Data System (ADS)

    Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration

    2014-06-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the task of providing global monitoring and that of performing automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage-area monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows the status of storage resources to be monitored with fine time granularity and automatic actions to be taken in foreseen cases, such as automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up problems, where and when needed. In this work we show SAAB's working principles and features. We also present the decrease in human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.

  17. Is it Code Imperfection or 'garbage in Garbage Out'? Outline of Experiences from a Comprehensive Adr Code Verification

    NASA Astrophysics Data System (ADS)

    Zamani, K.; Bombardelli, F. A.

    2013-12-01

    The advection-dispersion-reaction (ADR) equation describes many physical phenomena of interest in the field of water quality in natural streams and groundwater. In many cases, such as density-driven flow, multiphase reactive transport, and sediment transport, one or more terms in the ADR equation may become nonlinear. For that reason, numerical tools are the only practical choice for solving these PDEs. All numerical solvers developed for the transport equation need to undergo a code verification procedure before they are put into practice. Code verification is a mathematical activity to uncover failures and check for rigorous discretization of PDEs and implementation of initial/boundary conditions. In the computational PDE context, verification is not a well-defined procedure with a clear path. Thus, verification tests should be designed and implemented with in-depth knowledge of the numerical algorithms and the physics of the phenomena, as well as the mathematical behavior of the solution. Even test results need to be mathematically analyzed to distinguish between an inherent limitation of the algorithm and a coding error. Therefore, it is well known that code verification remains a state-of-the-art activity, in which innovative methods and case-based tricks are very common. This study presents full verification of a general transport code. To that end, a complete test suite was designed to probe the ADR solver comprehensively and discover all possible imperfections, and we convey our experiences in finding several errors which were not detectable with routine verification techniques. We developed a test suite including hundreds of unit tests and system tests. The test package increases gradually in complexity, starting from simple tests and building up to the most sophisticated level. Appropriate verification metrics are defined for the required capabilities of the solver, as follows: mass conservation, convergence order, capability in handling stiff problems, nonnegative concentration, shape preservation, and absence of spurious wiggles. Thereby, we provide objective, quantitative values as opposed to subjective qualitative descriptions such as 'weak' or 'satisfactory' agreement with those metrics. We start testing from a simple case of unidirectional advection, then bidirectional advection and tidal flow, and build up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity and reactions. For all of the mentioned cases we conduct mesh convergence tests. These tests compare the results' observed order of accuracy against the formal order of accuracy of the discretization. The concealing effect of scales (Peclet and Damkohler numbers) on the mesh convergence study and appropriate remedies are also discussed. For the cases in which appropriate benchmarks for a mesh convergence study are not available, we utilize symmetry, complete Richardson extrapolation, and the method of false injection to uncover bugs. Detailed discussions of the capabilities of the mentioned code verification techniques are given. Auxiliary subroutines for automation of the test suite and report generation were designed. All in all, the test package is not only a robust tool for code verification but also provides comprehensive insight into the ADR solver's capabilities. Such information is essential for any rigorous computational modeling of the ADR equation for surface/subsurface pollution transport.
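
    The mesh-convergence check at the heart of such a suite reduces to comparing the observed order of accuracy against the formal order; a minimal Python version (the error norms are illustrative, not taken from the study):

      import numpy as np

      def observed_order(err_coarse, err_fine, refinement=2.0):
          """Observed convergence order from error norms on two successively refined grids."""
          return np.log(err_coarse / err_fine) / np.log(refinement)

      # Illustrative L2 error norms from grids h, h/2, h/4 for a nominally 2nd-order solver
      errors = [4.1e-3, 1.05e-3, 2.7e-4]
      for e_coarse, e_fine in zip(errors, errors[1:]):
          print(f"observed order = {observed_order(e_coarse, e_fine):.2f}")   # should approach 2

      # When no exact solution exists, Richardson extrapolation with known order p
      # supplies the benchmark:  f_exact ~ f_fine + (f_fine - f_coarse) / (2**p - 1)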

  18. Computational materials design of crystalline solids.

    PubMed

    Butler, Keith T; Frost, Jarvist M; Skelton, Jonathan M; Svane, Katrine L; Walsh, Aron

    2016-11-07

    The modelling of materials properties and processes from first principles is becoming sufficiently accurate as to facilitate the design and testing of new systems in silico. Computational materials science is both valuable and increasingly necessary for developing novel functional materials and composites that meet the requirements of next-generation technology. A range of simulation techniques are being developed and applied to problems related to materials for energy generation, storage and conversion including solar cells, nuclear reactors, batteries, fuel cells, and catalytic systems. Such techniques may combine crystal-structure prediction (global optimisation), data mining (materials informatics) and high-throughput screening with elements of machine learning. We explore the development process associated with computational materials design, from setting the requirements and descriptors to the development and testing of new materials. As a case study, we critically review progress in the fields of thermoelectrics and photovoltaics, including the simulation of lattice thermal conductivity and the search for Pb-free hybrid halide perovskites. Finally, a number of universal chemical-design principles are advanced.

  19. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atanassov, E.; Dimitrov, D.; Gurov, T.

    2015-10-28

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.

  20. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    NASA Astrophysics Data System (ADS)

    Atanassov, E.; Dimitrov, D.; Gurov, T.

    2015-10-01

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.
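
    A skeletal Python comparison of pseudorandom against Sobol sampling for one such pricing task (a Black-Scholes European call; the market parameters are invented) might look like this; note the power-of-two sample count, which Sobol points favor.

      import numpy as np
      from scipy.stats import norm, qmc

      s0, k, r, sigma, t = 100.0, 105.0, 0.01, 0.2, 1.0   # spot, strike, rate, vol, maturity
      n = 2**14

      def discounted_payoff(u):
          z = norm.ppf(u)                                  # uniforms -> standard normals
          st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
          return np.exp(-r * t) * np.maximum(st - k, 0.0)

      rng = np.random.default_rng(0)
      mc = discounted_payoff(rng.random(n)).mean()
      qmc_est = discounted_payoff(qmc.Sobol(d=1, scramble=True, seed=0).random(n).ravel()).mean()
      print(f"pseudorandom: {mc:.4f}   Sobol: {qmc_est:.4f}   (analytic ~ 6.30)")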

  1. Application of genetic algorithm for the simultaneous identification of atmospheric pollution sources

    NASA Astrophysics Data System (ADS)

    Cantelli, A.; D'Orta, F.; Cattini, A.; Sebastianelli, F.; Cedola, L.

    2015-08-01

    A computational model is developed for retrieving the positions and the emission rates of unknown pollution sources, under steady-state conditions, starting from measurements of the concentration of the pollutants. The approach is based on the minimization of a fitness function employing a genetic algorithm paradigm. The model is tested against both pollutant concentrations generated through a Gaussian model at 25 points in a 3-D test-case domain (1000 m × 1000 m × 50 m) and experimental data, such as the Prairie Grass field experiment data, in which about 600 receptors were located along five concentric semicircle arcs, and the Fusion Field Trials 2007. The results show that the computational model is capable of efficiently retrieving up to three different unknown sources.
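
    A stripped-down Python version of the approach (tournament selection, uniform crossover, and Gaussian mutation over a single source's position and rate; the distance-based kernel is a stand-in for the Gaussian plume model, and all constants are illustrative):

      import numpy as np

      rng = np.random.default_rng(2)

      def concentrations(src, receptors):
          # Toy steady-state kernel standing in for a Gaussian plume model
          x, y, q = src
          d2 = (receptors[:, 0] - x)**2 + (receptors[:, 1] - y)**2 + 1.0
          return q / (2.0 * np.pi * d2)

      receptors = rng.uniform(0.0, 1000.0, size=(25, 2))
      c_obs = concentrations((400.0, 600.0, 50.0), receptors)   # synthetic "measurements"

      def fitness(p):                                           # negative misfit
          return -np.mean((concentrations(p, receptors) - c_obs) ** 2)

      lo, hi = np.array([0.0, 0.0, 0.0]), np.array([1000.0, 1000.0, 100.0])
      pop = rng.uniform(lo, hi, size=(200, 3))
      for gen in range(300):
          fit = np.array([fitness(p) for p in pop])
          elite = pop[np.argmax(fit)].copy()
          i, j = rng.integers(0, len(pop), size=(2, len(pop)))
          parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])   # tournament
          mask = rng.random(pop.shape) < 0.5                               # uniform crossover
          children = np.where(mask, parents, parents[rng.permutation(len(pop))])
          children += rng.normal(0.0, [5.0, 5.0, 0.5], pop.shape)          # mutation
          pop = np.clip(children, lo, hi)
          pop[0] = elite                                                   # elitism
      print("recovered source (x, y, q):", pop[0])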

  2. Optimization of knowledge-based systems and expert system building tools

    NASA Technical Reports Server (NTRS)

    Yasuda, Phyllis; Mckellar, Donald

    1993-01-01

    The objectives of the NASA-AMES Cooperative Agreement were to investigate, develop, and evaluate, via test cases, the system parameters and processing algorithms that constrain the overall performance of the Information Sciences Division's Artificial Intelligence Research Facility. Written reports covering various aspects of the work were submitted to the grant's co-investigators. Research studies concentrated on the field of artificial intelligence knowledge-based systems technology. Activities included the following areas: (1) AI training classes; (2) merging optical and digital processing; (3) science experiment remote coaching; (4) SSF data management system tests; (5) computer integrated documentation project; (6) conservation of design knowledge project; (7) project management calendar and reporting system; (8) automation and robotics technology assessment; (9) advanced computer architectures and operating systems; and (10) honors program.

  3. Is routine postoperative gastrografin study needed after laparoscopic sleeve gastrectomy? Experience of 712 cases.

    PubMed

    Wahby, M; Salama, A F; Elezaby, A F; Belgrami, F; Abd Ellatif, M E; El-Kaffas, H F; Al-Katary, M

    2013-11-01

    The current standard of care is to perform a postoperative gastrografin study following laparoscopic sleeve gastrectomy (LSG) to detect leakage or obstruction. This study evaluated the usefulness of this routine procedure. A retrospective chart review was performed in December 2012. All patients had routine intraoperative methylene blue testing to check for possible leakage from the staple line, and any leaking points were oversewn. We also performed a routine postoperative contrast (gastrografin) study in the first 24-48 h for all patients. From June 2007 to December 2012, 712 patients underwent LSG. Patients included in this study were 556 women (78.1%) and 156 men (21.9%). The mean age was 35 years. The mean BMI was 48 kg/m². The operative time was 107 ± 29 min, and there were no conversions to open surgery. The intraoperative methylene blue test detected leakage in 28 cases (3.93%). The postoperative contrast (gastrografin) study was negative for leakage in all cases. Computed tomography (CT) scan with oral contrast detected leakage in 1.4% (ten cases); none of these cases were detected by the regular contrast study. Our study showed that the intraoperative methylene blue test for leakage is a very sensitive and effective method for detecting leakage during sleeve gastrectomy and should be done routinely in all cases. A routine postoperative contrast study is not needed to detect leakage unless clinically indicated in selected cases, and in such cases contrast-enhanced CT scans are the modality of choice.

  4. Comparative analysis of economic models in selected solar energy computer programs

    NASA Astrophysics Data System (ADS)

    Powell, J. W.; Barnes, K. A.

    1982-01-01

    The economic evaluation models in five computer programs widely used for analyzing solar energy systems (F-CHART 3.0, F-CHART 4.0, SOLCOST, BLAST, and DOE-2) are compared. Differences in analysis techniques and assumptions among the programs are assessed from the point of view of consistency with the Federal requirements for life cycle costing (10 CFR Part 436), effect on predicted economic performance and optimal system size, ease of use, and general applicability to diverse system types and building types. The FEDSOL program, developed by the National Bureau of Standards specifically to meet the Federal life cycle cost requirements, serves as a basis for the comparison. Results of the study are illustrated in test cases of two different types of Federally owned buildings: a single-family residence and a low-rise office building.

  5. Computer-aided resource planning and scheduling for radiological services

    NASA Astrophysics Data System (ADS)

    Garcia, Hong-Mei C.; Yun, David Y.; Ge, Yiqun; Khan, Javed I.

    1996-05-01

    There exists tremendous opportunity in hospital-wide resource optimization based on system integration. This paper defines the resource planning and scheduling requirements integral to PACS, RIS and HIS integration. A multi-site case study was conducted to define the requirements. A well-tested planning and scheduling methodology, called the Constrained Resource Planning model, has been applied to the chosen problem of radiological service optimization. This investigation focuses on resource optimization issues for minimizing the turnaround time to increase clinical efficiency and customer satisfaction, particularly in cases where the scheduling of multiple exams is required for a patient. How best to combine information-system efficiency and human intelligence in improving radiological services is described. Finally, an architecture for interfacing a computer-aided resource planning and scheduling tool with the existing PACS, HIS and RIS implementation is presented.

  6. Parallel deterministic transport sweeps of structured and unstructured meshes with overloaded mesh decompositions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pautz, Shawn D.; Bailey, Teresa S.

    Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10^5 processor cores.

  7. On the numerical computation of nonlinear force-free magnetic fields. [from solar photosphere

    NASA Technical Reports Server (NTRS)

    Wu, S. T.; Sun, M. T.; Chang, H. M.; Hagyard, M. J.; Gary, G. A.

    1990-01-01

    An algorithm has been developed to extrapolate nonlinear force-free magnetic fields from the photosphere, given the proper boundary conditions. This paper presents the results of this work, describing the mathematical formalism that was developed, the numerical techniques employed, and comments on the stability criteria and accuracy developed for these numerical schemes. An analytical solution is used for a benchmark test; the results show that the computational accuracy for the case of a nonlinear force-free magnetic field was on the order of a few percent (less than 5 percent). This newly developed scheme was applied to analyze a solar vector magnetogram, and the results were compared with the results deduced from the classical potential field method. The comparison shows that additional physical features of the vector magnetogram were revealed in the nonlinear force-free case.

  8. Parallel deterministic transport sweeps of structured and unstructured meshes with overloaded mesh decompositions

    DOE PAGES

    Pautz, Shawn D.; Bailey, Teresa S.

    2016-11-29

    Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10^5 processor cores.

  9. A computational investigation of fuel mixing in a hypersonic scramjet

    NASA Technical Reports Server (NTRS)

    Fathauer, Brett W.; Rogers, R. C.

    1993-01-01

    A parabolized Navier-Stokes code, SHIP3D, is used to numerically investigate the mixing between air injection and hydrogen injection from a swept ramp injector configuration into either a mainstream low-enthalpy flow or a hypervelocity test flow. The mixing comparisons between air and hydrogen injection reveal the importance of matching injectant-to-mainstream mass flow ratios. In flows with the same injectant-to-mainstream dynamic pressure ratio, the mixing definition was altered for the air injection cases. Comparisons of the computed results indicate that the air injection cases overestimate the mixing performance associated with hydrogen injection simulation. A lifting length parameter, accounting for the time a fluid particle traverses the mixing region, is defined and used to relate injectant mixing in hypervelocity flows to nonreactive, low-enthalpy flows.

  10. Theoretical studies of Resonance Enhanced Stimulated Raman Scattering (RESRS) of frequency doubled Alexandrite laser wavelength in cesium vapor

    NASA Technical Reports Server (NTRS)

    Lawandy, Nabil M.

    1987-01-01

    The third phase of research will focus on the propagation and energy extraction of the pump and SERS beams in a variety of configurations, including oscillator structures. In order to address these questions, a numerical code capable of allowing for saturation and full transverse beam evolution is required. The method proposed is based on a discretized propagation/energy-extraction model which uses a Kirchhoff integral propagator coupled to the three-level Raman model already developed. The model will have the resolution required by diffraction limits and will use the previous density matrix results in the adiabatic-following limit. Owing to its large computational requirements, such a code must be implemented on a vector array processor. One code on the Cyber is being tested by using previously understood two-level laser models as guidelines for interpreting the results. Two tests were implemented: the evolution of modes in a passive resonator and the evolution of a stable state of the adiabatically eliminated laser equations. These results show mode shapes and diffraction losses for the first case and relaxation oscillations for the second. Finally, to quantify how well the computing methodology exploits the Cyber's computational speed, the time it takes to perform both of the computations previously mentioned must be measured on both the Cyber and the VAX 730. Also included is a short description of the current laser model (CAVITY.FOR) and a flow chart of the test computations.

  11. Inlet flow test calibration for a small axial compressor rig. Part 2: CFD compared with experimental results

    NASA Technical Reports Server (NTRS)

    Miller, D. P.; Prahst, P. S.

    1995-01-01

    An axial compressor test rig has been designed for the operation of small turbomachines. A flow test was run to calibrate and determine the sources and magnitudes of the loss mechanisms in the compressor inlet for a highly loaded two-stage axial compressor test. Several flow conditions and inlet guide vane (IGV) angle settings were established, for which detailed surveys were completed. Boundary layer bleed was also provided along the casing of the inlet behind the support struts and ahead of the IGV. Several computational fluid dynamics (CFD) calculations were made for selected flow conditions established during the test. Good agreement between the CFD and test data was obtained for these test conditions.

  12. Analysis of subsonic wind tunnel with variation shape rectangular and octagonal on test section

    NASA Astrophysics Data System (ADS)

    Rhakasywi, D.; Ismail; Suwandi, A.; Fadhli, A.

    2018-02-01

    Good design work in the field of aerodynamics requires a wind tunnel capable of generating laminar flow. In this research, wind tunnel models with rectangular and octagonal test-section variations were investigated, with the objective of generating laminar flow in the test section. The research method used a numerical CFD (Computational Fluid Dynamics) approach together with manual analysis of the internal flow in the test section. The CFD simulation results and the manual analysis indicate that the optimal design for generating laminar flow in the test section is one with an octagonal shape.

  13. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    PubMed

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using the graphics-processing-unit parallel framework named the "compute unified device architecture" (CUDA). A series of simulation experiments is carried out to test the accuracy and accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced, by a factor of 38.9 with a GTX 580 graphics card, using the improved method.

  14. Smart Sampling and HPC-based Probabilistic Look-ahead Contingency Analysis Implementation and its Evaluation with Real-world Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Etingov, Pavel V.; Ren, Huiying

    This paper describes a probabilistic look-ahead contingency analysis application that incorporates smart sampling and high-performance computing (HPC) techniques. Smart sampling techniques are implemented to effectively represent the structure and statistical characteristics of uncertainty introduced by different sources in the power system. They can significantly reduce the data set size required for multiple look-ahead contingency analyses, and therefore reduce the time required to compute them. HPC techniques are used to further reduce computational time. These two techniques enable a predictive capability that forecasts the impact of various uncertainties on potential transmission limit violations. The developed package has been tested with real-world data from the Bonneville Power Administration. Case study results are presented to demonstrate the performance of the applications developed.

  15. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2006-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.
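
    Schematically (our notation, not the report's): with flow residual R(Q, X) = 0, mesh movement residual G(X, D) = 0, and objective f(Q, X), two adjoint solves eliminate the per-design-variable mesh linearizations:

      \left(\frac{\partial R}{\partial Q}\right)^{T}\Lambda_Q = -\left(\frac{\partial f}{\partial Q}\right)^{T},
      \qquad
      \left(\frac{\partial G}{\partial X}\right)^{T}\Lambda_X = -\left(\frac{\partial f}{\partial X}\right)^{T} - \left(\frac{\partial R}{\partial X}\right)^{T}\Lambda_Q,
      \qquad
      \frac{df}{dD} = \Lambda_X^{T}\,\frac{\partial G}{\partial D}.

    The mesh sensitivities thus enter only through the single mesh-adjoint solve and the final matrix-vector product over the design variables, consistent with the cost scaling described above.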

  16. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2005-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.

  17. Two-layer convective heating prediction procedures and sensitivities for blunt body reentry vehicles

    NASA Technical Reports Server (NTRS)

    Bouslog, Stanley A.; An, Michael Y.; Wang, K. C.; Tam, Luen T.; Caram, Jose M.

    1993-01-01

    This paper provides a description of procedures typically used to predict convective heating rates for hypersonic reentry vehicles using the two-layer method. These procedures were used to compute the pitch-plane heating distributions on the Apollo geometry for a wind tunnel test case and for three flight cases. Both simple engineering methods and coupled inviscid/boundary-layer solutions were used to predict the heating rates. The sensitivity of the heating results to the choice of metrics, pressure distributions, boundary-layer edge conditions, and wall catalycity used in the heating analysis was evaluated. Streamline metrics, pressure distributions, and boundary-layer edge properties were defined from perfect-gas (wind tunnel case) and chemical equilibrium and nonequilibrium (flight cases) inviscid flow-field solutions. The results of this study indicated that the use of CFD-derived metrics and pressures provided better predictions of heating when compared to wind tunnel test data. The study also showed that modeling entropy-layer swallowing and ionization had little effect on the heating predictions.

  18. Transire, a Program for Generating Solid-State Interface Structures

    DTIC Science & Technology

    2017-09-14

    …function-based electron transport property calculator. Three test cases are presented to demonstrate the usage of Transire: the misorientation of the… graphene bilayer, the interface energy as a function of misorientation of copper grain boundaries, and electron transport transmission across the… gallium nitride/silicon carbide interface.
    Subject terms: crystalline interface, electron transport, python, computational chemistry, grain boundary

  19. Correcting for Indirect Range Restriction in Meta-Analysis: Testing a New Meta-Analytic Procedure

    ERIC Educational Resources Information Center

    Le, Huy; Schmidt, Frank L.

    2006-01-01

    Using computer simulation, the authors assessed the accuracy of J. E. Hunter, F. L. Schmidt, and H. Le's (2006) procedure for correcting for indirect range restriction, the most common type of range restriction, in comparison with the conventional practice of applying the Thorndike Case II correction for direct range restriction. Hunter et…

  20. "It's Harder Than We Thought It Would Be": A Comparative Case Study of Expert-Novice Experimentation Strategies.

    ERIC Educational Resources Information Center

    Hmelo-Silver, Cindy E.; Nagarajan, Anandi; Day, Roger S.

    2002-01-01

    Compares a group of expert cancer researchers with four groups of fourth year medical students (the "novice" groups) engaged in the task of designing a clinical trial to test a new cancer drug using a computer-based modeling tool, the Oncology Thinking Cap. (Contains 24 references.) (Author/YDS)

  1. TRANDESNF: A computer program for transonic airfoil design and analysis in nonuniform flow

    NASA Technical Reports Server (NTRS)

    Chang, J. F.; Lan, C. Edward

    1987-01-01

    The use of a transonic airfoil code for analysis, inverse design, and direct optimization of an airfoil immersed in a propfan slipstream is described. The theoretical method, program capabilities, input format, output variables, and program execution are summarized. Input data for sample test cases and the corresponding output are given.

  2. Aerothermal modeling program, phase 1

    NASA Technical Reports Server (NTRS)

    Srinivasan, R.; Reynolds, R.; Ball, I.; Berry, R.; Johnson, K.; Mongia, H.

    1983-01-01

    Aerothermal submodels used in analytical combustor models are analyzed. The models described include turbulence and scalar transport, gaseous full combustion, spray evaporation/combustion, soot formation and oxidation, and radiation. The computational scheme is discussed in relation to boundary conditions and convergence criteria. Also presented is the data base for benchmark quality test cases and an analysis of simple flows.

  3. Inferring the origin of populations introduced from a genetically structured native range by approximate Bayesian computation: case study of the invasive ladybird Harmonia axyridis

    USDA-ARS?s Scientific Manuscript database

    The correct identification of the source population of an invasive species is a prerequisite for defining and testing different hypotheses concerning the environmental and evolutionary factors responsible for biological invasions. The native area of invasive species may be large, barely known and/or...

  4. Description of the NASA Hypobaric Decompression Sickness Database (1982-1998)

    NASA Technical Reports Server (NTRS)

    Wessel, J. H., III; Conkin, J.

    2008-01-01

    The availability of high-speed computers, data analysis software, and internet communication are compelling reasons to describe and make available computer databases from many disciplines. Methods: Human research using hypobaric chambers to understand and then prevent decompression sickness (DCS) during space walks was conducted at the Johnson Space Center (JSC) from 1982 to 1998. The data are archived in the NASA Hypobaric Decompression Sickness Database, within an Access 2003 relational database. Results: There are 548 records from 237 individuals that participated in 31 unique tests. Each record includes physical characteristics, the denitrogenation procedure that was tested, and the outcome of the test, such as the report of a DCS symptom and the intensity of venous gas emboli (VGE) detected with an ultrasound Doppler bubble detector as they travel in the venous blood along the pulmonary artery on the way to the lungs. We documented 84 cases of DCS and 226 cases where VGE were detected. The test altitudes were 10.2, 10.1, 6.5, 6.0, and 4.3 pounds per square inch absolute (psia). 346 records are from tests conducted at 4.3 psia, the operating pressure of the current U.S. space suit. 169 records evaluate the Staged 10.2 psia Decompression Protocol used by the Space Shuttle Program. The mean exposure time at altitude was 242.3 minutes (SD = 80.6), with a range from 120 to 360 minutes. Among the test subjects, 96 exposure records are from females. The mean age of all test subjects was 31.8 years (SD = 7.17), with a range from 20 to 54 years. Discussion: These data, combined with other published databases and evaluated with meta-analysis techniques, would extend our understanding of DCS. A better understanding of the cause and prevention of DCS would benefit astronauts, aviators, and divers.

  5. Sonographic Findings in Necrotizing Fasciitis: Two Ends of the Spectrum.

    PubMed

    Shyy, William; Knight, Roneesha S; Goldstein, Ruth; Isaacs, Eric D; Teismann, Nathan A

    2016-10-01

    Necrotizing fasciitis is a rare but serious disease, and early diagnosis is essential to reducing its substantial morbidity and mortality. The 2 cases presented show that the key clinical and radiographic features of necrotizing fasciitis exist along a continuum of severity at initial presentation; thus, this diagnosis should not be prematurely ruled out in cases that do not show the dramatic features familiar to most clinicians. Although computed tomography and magnetic resonance imaging are considered the most effective imaging modalities, the cases described here illustrate how sonography should be recommended as an initial imaging test to make a rapid diagnosis and initiate therapy.

  6. Neuropsychological Function in a Case of Dandy-Walker Variant in a 68-Year-Old Veteran.

    PubMed

    Gross, Patricia L; Kays, Jill L; Shura, Robert D

    2016-01-01

    Dandy-Walker syndrome (DWS) is a congenital brain malformation that is characterized by partial or complete agenesis of the cerebellar vermis and cystic dilatation of the 4th ventricle that shifts ventrolaterally to displace the cerebellar hemispheres. This case is a 68-year-old male veteran with complaints of new-onset cognitive disorder who was found to have previously unsuspected DWS on head computed tomography. This is one of the first case studies to present complete neuropsychological test results in a veteran with DWS. Despite the level of abnormality on imaging, the veteran functioned well until onset of mild cognitive impairments in late adulthood.

  7. Improved Statistics for Genome-Wide Interaction Analysis

    PubMed Central

    Ueki, Masao; Cordell, Heather J.

    2012-01-01

    Recently, Wu and colleagues [1] proposed two novel statistics for genome-wide interaction analysis using case/control or case-only data. In computer simulations, their proposed case/control statistic outperformed competing approaches, including the fast-epistasis option in PLINK and logistic regression analysis under the correct model; however, reasons for its superior performance were not fully explored. Here we investigate the theoretical properties and performance of Wu et al.'s proposed statistics and explain why, in some circumstances, they outperform competing approaches. Unfortunately, we find minor errors in the formulae for their statistics, resulting in tests that have higher than nominal type 1 error. We also find minor errors in PLINK's fast-epistasis and case-only statistics, although theory and simulations suggest that these errors have only negligible effect on type 1 error. We propose adjusted versions of all four statistics that, both theoretically and in computer simulations, maintain correct type 1 error rates under the null hypothesis. We also investigate statistics based on correlation coefficients that maintain similar control of type 1 error. Although designed to test specifically for interaction, we show that some of these previously-proposed statistics can, in fact, be sensitive to main effects at one or both loci, particularly in the presence of linkage disequilibrium. We propose two new “joint effects” statistics that, provided the disease is rare, are sensitive only to genuine interaction effects. In computer simulations we find, in most situations considered, that highest power is achieved by analysis under the correct genetic model. Such an analysis is unachievable in practice, as we do not know this model. However, generally high power over a wide range of scenarios is exhibited by our joint effects and adjusted Wu statistics. We recommend use of these alternative or adjusted statistics and urge caution when using Wu et al.'s originally-proposed statistics, on account of the inflated error rate that can result. PMID:22496670

  8. An Overview of Recent Developments in Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Bennett, Robert M.; Edwards, John W.

    2004-01-01

    The motivation for Computational Aeroelasticity (CA) and the elements of one type of the analysis or simulation process are briefly reviewed. The need for streamlining and improving the overall process to reduce elapsed time and improve overall accuracy is discussed. Further effort is needed to establish the credibility of the methodology, obtain experience, and to incorporate the experience base to simplify the method for future use. Experience with the application of a variety of Computational Aeroelasticity programs is summarized for the transonic flutter of two wings, the AGARD 445.6 wing and a typical business jet wing. There is a compelling need for a broad range of additional flutter test cases for further comparisons. Some existing data sets that may offer CA challenges are presented.

  9. Computer Aided Enzyme Design and Catalytic Concepts

    PubMed Central

    Frushicheva, Maria P.; Mills, Matthew J. L.; Schopf, Patrick; Singh, Manoj K.; Warshel, Arieh

    2014-01-01

    Gaining a deeper understanding of enzyme catalysis is of great practical and fundamental importance. Over the years it has become clear that despite advances made in experimental mutational studies, a quantitative understanding of enzyme catalysis will not be possible without the use of computer modeling approaches. While we believe that electrostatic preorganization is by far the most important catalytic factor, convincing the wider scientific community of this may require the demonstration of effective rational enzyme design. Here we make the point that the main current advances in enzyme design are basically advances in directed evolution and that computer aided enzyme design must involve approaches that can reproduce catalysis in well-defined test cases. Such an approach is provided by the empirical valence bond method. PMID:24814389

  10. A charge- and energy-conserving implicit, electrostatic particle-in-cell algorithm on mapped computational meshes

    NASA Astrophysics Data System (ADS)

    Chacón, L.; Chen, G.; Barnes, D. C.

    2013-01-01

    We describe the extension of the recent charge- and energy-conserving one-dimensional electrostatic particle-in-cell algorithm in Ref. [G. Chen, L. Chacón, D.C. Barnes, An energy- and charge-conserving, implicit electrostatic particle-in-cell algorithm, Journal of Computational Physics 230 (2011) 7018-7036] to mapped (body-fitted) computational meshes. The approach maintains exact charge and energy conservation properties. Key to the algorithm is a hybrid push, where particle positions are updated in logical space, while velocities are updated in physical space. The effectiveness of the approach is demonstrated with a challenging numerical test case, the ion acoustic shock wave. The generalization of the approach to multiple dimensions is outlined.
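
    The hybrid push can be illustrated with a minimal one-dimensional Python sketch; unlike the paper's implicit, exactly conservative update, this uses a simple explicit leapfrog, and the map, field, and constants are invented.

      import numpy as np

      # Mapped mesh x = M(xi): a smoothly stretched map of the unit interval
      M  = lambda xi: xi + 0.1 * np.sin(2.0 * np.pi * xi)            # physical position
      Mp = lambda xi: 1.0 + 0.2 * np.pi * np.cos(2.0 * np.pi * xi)   # Jacobian dx/dxi > 0

      qm = -1.0                                # charge-to-mass ratio
      E  = lambda x: np.sin(2.0 * np.pi * x)   # placeholder field; real PIC gathers E from the grid

      rng = np.random.default_rng(3)
      xi = rng.random(10000)                   # particle positions kept in logical space
      v  = np.zeros(10000)                     # velocities kept in physical space
      dt = 1e-3
      for step in range(1000):
          v  += dt * qm * E(M(xi))             # velocity update in physical space
          xi += dt * v / Mp(xi)                # position update in logical space: dxi/dt = v/(dx/dxi)
          xi %= 1.0                            # periodic in logical coordinates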

  11. Users' manual for the Langley high speed propeller noise prediction program (DFP-ATP)

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Tarkenton, G. M.

    1989-01-01

    The use of the Dunn-Farassat-Padula Advanced Technology Propeller (DFP-ATP) noise prediction program, which computes the periodic acoustic pressure signature and spectrum generated by propellers moving with supersonic helical tip speeds, is described. The program can predict noise produced by a single-rotation propeller (SRP) or a counter-rotation propeller (CRP) system with steady or unsteady blade loading. The computational method is based on two theoretical formulations developed by Farassat. One formulation is appropriate for subsonic sources, and the other for transonic or supersonic sources. Detailed descriptions of user input, program output, and two test cases are presented, as well as brief discussions of the theoretical formulations and computational algorithms employed.

  12. Unstructured Finite Volume Computational Thermo-Fluid Dynamic Method for Multi-Disciplinary Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul

    1998-01-01

    This paper describes a finite volume computational thermo-fluid dynamics method to solve the Navier-Stokes equations in conjunction with the energy equation and a thermodynamic equation of state in an unstructured coordinate system. The system of equations has been solved by a simultaneous Newton-Raphson method and compared with several benchmark solutions. Excellent agreement has been obtained in each case, and the method has been found to be significantly faster than conventional Computational Fluid Dynamics (CFD) methods; it therefore has the potential for implementation in multi-disciplinary analysis and design optimization of fluid and thermal systems. The paper also describes a design optimization algorithm based on the Newton-Raphson method which has recently been tested in a turbomachinery application.
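
    As a generic illustration of solving the coupled equations simultaneously by Newton-Raphson (rather than sequentially), the sketch below takes full Newton steps with a finite-difference Jacobian on a toy residual; the residual is a placeholder, not the paper's discretized conservation equations.

```python
import numpy as np

def newton_solve(residual, u0, tol=1e-10, max_iter=50, eps=1e-7):
    """Solve residual(u) = 0 for all unknowns simultaneously."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            return u
        # Finite-difference Jacobian J[i, j] = d r_i / d u_j
        J = np.empty((len(r), len(u)))
        for j in range(len(u)):
            du = np.zeros_like(u)
            du[j] = eps
            J[:, j] = (residual(u + du) - r) / eps
        u = u - np.linalg.solve(J, r)    # full Newton step
    raise RuntimeError("Newton iteration did not converge")

# Toy coupled system standing in for the flow/energy residuals
f = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])
print(newton_solve(f, [1.0, 1.0]))   # converges to (1, 2)
```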

  13. Use of computer games as an intervention for stroke.

    PubMed

    Proffitt, Rachel M; Alankus, Gazihan; Kelleher, Caitlin L; Engsberg, Jack R

    2011-01-01

    Current rehabilitation for persons with hemiparesis after stroke requires high numbers of repetitions to be in accordance with contemporary motor learning principles. The motivational characteristics of computer games can be harnessed to create engaging interventions for persons with hemiparesis after stroke that incorporate this high number of repetitions. The purpose of this case report was to test the feasibility of using computer games as a 6-week home therapy intervention to improve upper extremity function for a person with stroke. One person with left upper extremity hemiparesis after stroke participated in a 6-week home therapy computer game intervention. The games were customized to her preferences and abilities and modified weekly. Her performance was tracked and analyzed. Data from pre-, mid-, and postintervention testing using standard upper extremity measures and the Reaching Performance Scale (RPS) were analyzed. After 3 weeks, the participant demonstrated increased upper extremity range of motion at the shoulder and decreased compensatory trunk movements during reaching tasks. After 6 weeks, she showed functional gains in activities of daily living (ADLs) and instrumental ADLs despite no further improvements on the RPS. Results indicate that computer games have the potential to be a useful intervention for people with stroke. Future work will add additional support to quantify the effectiveness of the games as a home therapy intervention for persons with stroke.

  14. Experience with Aero- and Fluid-Dynamic Testing for Engineering and CFD Validation

    NASA Technical Reports Server (NTRS)

    Ross, James C.

    2016-01-01

    Ever since computations have been used to simulate aerodynamics, the need to ensure that the computations adequately represent real life has followed. Many experiments have been performed specifically for validation, and as computational methods have improved, so have the validation experiments. Validation is also a moving target because computational methods improve, requiring validation for the new aspects of flow physics that the computations aim to capture. Concurrently, new measurement techniques are being developed that can help capture more detailed flow features; pressure-sensitive paint (PSP) and particle image velocimetry (PIV) come to mind. This paper will present various wind-tunnel tests the author has been involved with and how they were used for validation of various kinds of CFD. A particular focus is the application of advanced measurement techniques to flow fields (and geometries) that had proven difficult to predict computationally. Many of these difficult flow problems arose from engineering and development problems that needed to be solved for a particular vehicle or research program. In some cases the experiments required to solve the engineering problems were refined to provide valuable CFD validation data in addition to the primary engineering data. All of these experiments have provided physical insight and validation data for a wide range of aerodynamic and acoustic phenomena for vehicles ranging from tractor-trailers to crewed spacecraft.

  15. Improving experimental phases for strong reflections prior to density modification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf

    A genetic algorithm has been developed to optimize the phases of the strongest reflections in SIR/SAD data. This is shown to facilitate density modification and model building in several test cases. Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. A computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
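
    A toy sketch of the idea, assuming a one-dimensional analogue (a real implementation such as SISA works with 3D structure factors and crystallographic symmetry): a genetic algorithm adjusts only the phases of the strongest reflections, scoring each candidate by the skewness of the resulting map. All sizes and GA settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D analogue: amplitudes |F| for N reflections are known; we optimize
# only the phases of the strongest few, keeping the rest fixed at their
# (centroid) starting values.
N = 128
amps = rng.gamma(2.0, 1.0, N)                 # stand-in amplitudes
centroid = rng.uniform(0, 2 * np.pi, N)       # stand-in centroid phases
strong = np.argsort(amps)[-10:]               # strongest reflections

def map_skew(trial):
    """Skewness of the toy 'density map' - the GA's target function."""
    ph = centroid.copy()
    ph[strong] = trial
    rho = np.fft.ifft(amps * np.exp(1j * ph)).real
    d = rho - rho.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5

pop = rng.uniform(0, 2 * np.pi, (40, strong.size))
for gen in range(100):
    fit = np.array([map_skew(p) for p in pop])
    parents = pop[np.argsort(fit)[-20:]]               # keep the fittest half
    a = parents[rng.integers(0, 20, 20)]
    b = parents[rng.integers(0, 20, 20)]
    kids = np.where(rng.random(a.shape) < 0.5, a, b)   # uniform crossover
    kids = (kids + rng.normal(0, 0.3, kids.shape)) % (2 * np.pi)  # mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([map_skew(p) for p in pop])]      # optimized strong phases
```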

  16. Development and evaluation of learning module on clinical decision-making in Prosthodontics.

    PubMed

    Deshpande, Saee; Lambade, Dipti; Chahande, Jayashree

    2015-01-01

    Best practice strategies for helping students learn the reasoning skills of problem solving and critical thinking (CT) remain a source of conjecture, particularly with regard to CT. The dental education literature is fundamentally devoid of research on the cognitive components of clinical decision-making. This study aimed to develop and evaluate the impact of a blended learning module on the clinical decision-making skills of dental graduates for planning prosthodontic rehabilitation. An interactive teaching module was developed, consisting of didactic lectures on clinical decision-making and computer-assisted case-based treatment planning software. Its impact on cognitive knowledge gain in clinical decision-making was evaluated using an assessment involving problem-based multiple choice questions and paper-based case scenarios. Mean test scores were: pretest (17 ± 1), posttest 1 (21 ± 2) and posttest 2 (43 ± 3). Comparison of mean scores was done with a one-way ANOVA test. There was an overall significant difference between mean scores at all three points (P < 0.001). A pair-wise comparison of mean scores was done with the Bonferroni test; the mean differences were significant at the 0.05 level. The pair-wise comparison shows that the posttest 2 score is significantly higher than posttest 1, and posttest 1 is significantly higher than the pretest, that is, posttest 2 > posttest 1 > pretest. Blended teaching methods employing didactic lectures on clinical decision-making as well as computer-assisted case-based learning can be used to improve the quality of clinical decision-making in prosthodontic rehabilitation for dental graduates.

  17. EPIBLASTER-fast exhaustive two-locus epistasis detection strategy using graphical processing units

    PubMed Central

    Kam-Thong, Tony; Czamara, Darina; Tsuda, Koji; Borgwardt, Karsten; Lewis, Cathryn M; Erhardt-Lehmann, Angelika; Hemmer, Bernhard; Rieckmann, Peter; Daake, Markus; Weber, Frank; Wolf, Christiane; Ziegler, Andreas; Pütz, Benno; Holsboer, Florian; Schölkopf, Bernhard; Müller-Myhsok, Bertram

    2011-01-01

    Detection of epistatic interaction between loci has been postulated to provide a more in-depth understanding of the complex biological and biochemical pathways underlying human diseases. Studying the interaction between two loci is the natural progression following traditional and well-established single locus analysis. However, the added costs and time duration required for the computation involved have thus far deterred researchers from pursuing a genome-wide analysis of epistasis. In this paper, we propose a method allowing such analysis to be conducted very rapidly. The method, dubbed EPIBLASTER, is applicable to case–control studies and consists of a two-step process in which the difference in Pearson's correlation coefficients is computed between controls and cases across all possible SNP pairs as an indication of significant interaction warranting further analysis. For the subset of interactions deemed potentially significant, a second-stage analysis is performed using the likelihood ratio test from the logistic regression to obtain the P-value for the estimated coefficients of the individual effects and the interaction term. The algorithm is implemented using the parallel computational capability of commercially available graphical processing units to greatly reduce the computation time involved. In the current setup and example data sets (211 cases, 222 controls, 299468 SNPs; and 601 cases, 825 controls, 291095 SNPs), this coefficient evaluation stage can be completed in roughly 1 day. Our method allows for exhaustive and rapid detection of significant SNP pair interactions without imposing significant marginal effects of the single loci involved in the pair. PMID:21150885
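
    The first-stage screen described above, the difference in Pearson's correlation coefficients between cases and controls over all SNP pairs, can be sketched on the CPU in a few vectorized lines; the paper's GPU kernels parallelize the same computation. Matrix shapes and the top_k cut-off below are illustrative.

```python
import numpy as np

def corr_diff_screen(geno_cases, geno_ctrls, top_k=100):
    """EPIBLASTER-style first-stage screen (CPU sketch of the GPU idea).

    geno_cases, geno_ctrls: (n_individuals, n_snps) genotype matrices coded
    0/1/2 (SNPs with zero variance should be removed first).  Returns the
    top_k SNP pairs ranked by |r_cases - r_controls|, to be passed on to the
    second-stage logistic-regression likelihood ratio test."""
    r_cases = np.corrcoef(geno_cases, rowvar=False)
    r_ctrls = np.corrcoef(geno_ctrls, rowvar=False)
    delta = np.abs(r_cases - r_ctrls)
    iu = np.triu_indices_from(delta, k=1)          # each unordered pair once
    order = np.argsort(delta[iu])[::-1][:top_k]
    return list(zip(iu[0][order], iu[1][order], delta[iu][order]))
```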

  18. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. When it is, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might itself become computationally expensive in case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate that the convergence testing method is independent of the SA method, we applied it to two widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an efficient way. The appealing feature of this new technique is that no further model evaluation is necessary, which enables checking of already processed sensitivity results. This is one step towards reliable and transferable, published sensitivity results.
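
    For context, the conventional bootstrap check that the abstract contrasts MVA with can be sketched as follows: resample already finished model runs and recompute the index; a wide interval signals a non-converged index. The index_fn callable and data layout are hypothetical.

```python
import numpy as np

def bootstrap_index_ci(index_fn, runs, n_boot=500, rng=None):
    """95% bootstrap interval for a sensitivity index computed from already
    completed model runs.  This is the costly check MVA is designed to avoid:
    index_fn must be re-evaluated n_boot times on resampled runs."""
    rng = rng or np.random.default_rng()
    runs = np.asarray(runs)
    n = len(runs)
    est = np.array([index_fn(runs[rng.integers(0, n, n)])
                    for _ in range(n_boot)])
    return np.percentile(est, [2.5, 97.5])
```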

  19. Improved numerical methods for turbulent viscous flows aerothermal modeling program, phase 2

    NASA Technical Reports Server (NTRS)

    Karki, K. C.; Patankar, S. V.; Runchal, A. K.; Mongia, H. C.

    1988-01-01

    The details of a study to develop accurate and efficient numerical schemes to predict complex flows are described. In this program, several discretization schemes were evaluated using simple test cases. This assessment led to the selection of three schemes for an in-depth evaluation based on two-dimensional flows. The scheme with the superior overall performance was incorporated in a computer program for three-dimensional flows. To improve the computational efficiency, the selected discretization scheme was combined with a direct solution approach in which the fluid flow equations are solved simultaneously rather than sequentially.

  20. A numerical code for the simulation of non-equilibrium chemically reacting flows on hybrid CPU-GPU clusters

    NASA Astrophysics Data System (ADS)

    Kudryavtsev, Alexey N.; Kashkovsky, Alexander V.; Borisov, Semyon P.; Shershnev, Anton A.

    2017-10-01

    In the present work a computer code, RCFS, for the numerical simulation of chemically reacting compressible flows on hybrid CPU/GPU supercomputers is developed. It solves the 3D unsteady Euler equations for multispecies chemically reacting flows in general curvilinear coordinates using shock-capturing TVD schemes. Time advancement is carried out using explicit Runge-Kutta TVD schemes. The program implementation uses the CUDA application programming interface to perform GPU computations. Data are distributed between GPUs via a domain decomposition technique. The developed code is verified on a number of test cases, including supersonic flow over a cylinder.

  1. Evaluation of an Intelligent Tutoring System in Pathology: Effects of External Representation on Performance Gains, Metacognition, and Acceptance

    PubMed Central

    Crowley, Rebecca S.; Legowski, Elizabeth; Medvedeva, Olga; Tseytlin, Eugene; Roh, Ellen; Jukic, Drazen

    2007-01-01

    Objective Determine effects of computer-based tutoring on diagnostic performance gains, meta-cognition, and acceptance using two different problem representations. Describe impact of tutoring on spectrum of diagnostic skills required for task performance. Identify key features of student-tutor interaction contributing to learning gains. Design Prospective, between-subjects study, controlled for participant level of training. Resident physicians in two academic pathology programs spent four hours using one of two interfaces which differed mainly in external problem representation. The case-focused representation provided an open-learning environment in which students were free to explore evidence-hypothesis relationships within a case, but could not visualize the entire diagnostic space. The knowledge-focused representation provided an interactive representation of the entire diagnostic space, which more tightly constrained student actions. Measurements Metrics included results of pretest, post-test and retention-test for multiple choice and case diagnosis tests, ratios of performance to student reported certainty, results of participant survey, learning curves, and interaction behaviors during tutoring. Results Students had highly significant learning gains after one tutoring session. Learning was retained at one week. There were no differences between the two interfaces in learning gains on post-test or retention test. Only students in the knowledge-focused interface exhibited significant metacognitive gains from pretest to post-test and pretest to retention test. Students rated the knowledge-focused interface significantly higher than the case-focused interface. Conclusions Cognitive tutoring is associated with improved diagnostic performance in a complex medical domain. The effect is retained at one-week post-training. Knowledge-focused external problem representation shows an advantage over case-focused representation for metacognitive effects and user acceptance. PMID:17213494

  2. Evaluation of an intelligent tutoring system in pathology: effects of external representation on performance gains, metacognition, and acceptance.

    PubMed

    Crowley, Rebecca S; Legowski, Elizabeth; Medvedeva, Olga; Tseytlin, Eugene; Roh, Ellen; Jukic, Drazen

    2007-01-01

    Determine effects of computer-based tutoring on diagnostic performance gains, meta-cognition, and acceptance using two different problem representations. Describe impact of tutoring on spectrum of diagnostic skills required for task performance. Identify key features of student-tutor interaction contributing to learning gains. Prospective, between-subjects study, controlled for participant level of training. Resident physicians in two academic pathology programs spent four hours using one of two interfaces which differed mainly in external problem representation. The case-focused representation provided an open-learning environment in which students were free to explore evidence-hypothesis relationships within a case, but could not visualize the entire diagnostic space. The knowledge-focused representation provided an interactive representation of the entire diagnostic space, which more tightly constrained student actions. Metrics included results of pretest, post-test and retention-test for multiple choice and case diagnosis tests, ratios of performance to student reported certainty, results of participant survey, learning curves, and interaction behaviors during tutoring. Students had highly significant learning gains after one tutoring session. Learning was retained at one week. There were no differences between the two interfaces in learning gains on post-test or retention test. Only students in the knowledge-focused interface exhibited significant metacognitive gains from pretest to post-test and pretest to retention test. Students rated the knowledge-focused interface significantly higher than the case-focused interface. Cognitive tutoring is associated with improved diagnostic performance in a complex medical domain. The effect is retained at one-week post-training. Knowledge-focused external problem representation shows an advantage over case-focused representation for metacognitive effects and user acceptance.

  3. Using generalized additive (mixed) models to analyze single case designs.

    PubMed

    Shadish, William R; Zuur, Alain F; Sullivan, Kristynn J

    2014-04-01

    This article shows how to apply generalized additive models and generalized additive mixed models to single-case design data. These models excel at detecting the functional form between two variables (often called trend), that is, whether trend exists, and if it does, what its shape is (e.g., linear and nonlinear). In many respects, however, these models are also an ideal vehicle for analyzing single-case designs because they can consider level, trend, variability, overlap, immediacy of effect, and phase consistency that single-case design researchers examine when interpreting a functional relation. We show how these models can be implemented in a wide variety of ways to test whether treatment is effective, whether cases differ from each other, whether treatment effects vary over cases, and whether trend varies over cases. We illustrate diagnostic statistics and graphs, and we discuss overdispersion of data in detail, with examples of quasibinomial models for overdispersed data, including how to compute dispersion and quasi-AIC fit indices in generalized additive models. We show how generalized additive mixed models can be used to estimate autoregressive models and random effects and discuss the limitations of the mixed models compared to generalized additive models. We provide extensive annotated syntax for doing all these analyses in the free computer program R. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
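
    As a minimal numerical analogue of the trend component these models estimate (the article itself uses generalized additive (mixed) models in R, which are far richer), the sketch below fits a penalized piecewise-linear basis to a two-phase single-case series; the knot count, penalty, and data are illustrative.

```python
import numpy as np

def fit_trend(t, y, n_knots=8, lam=1.0):
    """Penalized piecewise-linear (hinge) basis fit: a crude stand-in for
    the smooth trend term a GAM would estimate for a single-case series."""
    knots = np.linspace(t.min(), t.max(), n_knots + 2)[1:-1]
    X = np.column_stack([np.ones_like(t), t] +
                        [np.maximum(t - k, 0.0) for k in knots])
    P = np.diag([0.0, 0.0] + [1.0] * len(knots))   # penalize hinges only
    beta = np.linalg.solve(X.T @ X + lam * P, X.T @ y)
    return X @ beta

# Synthetic two-phase (baseline A, treatment B) session scores
rng = np.random.default_rng(1)
t = np.arange(20.0)
y = np.r_[rng.normal(3.0, 0.5, 10), rng.normal(6.0, 0.5, 10)]
trend = fit_trend(t, y)   # inspect for level shift and within-phase trend
```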

  4. Characterization and Implementation of a Real-World Target Tracking Algorithm on Field Programmable Gate Arrays with Kalman Filter Test Case

    DTIC Science & Technology

    2008-03-01

    Only fragments of the scanned abstract survive; the recoverable pieces read: "... to predict its exact position. To locate Ceres, Carl Friedrich Gauss, a mere 24 years old at the time, developed a method called least-squares ..."; "... dividend to produce the quotient. This method converges to the reciprocal quadratically [11]. For the special case of 1/(H × P(:,:,k) × H′ + R) (Eq. 3.9) the ..."; "... high-speed computation of reciprocals within the overall system. The Newton-Raphson method is also expanded for use in calculating square-roots in ...".
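
    The surviving fragments point to the standard division-free Newton-Raphson reciprocal iteration, x_new = x * (2 - d * x), which converges quadratically to 1/d using only multiplies and subtracts, a property that suits FPGA datapaths. A minimal Python sketch of that iteration (the report itself targets FPGA hardware, so this is only illustrative):

```python
def reciprocal_nr(d, x0, iters=6):
    """Division-free Newton-Raphson reciprocal: x <- x * (2 - d*x) converges
    quadratically to 1/d for any seed with 0 < x0 < 2/d."""
    x = x0
    for _ in range(iters):
        x = x * (2.0 - d * x)
    return x

print(reciprocal_nr(3.0, 0.3))   # -> 0.3333333333333333
```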

  5. Transonic Unsteady Aerodynamics and Aeroelasticity Held in San Diego, California on 7-11 October 1991 (L’Aerodynamique Instationnaire Transsonique el l’Aeroelasticite)

    DTIC Science & Technology

    1992-03-01

    Only interleaved fragments of the scanned abstract survive; the recoverable pieces read: "Specific goals were: ..."; "... these conditions was evident at the 1977 meeting in the shock oscillation calculations of Magnus and Yoshihara (Ref 7) ..."; "... for the minimum flutter speed at M = 0.96 and a favorable, though unconservative, agreement ..."; "... a case of self-excited shock oscillation about a 14% ..."; "... computational test cases for Type II flows should be established (unsteady separating ...)"; and the reference "7. Magnus, R. and Yoshihara, H.: The Transonic Oscillating ..." (title truncated in the scan).

  6. Satellite Orbit Under Influence of a Drag - Analytical Approach

    NASA Astrophysics Data System (ADS)

    Martinović, M. M.; Šegan, S. D.

    2017-12-01

    The report studies some changes in the orbital elements of artificial Earth satellites under the influence of atmospheric drag. In order to make the results applicable to many future cases, an analytical interpretation of the orbital element perturbations is given via useful, but very long, expressions. The development is based on the TD88 air density model, recently upgraded with some additional terms. Some expressions and formulae were developed with the computer algebra system Mathematica and tested in some hypothetical cases. The results agree well with the iterative (numerical) approach.

  7. Computer-assisted surgery of the paranasal sinuses: technical and clinical experience with 368 patients, using the Vector Vision Compact system.

    PubMed

    Stelter, K; Andratschke, M; Leunig, A; Hagedorn, H

    2006-12-01

    This paper presents our experience with a navigation system for functional endoscopic sinus surgery. In this study, we took particular note of the surgical indications and risks and the measurement precision and preparation time required, and we present one brief case report as an example. Between 2000 and 2004, we performed functional endoscopic sinus surgery on 368 patients at the Ludwig Maximilians University, Munich, Germany. We used the Vector Vision Compact system (BrainLAB) with laser registration. The indications for surgery ranged from severe nasal polyps and chronic sinusitis to malignant tumours of the paranasal sinuses and skull base. The time needed for data preparation was less than five minutes. The time required for preparation and patient registration depended on the method used and the experience of the user; in the later cases, it took 11 minutes on average, using Z-Touch registration. The clinical plausibility test produced an average deviation of 1.3 mm. The complications of system use comprised intra-operative re-registration (18 per cent of cases) and complete system failure (5 per cent). Despite the assistance of an accurately working computer, the anterior ethmoidal artery was incised in one case. However, in all 368 cases, we experienced no cerebrospinal fluid leaks, optic nerve lesions, retrobulbar haematomas or intracerebral bleeding. There were no deaths. From our experience with computer-guided surgical procedures, we conclude that computer-guided navigational systems are so accurate that the risk of misleading the surgeon is minimal. In the future, their use in certain specialized procedures will be not only sensible but mandatory. We recommend their use not only in difficult surgical situations but also in routine procedures and for surgical training.

  8. A simple test of association for contingency tables with multiple column responses.

    PubMed

    Decady, Y J; Thomas, D R

    2000-09-01

    Loughin and Scherer (1998, Biometrics 54, 630-637) investigated tests of association in two-way tables when one of the categorical variables allows for multiple-category responses from individual respondents. Standard chi-squared tests are invalid in this case, and they developed a bootstrap test procedure that provides good control of test levels under the null hypothesis. This procedure and some others that have been proposed are computationally involved and are based on techniques that are relatively unfamiliar to many practitioners. In this paper, the methods introduced by Rao and Scott (1981, Journal of the American Statistical Association 76, 221-230) for analyzing complex survey data are used to develop a simple test based on a corrected chi-squared statistic.

  9. GBOOST: a GPU-based tool for detecting gene-gene interactions in genome-wide case control studies.

    PubMed

    Yung, Ling Sing; Yang, Can; Wan, Xiang; Yu, Weichuan

    2011-05-01

    Collecting millions of genetic variations is feasible with advanced genotyping technology. With a huge amount of genetic variation data in hand, developing efficient algorithms to carry out gene-gene interaction analysis in a timely manner has become one of the key problems in genome-wide association studies (GWAS). Boolean operation-based screening and testing (BOOST), a recent work in GWAS, completes gene-gene interaction analysis in 2.5 days on a desktop computer. Compared with central processing units (CPUs), graphics processing units (GPUs) are highly parallel hardware and provide massive computing resources. We are, therefore, motivated to use GPUs to further speed up the analysis of gene-gene interactions. We implement the BOOST method on a GPU framework and name it GBOOST. GBOOST achieves a 40-fold speedup compared with BOOST. It completes the analysis of the Wellcome Trust Case Control Consortium Type 2 Diabetes (WTCCC T2D) genome data within 1.34 h on a desktop computer equipped with an Nvidia GeForce GTX 285 display card. GBOOST code is available at http://bioinformatics.ust.hk/BOOST.html#GBOOST.

  10. Computing tools for implementing standards for single-case designs.

    PubMed

    Chen, Li-Ting; Peng, Chao-Ying Joanne; Chen, Ming-E

    2015-11-01

    In the single-case design (SCD) literature, five sets of standards have been formulated and distinguished: design standards, assessment standards, analysis standards, reporting standards, and research synthesis standards. This article reviews computing tools that can assist researchers and practitioners in meeting the analysis standards recommended by the What Works Clearinghouse Procedures and Standards Handbook (the WWC standards). These tools consist of specialized web-based calculators or downloadable software for SCD data, and algorithms or programs written in Excel, SAS procedures, SPSS commands/macros, or the R programming language. We aligned these tools with the WWC standards and evaluated them for accuracy and treatment of missing data, using two published data sets. All tools were tested to be accurate. When missing data were present, most tools either gave an error message or conducted analysis based on the available data. Only one program used a single imputation method. This article concludes with suggestions for an inclusive computing tool or environment, additional research on the treatment of missing data, and reasonable and flexible interpretations of the WWC standards. © The Author(s) 2015.

  11. Volcano Monitoring: A Case Study in Pervasive Computing

    NASA Astrophysics Data System (ADS)

    Peterson, Nina; Anusuya-Rangappa, Lohith; Shirazi, Behrooz A.; Song, Wenzhan; Huang, Renjie; Tran, Daniel; Chien, Steve; Lahusen, Rick

    Recent advances in wireless sensor network technology have provided robust and reliable solutions for sophisticated pervasive computing applications such as inhospitable terrain environmental monitoring. We present a case study for developing a real-time pervasive computing system, called OASIS for optimized autonomous space in situ sensor-web, which combines ground assets (a sensor network) and space assets (NASA’s earth observing (EO-1) satellite) to monitor volcanic activities at Mount St. Helens. OASIS’s primary goals are: to integrate complementary space and in situ ground sensors into an interactive and autonomous sensorweb, to optimize power and communication resource management of the sensorweb and to provide mechanisms for seamless and scalable fusion of future space and in situ components. The OASIS in situ ground sensor network development addresses issues related to power management, bandwidth management, quality of service management, topology and routing management, and test-bed design. The space segment development consists of EO-1 architectural enhancements, feedback of EO-1 data into the in situ component, command and control integration, data ingestion and dissemination and field demonstrations.

  12. A computational fluid dynamics simulation of the hypersonic flight of the Pegasus(TM) vehicle using an artificial viscosity model and a nonlinear filtering method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Mendoza, John Cadiz

    1995-01-01

    The computational fluid dynamics code, PARC3D, is tested to see if its use of non-physical artificial dissipation affects the accuracy of its results. This is accomplished by simulating a shock-laminar boundary layer interaction and several hypersonic flight conditions of the Pegasus(TM) launch vehicle using full artificial dissipation, low artificial dissipation, and the Engquist filter. Before the filter is applied to the PARC3D code, it is validated in one-dimensional and two-dimensional form in a MacCormack scheme against the Riemann and convergent duct problem. For this explicit scheme, the filter shows great improvements in accuracy and computational time as opposed to the nonfiltered solutions. However, for the implicit PARC3D code it is found that the best estimate of the Pegasus experimental heat fluxes and surface pressures is the simulation utilizing low artificial dissipation and no filter. The filter does improve accuracy over the artificially dissipative case but at a computational expense greater than that achieved by the low artificial dissipation case which has no computational time penalty and shows better results. For the shock-boundary layer simulation, the filter does well in terms of accuracy for a strong impingement shock but not as well for weaker shock strengths. Furthermore, for the latter problem the filter reduces the required computational time to convergence by 18.7 percent.

  13. Test-retest reliability of a computer-assisted self-administered questionnaire on early life exposure in a nasopharyngeal carcinoma case-control study.

    PubMed

    Mai, Zhi-Ming; Lin, Jia-Huang; Chiang, Shing-Chun; Ngan, Roger Kai-Cheong; Kwong, Dora Lai-Wan; Ng, Wai-Tong; Ng, Alice Wan-Ying; Yuen, Kam-Tong; Ip, Kai-Ming; Chan, Yap-Hang; Lee, Anne Wing-Mui; Ho, Sai-Yin; Lung, Maria Li; Lam, Tai-Hing

    2018-05-04

    We evaluated the reliability of early life nasopharyngeal carcinoma (NPC) aetiology factors in the questionnaire of an NPC case-control study in Hong Kong during 2014-2017. 140 subjects aged 18+ completed the same computer-assisted questionnaire twice, separated by at least 2 weeks. The questionnaire included most known NPC aetiology factors, and the present analysis focused on early life exposure. Test-retest reliability of all 285 questionnaire items was assessed in all subjects and in 5 subgroups defined by case/control status, sex, time between the 1st and 2nd questionnaires (2-29/≥30 weeks), education (secondary or less/postsecondary), and age (25-44/45-59/60+ years) at the first questionnaire. The reliability of items on dietary habits, body figure, skin tone and sun exposure in early life periods (age 6-12 and 13-18) was moderate-to-almost perfect, and most other items had fair-to-substantial reliability in all life periods (age 6-12, 13-18 and 19-30, and 10 years ago). Differences in reliability across the strata of the 5 subgroups were observed in only a few items. This study is the first to report the reliability of an NPC questionnaire, and it makes the questionnaire available online. Overall, our questionnaire had acceptable reliability, suggesting that previous NPC study results on the same risk factors would have similar reliability.
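
    The abstract's "fair-to-substantial" and "moderate-to-almost perfect" wording echoes the Landis-Koch labels for Cohen's kappa. Assuming kappa (or its weighted variant) is the reliability statistic, and using hypothetical item data, a per-item check can be sketched as:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical responses to one ordinal questionnaire item at the two
# administrations (test and retest) for the same ten subjects.
test_1 = [3, 2, 4, 4, 1, 2, 3, 4, 2, 1]
test_2 = [3, 3, 4, 4, 1, 2, 2, 4, 2, 1]

# Quadratic weights credit near-misses on ordinal scales.
kappa = cohen_kappa_score(test_1, test_2, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")   # ~0.61-0.80 reads as 'substantial'
```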

  14. Sonic Boom Computations for a Mach 1.6 Cruise Low Boom Configuration and Comparisons with Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa A.; Cliff, Susan E.; Wilcox, Floyd; Nemec, Marian; Bangert, Linda; Aftosmis, Michael J.; Parlette, Edward

    2011-01-01

    Accurate analysis of sonic boom pressure signatures using computational fluid dynamics techniques remains quite challenging. Although CFD shows accurate predictions of flow around complex configurations, generating grids that can resolve the sonic boom signature far away from the body is a challenge. The test case chosen for this study corresponds to an experimental wind-tunnel test that was conducted to measure the sonic boom pressure signature of a low boom configuration designed by Gulfstream Aerospace Corporation. Two widely used NASA codes, USM3D and AERO, are examined for their ability to accurately capture the sonic boom signature. Numerical simulations are conducted for a free-stream Mach number of 1.6, an angle of attack of 0.3 degrees, and a Reynolds number of 3.85x10(exp 6) based on model reference length. Flow around the low boom configuration in free air and inside the Langley Unitary Plan Wind Tunnel is computed. Results from the numerical simulations are compared with wind tunnel data. The effects of viscous and turbulence modeling, along with tunnel walls, on the computed sonic boom signature are presented and discussed.

  15. Development of a high-specific-speed centrifugal compressor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodgers, C.

    1997-07-01

    This paper describes the development of a subscale single-stage centrifugal compressor with a dimensionless specific speed (Ns) of 1.8, originally designed for full-size application as a high volume flow, low pressure ratio, gas booster compressor. The specific stage is noteworthy in that it provides a benchmark representing the performance potential of very high-specific-speed compressors, of which limited information is found in the open literature. Stage and component test performance characteristics are presented together with traverse results at the impeller exit. Traverse test results were compared with recent CFD computational predictions for an exploratory analytical calibration of a very high-specific-speed impeller geometry. The tested subscale (0.583) compressor essentially satisfied design performance expectations with an overall stage efficiency of 74%, including excessive exit casing losses. It was estimated that stage efficiency could be increased to 81% with exit casing losses halved.

  16. ECITE: A Testbed for Assessment of Technology Interoperability and Integration with Architecture Components

    NASA Astrophysics Data System (ADS)

    Graves, S. J.; Keiser, K.; Law, E.; Yang, C. P.; Djorgovski, S. G.

    2016-12-01

    ECITE (EarthCube Integration and Testing Environment) provides both cloud-based computational testing resources and an Assessment Framework for Technology Interoperability and Integration. NSF's EarthCube program is funding the development of cyberinfrastructure building-block components as technologies to address Earth science research problems. These EarthCube building blocks need to support integration and interoperability objectives to work towards a coherent cyberinfrastructure architecture for the program. ECITE is being developed to provide capabilities to test and assess interoperability and integration across funded EarthCube technology projects. EarthCube-defined criteria for interoperability and integration are applied to use cases coordinating science problems with technology solutions. The Assessment Framework facilitates planning, execution and documentation of the technology assessments for review by the EarthCube community. This presentation will describe the components of ECITE and examine the methodology of crosswalking between science and technology use cases.

  17. Recovery Act. Development and Validation of an Advanced Stimulation Prediction Model for Enhanced Geothermal System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, Marte

    The research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems. The specific objectives of the proposal are to: 1) Develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation. 2) Perform laboratory scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator. 3) Perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport. 4) Test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production. 5) Develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products from the proposal include: 1) A true-3D hydro-thermal fracturing computer code that is particularly suited to EGS, 2) Documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock, 3) Documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications, and 4) A database of monitoring data, with a focus on Acoustic Emissions (AE), from lab scale modeling and field case histories of EGS reservoir creation.

  18. Recovery Act. Development and Validation of an Advanced Stimulation Prediction Model for Enhanced Geothermal Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, Marte

    2013-12-31

    This research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems. The specific objectives of the proposal are to: develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation; perform laboratory scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator; perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport; test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production; and develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products from the proposal include: a true-3D hydro-thermal fracturing computer code that is particularly suited to EGS; documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock; documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications; and a database of monitoring data, with a focus on Acoustic Emissions (AE), from lab scale modeling and field case histories of EGS reservoir creation.

  19. f1: a code to compute Appell's F1 hypergeometric function

    NASA Astrophysics Data System (ADS)

    Colavecchia, F. D.; Gasaneo, G.

    2004-02-01

    In this work we present the FORTRAN code to compute the hypergeometric function F1(α, β1, β2, γ, x, y) of Appell. The program can compute the F1 function for real values of the variables {x, y} and complex values of the parameters {α, β1, β2, γ}. The code uses different strategies to calculate the function according to the ideas outlined in [F.D. Colavecchia et al., Comput. Phys. Comm. 138 (1) (2001) 29].
    Program summary. Title of the program: f1. Catalogue identifier: ADSJ. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSJ. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computers: PC compatibles, SGI Origin2*. Operating systems under which the program has been tested: Linux, IRIX. Programming language used: Fortran 90. Memory required to execute with typical data: 4 kbytes. No. of bits in a word: 32. No. of bytes in distributed program, including test data, etc.: 52 325. Distribution format: tar gzip file. External subprograms used: Numerical Recipes hypgeo [W.H. Press et al., Numerical Recipes in Fortran 77, Cambridge Univ. Press, 1996] or the chyp routine of R.C. Forrey [J. Comput. Phys. 137 (1997) 79], and rkf45 [L.F. Shampine and H.H. Watts, Rep. SAND76-0585, 1976]. Keywords: numerical methods, special functions, hypergeometric functions, Appell functions, Gauss function.
    Nature of the physical problem: Computing the Appell F1 function is relevant in atomic collisions and elementary-particle physics. It is usually the result of multidimensional integrals involving Coulomb continuum states.
    Method of solution: The F1 function has a convergent-series definition for |x| < 1 and |y| < 1, and several analytic continuations for other regions of the variable space. The code tests the values of the variables and selects one of the preceding cases. In the convergence region the program uses the series definition near the origin of coordinates and a numerical integration of the third-order differential parametric equation for the F1 function. It also detects several special cases according to the values of the parameters.
    Restrictions on the complexity of the problem: The code is restricted to real values of the variables {x, y}. Also, there are some parameter domains that are not covered; these usually imply differences between integer parameters that lead to negative integer arguments of Gamma functions.
    Typical running time: Depends basically on the variables. The computation of Table 4 of [F.D. Colavecchia et al., Comput. Phys. Comm. 138 (1) (2001) 29] (64 functions) requires approximately 0.33 s on an Athlon 900 MHz processor.
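
    A direct sketch of the convergent double series mentioned under "Method of solution" (pure Python, generating terms by recurrence instead of explicit Pochhammer symbols; the published f1 code adds the analytic continuations and special-case handling that this omits):

```python
def appell_f1(a, b1, b2, c, x, y, tol=1e-14, max_m=400, max_n=400):
    """Appell F1 via its double series, valid for |x| < 1 and |y| < 1:

        F1 = sum_{m,n>=0} (a)_{m+n} (b1)_m (b2)_n / ((c)_{m+n} m! n!) x^m y^n

    Sanity check: appell_f1(a, b1, b2, c, x, 0) equals the Gauss function
    2F1(a, b1; c; x)."""
    if abs(x) >= 1 or abs(y) >= 1:
        raise ValueError("series definition requires |x| < 1 and |y| < 1")
    total = 0.0
    row_head = 1.0                        # term at (m, n=0)
    for m in range(max_m):
        row_sum, t = 0.0, row_head
        for n in range(max_n):
            row_sum += t
            if abs(t) < tol * (1.0 + abs(row_sum)):
                break
            # ratio term(m, n+1) / term(m, n)
            t *= (a + m + n) * (b2 + n) / ((c + m + n) * (n + 1)) * y
        total += row_sum
        if m > 2 and abs(row_sum) < tol * (1.0 + abs(total)):
            break
        # ratio term(m+1, 0) / term(m, 0)
        row_head *= (a + m) * (b1 + m) / ((c + m) * (m + 1)) * x
    return total
```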

  20. The inventions technology on water resources to support environmental engineering based infrastructure

    NASA Astrophysics Data System (ADS)

    Sunjoto, S.

    2017-03-01

    Since the Stockholm Declaration, adopted at the United Nations Conference on the Human Environment held in Sweden on 5-16 June 1972 and attended by 113 national delegations, all infrastructure construction should comply with sustainable development. As a consequence, most research and studies have been directed toward the environmental aspects of construction, including water resources engineering. This paper presents inventions that are very useful for the design of infrastructure, especially in groundwater engineering. This field has developed rapidly since the publication of the well-known law of flow through porous materials by Henri Darcy in 1856 in his book "Les fontaines publiques de la ville de Dijon". This law states that the discharge through a porous medium is proportional to the product of the hydraulic gradient, the cross-sectional area normal to the flow, and the coefficient of permeability of the material. In 1930 Forchheimer developed a breakthrough formula by simplifying the solution for the steady-state flow condition, especially the case of radial flow, to compute the permeability coefficient from a casing-hole or tube test with zero inflow discharge: the outflow discharge from the hole equals the shape factor of the casing tip (F) multiplied by the coefficient of permeability of the soil (K) and by the hydraulic head (H). In 1988, Sunjoto derived an equation for the unsteady-state flow condition based on this formula, and in 2002 he developed several formulas for the shape factor that serves as a parameter of the equation. Initially this formula was applied to dimension recharge wells, the best water-conservation method for urban areas. After long research, the formula can also be used to compute the drawdown during pumping, or the coefficient of permeability of soil from a pumping test. This method can substitute for earlier methods such as Theis (1935), Cooper-Jacob (1946), Chow (1952), Glover (1966), Papadopulos-Cooper (1967), Todd (1980), Singh (2000), etc. The advantages of Sunjoto's equation over the earlier methods are that it is a simpler equation, easier to compute, needs no graphical support, gives accurate results, and requires no observation well in a pumping test, since it needs only the drawdown in the well and the duration of pumping.
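
    In symbols, the quoted Forchheimer relation is Q = F·K·H. The sketch below solves it for K, and also includes the unsteady-state head expression commonly quoted for Sunjoto's 1988 equation; since the abstract does not reproduce that formula, it is stated here as an assumption.

```python
import math

def casing_test_permeability(Q, F, H):
    """Forchheimer's steady-state relation quoted above: Q = F*K*H, so a
    constant-head casing test gives K = Q / (F * H)."""
    return Q / (F * H)

def sunjoto_head(Q, F, K, T, R):
    """Head build-up in a recharge well of radius R after injection time T.
    ASSUMPTION: this is the form commonly quoted for Sunjoto's (1988)
    unsteady-state equation; the abstract itself does not give the formula.

        H = Q/(F*K) * (1 - exp(-F*K*T / (pi * R**2)))
    """
    return Q / (F * K) * (1.0 - math.exp(-F * K * T / (math.pi * R**2)))
```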

  1. An EMSO data case study within the INDIGO-DC project

    NASA Astrophysics Data System (ADS)

    Monna, Stephen; Marcucci, Nicola M.; Marinaro, Giuditta; Fiore, Sandro; D'Anca, Alessandro; Antonacci, Marica; Beranzoli, Laura; Favali, Paolo

    2017-04-01

    We present our experience based on a case study within the INDIGO-DataCloud (INtegrating Distributed data Infrastructures for Global ExplOitation) project (www.indigo-datacloud.eu). The aim of INDIGO-DC is to develop a data and computing platform targeting scientific communities. Our case study is an example of activities performed by INGV using data from seafloor observatories that are nodes of the infrastructure EMSO (European Multidisciplinary Seafloor and water column Observatory)-ERIC (www.emso-eu.org). EMSO is composed of several deep-seafloor and water-column observatories, deployed at key sites in European waters, thus forming a widely distributed pan-European infrastructure. In our case study we consider data collected by the NEMO-SN1 observatory, one of the EMSO nodes used for geohazard monitoring, located in the Western Ionian Sea in proximity to Etna volcano. Starting from the case study, through an agile approach, we defined some requirements for INDIGO developers and tested some of the proposed INDIGO solutions that are of interest for our research community. Given that EMSO is a distributed infrastructure, we are interested in INDIGO solutions that allow access to distributed data storage. Access should be both user-oriented and machine-oriented, and should use a common identity and access system. For this purpose, we have been testing ONEDATA (https://onedata.org) as a global data management system and INDIGO-IAM as an identity and access management system. Another aspect we are interested in is efficient data processing, for which we have focused on two INDIGO products: Ophidia (http://ophidia.cmcc.it), a big data analytics framework for eScience for the analysis of multidimensional data, and a collection of INDIGO services to run processes for scientific computing through the INDIGO Orchestrator.

  2. Simulation-based Testing of Control Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozmen, Ozgur; Nutaro, James J.; Sanyal, Jibonananda

    It is impossible to adequately test complex software by examining its operation in a physical prototype of the system monitored. Adequate test coverage can require millions of test cases, and the cost of equipment prototypes combined with the real-time constraints of testing with them makes it infeasible to sample more than a small number of these tests. Model-based testing seeks to avoid this problem by allowing for large numbers of relatively inexpensive virtual prototypes that operate in simulation time at a speed limited only by the available computing resources. In this report, we describe how a computer system emulator can be used as part of a model-based testing environment; specifically, we show that a complete software stack - including operating system and application software - can be deployed within a simulated environment, and that these simulations can proceed as fast as possible. To illustrate this approach to model-based testing, we describe how it is being used to test several building control systems that act to coordinate air conditioning loads for the purpose of reducing peak demand. These tests involve the use of ADEVS (A Discrete Event System Simulator) and QEMU (Quick Emulator) to host the operational software within the simulation, and a building model developed with the MODELICA programming language using the Buildings Library and packaged as an FMU (Functional Mock-up Unit) that serves as the virtual test environment.

  3. Mobile Cloud Computing with SOAP and REST Web Services

    NASA Astrophysics Data System (ADS)

    Ali, Mushtaq; Fadli Zolkipli, Mohamad; Mohamad Zain, Jasni; Anwar, Shahid

    2018-05-01

    Mobile computing in conjunction with mobile web services offers a strong approach by which the limitations of mobile devices may be tackled. Mobile web services are based on two technologies, SOAP and REST, which work with existing protocols to develop web services. Each approach carries its own distinct features, yet with the constrained resources of mobile devices in mind, the better of the two is the one that minimizes computation and transmission overhead while offloading. Transferring load from the mobile device to remote servers for execution is called computational offloading. There are numerous approaches to implementing computational offloading as a viable solution for overcoming the resource constraints of mobile devices, yet a dynamic method of computational offloading is always required for a smooth and simple migration of complex tasks. The intention of this work is to present a distinctive approach that does not engage the mobile resources for longer than necessary. The concept of web services is utilized in our work to delegate computationally intensive tasks for remote execution. We tested both the SOAP and REST web service approaches for mobile computing, with two parameters considered in our lab experiments: execution time and energy consumption. The results show that RESTful web service execution is far better than executing the same application through SOAP web services, in terms of both execution time and energy consumption. In experiments with the developed prototype matrix-multiplication app, REST execution time is about 200% better than the SOAP approach; in the case of energy consumption, REST execution is about 250% better.
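
    A minimal sketch of the REST side of such an offloading experiment, assuming a hypothetical endpoint and JSON contract (the paper's actual service, app, and measurement harness are not specified in the abstract):

```python
import time
import requests  # third-party HTTP client

# Hypothetical offloading endpoint; the URL and the {"a", "b", "result"}
# JSON contract are illustrative only.
URL = "http://example.com/api/matmul"

def offload_matmul(a, b):
    """POST two matrices (nested lists) to a RESTful service and time the
    round trip, mirroring the execution-time measurement in the paper."""
    t0 = time.perf_counter()
    resp = requests.post(URL, json={"a": a, "b": b}, timeout=30)
    resp.raise_for_status()
    return resp.json()["result"], time.perf_counter() - t0
```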

  4. Experimental Investigation of Water Droplet Impingement on Airfoils, Finite Wings, and an S-duct Engine Inlet

    NASA Technical Reports Server (NTRS)

    Papadakis, Michael; Hung, Kuohsing E.; Vu, Giao T.; Yeong, Hsiung Wei; Bidwell, Colin S.; Breer, Martin D.; Bencic, Timothy J.

    2002-01-01

    Validation of trajectory computer codes, for icing analysis, requires experimental water droplet impingement data for a wide range of aircraft geometries as well as flow and icing conditions. This report presents improved experimental and data reduction methods for obtaining water droplet impingement data and provides a comprehensive water droplet impingement database for a range of test geometries including an MS(1)-0317 airfoil, a GLC-305 airfoil, an NACA 65(sub 2)-415 airfoil, a commercial transport tail section, a 36-inch chord natural laminar flow NLF(1)-0414 airfoil, a 48-inch NLF(1)-0414 section with a 25 percent chord simple flap, a state-of-the-art three-element high lift system, a NACA 64A008 finite span swept business jet tail, a full-scale business jet horizontal tail section, a 25 percent-scale business jet empennage, and an S-duct turboprop engine inlet. The experimental results were obtained at the NASA Glenn Icing Research Tunnel (IRT) for spray clouds with median volumetric diameter (MVD) of 11, 11.5, 21, 92, and 94 microns and for a range of angles of attack. The majority of the impingement experiments were conducted at an air speed of 175 mph corresponding to a Reynolds number of approximately 1.6 million per foot. The maximum difference of repeated tests from the average ranged from 0.24 to 12 percent for most of the experimental results presented. This represents a significant improvement in test repeatability compared to previous experimental studies. The increase in test repeatability was attributed to improvements made to the experimental and data reduction methods. Computations performed with the LEWICE-2D and LEWICE-3D computer codes for all test configurations are presented in this report. For the test cases involving median volumetric diameters of 11 and 21 microns, the correlation between the analytical and experimental impingement efficiency distributions was good. For the median volumetric diameters of 92 and 94-micron cases, however, the analysis produced higher impingement efficiencies and larger impingement limits than the experiment. It is speculated that this discrepancy is due to droplet splashing and breakup experienced by large droplets during impingement.

  5. Two-step sensitivity testing of parametrized and regionalized life cycle assessments: methodology and case study.

    PubMed

    Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie

    2013-06-04

    Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
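
    The screening step named above, the method of elementary effects, can be sketched as follows. This simplified version uses positive one-at-a-time steps and returns the mu* measure of Campolongo et al.; the model, bounds, and trajectory count are placeholders, and the costlier contribution-to-variance step would follow on the parameters it flags.

```python
import numpy as np

def morris_mu_star(model, lo, hi, r=20, delta=0.1, rng=None):
    """Elementary-effects screening (simplified: positive steps only).
    Returns mu*, the mean absolute elementary effect per parameter, at a
    cost of r*(k+1) model runs for k parameters."""
    rng = rng or np.random.default_rng()
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    k = lo.size
    ee = np.empty((r, k))
    for i in range(r):
        x = rng.uniform(lo, hi - delta * (hi - lo))  # leave room to step up
        f_prev = model(x)
        for j in rng.permutation(k):                 # one-at-a-time trajectory
            x[j] += delta * (hi[j] - lo[j])
            f_new = model(x)
            ee[i, j] = (f_new - f_prev) / delta      # effect per unit scaled step
            f_prev = f_new
    return np.abs(ee).mean(axis=0)

# Example: flag the influential inputs of a toy model
mu = morris_mu_star(lambda x: x[0] + 10 * x[1] ** 2 + 0.01 * x[2],
                    lo=[0, 0, 0], hi=[1, 1, 1])
```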

  6. Enhanced Representation of Turbulent Flow Phenomena in Large-Eddy Simulations of the Atmospheric Boundary Layer using Grid Refinement with Pseudo-Spectral Numerics

    NASA Astrophysics Data System (ADS)

    Torkelson, G. Q.; Stoll, R., II

    2017-12-01

    Large Eddy Simulation (LES) is a tool commonly used to study the turbulent transport of momentum, heat, and moisture in the Atmospheric Boundary Layer (ABL). For a wide range of ABL LES applications, representing the full range of turbulent length scales in the flow field is a challenge. This is an acute problem in regions of the ABL with strong velocity or scalar gradients, which are typically poorly resolved by standard computational grids (e.g., near the ground surface, in the entrainment zone). Most efforts to address this problem have focused on advanced sub-grid scale (SGS) turbulence model development, or on the use of massive computational resources. While some work exists using embedded meshes, very little has been done on the use of grid refinement. Here, we explore the benefits of grid refinement in a pseudo-spectral LES numerical code. The code utilizes both uniform refinement of the grid in horizontal directions, and stretching of the grid in the vertical direction. Combining the two techniques allows us to refine areas of the flow while maintaining an acceptable grid aspect ratio. In tests that used only refinement of the vertical grid spacing, large grid aspect ratios were found to cause a significant unphysical spike in the stream-wise velocity variance near the ground surface. This was especially problematic in simulations of stably-stratified ABL flows. The use of advanced SGS models was not sufficient to alleviate this issue. The new refinement technique is evaluated using a series of idealized simulation test cases of neutrally and stably stratified ABLs. These test cases illustrate the ability of grid refinement to increase computational efficiency without loss in the representation of statistical features of the flow field.
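
    For illustration, one common way to build a stretched vertical grid of the kind described above is a tanh mapping that clusters levels near the surface; the abstract does not give the code's exact mapping, so the function below, and the horizontal spacing used in the aspect-ratio check, are assumed examples.

```python
import numpy as np

def tanh_stretched_levels(n, depth, beta=2.0):
    """Vertical levels on [0, depth], clustered toward the surface z = 0
    via tanh stretching (an illustrative choice of mapping)."""
    s = np.linspace(0.0, 1.0, n)
    return depth * (1.0 - np.tanh(beta * (1.0 - s)) / np.tanh(beta))

z = tanh_stretched_levels(65, 1000.0)   # e.g. a 1000 m deep ABL domain
dz = np.diff(z)                         # finest spacing near the wall
aspect = 25.0 / dz                      # vs. an assumed 25 m horizontal step
```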

  7. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Tolson, Bryan

    2017-04-01

    The increasing complexity and runtime of environmental models often make the calibration of all model parameters, or the estimation of all of their uncertainties, computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If it is checked at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indices. Bootstrapping, however, can itself become computationally expensive in the case of large model outputs and a high number of bootstrap samples. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indices without performing any additional model runs. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indices. To demonstrate the method independency of the convergence testing method, we applied it to three widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al. 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010), and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indices of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on the model-independency by testing the frugal method using the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations, and it therefore enables the checking of already processed (and published) sensitivity results. This is one step towards reliable and transferable published sensitivity results.
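
    For concreteness, the sketch below shows the conventional bootstrap check that the abstract contrasts with MVA: first-order Sobol' indices estimated with the Saltelli (2010) estimator on the Ishigami benchmark, with row-resampling bootstrap confidence intervals. This is a baseline illustration, not the MVA method itself.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

def first_order(fA, fB, fAB):
    """Saltelli et al. (2010) estimator of the first-order index S_i."""
    return np.mean(fB * (fAB - fA)) / np.var(np.r_[fA, fB], ddof=1)

rng = np.random.default_rng(1)
n, d = 2048, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)

for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]          # matrix A with column i taken from B
    fAB = ishigami(AB)
    s_hat = first_order(fA, fB, fAB)
    # Bootstrap rows jointly to gauge whether the index has converged.
    boot = [first_order(fA[k], fB[k], fAB[k])
            for k in (rng.integers(0, n, n) for _ in range(500))]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"S_{i+1} = {s_hat:.3f}  (bootstrap 95% CI [{lo:.3f}, {hi:.3f}])")
```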

  8. Demo of three ways to use a computer to assist in lab

    NASA Technical Reports Server (NTRS)

    Neville, J. P.

    1990-01-01

    The objective is to help the slow learner and students with a language problem, or to challenge the advanced student. Technology has advanced to the point where images generated on a computer can easily be recorded on a VCR and used as a video tutorial. This transfer can be as simple as pointing a video camera at the screen and recording the image. For more clarity and professional results, a board may be inserted into a computer which will convert the signals directly to the TV standard. Using a computer program that generates movies, one can animate various principles which would normally be impossible to show or would require time-lapse photography. For example, you might show the change in shape of grains as a piece of metal is cold worked and then show the recrystallization and grain growth as heat is applied. More imaginative titles and graphics are also possible using this technique. Remedial help may also be offered via computer to those who find a specific concept difficult. A printout of specific data, details of the theory, or equipment set-up can be offered. Programs are now available that will help as well as test the student in specific areas so that a Keller-type approach can be used with each student to ensure each knows the subject before going on to the next topic. A computer can serve as an information source and contain the microstructures, physical data, and availability of each material tested in the lab. With this source present, unknowns can be evaluated and various tests simulated to create a simple or complex case study lab assignment.

  9. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
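
    Method 1, the classical Monte Carlo p-value, is easy to sketch. The example below uses the standard add-one estimator with a deliberately crude symmetry statistic (|mean - median| / SD) standing in for the paper's density-based empirical likelihood ratio; the statistic and null model are illustrative only.

```python
import numpy as np

def mc_pvalue(t_obs, simulate_null, n_mc=9999, seed=0):
    """Classical Monte Carlo p-value with the add-one estimator:
    p = (1 + #{T* >= t_obs}) / (n_mc + 1)."""
    rng = np.random.default_rng(seed)
    t_null = np.array([simulate_null(rng) for _ in range(n_mc)])
    return (1 + np.sum(t_null >= t_obs)) / (n_mc + 1)

def stat(y):
    # Crude asymmetry statistic; stands in for the likelihood-ratio statistic.
    return abs(y.mean() - np.median(y)) / y.std(ddof=1)

x = np.random.default_rng(7).exponential(size=50)   # a clearly skewed sample
p = mc_pvalue(stat(x), lambda rng: stat(rng.normal(size=x.size)))
print("Monte Carlo p-value for symmetry:", p)
```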

  10. Thermal-hydraulic posttest analysis for the ANL/MCTF 360° model heat-exchanger water test under mixed convection. [LMFBR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, C.I.; Sha, W.T.; Kasza, K.E.

    As a result of the uncertainties in the understanding of the influence of thermal-buoyancy effects on the flow and heat transfer in Liquid Metal Fast Breeder Reactor heat exchangers and steam generators under off-normal operating conditions, an extensive experimental program is being conducted at Argonne National Laboratory to eliminate these uncertainties. Concurrently, a parallel analytical effort is also being pursued to develop a three-dimensional transient computer code (COMMIX-IHX) to study and predict heat exchanger performance under mixed, forced, and free convection conditions. This paper presents computational results from a heat exchanger simulation and compares them with the results from a test case exhibiting strong thermal buoyancy effects. Favorable agreement between experiment and code prediction is obtained.

  11. Parameter Estimation for a Turbulent Buoyant Jet Using Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Christopher, Jason D.; Wimer, Nicholas T.; Hayden, Torrey R. S.; Lapointe, Caelan; Grooms, Ian; Rieker, Gregory B.; Hamlington, Peter E.

    2016-11-01

    Approximate Bayesian Computation (ABC) is a powerful tool that allows sparse experimental or other "truth" data to be used for the prediction of unknown model parameters in numerical simulations of real-world engineering systems. In this presentation, we introduce the ABC approach and then use ABC to predict unknown inflow conditions in simulations of a two-dimensional (2D) turbulent, high-temperature buoyant jet. For this test case, truth data are obtained from a simulation with known boundary conditions and problem parameters. Using spatially-sparse temperature statistics from the 2D buoyant jet truth simulation, we show that the ABC method provides accurate predictions of the true jet inflow temperature. The success of the ABC approach in the present test suggests that ABC is a useful and versatile tool for engineering fluid dynamics research.
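
    A minimal rejection-ABC sketch conveys the idea: draw parameters from the prior, simulate, and keep draws whose simulated summaries fall within a tolerance of the observed ones. The "model" below is a trivial stand-in for the buoyant-jet simulation (five noisy temperature sensors), and the prior, distance, and tolerance are illustrative assumptions.

```python
import numpy as np

def rejection_abc(obs, simulate, prior_sample, distance,
                  n_draws=20000, eps=0.02, seed=0):
    """Rejection ABC: keep prior draws whose simulated summaries land
    within eps of the observed summaries."""
    rng = np.random.default_rng(seed)
    kept = [theta for theta in (prior_sample(rng) for _ in range(n_draws))
            if distance(simulate(theta, rng), obs) < eps]
    return np.array(kept)

# Trivial stand-in for the 2D buoyant-jet simulation: five noisy
# "temperature sensors" whose mean level depends on the inflow temperature.
def simulate(theta, rng):
    return theta + rng.normal(0.0, 2.0, size=5)

true_theta = 350.0
obs = simulate(true_theta, np.random.default_rng(42))

post = rejection_abc(
    obs, simulate,
    prior_sample=lambda rng: rng.uniform(300.0, 400.0),
    distance=lambda a, b: np.abs(a - b).mean() / 350.0)  # normalized L1
print(f"accepted {post.size} draws; posterior mean = {post.mean():.1f} K")
```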

  12. Flow force and torque on submerged bodies in lattice-Boltzmann methods via momentum exchange.

    PubMed

    Giovacchini, Juan P; Ortiz, Omar E

    2015-12-01

    We review the momentum exchange method to compute the flow force and torque on a submerged body in lattice-Boltzmann methods by presenting an alternative derivation. Our derivation does not depend on a particular implementation of the boundary conditions at the body surface, and it relies on general principles. After the introduction of the momentum exchange method in lattice-Boltzmann methods, some formulations were introduced to compute the fluid force on static and moving bodies. These formulations were introduced in a rather intuitive, ad hoc way. In our derivation, we recover the proposals most frequently used, in some cases with minor corrections, gaining some insight into the two most widely used formulations. Finally, we present some numerical tests comparing different approaches on a well-known benchmark, which support the correctness of the formulas derived.
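
    One frequently used variant of the momentum exchange sum (there are several in the literature, which is part of what the paper examines) can be sketched on a D2Q9 lattice: for every link from a fluid node to a solid node along c_i, accumulate (f_i + f_ibar) c_i. This is an illustrative variant, not the paper's derivation; the demonstration below checks only the trivial at-rest equilibrium case, where symmetry forces the net force on a submerged block to vanish.

```python
import numpy as np

# D2Q9 lattice: velocities c_i and the opposite-direction index i-bar.
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
OPP = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

def momentum_exchange_force(f, solid):
    """Sum one common momentum-exchange variant over all fluid->solid links:
    F = sum_links (f_i(x_f) + f_ibar(x_f)) * c_i, in lattice units."""
    force = np.zeros(2)
    for i in range(1, 9):
        cx, cy = C[i]
        neighbor_solid = np.roll(solid, shift=(-cx, -cy), axis=(0, 1))
        links = (~solid) & neighbor_solid      # fluid node with solid neighbor
        force += (f[i][links] + f[OPP[i]][links]).sum() * C[i]
    return force

# Sanity check: an at-rest equilibrium fluid exerts no net force on a block.
W = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])
nx = ny = 32
f = np.broadcast_to(W[:, None, None], (9, nx, ny)).copy()
solid = np.zeros((nx, ny), dtype=bool)
solid[12:20, 12:20] = True                     # an 8x8 submerged square
print(momentum_exchange_force(f, solid))       # -> [0. 0.]
```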

  13. HART-II Acoustic Predictions using a Coupled CFD/CSD Method

    NASA Technical Reports Server (NTRS)

    Boyd, D. Douglas, Jr.

    2009-01-01

    This paper documents results to date from the Rotorcraft Acoustic Characterization and Mitigation activity under the NASA Subsonic Rotary Wing Project. The primary goal of this activity is to develop a NASA rotorcraft impulsive noise prediction capability which uses first-principles fluid dynamics and structural dynamics. During this effort, elastic blade motion and co-processing capabilities have been included in a recent version of the computational fluid dynamics (CFD) code. The CFD code is loosely coupled to a computational structural dynamics (CSD) code using new interface codes. The CFD/CSD coupled solution is then used to compute impulsive noise on a plane under the rotor using the Ffowcs Williams-Hawkings solver. This code system is then applied to a range of cases from the Higher Harmonic Aeroacoustic Rotor Test II (HART-II) experiment. For all cases presented, the full experimental configuration (i.e., rotor and wind tunnel sting mount) is used in the coupled CFD/CSD solutions. Results show good correlation between measured and predicted loading and loading time derivative at the only measured radial station. A contributing factor for a typically seen loading mean-value offset between measured and predicted data is examined. Impulsive noise predictions on the measured microphone plane under the rotor compare favorably with measured mid-frequency noise for all cases. Flow visualization of the BL and MN cases shows that vortex structures generated in the prediction method are consistent with measurements. Future application of the prediction method is discussed.

  14. Using regression equations built from summary data in the psychological assessment of the individual case: extension to multiple regression.

    PubMed

    Crawford, John R; Garthwaite, Paul H; Denham, Annie K; Chelune, Gordon J

    2012-12-01

    Regression equations have many useful roles in psychological assessment. Moreover, there is a large reservoir of published data that could be used to build regression equations; these equations could then be employed to test a wide variety of hypotheses concerning the functioning of individual cases. This resource is currently underused because (a) not all psychologists are aware that regression equations can be built not only from raw data but also using only basic summary data for a sample, and (b) the computations involved are tedious and prone to error. In an attempt to overcome these barriers, Crawford and Garthwaite (2007) provided methods to build and apply simple linear regression models using summary statistics as data. In the present study, we extend this work to set out the steps required to build multiple regression models from sample summary statistics and the further steps required to compute the associated statistics for drawing inferences concerning an individual case. We also develop, describe, and make available a computer program that implements these methods. Although there are caveats associated with the use of the methods, these need to be balanced against pragmatic considerations and against the alternative of either entirely ignoring a pertinent data set or using it informally to provide a clinical "guesstimate." Upgraded versions of earlier programs for regression in the single case are also provided; these add the point and interval estimates of effect size developed in the present article.
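
    The core algebra of building a regression equation from summary data alone is compact, as the sketch below shows for the multiple-predictor case: standardized weights come from the predictor correlation matrix and the predictor-criterion correlations, and raw-score slopes and intercept from the SDs and means. The numbers are hypothetical, and the inferential statistics for the individual case (the substance of Crawford and Garthwaite's methods) are not reproduced here.

```python
import numpy as np

def regression_from_summary(Rxx, rxy, sd, means):
    """Multiple regression built from summary data only.
    Rxx  : predictor intercorrelation matrix (k x k)
    rxy  : predictor-criterion correlations (k,)
    sd   : SDs of [predictors..., criterion] (k+1,)
    means: means of [predictors..., criterion] (k+1,)
    Returns raw-score slopes b and intercept b0."""
    beta = np.linalg.solve(Rxx, rxy)     # standardized weights
    b = beta * sd[-1] / sd[:-1]          # raw-score slopes
    b0 = means[-1] - b @ means[:-1]      # intercept from the means
    return b, b0

# Hypothetical summary statistics for two predictors and a criterion score.
Rxx = np.array([[1.0, 0.3],
                [0.3, 1.0]])
rxy = np.array([0.5, 0.4])
sd = np.array([10.0, 2.5, 15.0])
means = np.array([100.0, 12.0, 50.0])

b, b0 = regression_from_summary(Rxx, rxy, sd, means)
print("slopes:", b, " intercept:", round(b0, 2))
print("prediction for a case with x = (115, 14):",
      round(b @ [115.0, 14.0] + b0, 2))
```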

  15. Multi-GPU implementation of a VMAT treatment plan optimization algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Zhen, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Folkerts, Michael; Tan, Jun

    Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors’ group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors’ method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and MP problems are implemented on a CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai and Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H and N) cancer case is then used to validate the authors’ method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H and N patient cases and three prostate cases are used to demonstrate the advantages of the authors’ method. Results: The authors’ multi-GPU implementation can finish the optimization process within ∼1 min for the H and N patient case. S1 leads to an inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23–46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of these VMAT cases that the authors have tested in this paper, the optimization time needed in a commercial TPS system on a CPU was found to be on the order of several minutes. Conclusions: The results demonstrate that the multi-GPU implementation of the authors’ column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors’ study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
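
    The CPU-side partitioning step described in the Methods can be sketched with SciPy (the GPU storage and peer-to-peer transfers are not shown, and the sizes below are hypothetical): the COO-format DDC matrix is split into per-GPU column blocks by beam angle and converted to a compressed sparse format.

```python
import numpy as np
from scipy import sparse

def split_ddc_by_angle(ddc, beamlet_angle, n_gpus=4):
    """Group beamlet columns of the sparse DDC matrix by beam angle and
    return one CSR submatrix per GPU (here: per group, CPU-side only)."""
    order = np.argsort(beamlet_angle)
    col_groups = np.array_split(order, n_gpus)
    csc = ddc.tocsc()
    return [csc[:, cols].tocsr() for cols in col_groups], col_groups

# Hypothetical toy sizes: 5000 voxels x 800 beamlets, ~0.5% nonzeros.
ddc = sparse.random(5000, 800, density=0.005, format="coo", random_state=0)
angles = np.random.default_rng(0).uniform(0.0, 360.0, 800)

chunks, groups = split_ddc_by_angle(ddc, angles)
for g, (mat, cols) in enumerate(zip(chunks, groups)):
    print(f"GPU {g}: {len(cols)} beamlet columns, {mat.nnz} nonzeros (CSR)")
```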

  16. Computed tomography of the lacrimal drainage system: retrospective study of 107 cases of dacryostenosis.

    PubMed

    Francis, I C; Kappagoda, M B; Cole, I E; Bank, L; Dunn, G D

    1999-05-01

    To evaluate the role of computed tomography in patients with dacryostenosis. One hundred seven cases of dacryostenosis (94 patients) were assessed by thorough clinical and lacrimal history and examination, and lacrimal region computed tomography (CT). The lacrimal drainage system examination included the state and position of the puncta; Jones testing; lacrimal syringing; and, in the latter half of the study, telescopic nasal endoscopy. The patients were drawn from the hospital outpatients and private office of the operating lacrimal surgeon in this series (I.C.F.). Of the 107 cases, 79 either underwent dacryocystorhinostomy surgery or had this planned. In 14 of the 107 cases (12 patients), preoperative CT led to an alteration of patient management, usually referral to an otolaryngologist for further evaluation or treatment. In addition to the detection of two tumors extrinsic to the sac, conditions such as ethmoiditis, lacrimal sac mucoceles, soft tissue opacity in the nasolacrimal duct, gross nasal polyposis, fungal sinusitis, and a dacryolith were observed by CT. As with functional endoscopic sinus surgery in otolaryngology, CT imaging will become increasingly important in the assessment of many patients with symptoms of lacrimal drainage obstruction.

  17. Computer-aided classification of patients with dementia of Alzheimer's type based on cerebral blood flow determined with arterial spin labeling technique

    NASA Astrophysics Data System (ADS)

    Yamashita, Yasuo; Arimura, Hidetaka; Yoshiura, Takashi; Tokunaga, Chiaki; Magome, Taiki; Monji, Akira; Noguchi, Tomoyuki; Toyofuku, Fukai; Oki, Masafumi; Nakamura, Yasuhiko; Honda, Hiroshi

    2010-03-01

    Arterial spin labeling (ASL) is one of the promising non-invasive magnetic resonance (MR) imaging techniques for the diagnosis of Alzheimer's disease (AD), based on measuring cerebral blood flow (CBF). The aim of this study was to develop a computer-aided classification system for AD patients based on CBFs measured by the ASL technique. The average CBFs in cortical regions were determined as functional image features based on the CBF map image, which was non-linearly transformed to a Talairach brain atlas by using a free-form deformation. An artificial neural network (ANN) was trained with the CBF functional features in 10 cortical regions, and was employed for distinguishing patients with AD from control subjects. For evaluation of the method, we applied the proposed method to 20 cases including ten AD patients and ten control subjects, who were scanned with a 3.0-Tesla MR unit. As a result, the area under the receiver operating characteristic curve obtained by the proposed method was 0.893 based on a leave-one-out-by-case test in identifying AD cases among the 20 cases. The proposed method would be feasible for the classification of patients with AD.
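
    A minimal version of this pipeline, leave-one-out evaluation of a small neural network on 10 regional CBF features, can be sketched with scikit-learn; the synthetic CBF values below merely mimic the group sizes and are not the study's data or model.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic stand-in: 10 regional CBF features [ml/100 g/min] for
# 10 AD-like patients (lower mean CBF) and 10 control-like subjects.
X = np.vstack([rng.normal(40.0, 6.0, (10, 10)),
               rng.normal(50.0, 6.0, (10, 10))])
y = np.array([1] * 10 + [0] * 10)

scores = np.empty(len(y))
for train, test in LeaveOneOut().split(X):
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(5,),
                                      max_iter=2000, random_state=0))
    clf.fit(X[train], y[train])
    scores[test] = clf.predict_proba(X[test])[:, 1]
print("leave-one-out AUC:", round(roc_auc_score(y, scores), 3))
```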

  18. Efficient computational nonlinear dynamic analysis using modal modification response technique

    NASA Astrophysics Data System (ADS)

    Marinone, Timothy; Avitabile, Peter; Foley, Jason; Wolfson, Janet

    2012-08-01

    Structural systems frequently contain nonlinear characteristics. These nonlinear systems require significant computational resources for the solution of the equations of motion. Much of the model, however, is often linear, with the nonlinearity resulting from discrete local elements connecting different components together. Using a component mode synthesis approach, a nonlinear model can be developed by interconnecting these linear components with highly nonlinear connection elements. The approach presented in this paper, the Modal Modification Response Technique (MMRT), is a very efficient technique that has been created to address this specific class of nonlinear problem. By utilizing a Structural Dynamics Modification (SDM) approach in conjunction with mode superposition, a significantly smaller set of matrices is required for use in the direct integration of the equations of motion. The approach will be compared to traditional analytical approaches to make evident the usefulness of the technique for a variety of test cases.
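
    The efficiency argument is easiest to see in code: once the linear component is reduced to a handful of modal coordinates, only the local connection force is nonlinear. The sketch below integrates three modal equations with one cubic connection element; the mode shapes, frequencies, and stiffness are invented for illustration, and this is generic mode superposition, not the MMRT itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Linear component reduced to three mass-normalized modes; one local cubic
# spring acts at a single physical DOF x_c = phi_c @ q. All values invented.
omega = np.array([10.0, 35.0, 80.0])         # modal frequencies [rad/s]
zeta = 0.02                                  # modal damping ratio
phi_c = np.array([0.8, -0.5, 0.3])           # mode shapes at the connection DOF
k3 = 5.0e4                                   # cubic connection stiffness
f_ext = lambda t: 100.0 * np.sin(12.0 * t)   # forcing at the same DOF

def rhs(t, s):
    q, qd = s[:3], s[3:]
    xc = phi_c @ q                           # physical motion at the connection
    f_c = f_ext(t) - k3 * xc**3              # only this term is nonlinear
    qdd = phi_c * f_c - 2.0 * zeta * omega * qd - omega**2 * q
    return np.concatenate([qd, qdd])

sol = solve_ivp(rhs, (0.0, 5.0), np.zeros(6), max_step=1e-3)
print("peak connection displacement:",
      float(np.abs(phi_c @ sol.y[:3]).max()))
```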

  19. Computation of three-dimensional compressible boundary layers to fourth-order accuracy on wings and fuselages

    NASA Technical Reports Server (NTRS)

    Iyer, Venkit

    1990-01-01

    A solution method, fourth-order accurate in the body-normal direction and second-order accurate in the stream-surface directions, to solve the compressible 3-D boundary layer equations is presented. The transformation used, the discretization details, and the solution procedure are described. Ten validation cases of varying complexity are presented and results of the calculations are given. The results range from subsonic flow to supersonic flow and involve 2-D or 3-D geometries. Applications to laminar flow past wing and fuselage-type bodies are discussed. An interface procedure is used to solve the surface Euler equations with the inviscid flow pressure field as the input to ensure accurate boundary conditions at the boundary layer edge. Complete details of the computer program used and information necessary to run each of the test cases are given in the Appendix.

  20. A unified design space of synthetic stripe-forming networks

    PubMed Central

    Schaerli, Yolanda; Munteanu, Andreea; Gili, Magüi; Cotterell, James; Sharpe, James; Isalan, Mark

    2014-01-01

    Synthetic biology is a promising tool to study the function and properties of gene regulatory networks. Gene circuits with predefined behaviours have been successfully built and modelled, but largely on a case-by-case basis. Here we go beyond individual networks and explore both computationally and synthetically the design space of possible dynamical mechanisms for 3-node stripe-forming networks. First, we computationally test every possible 3-node network for stripe formation in a morphogen gradient. We discover four different dynamical mechanisms to form a stripe and identify the minimal network of each group. Next, with the help of newly established engineering criteria we build these four networks synthetically and show that they indeed operate with four fundamentally distinct mechanisms. Finally, this close match between theory and experiment allows us to infer and subsequently build a 2-node network that represents the archetype of the explored design space. PMID:25247316
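
    The exhaustive computational screen can be imitated in miniature: enumerate all 3^9 signed 3-node topologies, relax a toy sigmoidal ODE to steady state across a morphogen gradient, and flag outputs that peak mid-gradient. The dynamics, gain, and stripe criterion below are arbitrary stand-ins for the paper's model, kept only to show the shape of the computation.

```python
import numpy as np
from itertools import product

sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

# Every 3-node topology: each of the 9 interactions is repression (-1),
# absent (0), or activation (+1), giving 3**9 = 19683 networks.
W = np.array(list(product([-1, 0, 1], repeat=9)), dtype=float).reshape(-1, 3, 3)
n = W.shape[0]

m = np.linspace(0.0, 4.0, 11)            # positions along the morphogen gradient
inp = np.zeros((3, m.size))
inp[0] = m                               # the morphogen drives node 0

# Relax dx/dt = sigmoid(gain*(W @ x) + input - bias) - x to steady state
# for all networks and all gradient positions simultaneously.
x = 0.5 * np.ones((n, 3, m.size))
for _ in range(300):
    x += 0.2 * (sigmoid(3.0 * np.einsum("nij,njk->nik", W, x) + inp - 1.5) - x)

out = x[:, 2, :]                         # node 2 is the designated output
stripe = (out[:, 5] > out[:, 0] + 0.3) & (out[:, 5] > out[:, -1] + 0.3)
print(f"{stripe.sum()} of {n} topologies form a stripe in this toy model")
```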

  1. A simple, stable, and accurate linear tetrahedral finite element for transient, nearly, and fully incompressible solid dynamics: A dynamic variational multiscale approach [A simple, stable, and accurate tetrahedral finite element for transient, nearly incompressible, linear and nonlinear elasticity: A dynamic variational multiscale approach

    DOE PAGES

    Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi; ...

    2015-11-12

    Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.

  3. An experimental investigation of compressible three-dimensional boundary layer flow in annular diffusers

    NASA Technical Reports Server (NTRS)

    Om, Deepak; Childs, Morris E.

    1987-01-01

    An experimental study is described in which detailed wall pressure measurements have been obtained for compressible three-dimensional unseparated boundary layer flow in annular diffusers with and without normal shock waves. Detailed mean flow-field data were also obtained for the diffuser flow without a shock wave. Two diffuser flows with shock waves were investigated. In one case, the normal shock existed over the complete annulus, whereas in the second case, the shock existed over a part of the annulus. The data obtained can be used to validate computational codes for predicting such flow fields. The details of the flow field without the shock wave show flow reversal in the circumferential direction on both inner and outer surfaces. However, there is a lag in the flow reversal between the inner and the outer surfaces. This is an interesting feature of this flow and should be a good test for the computational codes.

  4. Computation of water hammer protection of modernized pumping station

    NASA Astrophysics Data System (ADS)

    Himr, Daniel

    2014-03-01

    The pumping station supplies water for irrigation. Its maximal capacity of 2 × 1.2 m3·s-1 became insufficient, so it was upgraded to 2 × 2 m3·s-1. The paper focuses on the design of protection against water hammer in the case of a sudden pump trip. Numerical simulation of the most dangerous case (when the pumps are delivering the maximal flow rate) showed that the existing air vessels were not able to protect the system and that it would be necessary to add new vessels. Special care was paid to the influence of their connection to the main pipeline, because the resistance of the connection has a significant impact on the scale of the pressure pulsations. Finally, a pump trip was performed to verify that the system worked correctly. The test showed that the pressure pulsations are lower (better) than the computation predicted. This discrepancy was further analysed.
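
    A back-of-the-envelope bound on the surge such a protection design must absorb is the Joukowsky relation dp = rho * a * dv for an instantaneous pump trip; the wave speed and pipe diameter below are assumed values for illustration, not the station's data, and a real design would rely on a full transient simulation such as the one described above.

```python
import math

# Joukowsky bound on the surge after an instantaneous trip: dp = rho * a * dv.
rho = 1000.0                    # water density [kg/m^3]
a = 1000.0                      # assumed pressure-wave speed in the pipe [m/s]
Q = 2 * 2.0                     # upgraded total flow rate [m^3/s]
D = 1.2                         # assumed pipeline diameter [m]
v = Q / (math.pi * D**2 / 4.0)  # flow velocity lost at the trip [m/s]
dp = rho * a * v
print(f"v = {v:.2f} m/s  ->  Joukowsky surge ~ {dp / 1e5:.0f} bar")
```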

  5. A Simulation Study Comparing Procedures for Assessing Individual Educational Growth. Report No. 182.

    ERIC Educational Resources Information Center

    Richards, James M., Jr.

    A computer simulation procedure was developed to reproduce the overall pattern of results obtained in the Educational Testing Service Growth Study. Then simulated data for seven sets of 10,000 to 15,000 cases were analyzed, and findings compared on the basis of correlations between estimated and true growth scores. Findings showed that growth was…

  6. Analysis of chemical components from plant tissue samples

    NASA Technical Reports Server (NTRS)

    Laseter, J. L.

    1972-01-01

    Information is given on the type and concentration of sterols, free fatty acids, and total fatty acids in plant tissue samples. All samples were analyzed by gas chromatography and then by gas chromatography-mass spectrometry combination. In each case the mass spectral data was accumulated as a computer printout and plot. Typical gas chromatograms are included as well as tables describing test results.

  7. Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.

    ERIC Educational Resources Information Center

    Vidal, Sherry

    Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the squared differences between the observed values of a dependent variable and the values predicted from the independent variables. From a computer printout…

  8. xPerm: fast index canonicalization for tensor computer algebra

    NASA Astrophysics Data System (ADS)

    Martín-García, José M.

    2008-10-01

    We present a very fast implementation of the Butler-Portugal algorithm for index canonicalization with respect to permutation symmetries. It is called xPerm, and has been written as a combination of a Mathematica package and a C subroutine. The latter performs the most demanding parts of the computations and can be linked from any other program or computer algebra system. We demonstrate with tests and timings the effectively polynomial performance of the Butler-Portugal algorithm with respect to the number of indices, though we also show a case in which it is exponential. Our implementation handles generic tensorial expressions with several dozen indices in hundredths of a second, or one hundred indices in a few seconds, clearly outperforming all other current canonicalizers. The code has already been under intensive testing for several years and has been essential in recent investigations in large-scale tensor computer algebra.
    Program summary:
    Program title: xPerm
    Catalogue identifier: AEBH_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBH_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 93 582
    No. of bytes in distributed program, including test data, etc.: 1 537 832
    Distribution format: tar.gz
    Programming language: C and Mathematica (version 5.0 or higher)
    Computer: Any computer running C and Mathematica (version 5.0 or higher)
    Operating system: Linux, Unix, Windows XP, MacOS
    RAM: 20 Mbyte
    Word size: 64 or 32 bits
    Classification: 1.5, 5
    Nature of problem: Canonicalization of indexed expressions with respect to permutation symmetries.
    Solution method: The Butler-Portugal algorithm.
    Restrictions: Multiterm symmetries are not considered.
    Running time: A few seconds with generic expressions of up to 100 indices. The xPermDoc.nb notebook supplied with the distribution takes approximately one and a half hours to execute in full.

  9. PEGASUS 5: An Automated Pre-Processor for Overset-Grid CFD

    NASA Technical Reports Server (NTRS)

    Suhs, Norman E.; Rogers, Stuart E.; Dietz, William E.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    An all-new, automated version of the PEGASUS software has been developed and tested. PEGASUS provides the hole-cutting and connectivity information between overlapping grids, and is used as the final part of the grid generation process for overset-grid computational fluid dynamics approaches. The new PEGASUS code (Version 5) has many new features: automated hole cutting; a projection scheme for fixing gaps in overset surfaces; more efficient interpolation search methods using an alternating digital tree; hole-size optimization based on adding additional layers of fringe points; and an automatic restart capability. The new code has also been parallelized using the Message Passing Interface standard. Parallelization speeds up execution time by an order of magnitude, and by up to a factor of 30 for very large problems. The results of three example cases are presented: a three-element high-lift airfoil, a generic business jet configuration, and a complete Boeing 777-200 aircraft in a high-lift landing configuration. Comparisons of the computed flow fields for the airfoil and 777 test cases between the old and new versions of the PEGASUS codes show excellent agreement with each other and with experimental results.

  10. Clinical results of computerized tomography-based simulation with laser patient marking.

    PubMed

    Ragan, D P; Forman, J D; He, T; Mesina, C F

    1996-02-01

    The accuracy of a patient treatment portal marking device and computerized tomography (CT) simulation has been clinically tested. A CT-based simulator has been assembled based on a commercial CT scanner. This includes visualization software and a computer-controlled laser drawing device. This laser drawing device is used to transfer the setup, central axis, and/or radiation portals from the CT simulator to the patient for appropriate patient skin marking. A protocol for clinical testing is reported. Twenty-five prospectively, sequentially accessioned patients have been analyzed. The simulation process can be completed in an average time of 62 min. In many cases, the treatment portals can be designed and the patient marked in one session. Mechanical accuracy of the system was found to be within +/- 1 mm. The portal projection accuracy in clinical cases is observed to be better than +/- 1.2 mm. Operating costs are equivalent to the conventional simulation process it replaces. Computed tomography simulation is a clinically accurate substitute for conventional simulation when used with an appropriate patient marking system and digitally reconstructed radiographs. Personnel time spent in CT simulation is equivalent to time in conventional simulation.

  11. EYE MOVEMENT RECORDING AND NONLINEAR DYNAMICS ANALYSIS – THE CASE OF SACCADES

    PubMed Central

    Aştefănoaei, Corina; Pretegiani, Elena; Optican, L.M.; Creangă, Dorina; Rufa, Alessandra

    2015-01-01

    Evidence of a chaotic behavioral trend in eye movement dynamics was examined in the case of a saccadic temporal series collected from a healthy human subject. Saccades are high-velocity eye movements of very short duration, and their recording is relatively accessible, so the resulting data series can be studied computationally to understand the neural processing in a motor system. The aim of this study was to assess the degree of complexity in the eye movement dynamics. To do this we analyzed the saccadic temporal series recorded with an infrared camera eye tracker from a healthy human subject in a special experimental arrangement which provides continuous records of eye position, both saccades (eye shifting movements) and fixations (focusing over regions of interest, with rapid, small fluctuations). The semi-quantitative approach used in this paper to study eye functioning from the viewpoint of non-linear dynamics was based on several computational tests (power spectrum, state-space portrait and its fractal dimension, Hurst exponent, and largest Lyapunov exponent) derived from chaos theory. A highly complex dynamical trend was found. The largest Lyapunov exponent test suggested bi-stability of the cellular membrane resting potential during the saccadic experiment. PMID:25698889
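
    Of the tests listed, the Hurst exponent is the simplest to sketch. The rescaled-range estimate below is applied to synthetic series (white noise, H near 0.5, and a random walk, H near 1) rather than eye-position data; it is a crude textbook implementation, not the authors' analysis code.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Rescaled-range (R/S) estimate of the Hurst exponent: the slope of
    log(mean R/S) against log(window size) over dyadic window sizes."""
    x = np.asarray(x, dtype=float)
    sizes, rs = [], []
    size = len(x) // 2
    while size >= min_chunk:
        vals = []
        for start in range(0, len(x) - size + 1, size):
            w = x[start:start + size]
            dev = np.cumsum(w - w.mean())
            if w.std(ddof=1) > 0:
                vals.append((dev.max() - dev.min()) / w.std(ddof=1))
        sizes.append(size)
        rs.append(np.mean(vals))
        size //= 2
    return np.polyfit(np.log(sizes), np.log(rs), 1)[0]

rng = np.random.default_rng(0)
white = rng.normal(size=4096)
print("H(white noise) ~", round(hurst_rs(white), 2))             # ~0.5
print("H(random walk) ~", round(hurst_rs(np.cumsum(white)), 2))  # ~1
```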

  12. Planetary Radio Interferometry and Doppler Experiment (PRIDE) technique: A test case of the Mars Express Phobos Flyby. II. Doppler tracking: Formulation of observed and computed values, and noise budget

    NASA Astrophysics Data System (ADS)

    Bocanegra-Bahamón, T. M.; Molera Calvés, G.; Gurvits, L. I.; Duev, D. A.; Pogrebenko, S. V.; Cimò, G.; Dirkx, D.; Rosenblatt, P.

    2018-01-01

    Context. Closed-loop Doppler data obtained by deep space tracking networks, such as the NASA Deep Space Network (DSN) and the ESA tracking station network (Estrack), are routinely used for navigation and science applications. By shadow tracking the spacecraft signal, Earth-based radio telescopes involved in the Planetary Radio Interferometry and Doppler Experiment (PRIDE) can provide open-loop Doppler tracking data only when the dedicated deep space tracking facilities are operating in closed-loop mode. Aims: We explain the data processing pipeline in detail and discuss the capabilities of the technique and its potential applications in planetary science. Methods: We provide the formulation of the observed and computed values of the Doppler data in PRIDE tracking of spacecraft and demonstrate the quality of the results using an experiment with the ESA Mars Express spacecraft as a test case. Results: We find that the Doppler residuals and the corresponding noise budget of the open-loop Doppler detections obtained with the PRIDE stations are comparable to those of the closed-loop Doppler detections obtained with dedicated deep space tracking facilities.

  13. USM3D Simulations for Second Sonic Boom Workshop

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa; Carter, Melissa B.; Nayani, Sudheer N.; Cliff, Susan; Pearl, Jason M.

    2017-01-01

    The NASA Tetrahedral Unstructured Software System with the USM3D flow solver was used to compute test cases for the Second AIAA Sonic Boom Prediction Workshop (SBPW2). The intent of this report is to document the USM3D results for the SBPW2 test cases. The test cases included an axisymmetric equivalent-area body, a JAXA wing body, and a NASA low-boom supersonic configuration modeled with flow-through nacelles and with engine boundary conditions. All simulations were conducted for a free-stream Mach number of 1.6, zero degrees angle of attack, and a Reynolds number of 5.7 million per meter. Simulations were conducted on tetrahedral grids provided by the workshop committee, as well as a family of grids generated by an in-house approach for sonic boom analyses known as BoomGrid, using current best practices. The near-field pressure signatures were extracted and propagated to the ground with the atmospheric propagation code sBOOM. The USM3D near-field pressure signatures, corresponding sBOOM ground signatures, and loudness levels on the ground are compared with mean values from other workshop participants.

  14. Multi-frame image processing with panning cameras and moving subjects

    NASA Astrophysics Data System (ADS)

    Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric

    2014-06-01

    Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms in order to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition, we evaluated algorithm efficacy, with demonstrated benefits, using field test video processed with our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.

  15. Powassan Virus-A New Reemerging Tick-Borne Disease.

    PubMed

    Fatmi, Syed Soheb; Zehra, Rija; Carpenter, David O

    2017-01-01

    Powassan virus is a neurovirulent flavivirus consisting of two lineages causing meningoencephalitis. It is the only member of the tick-borne encephalitis serogroup present in mainland North America. With a total of 27 cases from 1958 to 1998 and 98 cases from 1999 to 2016, reported cases have increased by 671% over the last 18 years. Powassan infection is transmitted by different tick species in different geographical regions. Ixodes scapularis is the primary vector that transmits the virus on the East Coast of the US and Ixodes cookei in the Midwest and Canada, while Haemaphysalis longicornis is the vector in Russia. Powassan has no singular pathognomonic finding and presents with a wide spectrum of symptoms, including severe neurological symptoms. The clinical challenge lies in the management of the disease, as there is no standard diagnostic protocol and most cases are only diagnosed after a patient goes through an extensive workup for other infectious diseases. The diagnosis is established by a combination of imaging and serologic tests. In cases of Powassan meningoencephalitis, computed tomography scans and magnetic resonance imaging show vascular insults, which are also seen in cases of tick-borne encephalitis virus, another flavivirus of medical importance. Serologic tests are the gold standard for diagnosis, although testing is not widely available, and only state health departments and the Centers for Disease Control and Prevention can perform Powassan-specific IgM antibody testing utilizing enzyme-linked immunosorbent assay and immunofluorescence antibody assays. Powassan is also of veterinary medical importance. Wild animals act as a reservoir for the pathogen, posing a threat to humans and domestic animals. This review highlights Powassan's neurotropic presentation, epidemiology, diagnostic challenges, and prevalence. Strong emphasis is placed on establishing diagnostic protocols, widespread Powassan-specific IgM testing, the role of the vector in disease presentation, and necessary preventive research.

  16. Method to predict external store carriage characteristics at transonic speeds

    NASA Technical Reports Server (NTRS)

    Rosen, Bruce S.

    1988-01-01

    Development of a computational method for prediction of external store carriage characteristics at transonic speeds is described. The geometric flexibility required for treatment of pylon-mounted stores is achieved by computing finite difference solutions on a five-level embedded grid arrangement. A completely automated grid generation procedure facilitates applications. Store modeling capability consists of bodies of revolution with multiple fore and aft fins. A body-conforming grid improves the accuracy of the computed store body flow field. A nonlinear relaxation scheme developed specifically for modified transonic small disturbance flow equations enhances the method's numerical stability and accuracy. As a result, treatment of lower aspect ratio, more highly swept and tapered wings is possible. A limited supersonic freestream capability is also provided. Pressure, load distribution, and force/moment correlations show good agreement with experimental data for several test cases. A detailed computer program description for the Transonic Store Carriage Loads Prediction (TSCLP) Code is included.

  18. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE PAGES

    Molzahn, Daniel K.

    2017-03-15

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.
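
    The discretize-and-solve idea can be miniaturized to a 2-bus network. This is only a toy stand-in: the paper's algorithm finds all power flow solutions with NPHC and prunes with convex relaxations, neither of which is attempted here. The sketch sweeps a discretized load constraint and keeps the grid points where a Gauss power-flow iteration converges to a voltage within assumed operating limits.

```python
import numpy as np

# Toy 2-bus network: slack bus (V1 = 1 p.u.) feeds a PQ load over one line.
y = 1.0 / (0.02 + 0.08j)            # line admittance [p.u.]
V1 = 1.0 + 0.0j
TAN_PHI = np.tan(np.arccos(0.95))   # assumed 0.95 power factor

def solve_v2(p_load, iters=500, tol=1e-10):
    """Gauss iteration V2 <- V1 - conj(S/V2)/y for the load S = P + jQ."""
    S = p_load * (1.0 + 1j * TAN_PHI)
    V2 = 1.0 + 0.0j
    for _ in range(iters):
        V2_new = V1 - np.conj(S / V2) / y
        if abs(V2_new - V2) < tol:
            return V2_new
        V2 = V2_new
    return None                     # diverged: no power flow solution found

for P in np.linspace(0.0, 4.0, 17):   # discretized active-power constraint
    V2 = solve_v2(P)
    if V2 is None:
        print(f"P = {P:4.2f} p.u.: diverged (infeasible)")
    else:
        ok = 0.90 <= abs(V2) <= 1.10
        print(f"P = {P:4.2f} p.u.: |V2| = {abs(V2):.3f} "
              f"({'feasible' if ok else 'voltage limit violated'})")
```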

  19. Full Monte Carlo-Based Biologic Treatment Plan Optimization System for Intensity Modulated Carbon Ion Therapy on Graphics Processing Unit.

    PubMed

    Qin, Nan; Shen, Chenyang; Tsai, Min-Yu; Pinto, Marco; Tian, Zhen; Dedes, Georgios; Pompos, Arnold; Jiang, Steve B; Parodi, Katia; Jia, Xun

    2018-01-01

    One of the major benefits of carbon ion therapy is enhanced biological effectiveness at the Bragg peak region. For intensity modulated carbon ion therapy (IMCT), it is desirable to use Monte Carlo (MC) methods to compute the properties of each pencil beam spot for treatment planning, because of their accuracy in modeling physics processes and estimating biological effects. We previously developed goCMC, a graphics processing unit (GPU)-oriented MC engine for carbon ion therapy. The purpose of the present study was to build a biological treatment plan optimization system using goCMC. The repair-misrepair-fixation model was implemented to compute the spatial distribution of linear-quadratic model parameters for each spot. A treatment plan optimization module was developed to minimize the difference between the prescribed and actual biological effect. We used a gradient-based algorithm to solve the optimization problem. The system was embedded in the Varian Eclipse treatment planning system under a client-server architecture to achieve a user-friendly planning environment. We tested the system with a 1-dimensional homogeneous water case and three 3-dimensional patient cases. Our system generated treatment plans with biological spread-out Bragg peaks covering the targeted regions and sparing critical structures. Using 4 NVidia GTX 1080 GPUs, the total computation time, including spot simulation, optimization, and final dose calculation, was 0.6 hour for the prostate case (8282 spots), 0.2 hour for the pancreas case (3795 spots), and 0.3 hour for the brain case (6724 spots). The computation time was dominated by MC spot simulation. We built a biological treatment plan optimization system for IMCT that performs simulations using a fast MC engine, goCMC. To the best of our knowledge, this is the first time that full MC-based IMCT inverse planning has been achieved in a clinically viable time frame. Copyright © 2017 Elsevier Inc. All rights reserved.
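
    The optimization module's objective can be pictured as projected gradient descent on nonnegative spot weights under a linear-quadratic effect model; the dose-deposition matrix, alpha/beta values, and prescription below are random stand-ins, not goCMC quantities or the authors' solver.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_spots = 400, 60
# Random sparse stand-in for the dose-deposition matrix (dose per unit weight).
D = rng.random((n_vox, n_spots)) * (rng.random((n_vox, n_spots)) < 0.1)
alpha, beta = 0.15, 0.05            # uniform LQ parameters (hypothetical)
e_presc = np.full(n_vox, 2.0)       # prescribed biological effect per voxel

w = np.ones(n_spots)                # spot weights, constrained nonnegative
lr = 2e-4
for _ in range(2000):
    d = D @ w                       # physical dose
    e = alpha * d + beta * d**2     # linear-quadratic biological effect
    grad = D.T @ (2.0 * (e - e_presc) * (alpha + 2.0 * beta * d))
    w = np.maximum(w - lr * grad, 0.0)   # projected gradient step

e_final = alpha * (D @ w) + beta * (D @ w)**2
print("RMS effect error:", float(np.sqrt(np.mean((e_final - e_presc)**2))))
```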

  20. Transient Three-Dimensional Side Load Analysis of Out-of-Round Film Cooled Nozzles

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Lin, Jeff; Ruf, Joe; Guidos, Mike

    2010-01-01

    The objective of this study is to investigate the effect of nozzle out-of-roundness on the transient startup side loads at a high altitude, with an anchored computational methodology. The out-of-roundness could be the result of asymmetric loads induced by hardware attached to the nozzle, asymmetric internal stresses induced by previous tests, and deformation, such as creep, from previous tests. The rocket engine studied encompasses a regeneratively cooled thrust chamber and a film-cooled nozzle extension with film coolant distributed from a turbine exhaust manifold. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, and a transient inlet history based on an engine system simulation. Transient startup computations were performed with the out-of-roundness achieved by four different degrees of ovalization: one perfectly round, one slightly out-of-round, one more out-of-round, and one significantly out-of-round. The results show that the separation-line jump is the peak side load physics for the round, slightly out-of-round, and more out-of-round cases, and the peak side load increases as the degree of out-of-roundness increases. For the significantly out-of-round nozzle, however, the peak side load reduces to a level comparable to that of the round nozzle, and the separation-line jump is not the peak side load physics. The counter-intuitive result of the significantly out-of-round case is found to be related to a side force reduction mechanism that splits the effect of the separation-line jump into two parts, not only in the circumferential direction but also, most importantly, in time.
